Enterprise block storage is typically associated with primary workloads because of its resiliency and consistent performance, though the marketplace is in transition. Block storage vendors are increasingly shifting from traditional architectures toward newer designs. Technologies such as flash memory and high-speed Ethernet networks have improved performance and reduced costs, allowing for more freedom in system design.
As a result, our editors have compiled this list of four key things to evaluate when searching for enterprise block storage vendors. For an even deeper comparison of the top options for your organization, consult our Data Storage Buyer’s Guide.
Performance
Flash memory has significantly shifted the storage landscape compared to hard disks. Flash not only offers higher speed with minimal latency, but also strengthens resiliency and durability, and flash storage vendors are closing the price gap with hard drives as well. Other technologies that improve performance include high-speed networks and CPUs with instruction sets that accelerate storage operations, while protocols designed specifically for flash memory, such as NVMe, shrink latency even further. Today's challenge in the storage field is to achieve consistent performance while simultaneously serving a range of different workloads on the same system.
System Longevity
The majority of organizations buy storage systems for specific projects or infrastructure needs, with support contracts that can span three to four years or more. Because storage systems can last longer than that (sometimes up to seven years), a refresh cycle forced every three years can create financial and operational problems. A properly updated system that remains in line with the rest of the infrastructure helps keep costs down while also helping organizations avoid data migrations and forklift upgrades.
Integrations and Flexibility
Modern data storage systems are typically shared by many servers, a growing number of VMs and applications, containers, and, more recently, a return to bare-metal Linux for some big data and AI/ML use cases. Storage systems are reconfigured more frequently and must provide integrations and tools for a wider variety of software stacks in the layers above, as well as for automation platforms. You should also be aware that not all applications have the same needs regarding latency, throughput, or priority; quality of service (QoS) mechanisms can help you avoid issues in crowded environments.
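As a rough illustration of what a QoS mechanism does (a sketch only, not any vendor's actual implementation, which lives in array firmware and handles bursts, minimums, and latency targets), per-volume IOPS caps are commonly modeled as a token bucket:

```python
from dataclasses import dataclass

@dataclass
class VolumeQoS:
    """Illustrative per-volume IOPS cap using a token bucket (hypothetical sketch)."""
    iops_limit: int           # maximum I/O operations per second for this volume
    tokens: float = 0.0       # currently available I/O "credits"
    last_refill: float = 0.0  # timestamp of the last refill

    def allow(self, now: float) -> bool:
        # Refill credits at iops_limit per second, capped at one second's worth
        elapsed = now - self.last_refill
        self.tokens = min(self.iops_limit, self.tokens + elapsed * self.iops_limit)
        self.last_refill = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0  # spend one credit for this I/O
            return True
        return False            # over the limit: the array throttles or queues the I/O
```

A noisy-neighbor volume capped this way has its surplus requests deferred, leaving headroom for the other volumes and applications sharing the system.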
Ease of Use
In many IT organizations, particularly smaller ones, administrators manage many components of the infrastructure. This means that, on the whole, administrators have more generalized skills and may struggle with complex, non-intuitive systems. Graphical user interfaces (GUIs) and dashboards can simplify management, especially when they are backed by predictive analytics for capacity planning and troubleshooting. CLIs and APIs with specific integrations for other tools allow resource provisioning and management directly from the product that uses them (a hypervisor or container orchestrator, for example).
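Such APIs typically accept a declarative request describing the desired volume. As a hedged sketch (every field name here is invented for illustration, not taken from any specific vendor's schema), a provisioning integration might assemble a JSON body like this:

```python
import json
from typing import Optional

def build_volume_request(name: str, size_gib: int,
                         iops_limit: Optional[int] = None) -> str:
    """Assemble a JSON body for a hypothetical 'create volume' API call.

    All field names (name, size_gib, qos.iops_limit) are illustrative;
    consult your vendor's actual API reference for the real schema.
    """
    body: dict = {"name": name, "size_gib": size_gib}
    if iops_limit is not None:
        # Optionally attach a QoS policy so the array can throttle the volume
        body["qos"] = {"iops_limit": iops_limit}
    return json.dumps(body)
```

A hypervisor plugin or container orchestrator driver would send such a body to the array's management endpoint and then attach the resulting volume to the requesting host, without the administrator ever opening the storage console.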