It used to be that the major roadblock to flash storage adoption was cost: the price point of flash made it cost-prohibitive for all but the most mission-critical, high-performance applications. Now, with the cost of flash rivaling HDD and continuing to fall, those days are thankfully behind us.
As a result, I’m sure you’re noticing that more and more organizations are turning to flash to keep up with the accelerating demands of enterprise applications. In the process, they’re discovering that flash media changes the performance balance between servers, networks and storage, requiring them to re-think their data center environment.
While flash storage can enhance the performance of your customer’s applications, there are three potential roadblocks you will need to help your customers break down in order to realize the full value of their flash investment:
- Storage network capacity
- Storage architecture
- Resiliency
Network capacity
Picture yourself on the freeway at 5:00 in the morning. Traffic is relatively light, and everyone is moving along at or close to the speed limit. Now add more cars as the morning commute kicks in, and things gradually slow down. Add more cars, and eventually you’re approaching gridlock. That’s what flash storage can do to a customer’s network.
Flash media is fast – up to hundreds of thousands of I/O operations per second (IOPS) at sub-millisecond latency. That’s orders of magnitude beyond the performance of spinning disk. But that ability to generate more read and write operations means more traffic for your customer’s storage network, moving data back and forth between storage and servers. And as network traffic piles up, latency increases. The end result is a traffic jam that slows down application performance.
For example, a common online transaction processing (OLTP) workload connected to flash storage can quickly saturate 8 Gb/s Fibre Channel (FC) network components like host bus adapters (HBAs), network switches, and target adapters. Your customer’s storage network can become a bottleneck, preventing them from fully utilizing their compute and storage resources.
To get the most from their flash investment, your customers may need to consider a network upgrade. In our OLTP example, upgrading from 8 Gb/s FC to 16 Gb/s FC can increase bandwidth and IOPS by at least 35% and improve storage latency by 2.5X or more. That’s the equivalent of adding extra lanes to a freeway to support more traffic. A network upgrade provides the added benefit of requiring fewer components (switches, adapters, etc.) to achieve bandwidth and latency targets, resulting in lower costs.
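To see why an OLTP workload saturates an 8 Gb/s FC link so quickly, a quick back-of-envelope calculation helps. The sketch below is illustrative only: the 8 KB I/O size and the usable-bandwidth figures are assumptions (FC’s encoding overhead means usable payload throughput sits below the nominal line rate), and real deployments will vary with protocol overhead, queue depths, and read/write mix.

```python
# Back-of-envelope sketch (assumed values): at what IOPS does a single
# Fibre Channel link saturate? 8G FC uses 8b/10b encoding, 16G FC uses
# 64b/66b, so usable payload bandwidth is below the nominal line rate.

IO_SIZE_KB = 8  # typical OLTP block size (assumption)

# Approximate usable payload bandwidth per link, in MB/s (assumptions).
LINK_MB_PER_S = {
    "8G FC": 800,    # ~8.5 Gb/s line rate, 8b/10b encoding -> ~800 MB/s
    "16G FC": 1600,  # ~14.025 Gb/s line rate, 64b/66b -> ~1600 MB/s
}

def max_iops(link_mb_per_s: float, io_kb: float = IO_SIZE_KB) -> int:
    """IOPS at which one link saturates, ignoring protocol overhead."""
    return int(link_mb_per_s * 1024 / io_kb)

for name, bw in LINK_MB_PER_S.items():
    print(f"{name}: ~{max_iops(bw):,} IOPS at {IO_SIZE_KB} KB per I/O")
```

Under these assumptions a single 8G FC link tops out around 100,000 8 KB IOPS, which a modern all-flash array can generate on its own; doubling the link speed doubles that ceiling, which is why the network upgrade matters.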
Storage architecture
It’s not just the network that can slow them down; the architecture of their flash array itself could prevent them from realizing the full benefits of flash media.
Some vendors have entered the flash market by simply re-equipping existing disk storage arrays with flash media. On the surface that might sound like a good idea, but in reality it’s like dropping a finely tuned racing engine into the family minivan. Sure, it will run faster, but the minivan has no hope of extracting the full benefit of that horsepower: viewed end to end, the engine is only one of many elements that determine performance.
Similarly, the characteristics of flash require re-thinking performance through the end-to-end I/O path, including server connectivity, switches, storage controllers and backend connectivity to the solid-state drive (SSD) media. Much like a racecar is optimized to get the full performance from its racing engine, a flash array and its supporting architecture should be optimized for flash media.
Storage controllers and algorithms not designed specifically for the rigors of flash will not deliver the desired latency and I/O performance. Sufficient bandwidth within the array is another consideration: typical dual-controller storage designs cannot scale effectively to keep pace with flash, because the performance of a flash array is ultimately a function of its controllers.
Resiliency
Some vendors have taken a different approach, designing flash-optimized storage arrays from the ground up. While this can alleviate the bottlenecks associated with legacy storage architectures, it can pose another set of challenges. Often the redesign comes at the expense of the Tier-1 resiliency and data services that your customers rely on, which can be a little like driving that racecar without a helmet: everything is fine, until it’s not. Features like hardware and software redundancy, non-disruptive upgrades, transparent active-active failover, and remote synchronous/asynchronous replication are critical to their data center but are not yet standard offerings on all all-flash arrays.
Deploying one of these systems can also mean accepting another separate and distinct storage architecture into their data center, creating an additional storage silo and complicating your customer’s data protection strategy. To provide best value, flash storage arrays should integrate with the tools they already use, enabling hypervisor and application owners to control backup and recovery processes directly from their preferred system management consoles. To achieve true data protection, they will also need to go beyond snapshots to create fully independent backup volumes that can be restored at the volume-level in the event of disaster.
Flash storage changes the performance balance between servers, network and storage, requiring you to re-think your customer’s architecture. Helping them realize the full benefits from their flash investment requires a balance of the right storage, the right storage architecture, the right data services and features, and the right network solution.