SSD has established its position in the data center. Nearly all major vendors specify a Tier 0 in their best-practice architectures. Server-side SSD is being used to enhance server performance, and storage-side SSD eliminates the boot-storm bottleneck. As with most technologies, though, knowing when not to use SSD is as important as knowing when to use it. Here are some cases where SSD is not the right choice.
Don’t use SSD when applications are not read-intensive. SSD is brilliant for read-access times and can outperform HDD by 10X or more. There is no free lunch, however: SSD loses much of its advantage in the write category. Writes not only lag, but they also wear out the SSD memory cells. Memory cells have an average write life, after which cells begin to burn out (see your vendor for the details of its specific system). As cells fail, overall performance degrades. Eventually, the SSD must be replaced to restore full performance, and we all know SSD is not cheap. Some vendors do offer extensive warranties.
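To put the wear issue in perspective, here is a rough back-of-the-envelope sketch of how long a drive might last under a steady write load. The capacity, endurance rating and daily write figures are assumptions for illustration, not numbers from any particular vendor.

```python
# Back-of-the-envelope SSD wear-out estimate.
# All figures are illustrative assumptions, not vendor specifications.

capacity_gb = 400        # usable SSD capacity
pe_cycles = 10_000       # rated write (program/erase) cycles per cell
daily_writes_gb = 500    # host writes per day from the application

total_endurance_gb = capacity_gb * pe_cycles       # data the drive can absorb
lifetime_days = total_endurance_gb / daily_writes_gb

print(f"Estimated wear-out: {lifetime_days:,.0f} days "
      f"(~{lifetime_days / 365:.1f} years)")
```

Note that this simple arithmetic assumes every host write costs exactly one cell write; real drives do better or worse depending on wear leveling and write amplification, discussed below.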
So what is the magic line for a read/write ratio? There probably isn’t one, but start with 90/10 as the ideal. Application requirements may dictate a compromise, but knowing the ratio allows IT managers to make a conscious decision. If writes outnumber reads (a ratio below 50/50), an HDD is likely the better choice: from an application performance perspective, the SSD’s read performance is being offset by its inferior write performance and the wear those writes cause.
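For those who want to check their own workloads, a minimal sketch along these lines can classify a device by its read/write mix. The function name and the sample counters are hypothetical; the thresholds simply mirror the 90/10 and 50/50 guidelines above.

```python
def recommend_tier(reads: int, writes: int) -> str:
    """Classify a workload by its read/write mix, using the rough
    90/10 and 50/50 guidelines discussed above."""
    total = reads + writes
    if total == 0:
        return "no I/O observed"
    read_pct = 100 * reads / total
    if read_pct >= 90:
        return f"{read_pct:.0f}% reads: strong SSD/Tier 0 candidate"
    if read_pct >= 50:
        return f"{read_pct:.0f}% reads: possible SSD candidate; weigh write wear"
    return f"{read_pct:.0f}% reads: write-heavy; HDD is likely the better fit"

# Counts gathered over a sampling window (e.g. from iostat or
# /proc/diskstats); these numbers are made up for illustration.
print(recommend_tier(reads=940_000, writes=60_000))
```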
Finally, if SSD is needed for read performance but writes are an issue, consider vendors that employ wear-leveling mechanisms and minimize write amplification to reduce the impact. SSD size will also be a factor: going cheap on capacity increases thrashing, because it reduces the chances that a repeated read will still be resident in the SSD.
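Write amplification is worth a quick illustration: every extra NAND write generated per host write divides the drive’s effective life. The base lifetime and amplification factors below are assumed values, not measurements.

```python
# How write amplification eats into endurance.
# Base lifetime (e.g. from a sketch like the one earlier) and WAF values
# are assumptions for illustration only.

base_lifetime_days = 8_000     # lifetime assuming 1 NAND write per host write

for waf in (1.0, 2.0, 4.0):    # write amplification: NAND writes / host writes
    effective_days = base_lifetime_days / waf
    print(f"WAF {waf:.1f}: ~{effective_days / 365:.1f} years before wear-out")
```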
Don’t use SSD when data access is highly random. SSD is sometimes referred to as the “cache tier,” and the name is apropos. Fundamentally, it is a cache that eliminates the need to perform a fetch from the hard drive when the data is cache-resident. Applications with highly random access patterns simply won’t benefit from SSD: most reads will miss the cache and be directed by the array controller to the HDD, leaving the SSD an expense with little benefit.
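A simple latency model shows why cache hit rate is everything here. The SSD and HDD latencies and the hit rates below are assumed figures for illustration only.

```python
# Average read latency for an SSD cache tier in front of HDD.
# Latency figures and hit rates are assumed for illustration.

ssd_latency_ms = 0.1   # read served from the SSD cache tier
hdd_latency_ms = 8.0   # read that misses the cache and goes to the hard drive

for hit_rate in (0.95, 0.50, 0.10):   # fraction of reads found in the SSD cache
    avg_ms = hit_rate * ssd_latency_ms + (1 - hit_rate) * hdd_latency_ms
    print(f"Cache hit rate {hit_rate:.0%}: average read latency {avg_ms:.2f} ms")
```

With highly random access the hit rate collapses toward the bottom row, and average latency ends up close to that of the HDD alone.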
Don’t use general-purpose SSD in highly virtualized environments. OK, this one will generate some controversy, because there are some really good use cases for SSD with virtual machines, such as boot storms. However, many VMs accessing the same SSD results in highly random data patterns, at least from a storage perspective. When hundreds of VMs are reading and writing from the same storage, one machine’s data is constantly overwriting another’s in the cache. That said, there are SSD solutions designed specifically for virtual environments, which is why the caveat above says “general purpose.”
Don’t use server-side SSD for solving storage I/O bottlenecks. Server-side SSD is fundamentally server cache; it addresses a processing problem and even a network bandwidth problem. Equipping each of hundreds of physical servers with its own SSD may indeed help with I/O bottlenecks, but not nearly as effectively as the same aggregate capacity deployed as a storage tier.
Don’t use Tier 0 for solving network bottlenecks. If data delivery is inhibited by the network, optimizing the storage system behind that network will do little good. Server-side SSD, on the other hand, may reduce the need to access the storage system and thereby reduce network demand.
Don’t deploy consumer-grade SSD for enterprise applications. SSD is manufactured in three grades: single-level cell (SLC), multi-level cell (MLC) and enterprise multi-level cell (eMLC). MLC is considered consumer-grade and is found in most off-the-shelf products. It has a life of 3,000-10,000 write operations per cell. SLC, or enterprise grade, has a life of up to 100,000 write operations per cell. eMLC attempts to strike a balance between price and endurance, offering around 30,000 writes per cell at a lower price point than SLC. Caveat emptor, as you get what you pay for.
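Using the per-cell endurance figures above and an assumed 200 GB drive with perfect wear leveling, a quick calculation shows how far apart the grades sit in total data written before wear-out; treat the results as rough upper bounds.

```python
# Rough total-bytes-written comparison across flash grades, using the
# per-cell endurance figures cited above and an assumed 200 GB drive.
# Perfect wear leveling is assumed, so the output is an upper bound.

capacity_gb = 200
grades = {
    "MLC (consumer)":   10_000,   # upper end of the 3,000-10,000 range
    "eMLC":             30_000,
    "SLC (enterprise)": 100_000,
}

for grade, cycles in grades.items():
    tbw = capacity_gb * cycles / 1_000   # terabytes written before wear-out
    print(f"{grade:<17}: ~{tbw:,.0f} TB written")
```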
Phil Goodwin is a storage consultant and freelance writer.