Introduction to VMware vSphere Flash Read Cache
August 26, 2013
This post is part of a series of blog posts on VMworld 2013 announcements. See here for a complete overview of all announcements.
vSphere Flash Read Cache is a new vSphere feature introduced in version 5.5. The feature was previously known in the vSphere Beta as Virtual Flash or vFlash. It aggregates local flash devices into a pooled flash resource that can be consumed by virtual machines and by vSphere hosts (as Virtual Flash Host Swap Cache).
VMware vSphere 5.5 introduces new functionality to leverage flash storage devices on a VMware ESXi host. The vSphere Flash Infrastructure layer is part of the ESXi storage stack and manages flash storage devices that are locally connected to the server. These devices can be of multiple types (primarily PCIe flash cards and SAS/SATA SSD drives), and the vSphere Flash Infrastructure layer aggregates them into a unified flash resource. You can choose whether or not to add a flash device to this unified resource, so devices that need to be presented directly to a virtual machine can be left out of the pool.
The flash resource created by the vSphere Flash Infrastructure layer can be used for two purposes: (1) read caching of virtual machine I/O requests (vSphere Flash Read Cache) and (2) storing the host swap file. This post focuses on the performance benefits and best-practice guidelines when using the flash resource for read caching of virtual machine I/O requests.
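On a host, the flash devices and modules backing the vSphere Flash Infrastructure can be inspected from the ESXi shell. A minimal sketch, assuming the `esxcli storage vflash` namespace as shipped in ESXi 5.5 (these commands require an ESXi 5.5 host and will not run elsewhere):

```shell
# List local flash devices and whether they are part of the
# virtual flash resource
esxcli storage vflash device list

# Show the loaded cache modules; vfc is the built-in read-cache module
esxcli storage vflash module list
```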
This feature is available in the vSphere 5.5 Enterprise Plus edition only!
Flash Read Cache provides functionality very similar to the cache located inside storage arrays. The drawback of SAN-based cache is that it sits behind the array controllers, so controller performance can become a bottleneck before the cache is fully utilized. In addition, application requests for data need to traverse several network hops (SAN switches) before they reach the cache, which adds latency.
The idea behind Flash Read Cache is to decouple performance (IOPS) from capacity (GB). Performance is brought to the server while capacity stays on the SAN, or locally when VMware VSAN is used.
Some of the key features of Flash Read Cache are:
- Hypervisor-based software-defined flash storage tier solution.
- Aggregates local flash devices to provide a clustered flash resource for consumption by VMs and vSphere hosts (Virtual Flash Host Swap Cache)
- Leverages local flash devices as a cache
- Integrated with vCenter, HA, DRS, vMotion
- Scale-Out Storage Capability: 32 nodes
Why buy capacity (more spindles) just to get performance, as is commonly done with a traditional SAN? Flash Read Cache and other server-side flash software solutions use server-side caching to minimize the I/O load on central storage.
Benefits of Flash Read Cache are:
- Cache is a high-speed memory that can be either a reserved section of main memory or a storage device.
- Supports Write Through Cache Mode
- Improves virtual machine performance by leveraging local flash devices
- Ability to virtualize suitable business-critical applications
This is a server-based flash tier. One of the main customer benefits is acceleration of business-critical applications. Examples of applications that can benefit from Flash Read Cache are Oracle, Exchange Server, SQL Server, IBM DB2 and SharePoint.
Another use case for Flash Read Cache is VDI.
Hardware requirements: an SSD is needed for the read cache. Not every node in a vFlash-enabled cluster needs to have SSD storage.
Management of Flash Read Cache can only be done using the vSphere Web Client.
Until vSphere 5.5, VMware did not utilize local SSD devices for VMs; SSDs could only be used by ESXi hosts as a swap destination. ESXi 5.5 can utilize up to 4TB of the vSphere Flash Resource for Virtual Flash Host Swap caching purposes.
It virtualizes server flash into a resource pool, just like CPU and memory.
Supported SSD devices will be listed in the Hardware Compatibility List. As a guideline, use any eMLC-class or better flash device with reasonable reliability (at least 10 writes/cell per day) and performance (~20K random write IOPS).
Applications and virtual machines are unaware that they are using flash storage. The flash storage sits between the virtual machine and the datastore presented by the SAN or local storage. It is very much like how storage arrays use DRAM or SSDs internally to provide more IOPS.
Flash Read Cache is a platform which is open to third-party vendors.
The maximum vFlash capacity per host is 2TB. SSD drives that are already in use by VSAN cannot be used. A Flash Read Cache cluster can scale to a maximum of 32 nodes.
Flash-based devices are pooled into a new file system called VFFS (Virtual Flash File System).
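The VFFS volume shows up alongside the regular VMFS volumes in the host's filesystem list. A quick way to confirm it exists, assuming an ESXi 5.5 host shell:

```shell
# List mounted filesystems; once the virtual flash resource is
# configured, a volume appears with Type VFFS
esxcli storage filesystem list
```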
Flash Read Cache resources can be allocated to virtual machines and to individual VMDKs. Once the Flash Read Cache resource has been created, its capacity is available for consumption by virtual machines as well as by hosts for swap cache.
The Flash Read Cache works best with workloads that are read intensive, with a high rate of repeat accesses to the same blocks and locality of access. Ideally the working set (the blocks accessed via read I/Os) fits within the cache, to ensure the maximum cache hit rate.
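Whether the working set actually fits can be checked once a cached VM is running: ESXi 5.5 exposes per-VMDK cache statistics through esxcli. A sketch (the cache name below is illustrative; substitute one returned by the list command):

```shell
# List the active read caches (one per cached VMDK)
esxcli storage vflash cache list

# Get usage and hit-rate statistics for a specific cache,
# e.g. to verify the working set fits in the configured size
esxcli storage vflash cache stats get -c vfc-1234-myvm
```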
A VM can have reservations configured for Flash Read Cache. However, a vMotion of such a VM will fail if the destination host does not offer Flash Read Cache.
A VM which uses a VSAN datastore (which already has a read cache and write cache by design) cannot be enabled for Flash Read Cache.
Flash Read Cache and HA work in coordination with each other. When HA is invoked for a VM, it will be restarted on a host with sufficient resources to honor the VM's Flash Read Cache reservation. In vSphere 5.5, Flash Read Cache only supports hard reservations. Thus, a VM whose Flash Read Cache reservation cannot be satisfied on another host will not be restarted on another host.
In vSphere 5.5, DRS can manage virtual machines that have Flash Read Cache reservations.
Flash Read Cache capacity appears as a statistic that is regularly reported from the host to the vSphere Web Client.
Each time DRS runs, it uses the most recent capacity value reported.
You can configure one Flash Read Cache resource per host. This means that during virtual machine power-on, DRS does not need to select between different vFlash resources on a given host. DRS selects a host that has sufficient available flash capacity to start the virtual machine. If DRS cannot satisfy the Flash Read Cache reservation of a virtual machine, the VM cannot be powered on. DRS treats a powered-on virtual machine with a Flash Read Cache reservation as having a soft affinity with its current host. DRS will not recommend such a virtual machine for vMotion except for mandatory reasons, such as putting a host into maintenance mode or reducing the load on an overutilized host.
Comparable solutions are Fusion-io ioTurbine, Infinio Systems Inc.'s Accelerator, QLogic's Mt. Rainier FabricCache and PernixData FVP. PernixData, however, goes one step further and creates a clustered pool of server-side flash across multiple servers; it can be used to accelerate both reads and writes. For comparison, a PernixData FVP license costs about $7,500 per host.
Duncan Epping has a post about vSphere Flash Read Cache here.
VMware published a whitepaper titled Performance of vSphere Flash Read Cache in VMware vSphere 5.5 – Performance Study.
VMworld sessions on vSphere Flash Read Cache: VSVC5603 – Extreme Performance Series: Storage in a Flash. The outline of this session is:
Flash-based storage has been gaining traction in the enterprise storage world and almost every major storage vendor has come up with new products that leverage flash technology in their respective storage systems. While storage array-side enhancements with flash is interesting, embracing flash technology natively at the server can pave the way for more holistic management and performance optimization of resources. Servers that make use of flash storage technology to improve overall IO performance do ease storage management by means of software-defined storage. Come to this session and explore flash technologies, practices and performance.