An introduction to VMware Virtual SAN or VSAN 1.0

This post is part of a series of blog posts on the VMworld 2013 announcements. See here for a complete overview of all announcements.

At VMworld 2013 VMware announced a new storage product named Virtual SAN, or VSAN for short. The VSAN 1.0 software is embedded in the ESXi 5.5 hypervisor and requires an additional license to activate. At the moment it is released as a public beta, with an estimated general availability in the first half of 2014.

Registration for the Public Beta of VSAN can be done here.


Virtual SAN uses locally attached storage to create cost-effective, highly available and high-performance shared storage for VMs. SSDs are used as read cache and write buffer; spinning HDDs are used to store virtual disk files (VMDK), snapshots and swap files.

Writes go to two SSDs located in two different ESXi hosts. Data is buffered on SSD and later written to HDD.

Benefits of VSAN are:

  • Reduced investment costs by using low-cost local storage instead of an expensive SAN
  • A pay-as-you-grow model instead of large upfront investments: if you need more storage capacity, simply add SSDs or HDDs instead of having to buy a new SAN
  • Lower operational costs, because it is simple to use, does not require a dedicated storage administrator and increases automation

Virtual SAN provides the following capabilities:

  • Automated storage management via intelligent VM-based policies
  • Dynamic scalability to grow storage and compute on an as-needed basis
  • Integrated with vSphere and managed from vCenter
  • Built-in resiliency with protection against multiple hardware failures
  • Per-VM SLA management via intelligent data placement
  • Instant storage provisioning without complex workflows

It differs from the already available vSphere Storage Appliance in that VSA is a virtual appliance while VSAN is built into the ESXi hypervisor. It is only available in ESXi 5.5.

VSAN uses SSD as a read cache and write buffer in front of magnetic disks; VSA does not have SSD support. VSAN offers policy management at the VMDK level, which can control capacity, performance and availability.

VSA presents datastores as NFS volumes, which can also be accessed by servers other than ESXi. VSAN presents datastores that can only be accessed by the ESXi hosts that are part of the VSAN cluster. In the future VSAN might support NFS.

Virtual SAN 1.0 goes into public beta in August 2013. The expected general availability is at the release of vSphere 5.5 Update 1, somewhere in the first half of 2014.

VSAN is a converged storage architecture, very similar to solutions from vendors such as Nutanix and SimpliVity.

VSAN 5.5 will be available in two editions: Standard ($995) and Advanced ($1,495). The feature set is the same in both editions; Standard Edition is, however, limited to a maximum of 300 GB of SSD cache per host, while Advanced has no limit.

Hardware Compatibility / Setup Requirements

  • Minimum of 3 vSphere hosts with local storage
  • vCenter Server 5.5
  • At least 1 SAS/SATA/PCIe SSD drive (which presents itself as a SCSI device, like the Intel 910 series)
  • At least 1 SAS/SATA hard disk
  • SD/USB or SATA disk to boot the ESXi host
  • A VSAN VMkernel network port for VSAN traffic
  • SAS/SATA RAID controller: must work in “pass-through” or “HBA” mode (e.g. LSI 2008)
  • ESXi boot: SD card/USB/SATADOM is preferred (the ESXi boot partition and Virtual SAN cannot co-exist on the same SSD/HDD)
  • Network: 10 GbE is preferred and best practice, but 1 Gbps is supported as well. VSAN will not use the full 10 Gb; part of the bandwidth can also be used for vMotion and management traffic.

Fusion-io PCIe SSD cards present themselves as a block device and will not work with, and are not supported by, VSAN. The limit on the number of disks per host is 25. VMware recommends a 10% ratio between SSD and HDD capacity: for each GB on SSD, have 10 GB of storage on HDD in the same server.

A single SSD and up to 6 HDDs in each host are grouped into so-called disk groups. A single host can have a maximum of 6 disk groups.
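As a back-of-the-envelope illustration of these layout rules, here is a small plain-Python sketch (capacities are made up) that checks a proposed per-host disk layout against the limits mentioned above: one SSD plus up to 6 HDDs per disk group, at most 6 disk groups and 25 disks per host, and roughly 10% of HDD capacity available as SSD cache.

    # Sketch only: validate a host disk layout against the VSAN 1.0 guidelines
    # described above. Capacities are in GB and purely illustrative.

    def check_host_layout(disk_groups):
        """disk_groups: list of (ssd_gb, [hdd_gb, ...]) tuples, one per disk group."""
        total_disks = sum(1 + len(hdds) for _, hdds in disk_groups)
        total_ssd = sum(ssd for ssd, _ in disk_groups)
        total_hdd = sum(sum(hdds) for _, hdds in disk_groups)

        problems = []
        if len(disk_groups) > 6:
            problems.append("more than 6 disk groups on this host")
        if any(len(hdds) > 6 for _, hdds in disk_groups):
            problems.append("a disk group has more than 6 HDDs")
        if total_disks > 25:
            problems.append("more than 25 disks in this host")
        if total_ssd < 0.10 * total_hdd:
            problems.append("SSD capacity below the recommended 10% of HDD capacity")
        return problems

    # Example: one disk group with a 200 GB SSD and three 1 TB HDDs.
    for issue in check_host_layout([(200, [1000, 1000, 1000])]):
        print("warning:", issue)
    # -> warning: SSD capacity below the recommended 10% of HDD capacity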

Passthrough mode is required because VSAN ‘talks’ directly to the SSD and HDD.

Asymmetric clusters are supported, meaning that the storage configuration can differ per node: host A can have 1 SSD and 3 HDDs while host B has 2 SSDs and 3 HDDs, for example. The best practice, however, is a balanced cluster in which each node has the same amount of storage available.

The lower limit is 3 nodes; the upper limit is currently 8 nodes. This limit exists mainly for testing reasons: VMware simply has not yet tested larger VSAN clusters. In the near future the maximum number of nodes will be increased.

SSDs offer the lowest price per IOPS but the highest price per GB; HDDs offer the lowest price per GB but the highest price per IOPS. VSAN is designed to combine the best of both worlds.

It allows policies, such as quality of service, to be set per VMDK. A single datastore can therefore have different quality-of-service levels per VM or per VMDK.

VSAN is integrated with vCenter Server, HA, DRS and vMotion. You can use thin provisioning, snapshots, cloning, backup and replication with virtual machines located on VSAN. Site Recovery Manager and vSphere Replication are also supported.

VSAN uses a RAIN architecture: a redundant array of independent nodes. Currently VSAN is a RAIN-10 architecture at object level rather than disk level, which means it creates copies of objects striped across hosts in the cluster. VSAN can tolerate the failure of an HDD, a network or a host.

Per-VM rules can be set for performance, capacity and availability.

How to create a Virtual SAN

Configuration of VSAN is done using the vSphere Web Client. Go to Manage -> Settings -> VSAN and click Edit. Turn on VSAN and choose whether disks are claimed manually or automatically. Automatic (auto claim) mode will add all eligible disks in the host to a disk group; in manual mode the administrator needs to add SSDs and HDDs to a disk group.

Then navigate to Manage -> Settings -> VSAN and edit the disk group. Add at least one SSD and at least one HDD to the disk group.

For each host in a VSAN cluster that contributes local storage, one or more disk groups need to be created.
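For automation, the same setting can presumably also be driven through the vSphere API. Below is a minimal, unofficial sketch using pyVmomi (the open-source Python bindings for the vSphere API) that enables VSAN on an existing cluster in automatic claim mode. The vCenter address, credentials and cluster name are placeholders, and the VSAN-specific type names (vim.vsan.cluster.ConfigInfo and its HostDefaultInfo) should be verified against the vSphere 5.5 SDK before relying on them.

    # Minimal sketch: enable VSAN on a cluster with pyVmomi. Verify the VSAN
    # type names against your SDK version; names and credentials are placeholders.
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.local",
                      user="administrator@vsphere.local",
                      pwd="secret")  # add an sslContext for self-signed certificates
    content = si.RetrieveContent()

    # Find the cluster object by name (hypothetical cluster "VSAN-Cluster").
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == "VSAN-Cluster")
    view.DestroyView()

    # enabled=True switches VSAN on; autoClaimStorage=True is the automatic
    # (auto claim) mode in which all eligible local SSDs/HDDs are claimed.
    vsan_config = vim.vsan.cluster.ConfigInfo(
        enabled=True,
        defaultConfig=vim.vsan.cluster.ConfigInfo.HostDefaultInfo(
            autoClaimStorage=True))

    cluster.ReconfigureComputeResource_Task(
        vim.cluster.ConfigSpecEx(vsanConfig=vsan_config), True)

    Disconnect(si)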

After the datastore has been created, VM storage profiles (VMware called these Storage Service Levels in early beta releases) need to be created. Each host has a VSAN storage provider: a piece of software that uses VASA to ‘talk’ to the VSAN storage layer and understand its capabilities.

There are 5 capabilities to set in a VM storage profile. A storage profile is also known as a Storage Service Level: a set of characteristics (capabilities) which together form a service level.

VM Storage Policies

The characteristics of a VSAN datastore which are used to compose a storage profile are:

  1. Stripe Width = the number of physical disks across which each replica of a storage object is striped. The higher the value, the higher the performance (throughput and bandwidth) the storage object gets from the underlying physical disks.
  2. Component Failures To Tolerate = the number of host and/or disk failures a storage object can tolerate. For “n” failures tolerated, “n+1” copies of the object are created and “2n+1” hosts are required. The default is 1. If set to 0, no data will be mirrored outside the host (obviously not recommended).
  3. Proportional Capacity = the percentage of the logical size of the storage object that should be reserved during initialization.
  4. Read Cache Reservation = flash capacity reserved as read cache for the storage object, specified as a percentage of the logical size of the storage object.
  5. Force Provisioning = if set to a non-zero value, the object will be provisioned even if the policy specified in the storage service class is not satisfied by the datastore.

An example of a VM storage profile named Silver is:

  • Component failures to tolerate: 2
  • Stripe width: 1
  • Flash read cache reservation: 5%
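To make the arithmetic behind such a profile concrete, here is a small plain-Python sketch (using a hypothetical 100 GB VMDK) that applies the rule from the capability list above: n failures to tolerate means n+1 replicas and 2n+1 hosts.

    # Sketch only: what a "component failures to tolerate" setting costs in
    # replicas, hosts and raw capacity. The 100 GB VMDK is hypothetical.

    def vsan_footprint(vmdk_gb, failures_to_tolerate, read_cache_pct=0.0):
        replicas = failures_to_tolerate + 1            # n+1 copies of the object
        hosts_required = 2 * failures_to_tolerate + 1  # 2n+1 hosts (replicas + witnesses)
        raw_hdd_gb = vmdk_gb * replicas                # raw magnetic capacity consumed
        flash_gb = vmdk_gb * read_cache_pct / 100.0    # reserved SSD read cache
        return replicas, hosts_required, raw_hdd_gb, flash_gb

    # The Silver profile above: 2 failures to tolerate, 5% flash read cache.
    print(vsan_footprint(100, failures_to_tolerate=2, read_cache_pct=5))
    # -> (3, 5, 300, 5.0): 3 replicas, 5 hosts, 300 GB raw HDD, 5 GB of flash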

Typical use scenarios for VSAN are VDI, test/dev, big data, tier 2 and tier 3 production environments, and use as a DR target. VMware wants to mature the VSAN technology further and will recommend VSAN for production environments in the future.

In the future, the snapshot and clone technology of Virsto will be added to VSAN, as will other Virsto technology.

VSAN uses a witness as tie-breaker in a split-brain scenario. VMware HA will restart VMs in the part of the VSAN cluster that holds the majority.
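A minimal plain-Python sketch of that majority rule: an object consists of 2n+1 components (replicas plus witnesses), and after a network partition only the side that still sees more than half of them keeps the object available, which is where HA restarts the VM.

    # Sketch only: an object stays available in the partition that still holds
    # a majority of its components (replicas plus witnesses).

    def object_available(components_visible, total_components):
        return components_visible > total_components / 2

    # With 1 failure to tolerate an object has 3 components (2 replicas + 1 witness).
    print(object_available(2, 3))  # True  -> this partition keeps the object; HA restarts the VM here
    print(object_available(1, 3))  # False -> this partition loses access to the object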

How VSAN reacts to a failure depends on the type of failure. If an SSD or HDD fails, VSAN will start to rebuild the data on the remaining nodes.

If a host is put in maintenance mode, VSAN will wait about 15 to 20 minutes before starting to rebuild the data that was on the host.

In vSphere 5.5, VSAN will not work with the vSphere Auto Deploy feature; VMware is looking to support Auto Deploy in a future release. The best practice is to use an SD card, USB stick or flash on the motherboard for the ESXi boot partition. One could also use disk(s) in the server for the ESXi boot partition, but a key point is that ESXi and VSAN cannot share the same disk.

More information on Virtual SAN is available on the VMware website.


