Did you know VMware Elastic Sky X (ESX) was once called ‘Scaleable Server’?

VMware has been a vendor of server virtualization software for a long time. This story covers the history of VMware ESX(i) and uncovers the various names used over the history of ESX(i).

VMware was founded in 1998. Its first product, VMware Workstation, was released in 1999 and was installed on client hardware.

For server workloads VMware GSX Server was developed; the name is an acronym for Ground Storm X. Version 1.0 of GSX Server became available around 2001. It had to be installed on top of Windows Server or Linux, making GSX a Type 2 hypervisor just like Workstation. The product was targeted at small organizations. Remember, we are talking about 2001!

GSX 2.0 was released in the summer of 2002. The last available version of GSX Server was 3.2.1, released in December 2005. Thereafter GSX Server was renamed 'VMware Server' and made available as freeware. VMware released version 1.0 of VMware Server on July 12, 2006. General support for VMware Server ended in June 2011.

However, VMware realized the potential of server virtualization for enterprises and was working on a Type 1 hypervisor that could be installed on bare metal.

'VMware Scaleable Server' was the first name of the product currently known as ESXi. See this screenshot provided by Chris Romano (@virtualirishman) on his Twitter feed. This must have been around 1999 or 2000.

After a while the name was changed to 'VMware Workgroup Server', around the year 2000. Hardly any reference to these two early names can be found on the internet.

In March 2001 'VMware ESX 1.0 Server' was released. ESX is short for Elastic Sky X. A marketing firm hired by VMware to create the product name believed Elastic Sky would be a good name. VMware engineers did not like that and wanted to add an X to make it sound more technical and cool.

VMware employees later started a band named Elastic Sky, with John Arrasjid as its best-known member.

ESX and GSX were both available for a couple of years, with ESX targeted at enterprises. It was not until around 2005/2006 that ESX gained traction and organizations started to use the product.

ESX had a Service Console: basically a Linux virtual machine that allowed management of the host and its virtual machines. Agents for backup software or other third-party tools could be installed in it.

Development then started on a replacement for ESX, one without the Service Console. In September 2004 the development team showed the first fully functional version to VMware internal staff. The internal name at the time was VMvisor (VMware Hypervisor); this would become ESXi three years later.

At VMworld 2007 VMware introduced VMware ESXi 3.5. Before that the software was called VMware ESX 3 Server ESXi edition, but that version was never made available. This screenshot shows the name 'VMware ESX Server 3i 3.5.0'. ESX and ESXi share a lot of code.

ESXi has a much smaller footprint than ESX and can be installed on flash memory, so it can almost be seen as part of the server. The i in ESXi stands for Integrated.

At the release of vSphere 4.0 (May 2009), 'ESX Server' was renamed simply VMware ESX.

Up to vSphere 4.1, VMware offered customers two choices: VMware ESX, which had the Linux-based Service Console, and ESXi, which had (and still has) a menu for configuring the server. Note that you still have access to a limited command line by pressing Alt-F1.

Since vSphere 5, ESX is no longer available.

Similar to the hypervisor, the management software changed names a couple of times as well. VMware VMcenter is so old that Google cannot find any reference to it; it may have been used only as an internal name. Here is a screenshot taken from here.

On December 5, 2003, VMware released VirtualCenter, which was used to manage multiple hosts (ESX Server 2.1) and virtual machines.

In May 2009 VMware released 'vCenter Server 4.0' as part of vSphere 4.0; vCenter Server was from then on the new name for VirtualCenter. The last version released under the VirtualCenter name was 2.5.

Sources used for this blog:

Wikipedia VMware ESX

vladan.fr VMware ESXi was created by a French guy !!!
VM.blog What do ESX and ESXi stand for?
yellow-bricks.com vmware-related-acronyms/

Some more images of old VMware products are here

Visio Stencil Set for 2014 VMware vSphere and Horizon

Ray Heffer shares a new Visio stencil set for 2014 which contains some popular shapes for VMware Horizon View, Workspace and vSphere. These are not official VMware stencils, but have been created from existing external content (thanks to Maish Saidel-Keesing at TechnoDrone) in addition to using Adobe Photoshop and pulling images from public PDF documentation.

Download at www.rayheffer.com

New free VMware app monitors your vSphere infrastructure remotely

VMware has released a new, free iOS and Android app called VMware vSphere Mobile Watchlist, which allows you to remotely monitor the virtual machines you care about in your vSphere infrastructure from your phone. Discover diagnostic information about any alerts on your VMs using VMware Knowledge Base articles and the web. Remediate problems from your phone using power operations, or delegate the problem to someone on your team back at the datacenter.
It even lets you see a screenshot of a virtual machine's console.

The iOS app requires iOS 7 or later and supports iPad, iPhone and iPod Touch.
The Android app requires Android 4.0.3 or later.

IMPORTANT NOTE: A VMware vSphere installation (5.0 and above) is required to use VMware vSphere Mobile Watchlist. Access to your vSphere infrastructure may need a secure access method like VPN. Contact your IT department for further assistance.


FEATURES:
EASILY CREATE VM WATCHLISTS
Select a subset of VMs from your VMware vSphere VM inventory to tell the app what VMs to track. Use multiple lists to organize these important VMs.

VMS AT A GLANCE
Review the status of these VMs from your device including: their state, health, console and related objects.

SUGGESTED KB ARTICLES
Got an alert on your VM? Let VMware vSphere Mobile Watchlist suggest what KB Articles might help you or search the web to gather more information.

REMEDIATE REMOTELY
Use power operations to remediate many situations remotely from your device.

DELEGATE TO YOUR TEAM
For those situations where you are not able to fix an issue from the device, VMware vSphere Mobile Watchlist will enable you to share the VM and alert information along with any suggested KB articles and other web pages relevant to the current problem. Your colleagues back in the datacenter can use this context to resolve the issue.

SUPPORT:

Support for VMware vSphere Mobile Watchlist is provided via VMware Communities (https://vmware.com/go/vspheremobile) and also included in the support contracts sold with VMware vSphere. If vSphere Mobile Watchlist customers encounter a technical issue, only Support Administrators listed on the support contract for VMware vSphere may log a service ticket with VMware Technical Support. Individual users of the vSphere Mobile Watchlist should contact their internal IT help desk or visit https://vmware.com/go/vspheremobile.

download iOS app here and more info at VMware Community here.
Download the Android app from Google Play

Free conversion of VMware VMs to Hyper-V VMs with 5nine V2V Easy Converter v2.0

5nine Software has released the 2.0 version of its free V2V conversion tool ‘Easy converter’. It is able to convert VMware virtual machines to Hyper-V. Other conversion types are not supported.

New in this 2.0 release is support for Hyper-V 2012 R2 as a target. It also supports simultaneous conversion of multiple VMs.

V2V Converter is a simple-to-use, basic conversion tool targeted at the small and medium business market. During the conversion process the source VMware virtual machine needs to be shut down.

The 2.0 version of 5nine V2V Converter supports conversion of VMware Virtual Machines with Windows Server 2008, Windows 7, Windows Server 2003 (x86 and x64), and most of Advanced Server 2000, Ubuntu and CentOS configurations into VHD and VHDX file formats for Microsoft Hyper-V 2008 R2 SP1 and Microsoft Hyper-V 2012/R2.

Download 5nine V2V Easy Converter v2.0 from the company website at http://www.5nine.com/vmware-hyper-v-v2v-conversion-free.aspx.

Serious issue on Dell EqualLogic firmware v6.0.6 and VMware VMFS (UPDATE)

UPDATE: according to a post on Cormac Hogan's blog this issue is not limited to version 6.0.6; all firmware versions 5 and 6 can have this issue.

VMware released a knowledgebase article on a serious issue. Customers are advised to upgrade to firmware v6.0.6-H2 as soon as possible.

When using Dell EqualLogic Storage arrays, particularly when running PS Series firmware v6.0.6 within your ESXi/ESX environment, you may experience one or more of these symptoms:

  • Unable to power on virtual machines.
  • Virtual machines report as Invalid in the vCenter Server inventory.
  • Virtual machine files appear to be locked even though the affected virtual machine is powered off.
  • VMFS volume may be inaccessible.
  • Messages in the vmkernel log or vmkwarning log files indicate METADATA corruption or Lock Corruption.
  • Attempting to migrate a virtual machine using vMotion fails with the error:
    The operation is not allowed in the current state of the host
In this scenario, there has been no other failure affecting storage in the data center such as:
  • Hard disk failure in the array
  • Controller failure in the array
  • Unexpected power outage
  • Network connectivity issues

More information here.

VMware vSphere 5.5 available for download!

On September 22, 2013, VMware made vSphere 5.5 generally available. VMware ESXi 5.5.0 and vCenter Server 5.5 can be downloaded here.

Documentation can be downloaded here.

The free vSphere Hypervisor is available for download here. Note that there is no longer a limit on the amount of addressable physical memory; before version 5.5, the free hypervisor was limited to 32 GB of physical memory.

The 5.5 release was announced on August 26 during VMworld 2013 US.

What is new in vSphere 5.5 can be read in this post.

Also available for download is:

  • vSphere Replication 5.5
  • vSphere Data Protection 5.5.1
  • vCenter Orchestrator Appliance 5.5.0
  • vCloud Networking and Security 5.5.0
  • vCenter Operations Manager Foundation 5.7.2
  • vSphere Big Data Extensions 1.0
  • vSphere App HA 1.0.0
  • Cisco Nexus 1000V Virtual Ethernet Modules for vSphere 5.5.0
  • VMware vSphere CLI (vCLI)
  • vSphere PowerCLI

Also released on September 22 are:

Storage Replication Adapters for VMware vCenter Site Recovery Manager
VMware vCenter Site Recovery Manager 5.5
VMware vCenter Server Heartbeat 6.6
VMware vCenter Infrastructure Navigator 5.7.0
VMware vCenter Operations Manager Advanced 5.7.2
VMware vCenter Operations Manager Enterprise 5.7.2
VMware vCenter Operations Manager Standalone 5.7.2
VMware vCloud Director 5.5.0
VMware vFabric Application Director 5.2.0
vSphere Storage Appliance 5.5

At the release of vSphere 5.1, VMware announced it would be the last release of the full Windows-based vSphere client. In future releases all features would be available only in the vSphere Web Client.

However, in vSphere 5.5 we still need the Windows-based vSphere C# client: for connecting directly to an ESXi host, for VMware Update Manager, for Site Recovery Manager, and for some third-party solutions.

The vSphere 5.5 client can be downloaded here.


VMworld 2013 breakout sessions recordings available

VMware made recordings of breakout sessions of VMworld 2013 US available.

Recordings are only accessible by paid attendees of VMworld 2013 or by VMworld 2013 Subscription owners. However, the VMworld 2013 General Sessions and all VMworld 2004–2012 sessions & labs are free.

Sessions of VMworld 2013 are at the moment only accessible to VMworld 2013 attendees. After VMworld Europe, those sessions will also be available to VMworld Europe attendees.

VMware vExperts have free access to the recordings. Go to this page to find your name and copy the subscription/redeem code. Log in at vmworld.com, go to vmworld.com/community/subscription/redeem and fill in the subscription/redeem code.

An overview of all recorded sessions can be seen here.

To download a session for offline viewing you can use one of the many tools for downloading video. I use StreamTransport, which is free to use.


VMware vCloud Director is approaching End of Life for Enterprises but remains for service providers

Today Techtarget.com reported in this article that the functionality of VMware vCloud Director will be split between vSphere and vCloud Automation Center.
What many insiders already suspected is now confirmed: vCloud Director has had its day.

Enterprise customers are recommended to use vCAC for future projects. For customers already on vCD, or in cases where vCAC does not cover the requirements, support for vCD 5.5 has been extended.

Indications of the approaching end of life are:

  • the support for VMware LabManager has been extended by an additional year; support now ends in May 2014 instead of May 2013.
  • vCloud Director 5.5 cannot be purchased individually (there is no SKU); it is only available as part of the vCloud Suite.
  • no mention of vCD in either keynote at VMworld 2013; instead, lots of attention for vCAC during the Tuesday keynote.
  • the lack of new features in vCD 5.5, released in August 2013. Site Recovery Manager is still not able to protect VMs running inside tenant environments; see this article for more information.
  • vCAC now being available in three editions of the vCloud Suite, where previously it was in only one. Every edition of the vCloud Suite now has a vCAC edition.

On September 3 VMware published this article, which made the direction VMware is taking with vCD a lot clearer:

vCD has been widely adopted by service providers and enterprises. It has also proven to be a foundational component of service providers' offerings, including the VMware hybrid cloud service known as vCHS. Moving forward, vCD will be even more oriented towards service provider requirements. VMware's enterprise customers, on the other hand, have expressed a strong requirement for a more simplified cloud stack. As a result, VMware will move forward with a plan to converge vCD functionality into the vSphere and vCloud Automation Center (vCAC) product lines. vCAC, in particular, has proven to be particularly well suited to meeting customers' needs for governance and policy combined with self-service and ease of use. This combination of products will provide a simpler solution to enterprises.

On September 4, Mathew Lodge of VMware wrote the lines below in this post:

1) Development of vCloud Director continues at VMware, now 100% focused on the cloud service provider market.
2) vCloud Director will continue to be available in the VMware Service Provider Program (VSPP) and also continues to be a foundational component of vCloud Hybrid Service, VMware’s IaaS offering.
3) The next release of vCloud Director will be version 5.6, in the first half of 2014, available through VSPP to cloud service providers.
4) VMware continues to develop and enhance the vCloud API, to provide API access to new capabilities, and to make the API faster and easier to use.

Troubles for vCD started with VMware's 2012 acquisition of DynamicOps and its Cloud Automation Center 4.5. Until then, vCD and Cloud Automation Center were competing solutions with quite some overlap in features.

VMware later renamed the product vCloud Automation Center and added it to the vCloud Suite.

Some customers ran into limitations of vCD and are looking for alternatives. vCD is not an easy-to-use solution and is suited only for vSphere and for public clouds running VMware software. The infrastructures of service providers and large enterprises are sometimes multi-hypervisor, so vCD is not always the right match.

VMware is probably working hard to make sure the components required in a cloud management platform are compatible with vCAC. For example, vCenter Chargeback is currently not compatible with vCAC.

In this VMware KB article the end of availability of vCD 5.1 is published.

VMware is moving toward more simplified packaging and unified licensing of our cloud stack. Driven by this commitment, we’re implementing an integrated approach to the packaging of all of the essential capabilities for building a software-defined data center.
As part of this key initiative, VMware is making these packaging changes to make this technology more easily accessible:
  • vCloud Director 5.5 will be available only as part of vCloud Suite 5.5.
  • VMware has announced the End of Availability (“EOA”) of the VMware vCloud Director 5.1 for sale as a standalone product effective September 19, 2013.
  • Existing customers can maintain their vCloud Director through either the entitlement program or while converting to vCloud Suite with the Fair Value Conversion Program.

VMware will be extending the Support for vCloud Director 5.5. Generally, as per the VMware Enterprise Application Support Policy, VMware will support the current release of software for 2 years from the general availability of the Major Release, or the latest released version for 12 months. However, VMware will provide extended support for vCloud Director 5.5 with support available for 4 years from general availability. Customers will be able to get telephone and Internet support for vCloud Director 5.5 until their current contract expires or until Q3, 2017, whichever is earlier.

The End of Support date for vCloud Director 5.1 remains as September 10, 2014.

Functionality of vCD will be moved to vCAC and to vSphere.

This VMware slide shows which functions will move to which solution.


VMware will provide tools to migrate and a transition plan.

Jason Nash has written an interesting blog on vCD. Many customers are wondering how VMware will support the transition from vCD to vCAC/vSphere. Many remember the migration from Lab Manager to vCD. VMware customers are also considering non VMware solutions to manage their cloud platform.

The slide below shows the feature overlap of vCAC and vCD. The slide was shown at a recent Gartner presentation by Alessandro Perilli. In this presentation Cloud Management Platforms of VMware and Microsoft are compared. For a report on this presentation see my post here.

Kendrick Coleman posted his thoughts on vCD in a post vCD To Die A Slow and Painful Death after 5.5

More details taken from the VMware blogpost.

As a result of this announcement, VMware is making the following recommendations:

ENTERPRISE CUSTOMERS – VMware recommends that customers use the combination of vCloud Automation Center and the vSphere platform to support their private cloud architectures and use cases. vCD 5.5 is available starting in Q3 if required to support use cases not currently covered by vCAC. Projects already in flight with vCD should also remain in place, as vCD 5.5 support will be extended beyond its normal 2-year window (out to 2017).

SERVICE PROVIDERS – vCD will continue to be available through the VMware Service Provider Program (VSPP) in the cloud bundle and is still the recommended solution for service providers. VMware will continue ongoing development of vCD to meet the specific needs of service providers and will provide further details at a later date.

Update: Mathew Lodge (Vice President Cloud Services of VMware) sent out a couple of Tweets. vCD will continue to be available for Service Providers. About 250 public clouds use vCD at the moment.


Recordings of top 10 VMworld breakout sessions are online!

VMware published recordings of the top10 breakout sessions of VMworld 2013 US online.

All other VMworld sessions will be available for viewing from September 19 for all VMworld attendees and for those who bought a recordings-only subscription.

 

The sessions available now for free viewing are:

 

VSVC4944 – PowerCLI Best Practices – A Deep Dive
BCO5129 – Protection for All – vSphere Replication & SRM Technical Update
STO5715-S – Software-defined Storage – The Next Phase in the Evolution of Enterprise Storage
PHC5605-S – Everything You Want to Know About vCloud Hybrid Service – But Were Afraid to Ask.
NET5847 – NSX: Introducing the World to VMware NSX
VCM7369-S – Uncovering the Hidden Truth in Log Data With vCenter Log Insight
VAPP4679 – Software-Defined Datacenter Design Panel for Monster VM’s: Taking the Technology to the Limits for High Utilisation, High Performance Workloads
EUC7370-S – The Software-Defined Data Center Meets End User Computer
OPT5194 – Moving Enterprise Application Dev/Test to VMware’s Internal Private Cloud- Operations Transformation
SEC5893 – Changing the Economics of Firewall Services in the Software-Defined Center – VMware NSX Distributed Firewall

 

Click here to view the sessions.

 

 

VMworld 2013 US Tuesday keynote online

For some reason VMware decided not to stream the Tuesday keynote. The Monday keynote was streamed, but that one is always focused more on business and strategy and less on tech.

Tuesday keynote was presented by Carl Eschenbach, Kit Colbert and Joe Baguley.

Over 24 hours later the keynote was put online by VMware. Watch the 1.5-hour session here.

A report of what was presented at the Tuesday keynote can be read at the VMGuru.nl site. Alex Muetstege wrote a comprehensive report on what he heard and saw.


vSphere 5.5 Host Power Management

In vSphere 5.5 VMware made enhancements to CPU C-state handling to save on power consumption.

VMware released a performance study titled Host Power Management in VMware vSphere® 5.5

The text below is a summary of that document; download it for the complete report.

vSphere Host Power Management (HPM) is a technique that saves energy by placing certain parts of a computer system or device into a reduced power state when the system or device is inactive or does not need to run at maximum speed. The term host power management is not the same as vSphere Distributed Power Management (DPM) [1], which redistributes virtual machines among physical hosts in a cluster to enable some hosts to be powered off completely. Host power management saves energy on hosts that are powered on. It can be used either alone or in combination with DPM.

 

vSphere handles power management by utilizing Advanced Configuration and Power Interface (ACPI) performance and power states. In VMware vSphere® 5.0, the default power management policy was based on dynamic voltage and frequency scaling (DVFS). This technology utilizes the processor’s performance states and allows some power to be saved by running the processor at a lower frequency and voltage. However, beginning in VMware vSphere 5.5, the default HPM policy uses deep halt states (C-states) in addition to DVFS to significantly increase power savings over previous releases while still maintaining good performance.

HPM Power Policy Options in ESXi 5.5
ESXi 5.5 offers four different power policies that are based on using the processor’s ACPI performance states, also known as P-states, and the processor’s ACPI power states, also known as C-states. P-states can be used to save power when the workloads running on the system do not require full CPU capacity. C-states can help save energy only when CPUs have significant idle time; for example, when the CPU is waiting for an I/O to complete. ESXi 5.5 offers the following power policy options:
• High Performance: This power policy maximizes performance, using no power management features. It keeps CPUs in the highest P-state at all times. It uses only the top two C-states (running and halted), not any of the deep states (for example, C3 and C6). High performance is the default power policy for ESXi releases prior to 5.0.
• Balanced: This power policy is designed to reduce host power consumption while having little or no impact on performance. This is the default power policy since version 5.0. ESXi has used P-states in the Balanced power policy since 5.0. Beginning in ESXi 5.5, we now also use deep C-states (greater than C1) in the Balanced power policy. Formerly, when a CPU was idle, it would always enter C1. Now ESXi chooses a suitable deep C-state depending on its estimate of when the CPU will next need to wake up.
• Low Power: This power policy is designed to save substantially more power than the Balanced policy by making the P-state and C-state selection algorithms more aggressive, at the risk of reduced performance.
• Custom: This power policy starts out the same as Balanced, but it allows individual parameters to be modified.
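The trade-offs behind these four policies can be captured in a small decision sketch. This is purely my own illustration (the function name and its inputs are invented, not VMware tooling); it just encodes the reasoning from the policy descriptions above:

```python
# Illustrative sketch only -- not a VMware API. It encodes the policy
# trade-offs described above: High Performance avoids P-state scaling and
# deep C-states, Low Power is more aggressive with both, and Balanced is
# the default middle ground since vSphere 5.0.
def suggest_power_policy(latency_sensitive: bool, mostly_idle: bool) -> str:
    if latency_sensitive:
        # Keeps CPUs in the highest P-state; only running/halted C-states.
        return "High Performance"
    if mostly_idle:
        # Deep C-states pay off only when CPUs have significant idle time.
        return "Low Power"
    # Default policy: P-states plus, beginning in ESXi 5.5, deep C-states.
    return "Balanced"
```

In practice the policy is simply selected per host in the vSphere Client; the sketch only makes the selection logic explicit.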

 

What is new in VMware Site Recovery Manager 5.5

This post is part of a series of posting on the VMworld 2013 announcements. See this post for an overview of what has been announced at VMworld 2013.

At VMworld 2013 VMware announced Site Recovery Manager 5.5.
Part of SRM is the ability to replicate on a per-VM basis. Information on what is new in vSphere Replication can be found here.

VMware vCenter Site Recovery Manager (SRM) is a business continuity and disaster recovery solution that helps you to plan, test, and run the recovery of virtual machines between a protected vCenter Server site and a recovery vCenter Server site.

You can configure SRM to work with several third-party disk replication mechanisms by configuring array-based replication. Array-based replication surfaces replicated datastores to protect virtual machine workloads. You can also use host-based replication by configuring SRM to use VMware vSphere Replication to protect virtual machine workloads.

You can use SRM to implement different types of recovery from the protected site to the recovery site.

What is new:

This information is based on VMware vCenter Site Recovery Manager 5.5 Release Candidate | 19 JUL 2013 | Build 1228390.

  • Ability to test your DR non-disruptively from any site, and view both sites
  • SRM 5.5 allows you to protect virtual machines with disks that are larger than 2TB
  • support for Windows Server 2012 for installation of SRM
  • New configuration option to support vSphere Replication
  • Storage DRS and Storage vMotion are supported when moving virtual machines within a consistency group.
  • Protect virtual machines in Virtual SAN environments by using vSphere Replication. You can use Virtual SAN datastores on both the protected site and on the recovery site.
  • Preserve multiple point-in-time (MPIT) images of virtual machines that are protected with vSphere Replication. Advanced settings include an option to recover all vSphere Replication PIT snapshots.
  • Protect virtual machines that reside on vSphere Flash Read Cache storage. Flash Read Cache is disabled on virtual machines after recovery.
  • SRM 5.5 no longer supports IBM DB2 as the SRM database, in line with the removal of DB2 as a supported database for vCenter Server 5.5.

SRM 5.5 still needs the C# (full vSphere) client for management; there is no support for the vSphere Web Client yet.
The operational limits for using SRM 5.5 with vSphere Replication 5.5 are the same as for using SRM 5.1 with vSphere Replication 5.1.

•MPIT retention is turned off by default, but can be enabled in the advanced settings within SRM. This is the default behaviour because only the most recent point in time will have any SRM failover customizations (scripts, network changes, etc.) applied to it during failover. If the administrator reverts to an earlier snapshot, these changes will be lost. Enable this setting in the advanced features of SRM if retention of MPIT is desired.
•Compatibility with Storage vMotion of primary objects with vSphere Replication is retained when using SRM, completely transparently. There is no real restriction on where or when users may migrate VMs.
•SRM has always supported multiple vSphere Replication servers, but be aware that topologies are more restrictive when using SRM than when using standalone vSphere Replication. That is, VR supports many topologies depending on where VR appliances and VR servers are deployed, while SRM still supports only 1-to-1 pairing or many-to-1 shared recovery.
•VSAN support is also maintained in SRM if using vSphere Replication.
•Migrating VR-based VMs with Storage DRS or Storage vMotion is fully supported within SRM as well.

Second, support for Storage vMotion has been expanded to include migration of VMs within a consistency group on an array, when using array-based replication and vCenter 5.5 with vSphere 5.1 and above *only*:

•If VMs are moved out of a disk consistency group the replicated VMX file may not be created in the right location rapidly enough, causing the VM to be unrecoverable
•Therefore Datastore Clusters *must* be made and *must* contain only the datastores provided by the consistency group on the array, i.e. each datastore cluster may contain only datastores from the same consistency group on the array.
•Each datastore in the consistency group will have the same availability characteristics (RPO, speed, etc.) and therefore each datastore in the datastore cluster/pod will have the same characteristics.
•When that is the case, storage vMotion and storage DRS are supported within the datastore cluster/pod.  This will ensure a valid VMX and VMDK file is always present and available for recovery.
•SRM will scan all datastores in a protection group for a valid VMX file for any given VM.  If a VM is migrated on the primary site, the VMX will be created on the new datastore and deleted on the old.  This means the replication of that VMX will happen and the deletion of the old one in parallel with the action on the primary site.
•This means the migration may not have completed when failover occurs, in which case we can still use the old VMX, or it will have completed in which case we can use the new one.  In any case the VM remains recoverable as long as consistency groups are used.
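The consistency-group rule above can be sketched as a simple validation check. This is my own illustration (the datastore names and the mapping are hypothetical, and this is not SRM code): a datastore cluster is only safe for Storage DRS/Storage vMotion with array-based replication when every datastore in it is backed by the same array consistency group.

```python
def cluster_is_valid(cluster_datastores, consistency_group_of):
    """Return True when all datastores in the cluster belong to a single
    array consistency group (the requirement described above).

    cluster_datastores: list of datastore names
    consistency_group_of: dict mapping datastore name -> consistency group
    """
    groups = {consistency_group_of[ds] for ds in cluster_datastores}
    return len(groups) == 1  # exactly one consistency group backs the cluster

# Hypothetical layout: DS1 and DS2 are replicated in group CG-A, DS3 in CG-B.
layout = {"DS1": "CG-A", "DS2": "CG-A", "DS3": "CG-B"}
```

A cluster containing DS1 and DS2 passes the check; adding DS3 would break the guarantee that a consistent, recoverable set of VMX/VMDK files always exists on the recovery site.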

Further explanation on Storage vMotion:

“Fundamentally what we’re doing is now scanning all the directories in the replica datastores of a datastore group in a PG on the recovery site.  We look for the vmx everywhere now instead of just looking for the vmx in the last place it was put when the vmx was protected.
So the caveat is the VM must be placed in the primary site on a storage cluster that contains only datastores that contain only disks/luns/disk groups that are part of the same consistency group on the backend.

It is a manual process to set this up and there is no error checking, so the administrator must know the storage layout well.

If all the replicated files reside in a SRM protection group backed by a storage cluster backed by datastores backed by only disk groups/LUNs/etc. in a consistency group, then you can manually migrate or turn on storage DRS for those VMs.

This is because we now look for the files for that VM in all directories associated with that protection group.
Since those disks in the consistency group are all replicated with the same schedule and write order fidelity is maintained, we can therefore allow them to move because there will always be a recoverable set of files either at the source location (if a crash/recovery occurs during migration) or at the target location (if it completes successfully before the crash/recovery).

This is the scenario we’ve had to avoid in the past, an incomplete migration leading to the deletion of the primary VMX before the replication engine has placed the new VMX in the appropriate directory at the recovery site, or before SRM has been notified about the new location at the recovery site.

Note there are no checks to ensure a VM doesn't move somewhere incorrect, so there is still the risk of moving it into an unprotected area or outside of the consistency group if administrators are not careful, and that can still lead to a non-recoverable VM. We do send alerts if vMotion has moved it out of protection, but do not stop the migration.

Caveats and Limitations

  • SRM 5.5 RC does not support upgrading from a previous release. Only a fresh installation of SRM 5.5 RC is supported.
  • No storage replication adapters (SRA) are provided for SRM 5.5 RC. Existing SRM 5.1 SRAs should work, but SRM 5.5 RC does not officially support array-based replication. Protecting virtual machines by using vSphere Replication is supported.
  • Using vSphere Replication with VMware Virtual SAN environments is supported but is subject to certain limitations in this release.

    Using SRM and vSphere Replication to replicate and recover virtual machines on VMware Virtual SAN datastores can result in incomplete replications when the virtual machines are performing heavy I/O. ESXi Server can stop unexpectedly.

    SRM and vSphere Replication do not support replicating or recovering virtual machines to root folders with user-friendly names on Virtual SAN datastores. These names can change, which causes replication errors. vCenter Server does not register virtual machines from such paths. When selecting Virtual SAN datastores, always select folders with UUID names, which do not change.

  • SRM 5.5 RC offers limited support for vCloud Director environments. Using SRM to protect virtual machines within vCloud resource pools (virtual machines deployed to an Organization) is not supported. Using SRM to protect the management structure of vCD is supported. For information about how to use SRM to protect the vCD Server instances, vCenter Server instances, and databases that provide the management infrastructure for vCloud Director, see VMware vCloud Director Infrastructure Resiliency Case Study.
  • SRM does not accept certificates signed using MD5RSA signature algorithm.
  • Windows Server 2003 is not a supported platform for SRM Server, although the SRM installer does allow you to install SRM on Windows Server 2003.

In the future SRM will be deployed as a virtual appliance.

This post is part of a series of postings on the VMworld 2013 announcements. See this post for an overview of what has been announced at VMworld 2013.

What is new in VMware vCenter Orchestrator 5.5

At VMworld 2013 VMware announced VMware vCenter Orchestrator 5.5

This post is part of a series of postings on the VMworld 2013 announcements. See this post for an overview of what has been announced at VMworld 2013.

VMware vCenter Orchestrator is an IT Process Automation engine that helps automate your cloud and integrate the VMware vCloud Suite with the rest of your management systems. Orchestration saves time, removes manual errors, reduces operating expenses, and simplifies IT management. VMware vCenter Orchestrator allows administrators and architects to develop complex automation tasks within the workflow designer, then quickly access and launch workflows directly from within the vSphere Client or via various triggering mechanisms.

With this release, vCenter Orchestrator is greatly optimized for growing clouds because of significant improvements in scalability and high availability. Workflow developers can benefit from a more simplified and efficient development experience provided by the new debugging and failure diagnostic capabilities in the vCenter Orchestrator client.

• New Workflow Debugger Workflow developers are now able to re-run their workflows in debug mode without having to re-enter the last known values for the workflow input parameters. The user inputs are automatically stored and populated for the subsequent workflow execution.

• New Workflow Schema Auto-scaling and auto-placing capabilities have always been a great strength of the vCenter Orchestrator Client. In addition, experienced workflow developers can now also use non-stick placement while designing the workflow activity diagram.

• New Scripting API Explorer Consistent navigation is an essential component of overall workflow development efficiency. Based on this, the Scripting API Explorer is now enhanced with out-of-the-box browsing history. The new Back button, available in the explorer, allows workflow developers to navigate, in reverse chronological order, through the history of scripting objects they have recently worked with.

• New Security Improvements The new build of the vCenter Orchestrator Appliance contains a complete set of security improvements, including Operating System updates and security hardening script enhancements.

• Improved scalability and high availability Datacenter architects are now able to plan vCenter Orchestrator deployments with cloud scalability in mind, by using the out-of-the-box clustering capabilities of the Orchestrator platform. The new Orchestrator cluster mode provides much greater availability of the engine and enables dynamic scale-up and scale-down of orchestration capacity when used in conjunction with an external load balancer. If an Orchestrator server were to become unavailable mid-way through a workflow run, another Orchestrator node can now take over and complete the workflow with no service interruption.

• More efficient workflow development experience The new debugging feature enables workflow developers to troubleshoot and test their automated use cases quickly and easily, making for a more efficient development experience. Workflow developers are now able to set breakpoints on workflow activities, step into them and watch variable values at various steps of the debugging procedure. In addition, they can also resume a workflow from a failed state for the subsequent execution of their custom workflows. Finally, new libraries of workflow icons also help make the vCenter Orchestrator client experience more intuitive and customizable than ever.

• Improved integration with the vSphere Web Client Besides the auto-discovery of vCenter Orchestrator instances in the vSphere Web Client, virtual infrastructure administrators can now manage and monitor vCenter Orchestrator instances, or add them on demand, directly from the vSphere Web Client.

• REST API enhancements Release 5.5 facilitates the use of the vCenter Orchestrator REST API thanks to enhancements in JSON support and simplified integration with vCenter Single Sign-On. The Orchestrator environment can now be programmatically configured to more easily deploy Orchestrator instances, not only for test and development purposes but also to scale up automation capacity as demand increases. Beyond this, it also provides the capability to leverage Orchestrator workflows in a localized environment when dedicated property files are used for the specific language.

More info on the new features here.

Introduction of VMware vSphere Flash Read Cache

This post is part of a series of blogpostings on VMworld 2013 announcements. See here for a complete overview of all announcements.

vSphere Flash Read Cache is a new vSphere feature introduced in version 5.5. The feature was previously known in the vSphere Beta as Virtual Flash or vFlash. It aggregates local flash devices to provide a clustered flash resource for VM and vSphere host consumption (Virtual Flash Host Swap Cache).

VMware vSphere 5.5 introduces new functionality to leverage flash storage devices on a VMware ESXi host. The vSphere Flash Infrastructure layer is part of the ESXi storage stack for managing flash storage devices that are locally connected to the server. These devices can be of multiple types (primarily PCIe flash cards and SAS/SATA SSD drives) and the vSphere Flash Infrastructure layer is used to aggregate these flash devices into a unified flash resource. You can choose whether or not to add a flash device to this unified resource, so that if some devices need to be made available to the virtual machine directly, this can be done.

The flash resource created by the vSphere Flash Infrastructure layer can be used for two purposes: (1) read caching of virtual machine I/O requests (vSphere Flash Read Cache) and (2) storing the host swap file. This paper focuses on the performance benefits and best practice guidelines when using the flash resource for read caching of virtual machine I/O requests.

 

This feature is available in vSphere 5.5 Enterprise Plus edition only!

Flash Read Cache provides features very similar to the cache located inside storage arrays. The problem with SAN-based cache is that it sits behind the array controllers: controller performance can limit how much of the cache’s potential is realized. In addition, application requests for data must traverse several network hops (SAN switches) before reaching the cache, which adds latency.

The idea of Flash Read Cache is to decouple performance (IOPS) from capacity (GB). Performance is brought to the server while capacity is still on the SAN, or locally when VMware VSAN is used.
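To make the read-caching idea concrete, here is a toy Python model of a server-side write-through read cache in front of a slower backend: reads are served from local flash when possible, while writes always go straight to the backing storage so the cache never holds dirty data. The class, names and LRU policy are made up for illustration; this is not how the vSphere Flash Infrastructure layer is actually implemented.

```python
from collections import OrderedDict

class WriteThroughReadCache:
    """Toy write-through read cache: reads hit local flash when possible,
    writes always go straight to the backing store (no dirty data)."""
    def __init__(self, backend, capacity_blocks):
        self.backend = backend            # dict: block -> data (the "SAN")
        self.capacity = capacity_blocks
        self.cache = OrderedDict()        # LRU order, oldest first
        self.hits = self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.hits += 1
            self.cache.move_to_end(block)  # refresh LRU position
            return self.cache[block]
        self.misses += 1
        data = self.backend[block]         # slow path: fetch from backend
        self.cache[block] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return data

    def write(self, block, data):
        self.backend[block] = data         # write-through: backend first
        if block in self.cache:
            self.cache[block] = data       # keep any cached copy coherent
            self.cache.move_to_end(block)

san = {b: "data-%d" % b for b in range(100)}
cache = WriteThroughReadCache(san, capacity_blocks=4)
for b in [1, 2, 1, 1, 3]:
    cache.read(b)
print(cache.hits, cache.misses)  # 2 hits (repeat reads of block 1), 3 misses
```

The repeat reads of block 1 are served from the cache; only first-time reads pay the trip to the backend, which is exactly the I/O offload the feature aims at.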

Some of the key features of Flash Read Cache are:

  • Hypervisor-based software-defined flash storage tier solution.
  • Aggregates local flash devices to provide a clustered flash resource for VM and vSphere hosts consumption (Virtual Flash Host Swap Cache)
  • Leverages local flash devices as a cache
  • Integrated with vCenter, HA, DRS, vMotion
  • Scale-Out Storage Capability: 32 nodes

Why buy capacity (more spindles) just to get performance, as is done today in traditional SANs? Flash Read Cache and other server-side flash software solutions use server-side caching to minimize the I/O traffic load on central storage.

vFlash-architecture

Benefits of Flash Read Cache are:

  • Cache is high-speed memory that can be either a reserved section of main memory or a storage device.
  • Supports Write Through Cache Mode
  • Improve virtual machines performance by leveraging local flash devices
  • Ability to virtualize suitable business critical applications

This is a server-based flash tier. One of the main customer benefits is acceleration of business critical applications. Examples of applications that can benefit from Flash Read Cache are Oracle, Exchange Server, SQL Server, IBM DB2 and SharePoint.

Another use case for Flash Read Cache is VDI.

Hardware requirements: an SSD is needed for the read cache. Not every node in a vFlash-enabled cluster needs to have SSD storage.

Management of Flash Read Cache can only be done using the vSphere Web Client.

Until vSphere 5.5, VMware did not utilize local SSD devices for VMs; SSDs could only be used by ESXi hosts for swap. ESXi has the ability to utilize up to 4TB of vSphere Flash Resource for vSphere Flash Host Swap caching purposes.

It virtualizes server flash into a resource pool, just like CPU and memory.

Supported SSD devices will be listed in the Hardware Compatibility List. As a guideline, use any eMLC-class or better flash device with reasonable reliability (at least 10 writes/cell per day) and performance (~20K random write IOPS).

Applications and virtual machines are unaware they are using flash storage. The flash storage sits between the virtual machine and the datastore presented by the SAN or local storage. It is very much like how SANs use DRAM or SSD internally to provide more IOPS.

Flash Read Cache is a platform which is open to third-party vendors.

The maximum vFlash capacity per host is 2TB. vFlash cannot use SSD drives that are already in use by VSAN. A Flash Read Cache cluster can scale to a maximum of 32 nodes.

Flash based devices are pooled into a new file system called VFFS.

Flash Read Cache resources can be allocated to virtual machines and to VMDKs.

Once a Flash Read Cache resource has been created, its capacity is available for consumption by virtual machines as well as by hosts for swap cache.

Flash Read Cache works best with read-intensive workloads that have a high rate of repeat accesses to the same blocks (locality of access). Ideally, the working set accessed via read I/Os should fit within the cache to ensure the maximum cache hit rate.
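As a rough back-of-the-envelope sketch of that sizing guideline: if the read working set fits in the cache, steady-state reads hit almost entirely; otherwise the hit ratio is bounded by the fraction of the working set that fits. The function below is a simplification assuming uniform re-access (real workloads are rarely uniform), not a formula from VMware.

```python
def expected_fit(working_set_gb, cache_gb):
    """Rough upper bound on steady-state read hit ratio: the fraction
    of the read working set that fits in the cache, capped at 1.0.
    Assumes uniform re-access across the working set."""
    return min(1.0, cache_gb / float(working_set_gb))

print(expected_fit(40, 100))   # working set fits entirely: 1.0
print(expected_fit(200, 100))  # only half fits: 0.5
```

The takeaway: doubling cache size only helps while the working set does not yet fit; past that point extra cache capacity buys nothing.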

A VM can have reservations configured for Flash Read Cache. However, a vMotion of such a VM will fail if the destination host does not offer Flash Read Cache.

A VM which is using a VSAN datastore (which already has a read cache and write cache by design) cannot be enabled for Flash Read Cache.

Flash Read Cache and HA work in coordination with each other. When HA is invoked for a VM, it will be restarted on a host with sufficient resources to honor the Flash Read Cache reservation for the VM. In vSphere 5.5, Flash Read Cache only supports hard reservations. Thus, a VM with a Flash Read Cache reservation that cannot be satisfied on another host will not be restarted.

In vSphere 5.5, DRS can manage virtual machines that have Flash Read Cache reservations.

Flash Read Cache capacity appears as a statistic that is regularly reported from the host to the vSphere Web Client.

Each time DRS runs, it uses the most recent capacity value reported.

You can configure one Flash Read Cache resource per host. This means that during virtual machine power-on, DRS does not need to select between different vFlash resources on a given host. DRS selects a host that has sufficient available flash capacity to start the virtual machine. If DRS cannot satisfy the Flash Read Cache reservation of a virtual machine, it cannot be powered on. DRS treats a powered-on virtual machine with a Flash Read Cache reservation as having a soft affinity with its current host. DRS will not recommend such a virtual machine for vMotion except for mandatory reasons, such as putting a host in maintenance mode or reducing the load on an over-utilized host.
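The placement rule for hard vFlash reservations can be sketched in a few lines of Python: a VM may only start on a host whose free flash capacity covers its reservation, and if no host qualifies the VM is simply not powered on. Host names, the data structure and the tie-breaking rule (most free flash wins) are hypothetical, not DRS’s actual algorithm.

```python
def pick_host(hosts, flash_reservation_gb):
    """Return the name of a host that can honor the VM's hard Flash Read
    Cache reservation, or None if no host qualifies (VM stays off)."""
    candidates = [h for h in hosts
                  if h["free_flash_gb"] >= flash_reservation_gb]
    if not candidates:
        return None  # reservation cannot be satisfied anywhere
    # Illustrative tie-breaker: prefer the host with the most free flash
    return max(candidates, key=lambda h: h["free_flash_gb"])["name"]

hosts = [
    {"name": "esx01", "free_flash_gb": 10},
    {"name": "esx02", "free_flash_gb": 120},
    {"name": "esx03", "free_flash_gb": 60},
]
print(pick_host(hosts, 50))   # esx02
print(pick_host(hosts, 500))  # None: no host can honor the reservation
```

The same hard-reservation check explains the HA behavior above: a failed-over VM whose reservation cannot be satisfied on any surviving host is not restarted.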

Comparable solutions are Fusion-io ioTurbine, Infinio Systems Inc.’s Accelerator, QLogic’s Mt. Rainier FabricCache and PernixData FVP. PernixData goes one step further and creates a clustered pool of server-side flash across multiple servers, which can be used to accelerate both reads and writes. To compare, a PernixData FVP license costs about $7,500 per host.

Duncan Epping has a post about vSphere Flash Read Cache here.
VMware published a whitepaper titled Performance of vSphere Flash Read Cache in VMware vSphere 5.5 – Performance Study

VMworld sessions on vSphere Flash Read Cache:  VSVC5603 – Extreme Performance Series: Storage in a Flash. The outline of this session is:

Flash-based storage has been gaining traction in the enterprise storage world and almost every major storage vendor has come up with new products that leverage flash technology in their respective storage systems. While storage array-side enhancements with flash is interesting, embracing flash technology natively at the server can pave the way for more holistic management and performance optimization of resources. Servers that make use of flash storage technology to improve overall IO performance do ease storage management by means of software-defined storage. Come to this session and explore flash technologies, practices and performance.

An introduction to VMware AppHA

This post is part of a series of blogpostings on VMworld 2013 announcements. See here for a complete overview of all announcements.

In VMware vSphere 5.5 VMware introduced a new feature called AppHA. AppHA is part of VMware HA and monitors the status of services running in the guest. This is done using vFabric Hyperic agents, which need to be installed in each guest. You also need an AppHA virtual appliance and a Hyperic server.

vSphere App HA provides application-aware high availability. While vSphere HA protects your virtual machines, it is not application aware and cannot detect and remediate software failures. vSphere App HA provides application protection by detecting application availability issues and automatically remediating them.

It is possible to set policies like:

  • When an application fails, restart the application service
  • If the service restart fails, reset the VM
  • Trigger a vCenter Server alarm
  • Send an email notification
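The escalation logic of such a policy can be sketched as a small Python function: try a service restart first, reset the VM if that fails, and raise an alarm plus an email notification either way. The callables and the return values are hypothetical stand-ins for App HA’s actual remediation actions, purely to show the control flow.

```python
def remediate(vm, restart_service, reset_vm, raise_alarm, send_email):
    """Toy escalation policy: service restart first, VM reset as the
    fallback, then always alarm and notify. Returns the actions taken."""
    actions = []
    if restart_service(vm):
        actions.append("service restarted")
    else:
        actions.append("service restart failed")
        reset_vm(vm)                     # fallback: hard-reset the VM
        actions.append("vm reset")
    raise_alarm(vm)                      # vCenter alarm for visibility
    send_email(vm)                       # notify the administrator
    actions += ["alarm raised", "email sent"]
    return actions

# Simulated run where the in-guest service restart fails
result = remediate(
    "sql01",
    restart_service=lambda vm: False,
    reset_vm=lambda vm: None,
    raise_alarm=lambda vm: None,
    send_email=lambda vm: None,
)
print(result)  # ['service restart failed', 'vm reset', 'alarm raised', 'email sent']
```

In the real product these steps are configured as a policy per service, not coded by hand; the sketch only shows the order of escalation.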

Main features

1. Autodiscover application services and their availability status
2. Simple 3 click creation of remediation policies
3. Safe VM restart (through HA API) of VMs in case of application restart failure
4. Integration with VC alarms to provide visibility to application downtime

The services supported by vSphere App HA are limited at the moment:

  • MSSQL 2005, 2008, 2008R2, 2012
  • Tomcat 6.0, 7.0
  • TC Server Runtime 6.0, 7.0
  • IIS 6.0, 7.0, 8.0
  • Apache HTTP Server 1.3, 2.0, 2.2

vsphere-App HA architecture
