Veeam announces Backup as a Service offering Veeam Cloud Connect

Any organization using IT knows: backup data should be stored offsite. This is done using replication or removable media like backup tapes. These are typically collected by a tape-handling firm, stored at the homes of employees or in garages, or sometimes not taken offsite at all.

Cloud is a perfect place for storing backup data. No need for tape streamers, tape handling, second locations etc. However this has some issues as well. Restoring a lot of data over the internet can take a long time. And who has access to the data stored in the cloud?

Today Veeam announced a new technology: Veeam Cloud Connect. It enables service providers to offer cloud backup to their customers. It offers a shared repository for backup data created by Veeam Backup & Replication. It also offers SSL encryption of backup traffic.

At the customer site there is no need to install additional software. A customer simply selects a service provider, enters the username and password of the service and is then ready to use the cloud as a backup target.

Cloud Connect will become available at the same time Veeam Backup & Replication v8 reaches general availability.

Veeam Cloud Connect is explained in detail in this post. 

VMware releases patch for vSphere 5.5 Update 1 NFS All Paths Down condition

After installation of VMware vSphere 5.5 Update 1, connectivity to NFS storage could randomly be lost, with volumes reporting an All Paths Down state. This issue was noticed around mid-April and reported here. VMware documented the issue in KB article 2076392: Intermittent NFS APDs on VMware ESXi 5.5 U1.

On June 10 VMware released a patch for this issue, as described in KB 2077360. The patch can be downloaded using VMware Update Manager or from the VMware download page.
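The patch can also be applied from the command line. Below is a minimal PowerCLI sketch, not an official procedure; the vCenter name, host name and bundle file name are examples, so use the exact bundle referenced in KB 2077360:

Connect-VIServer -Server vcenter.example.local
$esx = Get-VMHost -Name esx01.example.local
Set-VMHost -VMHost $esx -State Maintenance          # patching requires maintenance mode
Install-VMHostPatch -VMHost $esx -LocalPath 'C:\patches\ESXi550-201406001.zip'
Set-VMHost -VMHost $esx -State Connected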

Synchronized identity (Dirsync) can now be used as backup for Federated identity in Office 365 and Azure

In May 2014 Microsoft made available an important update for federated authentication to Office 365 and Microsoft Azure. This update is not widely known yet, so I hope this blog post helps explain its benefit!

Let's start with some basics about authentication for Microsoft cloud-based services.

Authentication for Microsoft online cloud-based services like Office 365 and Microsoft Azure can be performed using three models:

  • Cloud Identity (also known as Standard authentication) using cloud based username and password stored in Windows Azure Active Directory. Usernames are typically useraccount@<organization>.onmicrosoft.com
  • Synchronized Identity (also known as Managed Authentication) allowing synced usernames and passwords with on-premises Active Directory as source
  • Federated identity (Federated Authentication) allowing Single Sign-On to Microsoft Office 365 & Azure through a federation with an on-premises Active Directory

A very good post by Microsoft on identity is titled Choosing a sign-in model for Office 365.

Federated Identity enables users who are authenticated to on-premises Active Directory to access Office 365 and Azure without additional authentication (Single Sign-On). To enable such a scenario, several servers are required to be installed in the on-premises environment. In many cases redundancy of roles like Active Directory Federation Services (ADFS) is required. The reason is that if ADFS is not available, users will not be able to authenticate to Office 365 and Azure. Active Directory is the single source for authentication in this scenario.

This has changed now! Since May 2014 Synchronized Identity can be used as a backup in case Federated Identity is not available because of a failure (server crash, failed internet connection to the on-premises environment, power failure, etc.).

Introducing Synchronized Identity

Synchronized Identity enables users to use their AD username and password to sign in to Office 365, Azure etc.
Synchronized Identity is enabled by installing a free Microsoft tool called Dirsync. Dirsync synchronizes user accounts plus passwords from AD to Windows Azure Active Directory (WAAD), with the on-premises Active Directory as the source. WAAD is a multi-tenant implementation of Active Directory; it is used for authentication services by Office 365 and Microsoft Azure. Dirsync does not transfer the password itself but a hash of it. Dirsync is very easy to install and straightforward to use. There are hardly any issues reported.

Dirsync, however, does not provide Single Sign-On.

To make life easier for users, Federated Identity can be used. Configuration of this model is not as easy as using Dirsync. Several servers are required to be installed on-premises. You need dedicated IPs, proxies, certificates and load balancers, which are not free, and you have to set quite a few security policies.
Before Microsoft added the password sync option to Dirsync, the only user-friendly way to authenticate to Office 365 was using ADFS.

Dirsync is also required for Federated Identity. At sign-in a check is performed to verify that a valid user account is used for authentication. With Federated Identity the password is not checked against WAAD: WAAD trusts the on-premises ADFS as the 'password' provider.

You now understand that availability of ADFS is critical! If it does not work, your users will not be able to authenticate, as there is no way to check the password. Even when the hashed password of the user is stored in WAAD, it will not be used as long as the domain is configured for Federated Identity.

However, in May 2014 Microsoft made a change in WAAD. User accounts can now be configured for BOTH Single Sign-On as well as password synchronization. This enables a fallback if ADFS Federated Identity is not available.

How to perform a temporary switch to Synchronized Identity

WAAD will not automatically fall back to Synchronized Identity when Single Sign-On is not possible because of a failure in connecting to ADFS. Administrators will have to manually switch back to Synchronized Identity.

The temporary switch from Federated Identity to Synchronized Identity takes two hours, plus an additional hour for each 2,000 users in the domain. For example, a domain with 10,000 users takes roughly seven hours.

You will need to use the Windows Azure Active Directory Module for Windows PowerShell to switch a namespace from Federated (Single Sign-On) to Managed (password sync).

Use this PowerShell command for a temporary switch to Synchronized Identity:
Convert-MsolDomainToStandard -DomainName <federated domain name> -SkipUserConversion $true -PasswordFile c:\userpasswords.txt
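The command above is the core step. A fuller sequence could look like the sketch below, assuming the MSOnline (Windows Azure Active Directory) PowerShell module is installed; the domain and server names are examples:

Connect-MsolService          # sign in with tenant administrator credentials
Get-MsolDomain               # check the Authentication column: Federated or Managed
Convert-MsolDomainToStandard -DomainName contoso.com -SkipUserConversion $true -PasswordFile c:\userpasswords.txt

Once ADFS is healthy again, the domain can be switched back to Federated Identity with Convert-MsolDomainToFederated, run against the ADFS server:

Set-MsolADFSContext -Computer adfs01.contoso.local
Convert-MsolDomainToFederated -DomainName contoso.com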

For detailed instructions read this post.

 

Veeam Backup & Replication 7.0 Patch 4 now available

Veeam released Patch 4 for Veeam Backup & Replication 7.0.

This patch contains new features and resolved issues.

Make sure you are running version 7.0.0.690, 7.0.0.715, 7.0.0.764, 7.0.0.771, 7.0.0.833, 7.0.0.839 or 7.0.0.870 prior to installing this patch. You can check this under Help | About in Veeam Backup & Replication console.

After upgrading, your build will be version 7.0.0.871.

The release notes are here. A link to the download is at the bottom of the release notes.

New features and enhancements:

VMware Virtual SAN (VSAN)

  • In addition to adding basic support (as provided by other vendors), the intelligent load-balancing engine was enhanced to account for VSAN specifics. As a result, for each VM the job will pick the backup proxy running on the VSAN cluster node with most of the virtual disk data available locally. This significantly reduces backup traffic on the VSAN cluster network, resulting in the minimum possible impact of backup activities on the production environment.

Microsoft SQL Server 2014

  • Added support for Microsoft SQL Server 2014 both as the protected guest workload (including application-aware processing functionality), and the back-end database for backup and Enterprise Manager servers.

License key auto update

  • Added an automated license key update option to the License Information dialog. With auto-update enabled, the product will periodically check the Veeam licensing server for an updated license key, and download and install the key automatically as soon as it becomes available. This feature is particularly useful to service providers and subscription-based customers, as it removes the need to download and install the license key manually each time a license extension is purchased.

Backup Copy

  • The maximum allowed amount of restore points in the Backup Copy job has been increased to 999.
  • Backup Copy will now resume the transfer after a network connection drop to a Linux-based backup repository.
  • Backup Copy jobs should no longer report errors during the days when source backup jobs are not scheduled to run – for example, during the weekend.

Hyper-V

  • Added support for certain Hardware VSS Providers that previously could not be detected by the storage rescan process, and as such could not be used by the jobs.
  • Jobs will now retry failed snapshot creation when another shadow copy of the same volume is already in progress, instead of immediately failing to process a VM.

Zerto releases Virtual Replication 3.5

Today Zerto released Zerto Virtual Replication (ZVR) 3.5. Virtual Replication does per-VM replication and recovery orchestration and is targeted at enterprises and service providers using VMware vSphere. Especially in the US, a growing number of service providers use Zerto VR to offer Disaster Recovery as a Service (DRaaS) to their customers.

Zerto partners with Cisco. Cisco is using ZVR in their blueprints for service providers. Cisco customers can use these blueprints to offer DRaaS services to their customers to enhance their business model.

New major features in ZVR 3.5 are:

  1. Offsite backup
  2. VMware VSAN support
  3. new action APIs
  4. alerts and notification enhancements
  5. tolerant failover

This release also includes fixes.

Virtual Replication 3.5 is available for immediate download from the Zerto portal.


A video explaining and demoing the features can be seen here.

New in ZVR 3.5 is the ability to do offsite backup. An offsite backup is basically a copy of a regular replica stored in a safe place for a longer period.
The name 'Offsite Backup' can be a bit confusing. The functionality can be compared to NetApp SnapVault. Basically Offsite Backup is a vault (or backup) for replicas. The purpose is to make sure the replica can be retrieved over a longer time window.

The regular ZVR replica has a maximum protection window of 5 days. Offsite backup allows restoring data from up to one year back in time. Offsite backups can be stored on SMB/CIFS shares. Storing offsite backup data in Amazon cloud storage is supported as well, using a tool which presents Amazon storage as a file share over SMB.

The offsite backup is wrapped in a so-called Zerto Backup Package. The package contains a full backup of all virtual machines that are part of a Zerto Virtual Protection Group. The package is portable: it does not need the exact Zerto Virtual Manager installation or version to be restored.

Mind that offsite backup is not a replacement for regular backup software. It does not offer deduplication, is not an archiving solution and cannot recover single files.

Use cases for Offsite backup are:

  • compliance
  • archiving of test/dev virtual machines. Think about a software company using VMs for development
  • 3rd site for storage of backup
  • cost reduction

Tolerant failover means a failover is still regarded as successful even when some of the VMs are recovered but cannot be powered on. Causes for a VM failing to start include an IP conflict, a MAC address conflict or insufficient resources in a resource pool.

 

 

VMware vCenter Heartbeat End of Availability starting June 2

VMware vCenter Heartbeat is no longer available for purchase as of June 2. All support and maintenance for the removed versions will be unaffected and will continue per the VMware Life Cycle policy through the published support period, until September 19, 2018.

VMware vCenter Server Heartbeat is a software product (OEM of Neverfail) that protects vCenter Server against outages–from application, operating system, hardware and network failures to external events–regardless of whether vCenter Server is deployed on a physical or virtual machine.

vCenter Server Heartbeat creates a clone of both the vCenter Server and the SQL server database used by vCenter, and then keeps both the primary and secondary vCenter Servers in sync through continuous asynchronous replication. Administrators can use vCenter Server Heartbeat in a virtual configuration, physical configuration or a hybrid model.

The reason for the end of life is that VMware believes current available protections like VMware HA, vMotion and Storage vMotion ensure availability of managed resources.

I *guess* a couple of reasons to stop selling vCenter Heartbeat could be:

1. most customers are using vCenter running in a virtual machine. They are happy with HA
2. sales of vCenter Heartbeat are declining because of point 1
3. vCenter Heartbeat is a complex product
4. vCenter Heartbeat was a pretty expensive solution not many customers were interested in.
5. It might give VMware too many support headaches because of upgrade issues vSphere 5.0 -> 5.5 combined with SSO.

6. As of vCenter Server 5.5 in vSphere 5.5, VMware introduced support for using Microsoft SQL Cluster Service for use as a back end database. See this KB for instructions.
7. VMware is working on a new way to protect vCenter Server

More information in the VMware blog here. Answers to frequently asked questions are here.

Checking hardware recommendations might prevent a VSAN nightmare.

<update June 4>

Jason Gill posted the Root Cause Analysis done by VMware on his VSAN issue described below. Indeed the issue was caused by the use of the Dell PERC H310 controller, which has a very low queue depth. A quote:

While this controller was certified and is in our Hardware Compatibility List, its use means that your VSAN cluster was unable to cope with both a rebuild activity and running production workloads. While VSAN will throttle back rebuild activity if needed, it will insist on minimum progress, as the user is exposed to the possibility of another error while unprotected. This minimum rebuild rate saturated the majority of resources in your IO controller. Once the IO controller was saturated, VSAN first throttled the rebuild, and — when that was not successful — began to throttle production workloads.

Read the full Root Cause Analysis here at Reddit.

Another interesting observation while reading the thread on Reddit is that the Dell PERC H310 actually is an OEM version of the LSI 2008 card. John Nicholson wrote a very interesting blog about the H310 here.

Dell seems to ship the H310 with old firmware. With the latest firmware the queue depth of the Dell PERC H310 can be increased to 600!

We went from 270 write IOPS at 30 ms of write latency to 3000 write iops at .2ms write latency just by upgrading to the new firmware that took queue depth from 25 to 600

This article explains how to flash a Dell PERC H310 with newer firmware. I am not sure if a flashed PERC H310 is supported by VMware. As an HBA with better specs is not that expensive, I advise flashing the Dell PERC H310 only when it is used in non-production environments.

————————————————————-

June 02, 2014

An interesting post appeared on Reddit. The post, titled My VSAN nightmare, describes a serious issue in a VSAN cluster. When one of the three storage nodes failed displaying a purple screen, initially all seemed fine. VMware HA kicked in and restarted VMs on the surviving nodes (two compute and two storage nodes). The customer was worried about redundancy, as storage was now located on just two nodes. So SSD and HDD storage was added to one of the compute nodes. This node did not have local storage before.

However, exactly 60 minutes after adding the new storage, DRS started to move VMs to other hosts, lots of I/O was seen, and all (about 77) VMs became unresponsive and died. VSAN Observer showed that I/O latency had jumped to 15-30 seconds (up from just a few milliseconds on a normal day).

VMware support could not solve the situation and basically said to the customer: "wait till this I/O storm is over". About 7 hours later the critical VMs were running again. No data was lost.

At the moment VMware support is analyzing what went wrong to be able to make a Root Cause Analysis.

Issues on VSAN like the one documented on Reddit are very rare. This post provides a look under the covers of VSAN. I hope it helps you understand what is going on under the hood, and it might prevent this situation from happening to you as well.

Let's have a closer look at the VSAN hardware configuration of the customer who wrote down his experiences on Reddit.

VSAN hardware configuration
The customer was using 5 nodes in a VSAN cluster: 2x compute nodes (no local storage) and 3x storage nodes, each with 6x magnetic disks and 2x SSDs, split into two disk groups each.
Two 10 Gb NICs were used for VSAN traffic. A Dell PERC H310 controller was used, which has a queue depth of only 25. Western Digital WD2000FYYZ HDDs were used: 2 TB, 7200 rpm SATA drives. The SSDs were Intel DC S3700 200 GB.

The Dell PERC H310 is interesting, as Duncan Epping's post here states:

Generally speaking it is recommended to use a disk controller with a queue depth > 256 when used for VSAN or “host local caching” solutions

VMware VSAN Hardware Guidance also states:

The most important performance factor regarding storage controllers in a Virtual SAN solution is the supported queue depth. VMware recommends storage controllers with a queue depth of greater than 256 for optimal Virtual SAN performance. For optimal performance of storage controllers in RAID 0 mode, disable the write cache, disable read-ahead, and enable direct I/Os.

Dell states about the Dell PERC H310:

 Our entry-level controller card provides moderate performance.
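To check what queue depth a controller actually offers, esxtop on the ESXi host shows it in the disk adapter view; a quick check that needs no extra tooling:

esxtop        # start esxtop in the ESXi shell
d             # switch to the disk adapter view
              # the AQLEN column lists the queue depth per adapter (vmhba)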

Before we dive into the possible cause of this issue, let's first cover some basics of VMware VSAN. Both Duncan Epping and Cormac Hogan of VMware wrote some great postings about VSAN. Recommended reads! See the links at the end of this post.

VSAN servers 
There are two ways to install a new VSAN server:

  1. assemble one yourself using components listed in the VSAN Hardware Compatibility Guide
  2. use one of the VSAN Ready Nodes which can be purchased. 16 models are available now from various vendors like Dell and Supermicro.

Dell has 8 different servers listed as VSAN Ready Nodes. One of them is the PowerEdge R720-XD, which is the same server type used by the customer describing his VSAN nightmare. However, the Dell VSAN Ready Node has 1 TB NL-SAS HDDs while the Reddit case used 2 TB SATA drives. So he was likely using servers he assembled himself.

Interesting is that 4 out of the 8 Dell VSAN Ready Node server use the Dell PERC H310 controller. Again, VMware advises a controller with a queue depth of over 250 while the PERC H310 has 25.


VSAN storage policies
For each virtual machine or virtual disk active in a VSAN cluster an administrator can set 'virtual machine storage policies'. One of the available storage policies is named 'number of failures to tolerate'. When it is set to 1, virtual machines to which this policy is applied will survive a failure of a single disk controller, host or NIC.

VSAN provides this redundancy by creating one or more replicas of VMDK files and storing these on different storage nodes in a VSAN cluster.

In case a replica is lost, VSAN will initiate a rebuild. A rebuild recreates the replicas of the affected VMDKs.
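At the time of writing these policies are set in the vSphere Web Client. For reference, later PowerCLI releases (6.0 and up) added SPBM cmdlets; a minimal sketch with example names, and the capability identifier as I know it, so treat it as an assumption:

$cap  = Get-SpbmCapability -Name 'VSAN.hostFailuresToTolerate'
$rule = New-SpbmRule -Capability $cap -Value 1
$set  = New-SpbmRuleSet -AllOfRules $rule
$pol  = New-SpbmStoragePolicy -Name 'FTT-1' -AnyOfRuleSets $set
# apply the policy to an example VM
$vm = Get-VM -Name 'web01'
Set-SpbmEntityConfiguration -Configuration (Get-SpbmEntityConfiguration $vm) -StoragePolicy $pol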

VSAN response to a failure. 

VSAN’s response to a failure depends on the type of failure.
A failure of an SSD, HDD or the disk controller results in an immediate rebuild. VSAN understands this is a permanent failure which is not caused by, for example, planned maintenance.

A failure of the network or a host results in a rebuild which is initiated after a delay of 60 minutes. This is the default wait. The wait exists because the absence of a host or network could be temporary (maintenance, for example); it prevents wasting resources. Duncan Epping explains the details in his post How VSAN handles a disk or host failure.
The image below was taken from this blog.

If the failed component returns within 60 minutes, only a data sync will take place. Here only the data changed during the absence is copied over to the replica(s).

A rebuild however means that a new replica will be created for all VMDK objects that are not compliant. This is also referred to as a 'full data migration'.

To change the delay time see this VMware KB article Changing the default repair delay time for a host failure in VMware Virtual SAN (2075456)
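With PowerCLI the advanced setting from that KB (VSAN.ClomRepairDelay) can be raised on every host in the cluster. A sketch below; the cluster name is an example, and per the KB the clomd service must be restarted on each host before the new value takes effect:

Get-Cluster -Name 'VSAN-Cluster' | Get-VMHost | ForEach-Object {
    Get-AdvancedSetting -Entity $_ -Name 'VSAN.ClomRepairDelay' |
        Set-AdvancedSetting -Value 120 -Confirm:$false   # delay in minutes
}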

Control  and monitor VSAN rebuild progress
At the moment VMware does not provide a way to control and monitor the progress of the rebuild process. In the case described at Reddit, VMware basically advised 'wait and it will be alright'. There was no way to predict how long the performance of all VMs stored on VSAN would be badly affected by the rebuild. The only way to see the status of a VM is by clicking the VM in the vSphere Web Client, selecting its storage policies tab, then clicking each of its virtual disks and checking the list – it will tell you "Active", "Reconfiguring" or "Absent".

For monitoring, VSAN Observer provides insight into what is happening.

Also, looking at clomd.log could give an indication of what is going on. This is the logfile of the Cluster Level Object Manager (CLOM).

It is also possible to use command line tools for administration, monitoring and troubleshooting. VSAN uses the Ruby vSphere Console (RVC) command line. Florian Grehl wrote a few blogs about managing VSAN using RVC.
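A few RVC commands that are useful in a case like this; the cluster path is an example, so navigate to your own cluster object after connecting RVC to vCenter:

vsan.check_state ~/computers/VSAN-Cluster        # object and VM state
vsan.disks_stats ~/computers/VSAN-Cluster        # per-disk usage and health
vsan.resync_dashboard ~/computers/VSAN-Cluster   # bytes left to resync per object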

The VMware VSAN Quick Troubleshooting and Monitoring Reference Guide has many details as well.

Possible cause
It looks like the VSAN rebuild process, which started exactly 60 minutes after the extra storage was added, initiated the I/O storm. VSAN was correcting a non-compliant storage profile and started to recreate replicas of VMDK objects.

A possible cause for this I/O storm could be that the rebuild of almost all VMDK files in the cluster was executed in parallel. However, according to Dinesh Nambisan, working for the VMware VSAN product team:

 “VSAN does have an inbuilt throttling mechanism for rebuild traffic.”

VSAN seems to use a Quality of Service system for throttling back replication traffic. How this exactly works and whether it is controllable by customers is unclear. I am sure we will soon learn more about this, as it seems key in solving future issues with low-end controllers and HDDs combined with a limited number of storage nodes.

While the root cause has yet to be determined, a combination of configuration choices could have caused this:

1. Only three servers in the VSAN cluster were used for storage. When one failed, only two were left. Those two were both active in the rebuild of about 77 virtual machines at the same time.
2. SATA 7200 rpm drives were used as the persistent HDD storage layer. Fine for normal operations when SSD is used for cache; in a rebuild operation, not the most powerful drives, and they have low queue depths.
3. An entry-level Dell PERC H310 disk controller was used. The queue depth of this controller is only 25, while the advice is to use a controller with a 250+ queue depth.

Some considerations
1. Just to be on the safe side, use controllers with at least a 250+ queue depth.
2. For production workloads use N+2 redundancy.
3. Use NL-SAS drives or better HDDs. These have much higher queue depths (256) compared to SATA HDDs (32).
4. In case of a failure of a VSAN storage node: try to fix the server by swapping memory/components to prevent rebuilds. A sync is always better than a rebuild.

5. It would be helpful if VMware added more control over the rebuild process. When N+2 is used, rebuilds could be scheduled to execute only during non-business hours. Also, some sort of priority control over which replicas are rebuilt first would be nice. Something like this:

in case of N+1: tier 1 VMs rebuild after 60 minutes; tier 2 and 3 VMs rebuild during non-business hours
in case of N+2: all rebuilds only during non-business hours. Tier 1 VMs first, then tier 2, then tier 3 etc.

Some other blogs about this particular case
Jeramiah Dooley Hardware is Boring–The HCL Corollary

Hans De Leenheer VSAN: THE PERFORMANCE IMPACT OF EXTRA NODES VERSUS FAILURE

Some useful links providing insights into VSAN

Jason Langer : Notes from the Field: VSAN Design–Networking

Duncan Epping and others wrote many postings about VSAN. Here is a complete overview.

A selection of those blog posts which are interesting for this case.
Duncan Epping How long will VSAN rebuilding take with large drives?
Duncan Epping 4 is the minimum number of hosts for VSAN if you ask me
Duncan Epping How VSAN handles a disk or host failure
Duncan Epping Disk Controller features and Queue Depth?

Cormac Hogan VSAN Part 25 – How many hosts needed to tolerate failures?

Cormac Hogan Components and objects and What is a witness disk

 

Amazon Web Services releases AWS Management Portal for vCenter. Sign of competition or partnering?

update June 2:
Chris Wolf, CTO Americas for VMware, responded to the release of the Management Portal with a blogpost titled: Don't Be Fooled By Import Tools Disguised as Hybrid Cloud Management. This provides a clear answer: sign of competition.

—–

On May 30, 2014 Amazon Web Services released the 'AWS Management Portal for vCenter'. This free plug-in for vCenter allows management of virtual machines and virtual networks running on Amazon Web Services from the vSphere Client.

AWS Management Portal also allows importing a vSphere virtual machine into AWS. The VMware virtual machine needs to be shut down to perform the conversion to the Amazon AMI format as well as the upload to an Amazon datacenter.

The management portal also offers self-service access to AWS.

A single console now provides management of both VMware on-premises infrastructure as well as public clouds. This is not a comprehensive tool for creating and managing AWS resources. The management portal enables vCenter users to get started quickly with basic tasks, such as creating a VPC and subnet, and launching an EC2 instance. To complete more advanced tasks, users must use the AWS Management Console or AWS CLI.

A comprehensive step by step description of the features of AWS Management Portal is published on amazon.com.

AWS Management Portal is distributed as an .OVA file which can easily be imported into vCenter Server. Download it here.

Competition or partnering?

Forbes.com reports on the release of the plug-in with a post titled: Amazon Web Services Takes The Battle To VMware. Is this a battle or a sign of collaboration between VMware and Amazon?

Both are market leaders in their field: VMware vSphere for on-premises datacenters, Amazon for public IaaS cloud. However, the hybrid proposition of both is weak. Microsoft has many powerful cards in its hands. Almost all organizations worldwide are Microsoft customers. Microsoft Azure is developing at a rapid pace and offers many scenarios for hybrid cloud, connecting infrastructures and applications.

Amazon Web Services (AWS) is by far the biggest cloud provider. See the Gartner Magic Quadrant for IaaS for example. However most AWS customers are not using AWS to host their enterprise applications. The Gartner definition of an enterprise application is:

 

These are general-purpose workloads that are mission-critical, and they may be complex, performance-sensitive or contain highly sensitive data; they are typical of a modest percentage of the workloads found in the internal data centers of most traditional businesses. They are usually not designed to scale out, and the workloads may demand large VM sizes. They are architected with the assumption that the underlying infrastructure is reliable and capable of high performance.
source 

VMware vSphere is the market leader in hosting enterprise applications located in on-premises datacenters and colocation environments. VMware does not have a large presence in Infrastructure as a Service. Its IaaS offering, named vCloud Hybrid Service, has been available since the end of August 2013. VMware appeared in the Gartner MQ for IaaS for the first time in May 2014.

Last year (June) at the Gartner Catalyst conference, Chris Wolf did an interview with Raghu Raghuram, executive VP of cloud infrastructure and management. A small piece of the interview went like this (source: Gigaom.com):

  • Wolf:  “What if VMware and Amazon were to work together on seamless workflow? Our audience loves it — it’s a customer requirement.”
  • Raghuram: “How do you know we’re not working closely with them?”
  • Wolf: “We haven’t seen any results.”
  • Raghuram: “Stay tuned.”
  • Wolf: “Would you like to elaborate on that?”
  • Raghuram: “Nope.”

So this release of the AWS Management Portal for vCenter could be the start of something bigger. However, it is unclear what benefit VMware gets. The AWS Management Portal could lure VMware customers to Amazon EC2 instead of VMware vCHS.

 

 

Gartner releases Magic Quadrant for Cloud Infrastructure as a Service. Microsoft Azure now a leader.

Gartner published the Magic Quadrant (MQ) for Cloud Infrastructure as a Service on May 28, 2014. The document provides an overview of IaaS and of the providers offering this service. To be included in the MQ, providers had to meet various criteria. In total 25+ scoring categories are used to determine placement.

A free reprint of the Magic Quadrant is available here.

Amazon Web Services is far ahead of the rest of the providers. Its market share of compute capacity is about 87%, five times more than the combined capacity of all other providers. They are thought leaders, have a mature offering and offer much more capacity.

Microsoft is the only other company listed in the Leaders quadrant. Its IaaS offering Azure is listed in the Leaders quadrant for the first time ever.

Google and VMware vCHS were added to the MQ for the first time.

 

The MQ is very interesting to read! It offers a lot of information on the market. For an explanation see this blog by Lydia Leong, analyst at Gartner.

Not everyone is impressed by the MQ. Some believe the gap between Amazon and Microsoft should be bigger. Lydia Leong states the move up and to the right of Microsoft is mostly because of the company’s remarkable market power and less because of the growth in technical features.

Microsoft is present in almost any organization and is able to buy itself into the IaaS market, for example by giving away free Azure credits to Enterprise Agreement customers. Microsoft has also shown a lot of vision.

Below the MQ for May 2014.


 

Below the MQ for IaaS from August 2013.

Automatically download Microsoft sessions published on Channel 9 (TechEd, BUILD etc)

Microsoft publishes recorded sessions and PowerPoint slides of many Microsoft events on Channel 9. Recently TechEd 2014 North America was held in Houston. Over 600 sessions can be watched for free here. 

What if you could download those sessions so you can watch them while offline?

Claus Nilsen wrote a great PowerShell script for this. It is very easy to use: execute the PowerShell script and you will be presented with a graphical user interface which allows you to select the category, author and event. You can also choose to download the PowerPoint slides, the recorded sessions or both.

Download the script here.
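If you prefer a do-it-yourself approach: Channel 9 publishes RSS feeds per event, so a few lines of PowerShell are enough to grab all videos. A minimal sketch, not Claus Nilsen's script; the feed URL and target folder are examples:

Import-Module BitsTransfer
$feedUrl = 'http://channel9.msdn.com/Events/TechEd/NorthAmerica/2014/RSS/mp4'
$target  = 'C:\TechEd2014'
New-Item -ItemType Directory -Path $target -Force | Out-Null

[xml]$feed = (New-Object System.Net.WebClient).DownloadString($feedUrl)
foreach ($item in $feed.rss.channel.item) {
    $url  = $item.enclosure.url                     # direct link to the MP4
    $file = Join-Path $target (Split-Path $url -Leaf)
    if (-not (Test-Path $file)) {                   # skip sessions already downloaded
        Start-BitsTransfer -Source $url -Destination $file
    }
}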

 


Driver IRQL not less or equal (bridge.sys)

I experienced an annoying issue on my Windows 8 laptop. All of a sudden the laptop got a blue screen showing this text:

Your PC ran into a problem and needs to restart. We’re just collecting some error info, and then we’ll restart for you.

Driver IRQL not less or equal (bridge.sys)

I had no idea what could be causing this. My laptop had been running for many months without any issues. Now, all of a sudden, a couple of crashes in one hour.

Using Google I found out it could be related to Hyper-V, which was installed. So I started Hyper-V Manager, selected Virtual Switch Manager and deleted the switch which was connected to the wireless network adapter.
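The same can be done from an elevated PowerShell prompt; a sketch, where the switch name is an example (check the output of Get-VMSwitch for yours):

Get-VMSwitch                                   # list the Hyper-V virtual switches
Remove-VMSwitch -Name 'External WiFi' -Force   # delete the switch bound to the wireless NIC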

Problem solved.

I did not try to solve the problem by installing new drivers, as I was not using the Wi-Fi NIC.

Register for the Windows Management User Group Nederland meeting on September 16

On September 16, 2014 the Windows Management User Group Nederland organizes its fourth meeting. The theme this time is virtualization. The location is PQR in De Meern and admission is free. Register here.

Session 2 will be about how server virtualization and cloud can help get your IT environment operational again quickly and without stress after, for example, a fire, water/smoke damage or a prolonged power outage.

The preliminary program looks as follows:

  • 17:00 – 17:50  arrival / dinner
  • 17:50 – 18:00  kickoff (PQR – Ruben Spruijt)
  • 18:00 – 19:00  session 1 (Eric Sloof)
  • 19:00 – 19:10  break
  • 19:10 – 19:55  sponsor session (…)
  • 19:55 – 20:05  break
  • 20:05 – 21:05  session 2 (Marcel van den Berg)
  • 21:05 – 22:00  drinks

Session 2 is thus about disaster recovery. The options for protecting applications against outages have increased enormously in recent years. Hyper-V and vSphere provide free software to enable replication to another datacenter. VMware and Microsoft both offer the possibility to use the cloud as a recovery datacenter (DRaaS). What are the features, the pros and cons? What is not possible? Is it expensive? What about licensing? Is it even allowed to simply place a copy in another datacenter? Which Dutch companies offer Disaster Recovery as a Service? These and other questions will be answered on September 16.

My session will cover, among other things, solutions and services such as Zerto Virtual Replication, VMware Site Recovery Manager, VMware vSphere Replication, Hyper-V Replica, VMware vCHS – Disaster Recovery and Azure Site Recovery.

In a demo we will simulate a datacenter outage and show how easy it is to fail over to a cloud environment.

Register here.

Partners will be able to resell Microsoft Azure in Open Licensing per August 1

Microsoft customers wanting to use Microsoft Azure services currently have two options for purchasing:

1. Register for Azure and submit credit card details. Microsoft will charge on a pay-as-you-go basis. For enterprises, using a credit card makes it very difficult to manage costs and perform chargeback to internal departments.
2. Buy Azure credits as part of an Enterprise Agreement. Enterprise Agreements are sold exclusively by Microsoft Licensing Solution Providers (LSP, the new name for Large Account Reseller). The disadvantage of purchasing Azure in an Enterprise Agreement is that the customer needs to buy credits up-front. Those credits last for one year. If not consumed, the remaining credit is lost.

The current model was not very attractive for system integrators (SIs) and value-added resellers (VARs). When they help customers onboard to Microsoft Azure, they get a kickback fee lasting two years while Microsoft does the billing. The registration of leads by the partner and receiving the kickback fee is a rather complex and lengthy process.

Especially system integrators selling hardware like storage would hesitate to onboard a customer to Microsoft Azure. It could turn out to be a shot in their own foot, as they would start to lose the revenue of selling hardware as well as the relationship with the customer.

Microsoft is now making it easier to sell Azure.

Starting August 1, Microsoft Azure becomes available in Open Licensing. Open License is a two-year agreement with Microsoft for buying software licenses, targeted at organizations having 5 up to a maximum of 250 devices. Customers pay up-front.

Microsoft partners will be able to purchase tokens from a distributor. The partner resells the tokens to their customer. Reselling actually means adding the tokens (each worth $100) to the Azure subscription of the customer. This enables a billing relationship between the Microsoft partner and the customer consuming Azure resources. It also ensures recurring revenue for the partner. Mind that the credit is valid for only one year: remaining credit does not roll over to the next year!

More information here and here.

Aidan Finn wrote a blogpost about this news here. As usual his opinions are just a little bit biased. His quote below is incorrect.

[screenshot of Aidan Finn's quote]

Microsoft certainly does not have a unique selling point here. VMware has a hybrid cloud offer as well. Not only does VMware itself offer a public IaaS service (VMware vCloud Hybrid Service or vCHS) which connects seamlessly to on-premises vSphere infrastructures, there are also many vCloud providers offering public IaaS with excellent connectivity to on-premises datacenters.

Microsoft is not the only cloud vendor with a partner-enabling model. VMware vCHS is sold the same way as on-premises VMware licenses, with a standard SKU, and partners can retain the billing relationship with their customers.

Amazon has had a Channel Reseller Program for quite some time now.

 

 

VMware releases VMware Workstation Technology Preview 2014 – May 2014

VMware released VMware Workstation Technology Preview 2014 – May 2014. This is a beta on which VMware would like feedback from people using the software.

Requirements:

The system requirements for this Technology Preview are mostly the same as those for Workstation 10, with the following differences:

  • All 32-Bit Host Operating Systems are no longer supported
  • Windows XP 64-Bit, Windows Vista 64-Bit and Windows Server 2003 64-Bit are no longer supported as host operating systems

Download the software here. The software can be used until October 15, 2014. Use this key for installation: H54AA-J0L9J-08A82-0138P-ADQ4Y

The release notes are here.

What’s New

New OS Support – The popularity of Windows 8 is still growing, especially after the Windows 8.1 release. We have been running Windows 8.1 Update 1 since the date it was released and are continuing to improve our support for it. We would appreciate your comments and suggestions for making it easier to run the latest Windows 8 / 8.1 versions in a virtual machine. Of course we are running the latest Ubuntu, Fedora, RHEL, openSUSE and other Linux distros as well and we would appreciate your feedback on their performance too.

VMware Hardware Version 11 – This Technology Preview introduces hardware version 11. Hardware versions introduce new virtual hardware functionality and new features while enabling VMware to run legacy operating systems in our virtual machines.

CPU enablement – While this Technology Preview still supports creating and running virtual machines with up to 16 virtual CPUs, we extended support for the latest generations of CPUs. The microarchitectures of both Intel Haswell and AMD Jaguar are fully supported, and those of Intel Broadwell and AMD Steamroller have been made compatible. We are interested in feedback on creating and running virtual machines on these latest CPUs.

Virtual xHCI controller – The virtual xHCI controller was added in virtual hardware version 8 and conforms to version 0.96 of the Intel xHCI specification. In this version of virtual hardware we updated it to be compliant with the latest version 1.0 of the specification. Better compatibility and performance of USB 3.0 devices is expected; we would love to see the results with your USB devices.

Dedicated graphics memory for guest operating system – To let customers precisely control memory allocation when multiple virtual machines are running, the new virtual hardware version backs guest video memory with its own dedicated chunk, which can be allocated / configured by the user.

 

Graphics memory configuration – For virtual machines with virtual hardware version 11, you can adjust the dedicated graphics memory on a per-virtual-machine basis. Go to Virtual Machine Settings -> Hardware -> Display; there you can see the graphics memory dropdown and make adjustments. Please note that a virtual machine could be less stable if too much memory is assigned; on the other hand, graphics performance could drop if a very small amount of memory is assigned.

 

Windows 8 Unity mode improvements – When you run a Windows 8 / 8.1 virtual machine on a Windows 8 / 8.1 host in Unity mode, the user experience has been improved, especially when you try to go to the Start screen of the host or navigate to the Start screen of the guest.

 

Boot virtual machine with EFI – As an alternative to BIOS, EFI is supported by more and more operating systems, including Windows 7, Windows 8/8.1 and many Linux distros. This Technology Preview allows you to create and boot the guest operating system with EFI. To do so, simply do not power on the guest after creation; go to Virtual Machine Settings -> Options -> Advanced, check the setting “Boot with EFI instead of BIOS”, then save the setting and power on the virtual machine.
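For reference, that checkbox corresponds to a single line in the virtual machine's .vmx file. I believe this applies here as it does in other EFI-capable VMware products, so treat it as an assumption:

firmware = "efi"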

 

What are the threats to data stored in 'the cloud' and how do cloud providers protect their customers

The spying done by the NSA and revealed by Edward Snowden certainly did the revenues of companies selling cloud solutions no good.

Nobody believes anymore that the NSA's main purpose is to defeat terrorism. Foremost, the NSA is very interested in the political views of other countries (Germany, the EU), financial data (SWIFT bank transfers) and economic espionage (Brazilian oil company Petrobras). National security is used as an excuse to violate people's privacy.

A lot has to change in US minds. At a CIA congress in June, Congressman Mike Rogers said Google Is Unpatriotic For Not Wanting NSA To Spy On Its Users.

Many US firms collaborated with the NSA, enabling it to add backdoors to hardware and software. See for example this article on how Microsoft helped the NSA. The NSA itself tampered with US-made routers by intercepting shipments to customers, adding backdoors and then shipping the routers to their final destination (source: The Guardian).

Outsourcing infrastructure or applications is a matter of trust. There is a saying that 'trust arrives on foot but leaves on horseback'.

Add the Patriot Act, the American Stored Communications Act (SCA) and the Foreign Intelligence Surveillance Amendments Act (FISAA), and many organizations, especially European and Brazilian ones, are worried about storing privacy-sensitive data, intellectual property or any other sensitive information in a datacenter which they do not own and trust. Red alert when the provider is a US company.

Microsoft admitted in 2011 that data owned by Europeans and stored in European datacenters but processed by US firms is not safe from US authorities (source: ZDNet).

Data requests
So how many times do US authorities request data from providers, and what kind of data is requested? Metadata or actual data like the content of email? The problem is that this kind of information cannot be made public, by law. Providers are not allowed to reveal court orders. They are allowed to reveal the number of orders, with a delay of 6 months after the order was handed over. The Guardian has an article about this.

From January to June 2013 Microsoft received fewer than 1,000 orders from the FISA court for communications content, related to between 15,000 and 15,999 “accounts or individual identifiers”.

The company, which owns the internet video calling service Skype, also disclosed that it received fewer than 1,000 orders for metadata – which reveals communications patterns rather than individual message content – related to fewer than 1,000 accounts or identifiers.

Mind that these numbers are for all Microsoft services, including Skype and Outlook.com. So in many cases court orders from the FISA court are related to personal accounts and not to enterprise accounts.

This is important for understanding the problem.

Non-disclosure of National Security Letters or court orders (gag orders)

US authorities like the FBI and the US Department of Justice can request a cloud/service provider to hand over customer data without disclosing that request to the customer. This is a so-called gag order. The official name of such a request is a National Security Letter or NSL.

In any cloud contract of Microsoft, and likely of every US provider as well, some lines are written like the ones below:

The cloud services that Microsoft provides to [the customer] are governed by contract (the "Contract"). The Contract provides that Microsoft may disclose data to satisfy legal requirements, comply with law or respond to lawful requests by a regulatory or judicial body, or as required in a legal proceeding. The Contract also provides that, unless prohibited by law, Microsoft must use commercially reasonable efforts to give notice of any such disclosures in advance, or as soon as commercially reasonable after such disclosure.

Reach of Patriot Act
So how far does that notorious Patriot Act reach? When is data safe? Nobody knows for sure. Likely it applies to data stored on servers of any company located in:

– The United States;
– The European Union with a parent company located in the United States;
– The European Union and uses data processing services of a subsidiary which is established in the United States;
– The European Union and uses a third party for data storage or data processing, like a US-based hosting company;
– The European Union, but does structural business with a company in the United States of America.

The last one is the most unclear one and open for many interpretations.

There are some other serious security issues as well when using cloud. Amazon supplied Windows Server images in 2014 which had not been patched since 2009. Auto-update was disabled. HP and GoGrid also offered images which were not up to date with the latest security patches and also had auto-update disabled. Microsoft was the only investigated cloud provider which offered up-to-date images (source: Bkav).

So there are some serious issues to solve in cloud computing. What actions are taken by cloud providers to regain trust and how likely are those to keep the bad guys out?

  1. Object to court orders and go to court
  2. Trying to change mind of government
  3. Offer encryption
  4. Contracts
  5. Datacenters located in the EU
  6. Operate datacenters by branches
  7. Employ non US staff
  8. Use non ‘made in the United States’ software or hardware

Object and go to court
In several cases cloud providers like Google and Microsoft went to court when they received a National Security Letter. In an interesting case in 2013, when the FBI handed over an NSL to Microsoft including a non-disclosure requirement, Microsoft went to court.

The FBI wanted information on an Office 365 customer. After Microsoft filed this challenge in Federal Court in Seattle, the FBI withdrew its letter.

Microsoft challenged the letter in court, saying the law the FBI used to obtain it violated the First Amendment, and was an unreasonable ban on free speech. 

In 2014 a Seattle judge ordered certain documents of this case to be unsealed. More information on gigaom.com.

While this is a small success, many NSLs remain undisclosed.

Trying to change the mind of the US government
Microsoft is asking the US government the following, as described in this June 2014 post by Microsoft:

  • End bulk collection
  • Reform the FISA Court
  • Commit not to hack data centers or cables
  • Continue to increase transparency

See this article: Microsoft presses the US government on NSA reform.

Encryption
Microsoft and others are doing their very best to make the NSA's life as hard as possible. They offer encryption in about any solution which stores on-premises created data in Microsoft Azure, with the customer being the only one who has the encryption key. Office 365 files stored in SharePoint Online and OneDrive for Business will each have their own encryption key, so even when the NSA puts a gun to Microsoft's head, Microsoft will not be able to hand over readable data. Microsoft is working on encryption of data travelling between Azure datacenters. Google and others already encrypt that data.

Make sure data is encrypted the moment it leaves your on-premises trusted infrastructure, and make sure you own the encryption key. How long encryption will remain effective is to be seen: the NSA is building a datacenter with a supercomputer to decrypt AES-encrypted data (source: Forbes).
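As an illustration of that principle (encrypt before upload, keep the key yourself), here is a minimal PowerShell sketch using .NET's AES classes. The file paths are examples and this is not a hardened implementation:

$aes = [System.Security.Cryptography.Aes]::Create()
$aes.GenerateKey()
$aes.GenerateIV()

$plain  = [System.IO.File]::ReadAllBytes('C:\data\backup.zip')
$cipher = $aes.CreateEncryptor().TransformFinalBlock($plain, 0, $plain.Length)

# store the IV with the ciphertext; upload only the .enc file to the cloud
[System.IO.File]::WriteAllBytes('C:\data\backup.zip.enc', [byte[]]($aes.IV + $cipher))

# the key never leaves your on-premises environment
[System.IO.File]::WriteAllBytes('C:\keys\backup.key', $aes.Key)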

SSL traffic to and from Azure Web Sites can now be encrypted using Elliptic Curve Cryptography (ECC) certificates. Deriving the private key from the public key is about 10 times harder than with classic encryption methods. More info on ECC here.

The story of email firm Lavabit shows the power of the FBI, NSA and others. Lavabit provided encrypted email services which protected the privacy of users. Snowden was one of the users of Lavabit (and probably the reason for the FBI's interest in Lavabit). One day the FBI knocked on the door of the owner of Lavabit holding a court order requiring the installation of surveillance equipment on the Lavabit network. The court order also required Lavabit to hand over its SSL private keys. Lavabit objected to complying, since that would give access to all messages to and from all customers, which would be unfair and unreasonable.

The owner refused, looked for a lawyer and ended up in a court case. The result: Lavabit had to hand over 5 SSL private keys. Lavabit even tried to hand over the cryptographic material in printed form, stretched over 11 pages in a four-point font (source: Sophos.com).

In the end Lavabit had to close the company. (Source: the Guardian)

Contracts
Recently Microsoft proudly announced that its contracts with customers using cloud services comply with the highest standards of the EU: privacy authorities across Europe approved Microsoft's cloud commitments. While this contract is useful because Microsoft customers are assured that Microsoft complies with privacy laws, it is no guarantee that data is safe from the bad guys/curious types like the NSA and FBI. As Microsoft states: it will have to hand over data if requested and does not even have to inform the customer about the handover.

Datacenters located in the EU
There are several reasons why US cloud providers offer datacenters located in the EU. First, to provide the best possible latency. Second, because EU laws prohibit certain types of data from being stored outside the EU.
Data stored in an EU datacenter but processed by a US firm is by no means safe from the Patriot Act. See the story about a US judge who ordered Microsoft to hand over data stored in a Dublin datacenter. Microsoft went to court. There is a lot of information on the internet about this case, like this article.

Operate datacenters by branches of US companies
VMware entered the public cloud IaaS market a while ago by offering vCloud Hybrid Service. Besides 4 US-located datacenters they also have one datacenter located in Slough near London (UK). They stated at VMworld that data is safe from the Patriot Act because vCHS is operated by VMware UK. The datacenter itself is owned by Savvis. I do not think this can prevent US authorities with court orders from having data handed over, as VMware UK has a parent company in the US.

Employ non-US staff
Dutch telecom and IT services company KPN recently announced that its public cloud offering named CloudNL is fully managed by Dutch administrators who are not bound by US law. This way, according to KPN, the company is not required to hand over data to the NSA, FBI and other non-Dutch organizations. However, KPN is the 100% owner of US company iBasis. This ownership could make KPN a target for the Patriot Act, as it does 'structural business with a US company'. KPN however believes access by the NSA etc. via iBasis is blocked because the servers are located in Dutch datacenters. Dutch newspaper Trouw reported on it (English here). Computerworld has an interesting article on CloudNL as well.

Use non ‘made in the United States’ software or hardware
When software made by US companies is used, the NSA could have a backdoor. Or the Patriot Act could impose the requirement to hand over data. So IT company Capgemini decided to build a cloud in which not a single component is made in the US. It provides software for email, calendar sharing, presentations, file sharing and video conferencing. News about this cloud offering, called Clair, was published by nu.nl (translation in English here).

Capgemini does have about 27 offices in the US, so even that might be a backdoor.

Conclusion
There is a lot of uncertainty about the power of US acts like the Patriot Act. The only way to find out their reach is through legal battles in court. Not all companies offering cloud services are interested in legal battles. They have an interest in being friends with US authorities.

Encrypting data which could be interesting to others, and making sure you own the encryption key, is a first step in securing data.

 

 
