Microsoft DirSync to be replaced by Azure Active Directory Sync Services

Microsoft is actively working on enhancements to connect on-premises Active Directory to Azure Active Directory.

DirSync and Active Directory Federation Services are two options to connect the two. DirSync can now be used as a backup for ADFS. See my post here.

Microsoft is working on a replacement for DirSync. DirSync is a software tool used to synchronize objects located in an on-premises, single-forest Active Directory to Azure Active Directory. Azure Active Directory is Microsoft's multi-tenant cloud version of Active Directory, used for identity management in services like Office 365.

DirSync is basically an implementation of Forefront Identity Manager with limited features. For example, it cannot sync objects from multiple on-premises AD forests, nor can it handle multiple Exchange organizations.

To support these scenarios, enterprises are currently required to use Forefront Identity Manager. However, configuring FIM can be challenging and can take considerable time.

The new tool which replaces DirSync will be named Azure Active Directory Sync Services, or AADSync. AADSync significantly simplifies the configuration and makes it more predictable.

Microsoft Azure Active Directory Sync Services (AADSync) is used to onboard an on-premises environment to Windows Azure Active Directory and Office 365 and to keep synchronizing changes. It is meant for more advanced scenarios that DirSync does not support, for example multiple on-premises AD forests. At the moment AADSync does not support multiple Azure subscriptions.

AADSync will also be able to synchronize Exchange Global Address Lists. PowerShell support is available as well, with about 58 cmdlets.

Microsoft Azure Active Directory Sync Services is currently available as a customer technology preview (CTP). This is a first beta release.

You can join the Azure Active Directory Sync Services preview here. The AADSync preview will then be added to your Microsoft Connect account. Through this you will be able to download the most recent version, get information on known issues and updates, as well as provide feedback.

Currently AADSync is in beta. You may not use this release in a production environment unless agreed to by Microsoft. For customers participating in the TAP program, the beta can be used in production. To be considered for the TAP program, please contact the feedback alias.

Mind that AADSync does not have these features at the moment:

    • Exchange hybrid co-existence is not available.
    • Compared to DirSync, the following features are not available:
      • Password synchronization
      • Self-service password reset write-back

More information on AADSync here.

Documentation on AADSync can be found here 


Amazon Web Services releases AWS Management Portal for vCenter. Sign of competition or partnering?

update June 2:
Chris Wolf, CTO Americas for VMware, responded to the release of the Management Portal with a blog post titled: Don’t Be Fooled By Import Tools Disguised as Hybrid Cloud Management. This provides a clear answer: a sign of competition.


On May 30, 2014, Amazon Web Services released the ‘AWS Management Portal for vCenter‘. This free plug-in for vCenter allows management of virtual machines and virtual networks running on Amazon Web Services from the vSphere Client.

AWS Management Portal also allows importing a vSphere virtual machine into AWS. The VMware virtual machine needs to be shut down for the conversion to the Amazon AMI format and the upload to an Amazon datacenter.

The management portal also offers self-service access to AWS.

A single console now provides management of both the VMware on-premises infrastructure and public clouds. This is not a comprehensive tool for creating and managing AWS resources: the management portal enables vCenter users to get started quickly with basic tasks, such as creating a VPC and subnet and launching an EC2 instance. To complete more advanced tasks, users must use the AWS Management Console or the AWS CLI.

A comprehensive step by step description of the features of AWS Management Portal is published on

AWS Management Portal is distributed as an .OVA file which can easily be imported into vCenter Server. Download here.

Competition or partnering? The release of the plug-in was reported in a post titled: Amazon Web Services Takes The Battle To VMware. Is this a battle or a sign of collaboration between VMware and Amazon?

Both are market leaders in their field: VMware vSphere for on-premises datacenters, Amazon for public IaaS cloud. However, the hybrid proposition of both is weak. Microsoft has many powerful cards in its hands: almost all organizations worldwide are Microsoft customers, and Microsoft Azure is developing at a rapid pace, offering many scenarios for hybrid cloud by connecting infrastructures and applications.

Amazon Web Services (AWS) is by far the biggest cloud provider. See the Gartner Magic Quadrant for IaaS for example. However most AWS customers are not using AWS to host their enterprise applications. The Gartner definition of an enterprise application is:


These are general-purpose workloads that are mission-critical, and they may be complex, performance-sensitive or contain highly sensitive data; they are typical of a modest percentage of the workloads found in the internal data centers of most traditional businesses. They are usually not designed to scale out, and the workloads may demand large VM sizes. They are architected with the assumption that the underlying infrastructure is reliable and capable of high performance.

VMware vSphere is the market leader in hosting enterprise applications in on-premises datacenters and colocation environments. VMware does not have a large presence in Infrastructure as a Service. Its IaaS offering, vCloud Hybrid Service, has been available since the end of August 2013. VMware appeared in the Gartner MQ for IaaS for the first time in May 2014.

In June last year, at the Gartner Catalyst conference, Chris Wolf interviewed Raghu Raghuram, executive VP of cloud infrastructure and management. A small piece of the interview went like this. Source

  • Wolf:  “What if VMware and Amazon were to work together on seamless workflow? Our audience loves it — it’s a customer requirement.”
  • Raghuram: “How do you know we’re not working closely with them?”
  • Wolf: “We haven’t seen any results.”
  • Raghuram: “Stay tuned.”
  • Wolf: “Would you care to elaborate on that?”
  • Raghuram: “Nope.”

So this release of the AWS Management Portal for vCenter could be the start of something bigger. However, it is unclear what VMware gains. The AWS Management Portal could lure VMware customers to Amazon EC2 instead of VMware’s vCHS.



Gartner releases Magic Quadrant for Cloud Infrastructure as a Service. Microsoft Azure now a leader.

Gartner published the Magic Quadrant (MQ) for Cloud Infrastructure as a Service on May 28, 2014. The document provides an overview of IaaS and of the providers offering this service. To be included in the MQ, providers had to meet various criteria. In total, 25+ scoring categories are used to determine placement.

A free reprint of the Magic Quadrant  is available here. 

Amazon Web Services is far ahead of the rest of the providers. Its share of cloud compute capacity is estimated at about 87%, roughly five times the combined capacity of all other providers. They are thought leaders, have a mature offering and offer much more capacity.

Microsoft is the only other company listed in the Leaders quadrant. Its IaaS offering, Azure, is listed in the Leaders quadrant for the first time ever.

Google and VMware vCHS have been added to the MQ for the first time.


The MQ is very interesting to read! It offers a lot of information on the market. For an explanation, see this blog post by Gartner analyst Lydia Leong.

Not everyone is impressed by the MQ. Some believe the gap between Amazon and Microsoft should be bigger. Lydia Leong states that Microsoft’s move up and to the right is mostly due to the company’s remarkable market power, and less to growth in technical features.

Microsoft is present in almost any organization and is able to buy itself into the IaaS market, for example by giving away free Azure credits to Enterprise Agreement customers. Microsoft has also shown a lot of vision.

Below is the MQ for May 2014.

Gartner MQ IaaS May 2014


Below is the MQ for IaaS from August 2013.

What are the threats to data stored in ‘the cloud’ and how do cloud providers protect their customers?

The spying done by the NSA and revealed by Edward Snowden certainly did no good for the revenues of companies selling cloud solutions.

Nobody believes anymore that the NSA’s main purpose is to defeat terrorism. Foremost, the NSA is very interested in the political views of other countries (Germany, the EU), financial data (SWIFT bank transfers) and economic espionage (the Brazilian oil company Petrobras). National security is used as an excuse to violate people’s privacy.

A lot has to change in US minds. At a CIA congress in June, Congressman Mike Rogers said Google is unpatriotic for not wanting the NSA to spy on its users.

Many US firms collaborated with the NSA, enabling it to add backdoors to hardware and software. See for example this article on how Microsoft helped the NSA. The NSA itself tampered with US-made routers by intercepting shipments to customers, adding backdoors and then shipping the routers on to their final destination (source: The Guardian).

Outsourcing infrastructure or applications is a matter of trust. There is a saying that ‘trust arrives on foot but leaves on horseback’.

Add the Patriot Act, the American Stored Communications Act (SCA) and the Foreign Intelligence Surveillance Amendments Act (FISAA), and many organizations, especially European and Brazilian ones, are worried about storing private data, intellectual property or any other sensitive information in a datacenter they do not own and trust. Red alert when the provider is a US company.

Microsoft admitted in 2011 that data owned by Europeans, stored in European datacenters but processed by US firms, is not safe from US authorities (source: ZDNet).

Data requests
So how often do US authorities request data from providers, and what kind of data is requested? Metadata, or actual data like the content of email? The problem is that this kind of information cannot be made public by law. Providers are not allowed to reveal court orders. They are allowed to reveal the number of orders, with a delay of six months after the order was handed over. The Guardian has an article about this.

From January to June 2013, Microsoft received fewer than 1,000 orders from the FISA court for communications content, related to between 15,000 and 15,999 “accounts or individual identifiers”.

The company, which owns the internet video calling service Skype, also disclosed that it received fewer than 1,000 orders for metadata – which reveals communications patterns rather than individual message content – related to fewer than 1,000 accounts or identifiers.

Mind that these numbers cover all Microsoft services, including Skype. So in many cases, court orders from the FISA court relate to personal accounts and not to enterprise accounts.

This is important to understand the problem.

Non-disclosure of National Security Letters or court orders (gag orders)

US authorities like the FBI and the US Department of Justice can request that a cloud/service provider hand over customer data without disclosing that request to the customer. This is a so-called gag order. The official name of such a request is a National Security Letter, or NSL.

In any cloud contract of Microsoft, and likely of every US provider as well, lines like the ones below are written:

The cloud services that Microsoft provides to are governed by contract (the "Contract"). The Contract provides that Microsoft may disclose data to satisfy legal requirements, comply with law or respond to lawful requests by a regulatory or judicial body, or as required in a legal proceeding. The Contract also provides that, unless prohibited by law, Microsoft must use commercially reasonable efforts to give notice of any such disclosures in advance, or as soon as commercially reasonable after such

Reach of Patriot Act
So how far does that notorious Patriot Act reach? When is data safe? Nobody knows for sure. It likely applies to data stored on the servers of any company located in:

– The United States;
– The European Union, with a parent company located in the United States;
– The European Union, using data processing services of a subsidiary established in the United States;
– The European Union, using a third party for data storage or data processing, like a US-based hosting company;
– The European Union, but doing structural business with a company in the United States of America.

The last one is the most unclear and open to many interpretations.

There are some other serious security issues as well when using the cloud. In 2014 Amazon supplied Windows Server images that had not been patched since 2009; auto-update was disabled. HP and GoGrid also offered images that were not up to date with the latest security patches and likewise had auto-update disabled. Microsoft was the only investigated cloud provider offering up-to-date images (source: Bkav).

So there are some serious issues to solve in cloud computing. What actions are cloud providers taking to regain trust, and how likely are those to keep the bad guys out?

  1. Object to court orders and go to court
  2. Try to change the mind of the government
  3. Offer encryption
  4. Contracts
  5. Datacenters located in the EU
  6. Operate datacenters by branches
  7. Employ non-US staff
  8. Use non-‘made in the United States’ software or hardware

Object and go to court
In several cases cloud providers like Google and Microsoft have gone to court after receiving a National Security Letter. In an interesting case in 2013, the FBI handed Microsoft an NSL including a non-disclosure requirement, and Microsoft went to court.

The FBI wanted information on an Office 365 customer. After Microsoft filed this challenge in federal court in Seattle, the FBI withdrew its letter.

Microsoft challenged the letter in court, saying the law the FBI used to obtain it violated the First Amendment, and was an unreasonable ban on free speech. 

In 2014 a Seattle judge ordered certain documents in this case to be unsealed. More information on

While this is a small success, many NSLs remain undisclosed.

Trying to change the mind of the US government
Microsoft is asking the US government for the following, as described in this June 2014 post by Microsoft:

  • End bulk collection
  • Reform the FISA Court
  • Commit not to hack data centers or cables
  • Continue to increase transparency

See this article: Microsoft presses the US government on NSA reform.

Offer encryption
Microsoft and others are doing their very best to make the NSA’s life as hard as possible. They offer encryption in almost any solution that stores on-premises-created data in Microsoft Azure, with the customer as the only one holding the encryption key. Office 365 files stored in SharePoint Online and OneDrive for Business will each have their own encryption key. So even if the NSA puts a gun to Microsoft’s head, Microsoft will not be able to hand over readable data. Microsoft is working on encrypting data travelling between Azure datacenters; Google and others already encrypt that data.

Make sure data is encrypted the moment it leaves your trusted on-premises infrastructure. How long encryption will remain effective remains to be seen: the NSA is building a datacenter with a supercomputer to decrypt AES-encrypted data (source: Forbes).

SSL traffic to and from Azure Web Sites can now be secured using Elliptic Curve Cryptography (ECC) certificates. Recovering a private key from a public key is considerably harder with ECC than with classic encryption methods at comparable key sizes. More info on ECC here.

The story of email firm Lavabit shows the power of the FBI, NSA and others. Lavabit provided encrypted email services that protect the privacy of users. Snowden was one of Lavabit’s users (and probably the reason for the FBI’s interest in the company). One day the FBI knocked on the door of Lavabit’s owner holding a court order requiring the installation of surveillance equipment on the Lavabit network. The court order also required Lavabit to hand over its SSL private keys. Lavabit objected, since that would give access to all messages to and from all customers, which it considered unfair and unreasonable.

The owner refused, found a lawyer and ended up in a court case. The result: Lavabit had to hand over five SSL private keys. Lavabit even tried to hand over the cryptographic material in printed form, stretched over 11 pages in a four-point font. (source

In the end Lavabit had to close the company. (Source: the Guardian)

Contracts
Recently Microsoft proudly announced that its contracts with customers using cloud services comply with the highest standards of the EU: privacy authorities across Europe approve Microsoft’s cloud commitments. While such a contract is useful to assure Microsoft customers that Microsoft complies with privacy laws, it is no guarantee that data is safe from the bad guys/curious types like the NSA and FBI. As Microsoft states: it will have to hand over data if requested, and does not even have to inform the customer about the handover.

Datacenters located in the EU
There are several reasons why US cloud providers offer datacenters located in the EU: first, to provide the best possible latency; second, because EU laws prohibit certain types of data from being stored outside the EU.
Data stored in an EU datacenter but processed by a US firm is by no means safe from the Patriot Act. See the story about a US judge ordering Microsoft to hand over data stored in a Dublin datacenter; Microsoft went to court. There is much information on the internet about this case, like this article.

Operate datacenters by branches of US companies
VMware entered the public cloud IaaS market a while ago with vCloud Hybrid Service. Besides four US-located datacenters, it also has one datacenter in Slough near London (UK). VMware stated at VMworld that data is safe from the Patriot Act because vCHS is operated by VMware UK. The datacenter is owned by the UK company Savvis. I do not think this can prevent US authorities with court orders from obtaining data, as VMware UK has a parent company in the US.

Employ non-US staff
Dutch telecom and IT services company KPN recently announced that its public cloud offering, CloudNL, is fully managed by Dutch administrators who are not bound by US law. This way, according to KPN, the company is not required to hand over data to the NSA, FBI and other non-Dutch organizations. However, KPN is the 100% owner of the US company iBasis. This ownership could make KPN a target for the Patriot Act, as it does ‘structural business with a US company’. However, KPN believes access by the NSA etc. via iBasis is blocked because the servers are located in Dutch datacenters. Dutch newspaper Trouw reported on it (English here). Computerworld has an interesting article on CloudNL as well.

Use non ‘made in the United States’ software or hardware
When software made by US companies is used, the NSA could have a backdoor, or the Patriot Act could impose a requirement to hand over data. So IT company Capgemini decided to build a cloud in which not a single component is made in the US. It provides software for email, calendar sharing, presentations, file sharing and video conferencing. News about this cloud offering, called Clair, was published by (translation in English

Capgemini does have about 27 offices in the US, so even that might be a backdoor.

There is a lot of uncertainty about the power of US acts like the Patriot Act. The only way to find out their reach is through legal battles in court. Not all companies offering cloud services are interested in legal battles; they have an interest in staying friends with the US authorities.

Encrypting data that could be interesting to others, and making sure you own the encryption key, is a first step in securing data.



VMware vCHS will offer pay as you go and NSX

VMware vCloud Hybrid Service is a VMware-owned and -operated Infrastructure as a Service offering. vCHS is available at the moment in the US and the UK, with datacenters located in Santa Clara CA, Las Vegas NV, Dallas TX, Sterling VA and Slough UK (west of London). VMware is likely to expand in Europe this year with datacenters in France and Germany.

The big difference from other public IaaS clouds like Microsoft Azure and Amazon is that vCHS is a so-called ‘reliable cloud’: the platform has built-in features to protect the virtual machines running on it, such as vMotion and HA. Azure and Amazon EC2 are designed so that the application must provide resiliency (‘best effort’ cloud). When Microsoft has to reboot a host, evacuating the virtual machines on that host using a migration technology is not possible.

vCHS is built using the same software used by many organizations: VMware vSphere. It offers a hybrid cloud managed from a single console.

vCHS allows virtual machines to keep their current private IP addresses when moving between on-premises and vCHS. It uses VXLAN to enable this feature.

Recently VMware added several new services. Disaster Recovery allows replication of virtual machines from an on-premises datacenter to vCHS using vSphere Replication technology. Purchase is based on an upfront investment in vCPU, memory, storage and network bandwidth. Costs start at $795/month for 1 TB of data, 10 GHz of CPU and 20 GB of RAM.

Also added recently is vCHS Data Protection, which offers backups of virtual machines running on vCHS.

In a chat session on Reddit yesterday, VMware talked about future vCHS features like NSX and pay-as-you-go.

Pay as you go
Currently, customers wanting to use vCHS have to purchase a certain amount of compute and storage resources upfront, even if they do not need all of those resources. Think of it as wanting to buy an apple while Walmart only sells a box containing 3 apples, 2 pineapples and an orange.

In H2 2014 VMware plans to launch a pay-as-you-go service option, in which you pay only for resources you have actually used. This service will include billing in arrears (pay at the end of the month). Granularity will be by the minute, aggregated across all resources consumed (not per VM).
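
As a rough sketch of what per-minute, aggregated billing in arrears could look like (the resource names and rates below are made-up assumptions, not actual vCHS prices):

```python
from collections import defaultdict

# Hypothetical usage samples: (resource, minutes consumed), collected
# across all VMs in the billing period -- not tracked per VM.
usage_events = [
    ("cpu_ghz", 60), ("ram_gb", 60),   # VM 1 ran for an hour
    ("cpu_ghz", 30), ("ram_gb", 30),   # VM 2 ran for half an hour
]

# Hypothetical per-minute rates; actual pricing was not announced.
rates_per_minute = {"cpu_ghz": 0.0005, "ram_gb": 0.0002}

def bill_in_arrears(events, rates):
    """Aggregate minutes per resource across all VMs, then price the totals."""
    totals = defaultdict(int)
    for resource, minutes in events:
        totals[resource] += minutes
    return sum(minutes * rates[res] for res, minutes in totals.items())

print(round(bill_in_arrears(usage_events, rates_per_minute), 4))
```

The point of the aggregation step is that short-lived VMs still add up to billable minutes per resource pool, rather than each VM being billed separately.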

vCHS services can be ordered via VMware Solution Providers or directly from VMware. VMware does not offer a service where a customer can register with a credit card and consume the service minutes later.

VMware NSX
NSX is the software-defined networking solution VMware gained through the acquisition of Nicira. NSX was introduced in August 2013. It is targeted at large organizations, like service providers, that have many changes in their multi-tenant networks. Instead of a network admin performing those changes manually, NSX can do it.

VMware uses NSX in the “back office” of vCHS today for management networks and is in the process of implementing NSX for customer environments. Expect support for enhanced Edge capabilities, Distributed Firewalls and more.

Microsoft announces Microsoft Azure Files

At TechEd 2014 in Houston, Microsoft announced a new service named ‘Azure Files’. The service is now in preview. The reason for this feature is to enable the migration of traditional applications requiring an SMB file share to Microsoft Azure.

Azure Files allows VMs in an Azure Data Center to mount a shared file system using the SMB protocol. These VMs will then be able to access the file system using standard Windows file APIs (CreateFile, ReadFile, WriteFile, etc). Many VMs (or PaaS roles) can attach to these file systems concurrently, allowing you to share persistent data easily between various roles and instances. In addition to accessing your files through the Windows file APIs, you can access your data using the file REST API, which is similar to the familiar blob interface.

Basically, blob storage can now be accessed over SMB just as if it were served from a Windows VM; with Azure Files there is no need for a VM to serve files. Until now, files stored in Azure Storage could only be accessed using the REST API over HTTP.

It can be compared to a traditional storage array being able to present files using SMB.

Files served by Azure Files currently appear to be accessible only from VMs running in Azure.
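
For illustration, mounting such a share from a Windows VM would look something like the command below (storage account name, share name and key are placeholders; the exact syntax may differ in the preview):

```shell
REM Map the Azure Files share as drive Z:, authenticating with the
REM storage account name as user and the storage account key as password.
net use z: \\mystorageaccount.file.core.windows.net\myshare /u:mystorageaccount <storage-account-key>
```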

Much more information here.

VMware vCloud Hybrid Service introduces lower cost storage

VMware vCloud Hybrid Service (vCHS) is an Infrastructure as a Service cloud offering owned and operated by VMware. It is available in datacenters located in the US and one datacenter located in the UK.

Besides hosting virtual servers, VMware recently added a service to use vCHS for disaster recovery.

More information on vCHS services can be found in the Service Description.

Announced today is an additional storage tier in vCHS. Besides the already available SSD-accelerated tier, a Standard tier is now available. Standard Storage is a low-cost option available with the Dedicated and Virtual Private Cloud offerings of vCloud Hybrid Service. It is cheaper and suited for low-IOPS workloads like web servers.

The storage tier used by a virtual machine hard disk can be changed by the admin. Strangely enough, the migration does not seem to use VMware vSphere Storage vMotion, which migrates virtual disks without downtime. Mind the warning shown when storage is adjusted: all virtual machines can experience downtime. It is not clear why this warning is presented.


The core subscription for Dedicated Cloud includes 6 TB of either Standard Storage or SSD-Accelerated Storage, and the core subscription for Virtual Private Cloud includes 2 TB of either. At the time of purchase, you can specify which type of storage should be included with your instance on vCloud Hybrid Service. After purchase, you can buy more of the same storage option, or of the other option, via the My VMware portal or by submitting a purchase order.

Read this VMware blog for more information. Costs of this new storage tier were not available at the time of publication of this blog.



Microsoft Azure introduces Zone Redundant Storage (ZRS)

In the coming months, Microsoft Azure will introduce a new option for customers to keep their data highly available. Zone Redundant Storage is a mix of the two currently available options for data redundancy: it replicates data to other datacenters, but is limited to three copies.

Currently Microsoft Azure (the cloud service formerly known as Windows Azure) offers customers two choices for keeping data highly available.

1. Locally redundant storage (LRS). Data is stored as three copies inside the same facility. A facility is a physical datacenter in a single region. Data is replicated synchronously.

2. Geo-redundant storage (GRS). This is the default option when creating an Azure Storage account. Data is stored as three copies inside the same, primary datacenter (like LRS). Additionally, three copies of the data are stored in a secondary datacenter located in another region, so in total the data is stored six times. Replication to the secondary location is performed asynchronously. The distance between the two regions is a couple of hundred miles; in some cases the replicated data is stored in a different country (Amsterdam–Dublin, Singapore–Hong Kong, Brazil–San Antonio, Texas).

Image source.

The main purpose of replication is disaster recovery. In case a facility has a technical issue or is unavailable (fire, flooding etc.), a copy of the data is available in another region. Mind that this replication is mainly insurance for Microsoft to meet its SLA: Microsoft staff decide whether to fail over to the replicated data in case of issues. Customers cannot initiate a failover.

Since the first week of April 2014, Read-Access Geo-Redundant Storage (RA-GRS) has been generally available. RA-GRS gives customers read access to the data stored in the secondary region. Writes to the secondary region are not possible.

A new, third option for data availability, named Zone Redundant Storage (ZRS), will become available in the coming months. It allows data to be replicated to a secondary datacenter/facility located in the same region or in a paired region. Instead of storing six copies of data like GRS, three copies are stored. So ZRS is a mix of LRS and GRS: three copies, but stored in different datacenters/facilities. I am not sure whether all Azure regions have more than one datacenter; if not, the data will be replicated to another region.

ZRS will be priced 37.5% lower than GRS when it becomes available.
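
The options above can be summarized in a small sketch. The copy counts and the 37.5% discount come from the announcement; the GRS price per GB used in the example is a made-up figure, not an actual Azure price:

```python
# Copies kept per redundancy option, as described above.
copies = {"LRS": 3, "GRS": 6, "ZRS": 3}

def zrs_price(grs_price_per_gb):
    """ZRS is announced at 37.5% below the GRS price."""
    return grs_price_per_gb * (1 - 0.375)

# With a hypothetical GRS price of $0.08/GB, ZRS would come to $0.05/GB
# while keeping half the number of copies GRS keeps.
print(round(zrs_price(0.08), 4))
```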




Download Windows Azure Symbol/Icon Set for Visio and PowerPoint

Microsoft has released a set of Visio and PowerPoint icons representing objects available in Windows Azure, for example SQL Database, VHD data disk, autoscale, etc. The icons are particularly useful for documenting applications running on the Azure Platform as a Service.

This package contains a set of symbols/icons to help you create visual representations of systems that use Windows Azure and related technologies.

The symbol set supports Microsoft Office Visio 2003 and Microsoft PowerPoint 97 or later. Users who don’t have either Microsoft application can use PNG files or the free downloadable Microsoft viewers.

Download the Azure Symbol/Icon set here.

How to capture an image of an Azure Windows Server virtual machine the safe way

Customers can create custom Windows Server images in Windows Azure based on their own baseline Windows Server image. A custom image provides a way to deploy identically configured virtual machines.

Windows Azure currently has an issue that can cause unwanted loss of a baseline image, resulting in lost work. This happens because the server on which sysprep is executed is not shut down but rebooted by Azure.

This blogpost describes a workaround.

The procedure to create an image is simple:

  1. Deploy a virtual machine using a Microsoft-supplied image or your own image
  2. Customize the guest operating system
  3. Execute sysprep
  4. Capture the guest operating system OS disk and publish it as an image

Sysprep should be run with the ‘Shutdown’ option selected. However, because of an issue in Windows Azure, in certain circumstances Azure restarts the guest when a guest-initiated shutdown is performed.

This results in customers not being able to capture the virtual machine, because after the reboot it is still running in a state waiting for input.

This issue is hard to reproduce, and in many cases customers will not encounter it. However, I encountered it four times in a row on four different servers.

Microsoft does planned maintenance on Windows Azure to install bug fixes and new features. These updates are done in batches, which means at any given time some hosts are unpatched and some are patched. Once Microsoft has patched all hosts, this problem should no longer occur.

The workaround is simple: in Sysprep, select ‘Quit’ instead of ‘Shutdown’. Then perform a shutdown initiated from the Azure management portal. When the VM has stopped, a capture can be performed.
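
Instead of the management portal, the shutdown and capture steps could also be scripted, for example with the cross-platform Azure command-line tools. This is a sketch under the assumption that those tools are installed and configured; VM and image names are placeholders, and the exact syntax should be checked against your tools version:

```shell
# After running Sysprep inside the guest and choosing 'Quit' (not 'Shutdown'):
azure vm shutdown myvm                        # shut down from outside the guest
azure vm capture myvm mybaseimage --delete    # capture the stopped VM as an image
```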

This blogpost has all the details.



Could not verify the domain when adding custom domain to Windows Azure Active Directory

Windows Azure Active Directory (WAAD) is a multi-tenant cloud-based identity management service offered by Microsoft. WAAD is used by many services of Microsoft like Office 365, Exchange Online and Windows Azure.

WAAD is used for authentication to Office 365, to Microsoft Azure and SaaS applications.

It allows for synchronization of local, on-premises Microsoft Active Directory accounts and security groups to WAAD. At creation, an Azure Active Directory has a default domain name of the form <tenantname>.onmicrosoft.com.

To be able to authenticate using an account in a customer-owned domain, so-called custom domains can be added by customers to an Azure AD.

Customers adding their domain must prove they own it. This is done by adding a DNS record to the name server that is authoritative for the domain.
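To confirm that the verification record is actually visible from the outside, a standard DNS lookup tool can be used. For example (contoso.com is a placeholder for your own custom domain):

```shell
# Query the TXT and MX records Azure AD looks for during verification.
# Replace contoso.com with your own custom domain.
dig +short TXT contoso.com
dig +short MX contoso.com

# Or, on Windows:
nslookup -type=TXT contoso.com
```

If these queries return nothing, the record never made it to the authoritative DNS server and verification will fail.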


During verification of this domain, an error can be shown: ‘Could not verify the domain’.

To check the records of your domain, an online DNS lookup service can be used. In my case this made clear that the MX and TXT records were not actually added to the DNS server.

In my situation it was solved by not typing the @ sign in the host/name field of the DNS registration form of my domain registrar. For some reason, any record I added with an @ in the name field was not actually created. The web interface, however, did not show an error indicating invalid input.

When I used the name of the domain instead of @, the domain verification was successful within minutes.




Windows Azure poster November 2013 edition

Jouni Heikniemi made a nice poster showing all the Windows Azure application types. The poster focuses on the developer side of Windows Azure, so virtual networks, for example, are not shown.

Download here.

An introduction to OpenStack and how VMware participates

OpenStack gets a lot of attention these days, fuelled further by the OpenStack Summit, which is being held in Hong Kong at this moment. Around 3,000 people are attending. Hong Kong was not chosen by accident: OpenStack has remarkable adoption in China. Beijing has the world’s highest number of Active Technical Contributors (developers making contributions to OpenStack), and Shanghai is also ranked high.

So what is OpenStack? I asked myself the same question and decided to write a small blogpost. It is certainly not complete, just some observations.

OpenStack is not a hypervisor. It is an open-technology, open source cloud operating system which enables Infrastructure as a Service in public or private clouds. Some call it the Linux of the cloud. The software manages compute, networking and storage resources, which can be provided by many underlying solutions, both commercial and open source.

Another definition of OpenStack is a “Cloud Management Platform” (CMP). A CMP is a software layer that sits on top of the infrastructure and enables a “self-service” model in which application owners can directly request and provision the compute, network and storage resources needed to deploy their applications.

Think about the same set of features which are delivered by VMware vCloud Suite.

Drivers for organizations to use OpenStack are cost savings, avoiding vendor lock-in and the use of open technology.

OpenStack has its roots at service provider Rackspace and at NASA, which jointly developed it. It was launched in July 2010 and later moved to the open source community; it is now managed by the OpenStack Foundation, which promotes the development, distribution and adoption of the OpenStack cloud operating system.

A lot of companies support OpenStack; HP, NetApp, VMware, Intel, AT&T, Red Hat and IBM are just a few names. Red Hat and IBM are two of the biggest contributors to OpenStack.

Around 5,000 developers are working on OpenStack, and a major software release ships every six months. Development on OpenStack is about as big as development on Linux.

At the moment, fewer than 200 organizations worldwide use OpenStack in production. However, quite a few proofs of concept are running.


VMware vCloud Hybrid Service roadmap

In September 2013 VMware launched vCloud Hybrid Service (vCHS). It is a VMware owned and operated public Infrastructure as a Service offering. It offers a lot of control over virtual machines and networking.

During VMworld US and Europe VMware had a lot of breakout sessions about vCHS. In the near future I will make some postings with technical details on this new service.

First, let’s have a look at the roadmap of vCHS. The information in this post is based on the various sessions I attended at VMworld and was made public by VMware.


Windows Azure adds Oracle images and new virtual machine size

Windows Azure adds new features every couple of weeks. Yesterday Microsoft added two new components to Windows Azure:

  1. A new virtual machine size: the A5 has two virtual cores and 14 GB of internal memory. The consumption price is $0.45 per hour.
  2. Oracle software preconfigured in virtual machine images. The available software includes Java, Oracle Database, Oracle Linux and Oracle WebLogic Server.
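As a quick back-of-the-envelope check, the hourly A5 price translates into a monthly figure like this (a sketch assuming a full 31-day month of continuous use):

```python
# Rough monthly cost of an A5 instance at the quoted hourly rate.
hourly_rate = 0.45          # USD per hour, as announced
hours_per_month = 24 * 31   # assuming a full 31-day month

monthly_cost = hourly_rate * hours_per_month
print(f"{monthly_cost:.2f}")  # USD for a month of continuous use
```

So running an A5 around the clock comes to roughly $335 per month.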


Oracle and Microsoft entered a strategic partnership in June 2013. Oracle will fully support running all Oracle software on Hyper-V and Windows Azure. That is quite a step, as Oracle officially does not support running its software on, for instance, VMware vSphere. Now the partnership has become visible with Oracle software being made available on Azure through virtual machine images. The software becomes available at the same time Oracle OpenWorld starts.

Note that the Windows Azure images containing Oracle Database are offered as a preview. Production deployments are not recommended and not supported. A general availability date for these services has not yet been set.

Oracle customers can bring their own licenses and apply them to Azure virtual machines. Alternatively, the preconfigured Oracle images in Azure are “license included”.

Oracle Database clustering is not currently supported on Windows Azure; only standalone Oracle Database instances are possible. This is because sharing a virtual disk in a read/write manner among multiple virtual machine instances is not currently supported in Windows Azure.

Brad Anderson, Corporate Vice President, Windows Server & System Center writes about the above news here.

For a complete overview of how to install Oracle images in Azure, and the restrictions, see this post.

Here is a Frequently Asked Questions post on running Oracle on Azure.
