At VMworld this week VMware announced the VMware Cloud on AWS service, giving organisations an option to run applications across private, public and hybrid cloud environments using the familiar VMware vSphere tooling. The service combines vSphere, NSX and vSAN to provide an SDDC built on bare-metal AWS infrastructure. So what does that actually mean, and how are the components configured? Initial details are emerging as to how it will operate and, more importantly, what gaps there might be.
The initially available service is based upon the following cluster configuration: four hosts, each with 512 GB RAM and dual Intel Xeon E5-2686 v4 CPUs clocked at 2.3 GHz, with 18 cores per CPU and Hyper-Threading enabled. That gives an initial cluster of 144 physical cores (288 logical) and 2,048 GB RAM. The host configuration is locked at this time; by that I mean you will not be able to modify or request custom host hardware, at this stage. With VMware Cloud on AWS being offered in an ‘aaS’ model, perhaps future sizing options will be introduced, but I suspect customisation is out. Scaling the cluster out to 16 nodes gives a very impressive 576 physical cores (1,152 logical) and 8 TB of RAM.
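The sizing above is straightforward arithmetic from the fixed host specification. A quick sketch (purely illustrative, not any official sizing tool) reproduces the quoted figures:

```python
# Fixed host specification for the initial VMware Cloud on AWS offering,
# as quoted in the announcement.
CORES_PER_CPU = 18      # Intel Xeon E5-2686 v4
CPUS_PER_HOST = 2
RAM_GB_PER_HOST = 512

def cluster_capacity(hosts: int) -> dict:
    """Return aggregate compute capacity for a cluster of the given size."""
    physical = hosts * CPUS_PER_HOST * CORES_PER_CPU
    return {
        "physical_cores": physical,
        "logical_cores": physical * 2,   # Hyper-Threading doubles logical cores
        "ram_gb": hosts * RAM_GB_PER_HOST,
    }

print(cluster_capacity(4))   # the base four-host cluster
print(cluster_capacity(16))  # the maximum 16-node cluster
```

The four-host result matches the 144 physical / 288 logical cores and 2,048 GB RAM above, and the 16-node result gives the 576 / 1,152 cores and 8 TB (8,192 GB) of RAM.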
vSphere DRS and HA are both configured: DRS runs in fully automated mode, with the migration threshold set to avoid excessive vMotion operations. HA is configured to guarantee available resources during an ESXi host failure in the standard N+1 configuration that most administrators will be familiar with. The host isolation response is set to power off and restart VMs. As this is being sold as a service, host failures and their remediation are VMware's responsibility. If a host blows up, it is VMware's job to replace it.
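The N+1 configuration has a simple capacity implication worth keeping in mind when sizing workloads: one host's worth of resources is held back for failover. A hypothetical sketch of that arithmetic:

```python
# N+1 admission control, sketched: reserving one host's worth of
# resources means the usable share of an n-host cluster is (n - 1) / n.
def usable_fraction(hosts: int, reserved_hosts: int = 1) -> float:
    """Fraction of cluster resources usable after HA failover reservation."""
    return (hosts - reserved_hosts) / hosts

print(usable_fraction(4))   # base 4-host cluster: 75% usable
print(usable_fraction(16))  # 16-node cluster: ~94% usable
```

The overhead of the reservation shrinks as the cluster scales out, which is one more argument for larger clusters where the workload justifies them.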
The Cloud SDDC cluster is configured with two resource pools: one containing the management VMs required to operate the SDDC, and the other for customer workloads, with the option to create child resource pools as required.
The ‘as a Service’ element of this will be a welcome addition for many organisations who want to free up staff to undertake higher-value work rather than keeping the lights on.
As already covered, the service offering includes vSAN, manifested as an all-flash vSAN datastore with each host contributing 10 TB of raw storage capacity. The default four-node cluster therefore provides 40 TB of all-flash vSAN storage. How that storage is configured is up to the customer, with protection options ranging from RAID-1 through to RAID-5 and RAID-6 (RAID-6 requires a six-node cluster). The right choice will depend entirely on your use case and requirements.
Each ESXi host in a provided SDDC cluster is equipped with eight Non-Volatile Memory Express (NVMe) devices. These eight devices are split across two vSAN disk groups; within each disk group, one device serves the write-caching tier and three provide the capacity tier.
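The choice of storage policy has a direct effect on how much of that raw capacity is usable. A rough, hedged estimate, using the standard vSAN space multipliers (RAID-1 mirrors every object; RAID-5 is 3+1 parity; RAID-6 is 4+2 parity) and treating the quoted 10 TB per host as capacity-tier raw space. Real-world usable capacity will be lower once slack space, checksums and other overheads are accounted for:

```python
# Back-of-envelope vSAN usable capacity under different storage policies.
# Assumes the 10 TB/host quoted figure is raw capacity-tier space;
# ignores slack space, checksum and metadata overheads.
RAW_TB_PER_HOST = 10

OVERHEAD = {
    "RAID-1": 2.0,       # two full copies of every object (FTT=1 mirroring)
    "RAID-5": 4 / 3,     # 3 data + 1 parity component
    "RAID-6": 6 / 4,     # 4 data + 2 parity components
}

def usable_tb(hosts: int, policy: str) -> float:
    """Approximate usable capacity in TB for a given cluster size and policy."""
    return hosts * RAW_TB_PER_HOST / OVERHEAD[policy]

print(f"RAID-1, 4 hosts: {usable_tb(4, 'RAID-1'):.1f} TB")  # 20.0 TB
print(f"RAID-5, 4 hosts: {usable_tb(4, 'RAID-5'):.1f} TB")  # 30.0 TB
print(f"RAID-6, 6 hosts: {usable_tb(6, 'RAID-6'):.1f} TB")  # 40.0 TB (six-node minimum)
```

The gap between 20 TB and 30 TB usable on the same four-node cluster illustrates why the RAID-5/6 erasure-coding options matter for capacity-driven use cases, at the cost of some write amplification.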
VM-level encryption will not be available as part of the initial offering; I assume it will feature on the service road map. Encryption in the initial offering is instead provided at the firmware level of the NVMe devices, with encryption keys managed by AWS and not exposed to VMware or other AWS customers.
With compliance requirements in certain industries mandating data-at-rest encryption in deployed cloud services, I am sure the ability for customers to manage encryption at the storage layer will be eagerly anticipated. I can certainly foresee some industries delaying adoption until it becomes available.
We know from the announcement that the initial offering is available from a single AWS availability zone only. When the service extends further, multi-zone availability will be offered by stretching the SDDC cluster across two availability zones in the same region, extending HA across zones and providing significant RPO and RTO benefits. It will be interesting to see whether the service develops to include stretching clusters across AWS regions.
The final part of the SDDC is networking, which in this offering is provided by NSX. NSX abstracts the AWS VPC networks to present hosts and VMs with logical networks that are provisioned automatically as the solution is scaled out. In the initial offering, connections to the Cloud SDDC are made via two IPsec layer 3 VPN connections: one dedicated to management, connecting the on-premises vCenter Server to the SDDC management components, and the other providing connectivity between on-premises workloads and those hosted in the SDDC Cloud. All networking and security is provided by NSX. The compute gateways and distributed logical router (DLR) are pre-configured, with customers only needing to supply subnets and IP ranges.
With the initial offering dependent upon VPN technology, integration with the AWS Direct Connect offering will again be eagerly anticipated. Whilst most organisations will be happy with the level of security an IPsec VPN tunnel offers, the reliance on the telecommunications ‘middle mile’ might see some organisations delay adoption.
VMware Cloud on AWS is exciting. Many organisations cast envious eyes towards the cloud but struggle with adoption, in part due to skills gaps or retraining requirements. Whilst the technology is amazingly cool, the exciting thing from a strategic capability point of view is that VMware Cloud on AWS allows an organisation to take steps into the cloud whilst leveraging existing skill-sets.
Whilst there are some issues that will give organisations pause, such as storage encryption and VPN dependence, once those features are introduced and combined with the Pivotal Container Service (PKS), it will take a very convincing argument from other vendors, and Microsoft in particular, to make inroads into the hybrid cloud market.