Mapping the Rationale for Hybrid and Cloud DCs
It seems like wherever you look across the public cloud landscape you will find a partnership between VMware and a hyperscaler. Far from 2020 being the year of VDI, it seems to be the year to move your VMware workloads, as VMware workloads, into a hyperscaler's cloud. The current portfolio of public cloud providers where you can run VMware workloads is impressive:
- VMware Cloud on AWS
- Azure VMware Solutions
- Google Cloud VMware Engine
- Oracle Cloud VMware Solution
- IBM Cloud for VMware Solutions
- Alibaba Cloud VMware Solution
Thankfully for me, Simon Long, author of the excellent SLOG blog, has done a fantastic job of comparing and contrasting the various solutions on offer. Go and take a look and bookmark the site; if you are going to be building infrastructure in this space, I can guarantee you'll want to refer to it.
Rather than compare and contrast these services, I want to dig a little into why they have been created. All the hyperscaler clouds have IaaS models, so why has the most recent public cloud evolutionary effort been to incorporate rather than directly compete with VMware solutions? Did VMware win a battle for IaaS hearts and minds? What does that mean for the future state?
Mapping our Application
If you follow this blog, then you will have read about the Wardley Mapping techniques that I'm going to use below. If you work as an architect, or even if you do not, then I strongly recommend you supercharge your career by building an understanding of mapping and how you can use it. To understand the method a little better, divert for a second and read this Twitter thread, where Simon Wardley is to be found having a conversation with 'X'.
If you like the technique, I recommend you explore Simon Wardley's own posts on mapping or the imaginatively named wardleypedia.
The technique can be used to map technology, services, processes, concepts and a range of things that have hitherto not been imagined. I'm going to use it to map out the technological components that support an imaginary application, developing the map as we identify points of inertia and resistance and hypothesising the drivers behind them. We'll then explore the impact of the Cloud SDDC on this landscape.
The map above shows the technological components that make up a very generic application: one hosted on an OS, which connects to a DB and file systems. Plotting the components of a solution is subjective, as is placement on any map, although I don't think the placements above are really contentious.
In my experience, if you are running a data centre, then those facilities and their management are customised, even if the kit is in the basement or on the end of a desk. The same goes for networking: we might be using components that are products, purchasable off the shelf, but when we consider the broader constructs I've mapped above, such as LAN, WAN and DC network, the combination of components is heavily customised. Elements such as OSes, hypervisors, storage and compute are again productised/rentable services, although not quite utilities, given the DC locations. Devices, power and the internet are undeniably utilities in this context. The only element I've placed in genesis is DR, and that's because I would treat most DR plans I come across as 'Schrödinger's DR': nobody has any idea whether it will actually work until it's invoked.
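To make those placements concrete, here is a minimal sketch of the map's components as data. The stage labels and placements are illustrative and, as noted above, subjective; this captures only the evolution axis of a Wardley map, not visibility.

```python
# A sketch of the application map's components and their (subjective)
# positions on the evolution axis:
# genesis -> custom-built -> product/rental -> commodity/utility
application_map = {
    "DR":            "genesis",            # 'Schrödinger's DR'
    "DC facilities": "custom-built",
    "DC network":    "custom-built",
    "LAN":           "custom-built",
    "WAN":           "custom-built",
    "OS":            "product/rental",
    "Hypervisor":    "product/rental",
    "Storage":       "product/rental",
    "Compute":       "product/rental",
    "Devices":       "commodity/utility",
    "Power":         "commodity/utility",
    "Internet":      "commodity/utility",
}

for component, stage in application_map.items():
    print(f"{component:13s} -> {stage}")
```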
Migrating our Application to Cloud Native Technologies
It seems to me that for at least the past decade, everyone (including myself) has advocated moving applications to, and operating them on, cloud native technologies. If that is the advice and cloud native solutions are the future, why is the migration to cloud native platforms seemingly so tough? What is stopping the mass power-down of custom data centres across the world? What inertia and resistance prevents this kind of cloud native move?
I’ve captured and speculated on a few of the reasons that I think might contribute.
Need to deprecate previous investments
The move to cloud is therefore a question of timing. If an organisation has just spent £x million on a raft of hardware and locations to run applications internally, what incentive is there to move at any speed greater than the organisationally governed depreciation cycle of that investment? Very little, unless we start measuring things like opportunity cost. An imaginary quote following that logic might read…
“Here at Blockbuster, we’ve invested £x million in our datacenter facilities and are comfortable that we can meet the needs of our existing retail operations and customers for the foreseeable future”.
Architecture is never finished; it needs to evolve and grow, adapting to the situation and context within which an organisation is operating. Interestingly, that is exactly what makes Wardley Mapping so very useful.
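To illustrate the timing tension, here is a back-of-the-envelope sketch. Every number is invented for illustration; the point is only that a straight-line depreciation view ignores the opportunity cost column entirely.

```python
# Hypothetical figures: straight-line depreciation of a hardware investment
# versus an assumed annual opportunity cost of staying put.
investment = 5_000_000             # illustrative spend (£) on hardware and facilities
cycle_years = 5                    # organisationally governed depreciation cycle
annual_opportunity_cost = 750_000  # assumed value forgone each year by not moving

for year in range(1, cycle_years + 1):
    book_value = investment * (1 - year / cycle_years)
    forgone = annual_opportunity_cost * year
    print(f"Year {year}: book value £{book_value:>9,.0f}, "
          f"cumulative opportunity cost £{forgone:>9,.0f}")
```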
Indecision linked to political capital
Is not making a decision and being wrong better or worse than making a decision and being wrong? Probably a question for sociologists and organisational culture experts to answer. However, if you are operating in an organisation with a record of, ahem, dealing unfavourably with mistakes and issues, it takes a strong character to risk leaving the safety of the crowd.
Bias toward existing skillsets, cost of acquiring new skillsets
Everyone is biased toward what they know, fact. If you are posed with a problem, I bet that you draw on your own experiences; that is human nature. Changes to operating models and the need to train staff in new skills ask people to act against that nature, especially if we're asking for revolutionary changes in thinking and skills rather than building on what they already know through evolutionary change.
Cost and skills required to rearchitect estates
If you've been anywhere near a hyperscaler in the past decade, they will have spoken to you about Gartner's five Rs of cloud migration. These come delivered as the widely accepted Rehost, Refactor, Revise, Rebuild and Replace; some folk add a sixth R, Retire. If you are a charismatic individual with a gift for writing, you could probably make a solid short-term career by regurgitating those six Rs masquerading as insight. The problem is not with the logic, it's with the practicalities: who is actually going to do the work? How much will it cost? And what happens to those skills once it's done?
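For what it's worth, the six Rs themselves fit in a few lines; the hard part, as above, is the effort column. Everything here (application names, person-day estimates) is made up for illustration.

```python
from enum import Enum

class MigrationR(Enum):
    REHOST   = "lift and shift, minimal change"
    REFACTOR = "reshape to fit cloud native/PaaS offerings"
    REVISE   = "modify the code base, then rehost or refactor"
    REBUILD  = "rewrite on cloud native services"
    REPLACE  = "swap for a SaaS product"
    RETIRE   = "decommission entirely"

# Hypothetical portfolio assessment: (chosen R, estimated person-days)
portfolio = {
    "payroll":      (MigrationR.REPLACE, 30),
    "intranet":     (MigrationR.RETIRE,   5),
    "order-engine": (MigrationR.REBUILD, 400),
}

total_days = sum(days for _, days in portfolio.values())
print(f"Estimated effort across the portfolio: {total_days} person-days")
```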
Past success, past data
You've got buckets and buckets of data telling you that what you've done has worked; as the phrase goes, 'if it ain't broke, don't fix it' (see the Blockbuster quote above). Following similar logic, we can also attribute rising global temperatures to the decline in pirates. Correlation does not indicate causation, and past successes do not guarantee future glory.
Current operating models and changes to governance and management
Change is hard to do well, and change involving governance will generally move at such a glacial pace that it's hard to know if anything is actually changing. I suspect this is how organisations end up migrating current data centres like for like to a hyperscaler IaaS solution without touching any cloud native technologies. Congratulations! You've just swallowed a heap of disruption to change the format of your virtual machines.
Supplier challenges, need to forge new relationships and multiple sourcing options
We like our current suppliers; new companies are strange, building new relationships is hard, and moving from a CAPEX to an OPEX model feels hard. In IT we embrace, or at least accept, change, because if we didn't, not a lot would get done (although CAB meetings would become much more straightforward). Do you know where they don't like change? Procurement. What we now need to do is take what is probably a government-backed set of standards and processes from the '90s and try to apply it to companies that might be renamed by the end of the week.
Supplier switching and the fear of lock-in
The fear of lock-in is a very real thing when it comes to cloud and hyperscalers. I've never really understood this fear, especially when it is most often vocalised by organisations that only run Windows servers and desktops, Office 365, Active Directory and Exchange… Ostensibly, that doesn't seem like an organisation that is scared of lock-in, just one that doesn't want to move from the status quo.
Loss of strategic control, direction and agency
If I'm not in control of all components of my organisation's strategy, how can I effectively set the strategic direction? Apart from during the free time I gain from not having to manage a whole section of infrastructure? Just how strategic are the components in the data centre? Obviously if block storage became unavailable that would create a serious impediment to operations, but what strategic value does that storage hold in the context of the organisation?
VMware SDDC in the Hyperscaler Cloud
I started this piece by speculating about what the acceleration of VMware services within hyperscaler clouds means and why so many hyperscaler clouds have moved in this direction. To help answer this I've added the Cloud DC element to our application map.
If we work through the map we’ll find that many of the previous points of inertia and resistance have been addressed.
- Bias toward existing skillsets, cost of acquiring new skillsets
- All of the VMware cloud solutions address the bias and concerns around existing skills, because there is no need to change any skills. Within each of the hyperscaler offerings, access is consistent via the VMware tooling, tooling that was present in approximately 80% of the virtualisation and data centre market in 2017.
- Cost and skills required to rearchitect estates
- No change in tooling and a consistent platform mean there is no immediate requirement to worry about the six Rs. Organisations can take a longer view of their application estates and what they plan to do with them.
- Past success, past data
- Again, this doesn't really change anything other than the DC the workloads are being hosted in, so that past data is still referenceable.
- Current operating models and changes to governance and management
- No change to governance or management: keep managing workloads and applications as per the current model. You do get to move out of the business of patching ESXi hosts and other VMware tooling, which I would hope is viewed as a positive.
- Supplier switching and the fear of lock-in
- Well, the good news is you're running workloads on VMware. However, you do now need a relationship with a hyperscaler as well.
Where this doesn't quite cover the points of inertia and resistance is set out below.
- Need to deprecate previous investments
- If you spent £x million on a five-year depreciation cycle, shuttering that investment to move to a cloud SDDC might not win many friends, unless of course you are using situational awareness to help identify opportunities.
- Indecision linked to political capital
- Moving a DC to the cloud can be cool, and technology is cool, but having a supportive work culture that allows for experimentation and learning from mistakes is way cooler. The bad news is that technology is not going to fix any cultural issues.
- Supplier challenges, need to forge new relationships and multiple sourcing options
- If procurement is desperate to work in a CAPEX model, then you'll be surprised how willing any vendor is to take CAPEX money as an upfront commitment. However, how good are most organisations at forecasting demand over a 12-month window? Of course, there are now companies operating on the sole premise that you can't forecast at all, and they will charge a handsome fee for sorting out your mistakes.
- Loss of strategic control, direction and agency.
- Moving the DC to the cloud does mean you no longer control a lengthy vendor selection process for new storage, servers or network switches. However, if that was what got you up in the morning, then you should question the value you bring to your organisation. I would suggest that, apart from some niche cases, you retain control of any technological component that is of critical importance to your organisation's strategy and success.
We should update the map, assuming that we’ve decided to move our VMware workloads as VMware workloads to hyperscaler clouds.
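Sticking with the earlier data sketch, the update amounts to shifting a handful of components to the right; placements remain illustrative and subjective.

```python
# How the Cloud SDDC moves components along the evolution axis: what was
# custom or self-managed becomes something rented from the hyperscaler.
shifts = {
    "DC facilities": ("custom-built",   "commodity/utility"),
    "DC network":    ("custom-built",   "product/rental"),
    "Hypervisor":    ("product/rental", "commodity/utility"),
    "Compute":       ("product/rental", "commodity/utility"),
    "Storage":       ("product/rental", "commodity/utility"),
}

for component, (before, after) in shifts.items():
    print(f"{component:13s}: {before} -> {after}")
```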
We’re in the Cloud! What Next?
Well, this is the point; this was the goal. It might not have been your initial goal, but it is most definitely the goal of the hyperscaler cloud vendors. Now you are operating a VMware data centre in the cloud, the options available to explore the 'six Rs' are myriad; the inertia and resistance are reduced as cloud, albeit within the familiar VMware setting, becomes the new normal. A familiar setting that doesn't need you to feed and water it; that is all taken care of by the hyperscaler, which allows more time to focus on those 'six Rs', the art of the possible and delivering value back to the organisation.
Simply put, now that you are running workloads within the hyperscaler public clouds, the drive, focus and organisational desire to move workloads, services and solutions to a cloud native footing is really going to increase from here. As I said earlier, architecture is never finished; it needs to adapt to the situation and context within which an organisation is operating. However, congratulations! It's now much easier to integrate with, explore and evaluate these cloud native services without having to start from scratch.
An oft-discussed first step, or commonly adopted pattern, is reducing the managed infrastructure footprint of hosted data by moving database, table, file or whatever other storage to PaaS and SaaS solutions.
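As a minimal sketch of that pattern: the application's data access code stays put and only the endpoint changes, from a database on a VM you patch and back up yourself to a managed service. The hostnames and credentials below are entirely invented.

```python
import os

# Before: a database on a VM that you patch, back up and fail over yourself.
SELF_MANAGED_DSN = "postgresql://app:secret@db-vm-01.internal:5432/orders"

# After: the same engine consumed as a managed PaaS service; patching,
# backups and failover become the provider's problem. Endpoint is illustrative.
MANAGED_DSN = os.environ.get(
    "DATABASE_URL",
    "postgresql://app:secret@orders.example-paas-provider.com:5432/orders",
)

print(f"Application now connects to: {MANAGED_DSN}")
```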
Summary
The blueprint is both clever and, I think, pretty clear. The overwhelming majority of organisations are not Uber or Spotify; they are not born in the cloud. As we can see from the figures shared above, around 80% of them have workloads hosted and developed using VMware tooling. As a hyperscaler, building a VMware service in the cloud gives you a better chance of capturing a larger part of the market. And as the hyperscaler, do you really care if a percentage of that new spend goes to VMware?
In the business of hyperscaler clouds, data is king. I don't mean data in the context of ingress and egress (although that will be part of it), but in the context of how services are being used. The data gathered from how customers use, combine, build and develop solutions with the provided services gives the provider a significant advantage when deciding where to invest, what to develop and what to buy. As hyperscaler platform customers find new value by combining offered services (now including VMware solutions), hyperscalers are able to see trends, pick winning solutions and either purchase them or develop competition, maintaining relevance and staying closer to the prize, which is actual delivered value.
Now in business if you can hold on to both delivering measurable value and remaining relevant in the eyes of your customers, then you have a winning ticket.
At the start of what is now a much longer post than I intended, I asked the following:
- All the Hyperscaler clouds have IaaS models, so why has the most recent public cloud evolutionary effort been to incorporate rather than directly compete with VMware solutions?
- Did VMware win a battle for IaaS hearts and minds?
What I think is clear is that the hyperscaler cloud vendors realised the fastest way to beat resistance and inertia to cloud migration was to bypass most of it by developing VMware solutions. And yes, I think VMware has sort of won a battle for IaaS hearts and minds.
- What does that mean for the future state?
As I said above, VMware has sort of won an IaaS battle here, or at least there has been a realisation that in a straight-out IaaS fight nobody really wins. However, that isn't a victory that comes for free. It creates a much simpler path for VMware customers to move to cloud native, and it means that as more customers take up these offerings, the metadata on how the solutions are being used is shared with the hyperscaler providers.
That’s a big problem for VMware, right?
Not really.
VMware has far more to offer than just a hypervisor. At a quick glance you can see the direction changing rapidly, away from the business of just hosting workloads (although it still does this excellently) and toward a focus on the application and the developer. If you consider the full solution offerings from VMware, you will notice that a number of them are hypervisor agnostic: solutions such as Workspace One, NSX, vRealize, CloudHealth and Tanzu. Tanzu, for example, is a suite of products and consultative offerings focused on building, running, managing and protecting cloud native applications regardless of the hyperscaler or platform you choose to run them on.
If you thought VMware was just a hypervisor running in the server room, you really need to look again.
Thanks
Simon