2nd December 2019

The impact of gravity on the Cloud

The impact of gravity on the cloud, what am I talking about now…

Gravity is defined as “the force that attracts a body towards the centre of the earth”, which can be extrapolated to add “or towards any other physical body having mass”.  It’s a force that has shaped everything in our universe, and a topic that people spend a lifetime investigating and exploring.  So why am I writing about gravity on this of all blogs?

Because, when defining an architecture for a new solution, or considering whether to re-host, re-factor or re-platform, there are two elements that can be viewed as having gravity:

Capabilities and Data.

Capabilities have gravity

Have you ever wondered why, in an organisation, the same tool might be used over and over for the delivery of services?

If you’ve ever worked in the UK public sector, maybe you’ve seen a department in which the tool of choice is an Access DB.  Maybe it’s SQL or Oracle.  Maybe somebody has developed complex macros in Word to display information in tables.  Maybe you work at an organisation that uses Salesforce and has access to Tableau, but all reporting is presented in Excel or PowerBI?  Why is that?  What’s going on there?

“To a hammer everything is a nail”

What I’ve tried to describe are a few examples of the gravitational effect of capability.

The public sector department making use of Access really wasn’t doing anything wrong (as hard as some might find that to believe).  They had a problem to solve and were naturally drawn toward the capabilities that they had available to resolve it.  Their existing capabilities exerted a gravitational pull on the choices that were made.

We can say the same of the team building macros in Word, or those exporting data from Tableau to Excel and PowerBI.  It’s not that they are doing anything fundamentally wrong, but again we can demonstrate that in each case they gravitated toward existing capabilities.

Infrastructure Capabilities also have gravity

The infrastructure teams managing IT estates can also be observed to follow this principle.

Take the storage team that has spent the last 30 years working with, learning and training on block-based storage.  When responding to the storage requirements of a new application, is its initial response: block-based?  Provision a LUN?  What I/O is needed?

Take the compute team that has spent the last 20 years managing a combination of hypervisors and x86 servers.  When responding to the same request, is its initial response: how many vCPUs?  RAM?  What OS?  SLA?  Backups?

The networking team that has been working with physical infrastructure for the last 30 years: is its response, what IPv4 subnet size?  Throughput?  What physical ports?  Firewall rules?

The security team…  Is its response to provide a list of agents that have to be installed, alongside a document of all their requirements that must be tested and met?

The response is to gravitate toward existing capabilities and the solutions that the teams are trained to utilise.

So What?

Well in a word, Cloud.

Cloud computing, and the operational model that it provides, gives those that can utilise it a mechanism to operate and demonstrate business agility at a scale that was not previously possible.

The cloud allows for business agility that enables a department leader to ignore the target operating model of their IT department and consume services directly, feeling confident that the benefits outweigh any negative response.  The industry has a term for this: ‘Shadow IT’.

“Shadow IT, also known as Stealth IT or Client IT, are Information technology (IT) systems built and used within organisations without explicit organisational approval, for example, systems specified and deployed by departments other than the IT department”

However, to the department consuming those cloud services, which more closely aligns to shadow IT?

The SaaS service that is billed upon consumption and provides well defined outcomes to a department from day one?  Or the corporate data centre that takes weeks to deliver basic building blocks, which then require additional weeks of manipulation and configuration to deliver the same defined outcomes?

So the ‘shadow’ part of shadow IT really depends on who is observing.

So what has ‘Shadow IT’ got to do with ‘Capability Gravity’?

What if the IT department had innovation built into its mandate?  What if it were encouraged to try, adopt and consume new operational models?  What would happen?  Would we have such a proliferation of Shadow IT?

So the gravitational impact of capability can be reduced through innovation.

Innovation here means offering staff training on new technologies, giving them the freedom to explore where those technologies might fit into the existing estate, embracing them, and accepting the chance that things might go wrong, or be a little uncomfortable, whilst this is all happening.

Investing in and encouraging teams to innovate is great for assisting with capability gravity, but it does nothing for data gravity.

Data has gravity

In the same way that a physical body having mass exerts gravity, so data exerts a gravitational force on the applications that want to consume it.  Many organisations have a cloud first strategy, which is great, but how do you deliver a cloud first solution when the data you need to access is stored on an ageing platform, located in the darkest depths of the data centre?  How do you connect and develop against this data when it is deemed critical to the life of the organisation?

How do you work with the speed of light?  Those cloud native services are hosted somewhere, and they have to access services in your data centre.  What round trip time on a network packet can you work with?
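To put a rough number on that, here’s a back-of-the-envelope sketch.  The distances and the 50-call figure are illustrative assumptions, not measurements from any real environment:

```python
# Back-of-the-envelope: physics puts a hard floor under round trip time.
# The distance and call count below are illustrative assumptions only.

SPEED_OF_LIGHT_KM_S = 300_000   # vacuum, km/s
FIBRE_FACTOR = 0.67             # light travels at roughly 2/3 c in optical fibre

def min_rtt_ms(distance_km: float) -> float:
    """Theoretical minimum round trip time in milliseconds over fibre."""
    one_way_seconds = distance_km / (SPEED_OF_LIGHT_KM_S * FIBRE_FACTOR)
    return one_way_seconds * 2 * 1000

# A cloud region 300 km from the data centre: ~3 ms RTT before any
# routing, queuing or application overhead is even considered.
print(round(min_rtt_ms(300), 1))        # 3.0

# A cloud native service making 50 sequential calls back to on-premises
# data pays that floor on every single call.
print(round(50 * min_rtt_ms(300), 1))   # 149.3
```

Three milliseconds sounds like nothing, until an application makes its calls sequentially and those milliseconds stack into something a user can feel.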

The speed of the DC

Working in any IT department, there will generally be one application that is considered either too complicated or too risky to allow any kind of innovation.  By coincidence, this also tends to be the application that stores the most critical data for the organisation.  The data that, if it became unavailable for even a short period of time, would see the business fail.  The data that even the department heads embracing shadow IT don’t touch.

By further coincidence, it is also the data that if it could be made available for innovation, would have a transformational impact.  Organisations go to great lengths to try and innovate with this data, with mixed success.

I’ve seen a few examples of this over the years.

Customers who extract the data into different formats multiple times a day and access the copies.  Customers who keep live and test versions of the data synchronised overnight and only permit access to the test copy.  One customer who surrounded their data with not one, not two, but three different orchestration layers (of varying ages) to provide access to the data from different endpoint locations.  The last I heard, they were busy working out how to integrate a 4th layer of orchestration to provide native cloud service connectivity.

Why did that customer add that level of complexity, those orchestration layers, to its data?  Why did the others create refactored copies?

What each of these has in common is an attempt to break away from the gravitational pull of the data: either by creating new copies that can be modified, broken or refactored without impact to the organisation, or by adding complexity and abstracting the data connection to a higher point in the infrastructure.  It was all done to try and reduce the risk that comes with reading from, writing to, or otherwise interacting with this critical data.

In each of those cases the data being accessed was in formats that are no longer supported, running on operating systems that are no longer available, or managed by teams whose members have since left.  This data, since its creation, was always considered critical, so it was always far too easy to push back against any updates, innovation or modernisation, often justified against the findings on a risk register.

Much like gravity, where the relative mass of the object determines the strength of the gravitational force, we can consider that the risk impact associated with the data being unavailable directly relates to the gravitational force of the data.  Or, in simpler terms, the amount of convincing required before anyone will let you do anything innovative or transformational with it.

So what can we do?

In some circumstances, there are no simple choices left.

Do nothing.  Order buckets of sand all around.  Ignore the problems until the organisation is made irrelevant by another’s innovation.

You could engage in what will most likely be a lengthy, risk-filled data migration and refactoring project to bring the data up to a viable format suitable for innovation.

However, one beacon of hope might be if the data is running inside VMware virtual machines.  You could then build on VMware innovation and take advantage of solutions such as VMC on AWS, VMware Cloud on Azure or VMware on Google Cloud.  Lifting the data up from the foundations of the data centre to the cloud reduces the effects of gravity and the implications of the speed of light.  Moving the data to a hosting platform that enables an organisation to easily access it from cloud native solutions might just buy enough time to look at a less painful version of a data migration and refactoring project.


Both capability and data have gravity; accept this fact.  Formulating a plan for how to either break or negate these forces is vital to the survival and continued innovation of any organisation.

Stop your plans to have cloud native solutions publish/subscribe to cloud message brokers, which in turn publish/subscribe to on-premises message brokers, which hold queues for on-premises services… (data consistency, anyone?).  To continue the analogy of this post, that is the application equivalent of building a space elevator: when it works it’ll look fantastic, just hope you don’t have to fix it when catastrophe strikes!
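One more back-of-the-envelope sketch shows why each extra hop in that broker chain hurts.  The 99.9% figure is an assumed per-component availability for illustration, not a quote from any vendor SLA:

```python
# Rough sketch: end-to-end availability of a serial chain of components.
# The 99.9% per-component availability is an illustrative assumption.

HOURS_PER_YEAR = 8760

def chain_availability(per_component: float, components: int) -> float:
    """A serial chain is only up when every component in it is up."""
    return per_component ** components

# cloud app -> cloud broker -> on-prem broker -> on-prem service = 4 components
availability = chain_availability(0.999, 4)
downtime_hours = (1 - availability) * HOURS_PER_YEAR

print(f"{availability:.4f}")          # 0.9960
print(f"{downtime_hours:.0f} hours")  # 35 hours
```

Four nines-ish components chained in series quietly become a system with well over a day of expected downtime a year, and that’s before anyone mentions duplicate or out-of-order messages.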

Thanks for reading