I was surprised to see, when working with a client recently, that they had not implemented Storage I/O Control (SIOC) within their vSphere estate. When I enquired why, it wasn’t something that they had considered; it hadn’t featured in any of their planning discussions.
Given that they were hosting virtual machines from multiple customers, with different I/O profiles, on fewer, larger VMFS volumes, their use case is, in my opinion, a prime candidate for SIOC.
What is Storage I/O Control?
vSphere Storage I/O Control provides cluster-wide storage I/O prioritisation. It takes the same share model that is applied to CPU and RAM, which vSphere administrators know and love, and extends it to storage I/O resources.
As we know from working with CPU and RAM shares, these configured values only come into consideration when there is resource contention, giving administrators a mechanism to prioritise VM behaviour during contention. When there is no contention, there is no need to prioritise access to the resources.
SIOC works by monitoring the device latency observed by hosts communicating with an SIOC-enabled datastore. When device latency exceeds a configured threshold, the datastore is considered to be congested and, as discussed above, that is when the per-VM share values kick in.
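To make the share model concrete, here is a minimal Python sketch of proportional-share allocation under congestion. To be clear, this is not VMware’s actual SIOC implementation (which throttles by adjusting per-host device queue depths); the threshold, IOPS figures and VM names are made-up illustrative numbers.

```python
# Illustrative sketch only: proportional-share I/O allocation.
# Real SIOC throttles per-host device queue depth; this toy model
# just divides a datastore's capacity to show how shares behave.

CONGESTION_THRESHOLD_MS = 30  # hypothetical latency threshold


def allocate_iops(datastore_iops, observed_latency_ms, vm_shares):
    """Divide a datastore's IOPS capacity among VMs.

    vm_shares maps VM name -> configured share value. Below the
    latency threshold there is no contention, so shares are ignored
    and every VM may use the full capacity. Above it, capacity is
    divided in proportion to each VM's shares.
    """
    if observed_latency_ms < CONGESTION_THRESHOLD_MS:
        # No congestion: share values play no part at all.
        return {vm: datastore_iops for vm in vm_shares}
    total_shares = sum(vm_shares.values())
    return {vm: datastore_iops * shares / total_shares
            for vm, shares in vm_shares.items()}


shares = {"critical-db": 2000, "web-01": 1000, "batch-job": 1000}
print(allocate_iops(8000, 45, shares))
# Under congestion: critical-db gets half the capacity (4000 IOPS),
# the other two VMs 2000 IOPS each.
```

Note how the shares only matter once the latency threshold is breached; below it, the function hands every VM the full capacity, mirroring the "no contention, no prioritisation" behaviour described above.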
Why use Storage I/O Control?
Well, firstly, acknowledge that VM performance will be reduced whenever it must contend for any resource, be that CPU, RAM or storage I/O. In a vSphere environment, these resources are shared between many VMs, each with different resource profiles, requirements and schedules.
Before SIOC, if an administrator needed to guarantee storage I/O to a VM, we had to go down the path of dedicating volumes and LUNs to that VM. I recall on one occasion, in a previous role, having to dedicate a shelf of storage in an IBM DS8300 to a workload. That works, but it isn’t really an efficient use of resources.
By introducing SIOC to an environment, we can reduce the need to isolate critical workloads from less critical workloads at the storage tier, because we now have a dynamic control mechanism for managing storage I/O resources. It uses the same familiar share mechanism that we use to control access to CPU and RAM, without having to resort to pinning VMs to logical CPUs or reserving a VM’s RAM regardless of what it might actually be using.
If you have a vSphere environment and you don’t currently have SIOC configured, there is the potential to encounter the noisy-neighbour scenario. Simply put, without any controls on storage I/O, one VM could monopolise access to a datastore, to the detriment of every other VM on that datastore.
The above situation can be avoided by simply enabling SIOC on that datastore and leaving all values at their defaults – set it and forget it. The SIOC algorithms will ensure fairness across all VMs sharing the same datastore, as they will all, by default, have the same number of shares.
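As a quick illustration of that default fairness, the toy calculation below splits a congested datastore’s capacity when every VM carries the same share value. The 1000-share figure matches vSphere’s "Normal" disk-shares default; the IOPS capacity is an assumed number, and again this is a simplification rather than SIOC’s real queue-depth mechanism.

```python
# Toy illustration: with equal (default "Normal" = 1000) shares,
# a congested datastore's capacity divides evenly across all VMs.
DEFAULT_SHARES = 1000
vms = ["vm-a", "vm-b", "vm-c", "vm-d"]
shares = {vm: DEFAULT_SHARES for vm in vms}

datastore_iops = 10000  # assumed capacity for the example
total = sum(shares.values())
per_vm = {vm: datastore_iops * s / total for vm, s in shares.items()}
print(per_vm)
# Each of the four VMs receives an equal 2500-IOPS slice.
```

No per-VM tuning was needed to get that outcome, which is why "enable it and leave the defaults" is a reasonable starting position.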
So, at a minimum, as an administrator you could enable SIOC and forget about it. Doing that doesn’t invalidate any storage I/O sizing or placement planning, but it will protect your environment from the impact of unexpected VM storage I/O behaviour or noisy neighbours, whilst adding another tool to help you control, manage and understand resource requirements. I think that’s pretty cool.
If you don’t have it configured in your environment, perhaps that’s something worth revisiting.