12th September 2017

Objective 3.2 – Configure Software-Defined Storage

Continuation of Objective 3 with Objective 3.2 – Configure Software-Defined Storage. Again, there is a lot of material covered here, with a great deal of information sourced mainly from the VMware documentation centre and reformatted to match the certification blueprint.

As always, the VCP6.5-DCV Blueprint post has been linked to this post.

Happy Revision

Simon

Objective 3.2 – Configure Software-Defined Storage

Create vSAN cluster

Characteristics of a vSAN Cluster

Before working on a vSAN environment, you should be aware of the characteristics of a vSAN cluster.

A vSAN cluster includes the following characteristics:

  • You can have multiple vSAN clusters for each vCenter Server instance. You can use a single vCenter Server to manage more than one vSAN cluster.
  • vSAN consumes all devices, including flash cache and capacity devices, and does not share devices with other features.
  • vSAN clusters can include hosts with or without capacity devices. The minimum requirement is three hosts with capacity devices. For best results, create a vSAN cluster with uniformly configured hosts.
  • If a host contributes capacity, it must have at least one flash cache device and one capacity device.
  • In hybrid clusters, magnetic disks are used for capacity and flash devices are used for the read and write cache. vSAN allocates 70 percent of all available cache for the read cache and 30 percent for the write buffer. In this configuration, the flash devices serve as both a read cache and a write buffer.
  • In all-flash clusters, one designated flash device is used as a write cache and additional flash devices are used for capacity. In all-flash clusters, all read requests come directly from the flash pool capacity.
  • Only local or direct-attached capacity devices can participate in a vSAN cluster. vSAN cannot consume other external storage, such as SAN or NAS, attached to the cluster.

Requirements for vSAN Cluster

ESXi Hosts

  • Verify that you are using the latest version of ESXi on your hosts.
  • Verify that there are at least three ESXi hosts with supported storage configurations available to be assigned to the vSAN cluster. For best results, configure the vSAN cluster with four or more hosts.

Memory

  • Verify that each host has a minimum of 8 GB of memory.
  • For larger configurations and better performance, you must have a minimum of 32 GB of memory in the cluster.

Storage I/O controllers, drivers, firmware

  • Verify that the storage I/O controllers, drivers, and firmware versions are certified and listed on the VMware Compatibility Guide (VCG) Web site.
  • Verify that the controller is configured for passthrough or RAID 0 mode.
  • Verify that the controller cache and advanced features are disabled. If you cannot disable the cache, you must set the read cache to 100 percent.
  • Verify that you are using controllers with higher queue depths. Using controllers with queue depths less than 256 can significantly impact the performance of your virtual machines during maintenance and failure.

Cache and capacity

  • Verify that vSAN hosts contributing storage to the cluster have at least one cache device and one capacity device. vSAN requires exclusive access to the local cache and capacity devices of the hosts in the vSAN cluster. They cannot share these devices with other uses, such as Virtual Flash File System (VFFS), VMFS partitions, or an ESXi boot partition.
  • For best results, create a vSAN cluster with uniformly configured hosts.

Network connectivity

  • Verify that each host is configured with at least one network adapter.
  • For hybrid configurations, verify that vSAN hosts have a minimum dedicated bandwidth of 1 GbE.
  • For all-flash configurations, verify that vSAN hosts have a minimum bandwidth of 10 GbE.

Enabling vSAN

To use vSAN, you must create a host cluster and enable vSAN on the cluster.

A vSAN cluster can include hosts with capacity and hosts without capacity. Follow these guidelines when you create a vSAN cluster.

  • A vSAN cluster must include a minimum of three ESXi hosts. For a vSAN cluster to tolerate host and device failures, at least three hosts that join the vSAN cluster must contribute capacity to the cluster. For best results, consider adding four or more hosts contributing capacity to the cluster.
  • Only ESXi 5.5 Update 1 or later hosts can join the vSAN cluster.
  • All hosts in the vSAN cluster must have the same on-disk format.
  • Before you move a host from a vSAN cluster to another cluster, make sure that the destination cluster is vSAN enabled.
  • To be able to access the vSAN datastore, an ESXi host must be a member of the vSAN cluster.

After you enable vSAN, the vSAN storage provider is automatically registered with vCenter Server and the vSAN datastore is created.

Set Up a VMkernel Network for vSAN

To enable the exchange of data in the vSAN cluster, you must provide a VMkernel network adapter for vSAN traffic on each ESXi host.

  • In the vSphere Web Client, navigate to the host.
  • Click the Configure tab.
  • Under Networking, select VMkernel adapters.
  • Click the Add host networking icon to open the Add Networking wizard.
  • On the Select connection type page, select VMkernel Network Adapter and click Next.
  • Configure the target switching device.
  • On the Port properties page, select vSAN traffic.
  • Complete the VMkernel adapter configuration.
  • On the Ready to complete page, verify that vSAN is Enabled in the status for the VMkernel adapter, and click Finish.

The vSAN network is now enabled for the host.
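For reference, the same VMkernel configuration can be scripted with PowerCLI. A minimal sketch, assuming a standard switch named vSwitch0; the host name, port group name, and IP addressing are purely illustrative:

# Connect to vCenter Server (hypothetical address)
Connect-VIServer -Server vcsa.lab.local

# Create a VMkernel adapter with vSAN traffic enabled
$vmhost = Get-VMHost -Name esxi01.lab.local
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch vSwitch0 -PortGroup "vSAN" `
    -IP 192.168.50.11 -SubnetMask 255.255.255.0 -VsanTrafficEnabled $true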

Create a vSAN Cluster

You can enable vSAN when you create a cluster.

  • Right-click a data center in the vSphere Web Client and select New Cluster.
  • Type a name for the cluster in the Name text box. This name appears in the vSphere Web Client navigator.
  • Select the vSAN Turn ON check box and click OK. The cluster appears in the inventory.
  • Add hosts to the vSAN cluster.

Enabling vSAN creates a vSAN datastore and registers the vSAN storage provider. vSAN storage providers are built-in software components that communicate the storage capabilities of the datastore to vCenter Server.
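Cluster creation with vSAN enabled can also be scripted. A minimal PowerCLI sketch, assuming a datacenter named DC01; all names are illustrative:

# Create a cluster with vSAN enabled and automatic disk claiming
New-Cluster -Name "vSAN-Cluster" -Location (Get-Datacenter -Name "DC01") `
    -VsanEnabled -VsanDiskClaimMode Automatic

# Move existing hosts into the new cluster
Get-VMHost -Name esxi01.lab.local, esxi02.lab.local, esxi03.lab.local |
    Move-VMHost -Destination (Get-Cluster -Name "vSAN-Cluster")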

Create disk groups

Create a Disk Group on a vSAN Host

You can manually combine specific cache devices with specific capacity devices to define disk groups on a particular host.

In this method, you manually select devices to create a disk group for a host. You add one cache device and at least one capacity device to the disk group.

  • Navigate to the vSAN cluster in the vSphere Web Client.
  • Click the Configure tab.
  • Under vSAN, click Disk Management.
  • Select the host and click the Create a new disk group icon.
  • Select the flash device to be used for cache.
  • From the Capacity type drop-down menu, select the type of capacity disks to use, depending on the type of disk group you want to create (HDD for hybrid or Flash for all-flash).
  • Select the devices you want to use for capacity.
  • Click OK.

The new disk group appears in the list.
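Disk groups can also be created with PowerCLI. A minimal sketch, assuming the VMware.VimAutomation.Storage module; the device canonical names are placeholders you would replace with real ones:

# List the host's local devices to find cache (SSD) and capacity candidates
$vmhost = Get-VMHost -Name esxi01.lab.local
Get-ScsiLun -VmHost $vmhost | Select-Object CanonicalName, CapacityGB, IsSsd

# Combine one cache device with two capacity devices into a disk group
New-VsanDiskGroup -VMHost $vmhost -SsdCanonicalName "naa.500a07510f86d6b3" `
    -DataDiskCanonicalName "naa.5000c50057b932a7", "naa.5000c50057b93425"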

Monitor vSAN

Monitor the vSAN Cluster

You can monitor the vSAN cluster and all the objects related to it.

  • Navigate to the vSAN cluster in the vSphere Web Client.
  • Click the Monitor tab and click vSAN.
  • Select Physical Disks to review all hosts, cache devices, and capacity devices in the cluster.

vSAN displays information about capacity devices, such as total capacity, used capacity, reserved capacity, functional status, physical location, and so on. The physical location is based on the hardware location of cache and capacity devices on vSAN hosts.

  • Select a capacity device and click Virtual Disks to review the virtual machines that use the device.

You can monitor many aspects of virtual machine objects, including their current state and whether they are compliant with the storage policies assigned to them.

  • Select Capacity to review information about the amount of capacity provisioned and used in the cluster, and also to review a breakdown of the used capacity by object type or by data type.
  • Select the Configure tab and select General to check the status of the vSAN cluster, verify Internet connectivity, and review the on-disk format used in the cluster.

Monitor vSAN Capacity

You can monitor the capacity of the vSAN datastore, the efficiency of deduplication and compression, and a breakdown of capacity usage.

The vSphere Web Client cluster Summary tab includes a summary of vSAN capacity. You also can view more detailed information in the Capacity monitor.

  • Navigate to the vSAN cluster in the vSphere Web Client.
  • Click the Monitor tab and click vSAN.
  • Select Capacity to view vSAN capacity information.

The Capacity Overview displays the storage capacity of the vSAN datastore, including used space and free space. The Used Capacity Breakdown displays the percentage of capacity used by different object types or data types. If you select Data types, vSAN displays the percentage of capacity used by primary VM data, vSAN overhead, and temporary overhead. If you select Object types, vSAN displays the percentage of capacity used by the following object types:

  • Virtual disks
  • VM home objects
  • Swap objects
  • Performance management objects
  • .vmem files
  • Checksum overhead
  • Snapshot memory
  • Deduplication and compression overhead
  • Space under dedup engine consideration
  • iSCSI home and target objects, and iSCSI LUNs
  • Other, such as user-created files, VM templates, and so on

If you enable deduplication and compression on the cluster, the Deduplication and Compression Overview displays capacity information related to that feature. When deduplication and compression are enabled, it might take several minutes for capacity updates to be reflected in the Capacity monitor as disk space is reclaimed and reallocated.

Monitor Virtual Devices in the vSAN Cluster

You can view the status of virtual disks in the vSAN cluster.

When one or more hosts are unable to communicate with the vSAN datastore, the information about virtual devices is not displayed.

  • Navigate to the vSAN cluster in the vSphere Web Client.
  • Click the Monitor tab and click vSAN.
  • Select Virtual Disks to view all hosts and the corresponding virtual disks in the vSAN cluster, including the hosts and the cache and capacity devices that their components currently consume.
  • Select the VM home folder on one of the virtual machines and click the Physical Disk Placement tab to view device information, such as name, identifier or UUID, and so on.
  • Click the Compliance Failures tab to check the compliance status of your virtual machine.
  • Select a hard disk on one of the virtual machines and click the Physical Disk Placement tab to view the device information, such as name, identifier or UUID, the number of devices used for each virtual machine, and how they are mirrored across hosts.
  • Click the Compliance Failures tab to check the compliance status of your virtual device.

Describe vVOLs

The Virtual Volumes functionality changes the storage management paradigm from managing space inside datastores to managing abstract storage objects handled by storage arrays. With Virtual Volumes, an individual virtual machine, not the datastore, becomes a unit of storage management, while storage hardware gains complete control over virtual disk content, layout, and management.

Historically, vSphere storage management used a datastore-centric approach. With this approach, storage administrators and vSphere administrators discuss in advance the underlying storage requirements for virtual machines. The storage administrator then sets up LUNs or NFS shares and presents them to ESXi hosts. The vSphere administrator creates datastores based on LUNs or NFS, and uses these datastores as virtual machine storage. Typically, the datastore is the lowest granularity level at which data management occurs from a storage perspective. However, a single datastore contains multiple virtual machines, which might have different requirements. With the traditional approach, it is difficult to meet the requirements of an individual virtual machine.

The Virtual Volumes functionality helps to improve granularity. It helps you to differentiate virtual machine services on a per application level by offering a new approach to storage management. Rather than arranging storage around features of a storage system, Virtual Volumes arranges storage around the needs of individual virtual machines, making storage virtual-machine centric.

Virtual Volumes maps virtual disks and their derivatives, clones, snapshots, and replicas, directly to objects, called virtual volumes, on a storage system. This mapping allows vSphere to offload intensive storage operations such as snapshot, cloning, and replication to the storage system.

By creating a volume for each virtual disk, you can set policies at the optimum level. You can decide in advance what the storage requirements of an application are, and communicate these requirements to the storage system. The storage system creates an appropriate virtual disk based on these requirements. For example, if your virtual machine requires an active-active storage array, you no longer have to select a datastore that supports the active-active model. Instead, you create an individual virtual volume that is automatically placed on the active-active array.

Understand a vSAN iSCSI target

Use the iSCSI target service to enable hosts and physical workloads that reside outside the vSAN cluster to access the vSAN datastore.

This feature enables an iSCSI initiator on a remote host to transport block-level data to an iSCSI target on a storage device in the vSAN cluster.

After you configure the vSAN iSCSI target service, you can discover the vSAN iSCSI targets from a remote host. To discover vSAN iSCSI targets, use the IP address of any host in the vSAN cluster, and the TCP port of the iSCSI target. To ensure high availability of the vSAN iSCSI target, configure multipath support for your iSCSI application. You can use the IP addresses of two or more hosts to configure multipathing.

Explain vSAN and vVOL architectural components

vSAN Concepts

VMware vSAN uses a software-defined approach that creates shared storage for virtual machines. It virtualizes the local physical storage resources of ESXi hosts and turns them into pools of storage that can be divided and assigned to virtual machines and applications according to their quality-of-service requirements. vSAN is implemented directly in the ESXi hypervisor.

You can configure vSAN to work as either a hybrid or all-flash cluster. In hybrid clusters, flash devices are used for the cache layer and magnetic disks are used for the storage capacity layer. In all-flash clusters, flash devices are used for both cache and capacity.

You can activate vSAN on your existing host clusters and when you create new clusters. vSAN aggregates all local capacity devices into a single datastore shared by all hosts in the vSAN cluster. You can expand the datastore by adding capacity devices or hosts with capacity devices to the cluster. vSAN works best when all ESXi hosts in the cluster share similar or identical configurations across all cluster members, including similar or identical storage configurations. This consistent configuration balances virtual machine storage components across all devices and hosts in the cluster. Hosts without any local devices also can participate and run their virtual machines on the vSAN datastore.

If a host contributes its local storage devices to the vSAN datastore, it must provide at least one device for flash cache and at least one device for capacity. Capacity devices are also called data disks.

Virtual Volumes Architecture

Virtual volumes are objects exported by a compliant storage system and typically correspond one-to-one with a virtual machine disk and other VM-related files. A virtual volume is created and manipulated out-of-band, not in the data path, by a VASA provider.

A VASA provider, or a storage provider, is developed through vSphere APIs for Storage Awareness. The storage provider enables communication between the ESXi hosts, vCenter Server, and the vSphere Web Client on one side, and the storage system on the other. The VASA provider runs on the storage side and integrates with the vSphere Storage Monitoring Service (SMS) to manage all aspects of Virtual Volumes storage. The VASA provider maps virtual disk objects and their derivatives, such as clones, snapshots, and replicas, directly to the virtual volumes on the storage system.

The ESXi hosts have no direct access to the virtual volumes storage. Instead, the hosts access the virtual volumes through an intermediate point in the data path, called the protocol endpoint. The protocol endpoints establish a data path on demand from the virtual machines to their respective virtual volumes. The protocol endpoints serve as a gateway for direct in-band I/O between ESXi hosts and the storage system. ESXi can use Fibre Channel, FCoE, iSCSI, and NFS protocols for in-band communication.

The virtual volumes reside inside storage containers that logically represent a pool of physical disks on the storage system. On the vCenter Server and ESXi side, storage containers are presented as Virtual Volumes datastores. A single storage container can export multiple storage capability sets and provide different levels of service to different virtual volumes.

Determine the role of storage providers in vSAN

Enabling vSAN automatically configures and registers a storage provider for each host in the vSAN cluster.

vSAN storage providers are built-in software components that communicate datastore capabilities to vCenter Server. A storage capability is typically represented by a key-value pair, where the key is a specific property offered by the datastore. The value is a number or range that the datastore can provide for a provisioned object, such as a virtual machine home namespace object or a virtual disk. You can also use tags to create user-defined storage capabilities and reference them when defining a storage policy for a virtual machine.

The vSAN storage providers report a set of underlying storage capabilities to vCenter Server. They also communicate with the vSAN layer to report the storage requirements of the virtual machines.

vSAN registers a separate storage provider for each host in the vSAN cluster, using the following URL:

http://host_ip:8080/version.xml

where host_ip is the actual IP of the host.

To verify that the storage providers are registered:

  • Browse to vCenter Server in the vSphere Web Client navigator.
  • Click the Configure tab, and click Storage Providers.

The storage providers for vSAN appear on the list. Each host has a storage provider, but only one storage provider is active. Storage providers that belong to other hosts are in standby. If the host that currently has the active storage provider fails, the storage provider for another host becomes active.
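You can also list the registered providers with PowerCLI; a brief sketch, assuming the VASA cmdlets from the storage module:

# List registered storage (VASA) providers and their status
Get-VasaProvider | Select-Object Name, Status, Url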

Determine the role of storage providers in vVOLs

A storage provider is a software component that is either offered by VMware or is developed by a third party through the vSphere APIs for Storage Awareness (VASA) program. The storage provider can also be called VASA provider. The storage providers integrate with various storage entities that include external physical storage and storage abstractions, such as Virtual SAN and Virtual Volumes. Storage providers can also support software solutions, for example, I/O filters.

Generally, vCenter Server and ESXi use the storage providers to obtain information about storage configuration status, and storage data services offered in your environment. This information appears in the vSphere Web Client. The information helps you to make appropriate decisions about virtual machine placement, to set storage requirements, and to monitor your storage environment.

Storage providers that manage arrays and storage abstractions are called persistence storage providers. Providers that support Virtual Volumes or Virtual SAN belong to this category. In addition to storage, persistence providers can provide other data services, such as replication.

Another category of providers is I/O filter storage providers, or data service providers. These providers offer data services that include host-based caching, compression, and encryption.

Built-in storage providers typically do not require registration. For example, the storage providers that support I/O filters become registered automatically.

When a third party offers a storage provider, you typically must register the provider. An example of such a provider is the Virtual Volumes provider. You use the vSphere Web Client to register and manage each storage provider component.

Explain vSAN failure domains functionality

The vSAN fault domains feature instructs vSAN to spread redundancy components across the servers in separate computing racks. In this way, you can protect the environment from a rack-level failure such as loss of power or connectivity.

vSAN requires at least two fault domains, each of which consists of one or more hosts. Fault domain definitions must acknowledge physical hardware constructs that might represent a potential zone of failure, for example, an individual computing rack enclosure.

If possible, use at least four fault domains. Three fault domains do not support certain data evacuation modes, and vSAN is unable to reprotect data after a failure. In this case, you need an additional fault domain with capacity for rebuilding, which you cannot provide with only three fault domains.

If fault domains are enabled, vSAN applies the active virtual machine storage policy to the fault domains instead of the individual hosts.

Calculate the number of fault domains in a cluster based on the Primary level of failures to tolerate (PFTT) attribute from the storage policies that you plan to assign to virtual machines.

number of fault domains = 2 * PFTT + 1

For example, to tolerate a single failure (PFTT = 1), a cluster needs 2 * 1 + 1 = 3 fault domains; tolerating two failures (PFTT = 2) requires 5.

If a host is not a member of a fault domain, vSAN interprets it as a stand-alone fault domain.

Configure/Manage VMware vSAN

For vSAN deployment, see the earlier sections of this objective.

Using vSAN Configuration Assist and Updates

You can use Configuration Assist to check the configuration of your vSAN cluster, and resolve any issues.

vSAN Configuration Assist enables you to verify the configuration of cluster components, resolve issues, and troubleshoot problems. The configuration checks cover hardware compatibility, network, and vSAN configuration options.

The Configuration Assist checks are divided into categories. Each category contains individual configuration checks.

Hardware compatibility

Checks the hardware components for the vSAN cluster, to ensure that they are using supported hardware, software, and drivers.

vSAN configuration

Checks vSAN configuration options.

Generic cluster

Checks basic cluster configuration options.

Network configuration

Checks the vSAN network configuration.

Burn-in test

Checks burn-in test operations.

Check vSAN Configuration

You can view the configuration status of your vSAN cluster, and resolve issues that affect the operation of your cluster.

  • Navigate to the vSAN cluster in the vSphere Web Client.
  • Click the Configure tab.
  • Under vSAN, click Configuration Assist to review the vSAN configuration categories.
  • If the Test Result column displays a warning icon, expand the category to review the results of individual configuration checks.
  • Select an individual configuration check and review the detailed information at the bottom of the page.
  • You can click the Ask VMware button to open a knowledge base article that describes the check and provides information about how to resolve the issue.

Create/Modify VMware Virtual Volumes (vVOLs)

Your Virtual Volumes environment must include storage providers, also called VASA providers. Typically, third-party vendors develop storage providers through the VMware APIs for Storage Awareness (VASA). Storage providers facilitate communication between vSphere and the storage side. You must register the storage provider in vCenter Server to be able to work with Virtual Volumes.
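Registration can also be scripted. A minimal PowerCLI sketch; the provider name and URL are hypothetical values your array vendor would supply:

# Register a third-party VASA provider with vCenter Server
New-VasaProvider -Name "ArrayProvider" -Url "https://array-vp.lab.local:8443/vasa" `
    -Credential (Get-Credential)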

You use the New Datastore wizard to create a Virtual Volumes datastore.

ESXi hosts use a logical I/O proxy, called a protocol endpoint, to communicate with virtual volumes and the virtual disk files that virtual volumes encapsulate. Protocol endpoints are exported, along with associated storage containers, by the storage system through a storage provider. Protocol endpoints become visible in the vSphere Web Client after you map a storage container to a Virtual Volumes datastore. You can review properties of protocol endpoints and modify specific settings.

If your ESXi host uses SCSI-based transport to communicate with protocol endpoints representing a storage array, you can modify default multipathing policies assigned to protocol endpoints. Use the Edit Multipathing Policies dialog box to change a path selection policy.

Configure Storage Policies

To create and manage storage policies for your virtual machines, you use the VM Storage Policies interface of the vSphere Web Client.

After the VM Storage Policies interface is populated with the appropriate data, you can start creating your storage policies. When you create a storage policy, you define placement and data service rules. The rules are the basic element of the VM storage policy. Within the policy, the rules are grouped in collections of rules, or rule sets.

In certain cases, you can prepackage the rules in storage policy components. The components are modular building blocks that you define in advance and can reference in multiple storage policies.

After you create the storage policy, you can edit or clone it, or delete any unused policies.

About Datastore-Specific and Common Rule Sets

After the VM Storage Policies interface is populated with the appropriate data, you can start defining your storage policies. A basic element of a VM storage policy is a rule. Each individual rule is a statement that describes a single requirement for virtual machine storage and data services. Within the policy, rules are grouped in collections of rules. Two types of collections exist, regular rule sets and common rule sets.

Regular Rule Sets

Regular rule sets are datastore-specific. Each rule set must include placement rules that describe requirements for virtual machine storage resources. All placement rules within a single rule set represent a single storage entity. These rules can be based on tags or storage capabilities. In addition, the regular rule set can include optional storage policy components that describe data services to provide for the virtual machine.

To define the storage policy, one regular rule set is required. Additional rule sets are optional. A single policy can use multiple rule sets to define alternative storage placement parameters, often from several storage providers.

Common Rule Sets

Unlike datastore-specific regular rule sets, common rule sets do not define storage placement for the virtual machine, and do not include placement rules. Common rule sets are generic for all types of storage and do not depend on the datastore. These rule sets activate data services for the virtual machine. Common rule sets include rules or storage policy components that describe particular data services, such as encryption, replication, and so on.
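As an illustration of a regular rule set with a tag-based placement rule, here is a minimal PowerCLI sketch; the tag category and tag name are hypothetical:

# Build a tag-based placement rule, group it into a rule set,
# and create the storage policy
$tag = Get-Tag -Category "StorageTier" -Name "Gold"
$ruleSet = New-SpbmRuleSet -AllOfRules (New-SpbmRule -AnyOfTags $tag)
New-SpbmStoragePolicy -Name "Gold-Placement" -AnyOfRuleSets $ruleSet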

Enable/Disable vSAN Fault Domains

Create a New Fault Domain in vSAN Cluster

To ensure that the virtual machine objects continue to run smoothly during a rack failure, you can group hosts in different fault domains.

When you provision a virtual machine on the cluster with fault domains, vSAN distributes protection components, such as witnesses and replicas of the virtual machine objects across different fault domains. As a result, the vSAN environment becomes capable of tolerating entire rack failures in addition to a single host, storage disk, or network failure.

Prerequisites

Choose a unique fault domain name. vSAN does not support duplicate fault domain names in a cluster.

Verify the version of your ESXi hosts. You can include only hosts running ESXi 6.0 or later in fault domains.

Verify that your vSAN hosts are online. You cannot assign hosts to a fault domain if they are offline or unavailable due to a hardware configuration issue.

  • Navigate to the vSAN cluster in the vSphere Web Client.
  • Click the Configure tab.
  • Under vSAN, click Fault Domains and Stretched Cluster.
  • Click the Create a new fault domain icon.
  • Type the fault domain name.
  • From the Show drop-down menu, select Hosts not in fault domain to view the list of hosts that are not assigned to a fault domain or select Show All Hosts to view all hosts in the cluster.
  • Select one or more hosts to add to the fault domain.

A fault domain cannot be empty. You must select at least one host to include in the fault domain.

  • Click OK.

The selected hosts appear in the fault domain.
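Fault domains can also be created with PowerCLI. A brief sketch, assuming the New-VsanFaultDomain cmdlet from the storage module; the domain and host names are illustrative:

# Group the hosts from one rack into a fault domain
New-VsanFaultDomain -Name "Rack-A" -VMHost (Get-VMHost esxi01.lab.local, esxi02.lab.local)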

Remove Selected Fault Domains

When you no longer need a fault domain, you can remove it from the vSAN cluster.

  • Navigate to the vSAN cluster in the vSphere Web Client.
  • Click the Configure tab.
  • Under vSAN, click Fault Domains and Stretched Cluster.
  • Select the fault domain that you want to delete and click the Remove selected fault domains icon.
  • Click Yes.

All hosts in the fault domain are removed and the selected fault domain is deleted from the vSAN cluster. Each host that is not part of a fault domain is considered to reside in its own single-host fault domain.

Create Virtual Volumes given the workload and availability requirements

Guidelines and Limitations in Using vSphere Virtual Volumes

For the best experience with vSphere Virtual Volumes functionality, you must follow specific guidelines.

Virtual Volumes supports the following capabilities, features, and VMware products:
  • With Virtual Volumes, you can use advanced storage services that include replication, encryption, deduplication, and compression on individual virtual disks. Check with your storage vendor for information about services they support with Virtual Volumes.
  • Virtual Volumes functionality supports backup software that uses vSphere APIs – Data Protection. Virtual volumes are modeled on virtual disks. Backup products that use vSphere APIs – Data Protection are as fully supported on virtual volumes as they are on VMDK files on a LUN. Snapshots that are created by backup software using vSphere APIs – Data Protection appear to vSphere and the backup software as non-VVol snapshots.
  • For more information about integration with the vSphere Storage APIs – Data Protection, consult your backup software vendor.
  • Virtual Volumes supports such vSphere features as vSphere vMotion, Storage vMotion, snapshots, linked clones, Flash Read Cache, and DRS.
  • You can use clustering products, such as Oracle Real Application Clusters, with Virtual Volumes. To use these products, you activate the multiwrite setting for a virtual disk stored on the VVol datastore.

vSphere Virtual Volumes Limitations

Improve your experience with vSphere Virtual Volumes by knowing the following limitations:

  • Because the Virtual Volumes environment requires vCenter Server, you cannot use Virtual Volumes with a standalone host.
  • Virtual Volumes functionality does not support RDMs.
  • A Virtual Volumes storage container cannot span multiple physical arrays. Some vendors present multiple physical arrays as a single array. In such cases, you still technically use one logical array.
  • Host profiles that contain Virtual Volumes datastores are vCenter Server specific. After you extract this type of host profile you can attach it only to hosts and clusters managed by the same vCenter Server as the reference host.

Best Practices for Storage Container Provisioning

Follow these best practices when provisioning storage containers on the vSphere Virtual Volumes array side.

Creating Containers Based on Your Limits

Because storage containers apply logical limits when grouping virtual volumes, the container must match the boundaries that you want to apply.

Examples might include a container created for a tenant in a multitenant deployment, or a container for a department in an enterprise deployment. Typical groupings include:

  • Organizations or departments, for example, Human Resources and Finance
  • Groups or projects, for example, Team A and Red Team
  • Customers

Putting All Storage Capabilities in a Single Container

Storage containers are individual datastores. A single storage container can export multiple storage capability profiles. As a result, virtual machines with diverse needs and different storage policy settings can be a part of the same storage container.

Changing storage profiles must be an array-side operation, not a storage migration to another container.

Avoiding Over-Provisioning Your Storage Containers

When you provision a storage container, the space limits that you apply as part of the container configuration are only logical limits. Do not provision the container larger than necessary for the anticipated use. If you later increase the size of the container, you do not need to reformat or repartition it.

Using Storage-Specific Management UI to Provision Protocol Endpoints

Every storage container needs protocol endpoints (PEs) that are accessible to ESXi hosts.

When you use block storage, the PE represents a proxy LUN defined by a T10-based LUN WWN. For NFS storage, the PE is a mount point, such as an IP address or DNS name, and a share name.

Typically, configuration of PEs is array-specific. When you configure PEs, you might need to associate them with specific storage processors, or with certain hosts. To avoid errors when creating PEs, do not configure them manually. Instead, when possible, use storage-specific management tools.

No Assignment of IDs Above Disk.MaxLUN to Protocol Endpoint LUNs

By default, an ESXi host can access LUN IDs that are within the range of 0 to 1023. If the ID of the protocol endpoint LUN that you configure is 1024 or greater, the host might ignore the PE.

If your environment uses LUN IDs that are greater than 1023, change the number of scanned LUNs through the Disk.MaxLUN parameter.
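A minimal PowerCLI sketch for checking and raising the parameter on a host; the value 2048 is only an example:

# Inspect and raise Disk.MaxLUN so that higher PE LUN IDs are scanned
$vmhost = Get-VMHost -Name esxi01.lab.local
Get-AdvancedSetting -Entity $vmhost -Name "Disk.MaxLUN" |
    Set-AdvancedSetting -Value 2048 -Confirm:$false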

Best Practices for vSphere Virtual Volumes Performance

To ensure optimal vSphere Virtual Volumes performance results, follow these best practices.

Using Different VM Storage Policies for Individual Virtual Volumes

By default, all components of a virtual machine in the Virtual Volumes environment get a single VM storage policy. However, different components might have different performance characteristics, for example, a database virtual disk and a corresponding log virtual disk. Depending on performance requirements, you can assign different VM storage policies to individual virtual disks and to the VM home file or config-VVol.

When you use vSphere Web Client, you cannot change the VM storage policy assignment for swap-VVol, memory-VVol, or snapshot-VVol.

Getting a Host Profile with Virtual Volumes

The best way to get a host profile with Virtual Volumes is to configure a reference host and extract its profile. If you manually edit an existing host profile in the vSphere Web Client and attach the edited profile to a new host, you might trigger compliance errors and other unpredictable problems. For more details, see VMware Knowledge Base article 2146394.

Monitoring I/O Load on Individual Protocol Endpoint

All virtual volume I/O goes through protocol endpoints (PEs). Arrays select protocol endpoints from several PEs that are accessible to an ESXi host. Arrays can do load balancing and change the binding path that connects the virtual volume and the PE.

On block storage, ESXi gives a large queue depth to I/Os because of a potentially high number of virtual volumes. The Scsi.ScsiVVolPESNRO parameter controls the number of I/Os that can be queued for PEs. You can configure the parameter on the Advanced System Settings page of the vSphere Web Client.
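The same advanced-settings pattern can be used from PowerCLI; a brief sketch (treat any value you set as an example, not a recommendation):

# Review the per-PE queue setting for Virtual Volumes on a host
Get-AdvancedSetting -Entity (Get-VMHost -Name esxi01.lab.local) -Name "Scsi.ScsiVVolPESNRO"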

Monitoring Array Limitations

A single VM might occupy multiple virtual volumes.

Suppose that your VM has two virtual disks, and you take two snapshots with memory. Your VM might occupy up to 10 VVol objects: a config-VVol, a swap-VVol, 2 data-VVols, 4 snapshot-VVols, and 2 memory snapshot-VVols.

Ensuring that Storage Provider Is Available

To access vSphere Virtual Volumes storage, your ESXi host requires a storage provider (VASA provider). To ensure that the storage provider is always available, follow these guidelines:

  • Do not migrate a storage provider VM to Virtual Volumes storage.
  • Back up your storage provider VM.
  • When appropriate, use vSphere HA or Site Recovery Manager to protect the storage provider VM.

Collect vSAN Observer output

The VMware vSAN Observer is a Web-based tool that runs in the Ruby vSphere Console (RVC) and is used for in-depth performance analysis and monitoring of the vSAN cluster. Use vSAN Observer for information about the performance statistics of the capacity layer, detailed statistical information about physical disk groups, current CPU usage, consumption of vSAN memory pools, and physical and in-memory object distribution across vSAN clusters.

vSAN Observer can generate a log bundle that can later be examined offline. This can be very useful for sending to a third party, such as VMware Technical Support, for troubleshooting. The option to generate a log bundle is --generate-html-bundle. For example, to generate a performance statistics bundle over a one-hour period at 30-second intervals for a vSAN cluster named VSAN and save the generated statistics bundle to the /tmp folder, run the command:

vsan.observer ~/computers/VSAN --run-webserver --force --generate-html-bundle /tmp --interval 30 --max-runtime 1

This command creates the entire set of required HTML files and then stores them in a tar.gz offline bundle in the /tmp directory (in this example). The name will be similar to /tmp/vsan-observer-<timestamp>.tar.gz.

To review the offline bundle, extract the tar.gz in an appropriate location that can be navigated to from a web browser.

Create storage policies appropriate for given workloads and availability requirements

One aspect of Storage Policy Based Management (SPBM) is virtual machine storage policies, which are essential to virtual machine provisioning. The policies control which type of storage is provided for the virtual machine and how the virtual machine is placed within storage. They also determine the data services that the virtual machine can use.

vSphere offers default storage policies. In addition, you can define policies and assign them to the virtual machines.

You use the VM Storage Policies interface to create a storage policy. When you define the policy, you specify various storage requirements for applications that run on the virtual machines. You can also use storage policies to request specific data services, such as caching or replication, for virtual disks.

You apply the storage policy when you create, clone, or migrate the virtual machine. After you apply the storage policy, the SPBM mechanism assists you with placing the virtual machine in a matching datastore. In certain storage environments, SPBM determines how the virtual machine storage objects are provisioned and allocated within the storage resource to guarantee the required level of service. The SPBM also enables requested data services for the virtual machine and helps you to monitor policy compliance.
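As a sketch of this workflow in PowerCLI, the following defines a vSAN capability-based policy and applies it to a VM; the capability name uses the VSAN namespace and the VM name is hypothetical:

# Create a policy that requires vSAN to tolerate one host failure
$cap = Get-SpbmCapability -Name "VSAN.hostFailuresToTolerate"
$ruleSet = New-SpbmRuleSet -AllOfRules (New-SpbmRule -Capability $cap -Value 1)
$policy = New-SpbmStoragePolicy -Name "FTT-1" -AnyOfRuleSets $ruleSet

# Apply the policy to an existing VM and its disks
Get-VM -Name "App01" | Get-SpbmEntityConfiguration | Set-SpbmEntityConfiguration -StoragePolicy $policy
Get-VM -Name "App01" | Get-HardDisk | Get-SpbmEntityConfiguration | Set-SpbmEntityConfiguration -StoragePolicy $policy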

Configure vVOLs Protocol Endpoints

Although storage systems manage all aspects of virtual volumes, ESXi hosts have no direct access to virtual volumes on the storage side. Instead, ESXi hosts use a logical I/O proxy, called the protocol endpoint, to communicate with virtual volumes and virtual disk files that virtual volumes encapsulate. ESXi uses protocol endpoints to establish a data path on demand from virtual machines to their respective virtual volumes.

Each virtual volume is bound to a specific protocol endpoint. When a virtual machine on the host performs an I/O operation, the protocol endpoint directs the I/O to the appropriate virtual volume. Typically, a storage system requires just a few protocol endpoints. A single protocol endpoint can connect to hundreds or thousands of virtual volumes.

On the storage side, a storage administrator configures protocol endpoints, one or several per storage container. The protocol endpoints are a part of the physical storage fabric. The storage system exports the protocol endpoints with associated storage containers through the storage provider. After you map the storage container to a Virtual Volumes datastore, the ESXi host discovers the protocol endpoints and they become visible in the vSphere Web Client. The protocol endpoints can also be discovered during a storage rescan. Multiple hosts can discover and mount the protocol endpoints.

In the vSphere Web Client, the list of available protocol endpoints looks similar to the host storage devices list. Different storage transports can be used to expose the protocol endpoints to ESXi. When the SCSI-based transport is used, the protocol endpoint represents a proxy LUN defined by a T10-based LUN WWN. For the NFS protocol, the protocol endpoint is a mount point, such as an IP address (or DNS name) and a share name. You can configure multipathing on the SCSI-based protocol endpoint, but not on the NFS-based protocol endpoint. No matter which protocol you use, the storage array can provide multiple protocol endpoints for availability purposes.

Protocol endpoints are managed per array. ESXi and vCenter Server assume that all protocol endpoints reported for an array are associated with all containers on that array. For example, if an array has two containers and three protocol endpoints, ESXi assumes that virtual volumes on both containers can be bound to all three protocol endpoints.

Binding and Unbinding Virtual Volumes to Protocol Endpoints

At the time of creation, a virtual volume is a passive entity and is not immediately ready for I/O. To access the virtual volume, ESXi or vCenter Server sends a bind request.

The storage system replies with a protocol endpoint ID that becomes an access point to the virtual volume. The protocol endpoint accepts all I/O requests to the virtual volume. This binding exists until ESXi sends an unbind request for the virtual volume.

For later bind requests on the same virtual volume, the storage system can return different protocol endpoint IDs.

When receiving concurrent bind requests to a virtual volume from multiple ESXi hosts, the storage system can return the same or different endpoint bindings to each requesting ESXi host. In other words, the storage system can bind different concurrent hosts to the same virtual volume through different endpoints.

The unbind operation removes the I/O access point for the virtual volume. The storage system might unbind the virtual volume from its protocol endpoint immediately, or after a delay, or take some other action. A bound virtual volume cannot be deleted until it is unbound.