Objective 3.4 – Perform VMFS and NFS configurations and upgrades
Here are my revision notes for Objective 3.4 – Perform VMFS and NFS configurations and upgrades. As always, these are sorted by the objective listing and linked back to the VCP6.5 Certification Blueprint post.
Happy Revision
Simon
Objective 3.4 – Perform VMFS and NFS configurations and upgrades
Perform VMFS v5 and v6 configurations
Create a VMFS Datastore
VMFS datastores serve as repositories for virtual machines. You can set up VMFS datastores on any SCSI-based storage devices that the host discovers, including Fibre Channel, iSCSI, and local storage devices.
Prerequisites
Install and configure any adapters that your storage requires.
To discover newly added storage devices, perform a rescan.
Verify that storage devices you are planning to use for your datastores are available.
- In the vSphere Web Client navigator, select Global Inventory Lists > Datastores.
- Click the New Datastore icon.
- Type the datastore name and if necessary, select the placement location for the datastore. The vSphere Web Client enforces a 42 character limit for the datastore name.
- Select VMFS as the datastore type.
- Select the device to use for your datastore.
- Specify the datastore version.
VMFS6
This option is the default for 512e storage devices. ESXi hosts of version 6.0 or earlier cannot recognize VMFS6 datastores. If your cluster includes ESXi 6.0 and ESXi 6.5 hosts that share the datastore, this version might not be appropriate.
VMFS5
This option is the default for 512n storage devices. A VMFS5 datastore supports access by ESXi hosts of version 6.5 or earlier.
- Define configuration details for the datastore.
- Specify partition configuration.
Use all available partitions
Dedicates the entire disk to a single VMFS datastore. If you select this option, all file systems and data currently stored on this device are destroyed.
Use free space
Deploys a VMFS datastore in the remaining free space of the disk.
If the space allocated for the datastore is excessive for your purposes, adjust the capacity values in the Datastore Size field.
By default, the entire free space on the storage device is allocated.
- For VMFS6, specify the block size and define space reclamation parameters.
Block size
The block size on a VMFS datastore defines the maximum file size and the amount of space a file occupies. VMFS6 supports a block size of 1 MB.
Space reclamation granularity
Specify granularity for the unmap operation. Unmap granularity equals the block size, which is 1 MB.
Storage sectors smaller than 1 MB are not reclaimed.
Space reclamation priority
Select one of the following options.
Low (default). Process the unmap operations at a low rate.
None. Select this option if you want to disable the space reclamation operations for the datastore.
- In the Ready to Complete page, review the datastore configuration information and click Finish.
The datastore on the SCSI-based storage device is created. It is available to all hosts that have access to the device.
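If you prefer to script the wizard steps above, here is a minimal PowerCLI sketch. The host name and device canonical name are placeholders for your environment, it assumes an active Connect-VIServer session, and the -FileSystemVersion parameter needs a reasonably recent PowerCLI release:
$vmhost = Get-VMHost -Name "esxi01.lab.local"
# List candidate devices discovered after a rescan
Get-ScsiLun -VmHost $vmhost -LunType disk | Select-Object CanonicalName, CapacityGB
# Create a VMFS 6 datastore on the chosen device (canonical name is a placeholder)
New-Datastore -VMHost $vmhost -Vmfs -FileSystemVersion 6 -Name "DS-1" -Path "naa.60003ff44dc75adc9f2a3b4c5d6e7f80"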
Describe VAAI primitives for block devices and NAS
VAAI was first introduced in vSphere 4.1 as a method for offloading specific storage operations from the ESXi Host to the storage array. VAAI was ratified by the T10 committee and is based on running certain SCSI commands on the storage array instead of the ESXi Host. In order for that to happen, the storage vendor is required to add VAAI support to the storage array's operating system. In the case of NAS, an additional plugin needs to be installed to help perform the offloading. VAAI defines a set of storage primitives, which replace select SCSI operations with VAAI operations that are performed on the storage array instead of the ESXi Host. VAAI offloads the processing to the storage system, where SCSI is most efficiently handled, instead of processing on the ESXi Host. This offloading of operations to run directly on the storage array can significantly improve performance for certain operations, such as zeroing, storage migration and cloning.
In environments without VAAI, the original SCSI commands run directly on the ESXi Host, which falls back to the old performance issues of additional CPU cycles and network bandwidth consumption. VAAI has three main built-in capabilities:
- Full Copy
- Block Zeroing
- Hardware assisted locking
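To see whether a host considers a given device VAAI-capable, one option is to read each LUN's vStorage support status through PowerCLI; a minimal sketch, with the host name as a placeholder:
# VStorageSupport reports vStorageSupported / vStorageUnsupported / vStorageUnknown per device
Get-VMHost -Name "esxi01.lab.local" | Get-ScsiLun -LunType disk |
  Select-Object CanonicalName, @{Name="VAAIStatus";Expression={$_.ExtensionData.VStorageSupport}}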
Differentiate VMware file system technologies
Datastores are logical containers, analogous to file systems, that hide specifics of physical storage and provide a uniform model for storing virtual machine files. Datastores can also be used for storing ISO images, virtual machine templates, and floppy images. Depending on the storage you use, the datastores can be of different types.
VMFS (version 3, 5, and 6)
Datastores that you deploy on block storage devices use the vSphere Virtual Machine File System format, a special high-performance file system format that is optimized for storing virtual machines.
NFS (version 3 and 4.1)
An NFS client built into ESXi uses the Network File System (NFS) protocol over TCP/IP to access a designated NFS volume that is located on a NAS server. The ESXi host mounts the volume as an NFS datastore, and uses it for storage needs. ESXi supports versions 3 and 4.1 of the NFS protocol.
Virtual SAN
Virtual SAN aggregates all local capacity devices available on the hosts into a single datastore shared by all hosts in the Virtual SAN cluster.
Virtual Volumes
A Virtual Volumes datastore represents a storage container in vCenter Server and the vSphere Web Client.
Migrate from VMFS5 to VMFS6
In this KB we are assuming that vSphere admins are upgrading their VMFS datastores one at a time, and the description below applies mostly to that scenario. The basic workflow does not change if you upgrade multiple datastores in parallel.
Keep the following info handy while you plan for your VMFS datastore upgrade:
- Identify the VMFS datastore that must be upgraded to VMFS 6 file system type. For example, DS-1.
- Identify the name of the vCenter Server and list of all ESX hosts sharing the datastore with credentials.
- All ESX hosts and the vCenter server must be upgraded to vSphere 6.5.
Note: Do not proceed with the datastore upgrade until all ESX hosts which share the datastore are upgraded to vSphere 6.5. Otherwise, older ESX hosts lose connectivity to the datastore once it becomes VMFS 6, which may impact business continuity.
- Spare datastore with equal or more capacity, which is shared with all ESX hosts. For example, DS-2 in this KB. The DS-2 datastore is used to temporarily host all virtual machines from the DS-1 datastore.
Requirements when you are automating the process using Windows PowerShell scripts:
- Windows 2008/2008 R2/2012, 64-bit, in a domain environment. This is the Windows host used for launching the utility.
- PowerShell 2.0 or later, with the execution policy set appropriately and the vSphere PowerCLI snap-ins installed. You should be able to launch it as Administrator and execute it in a 64-bit PowerShell environment.
The DS-1 datastore of VMFS 5 (or VMFS 3) must be upgraded to a datastore with filesystem type VMFS 6. This datastore is shared with all the ESX hosts in the inventory and has some virtual machines running from it.
To upgrade:
- Perform version check for the vCenter Server and all ESX hosts.
- Perform all pre-checks for free space availability on datastore DS-2. Available space on DS-2 must be equal to or more than the space on datastore DS-1.
- Ensure that the datastore DS-2 is VMFS 6 type.
- Prepare list of all virtual machines in the vCenter Server’s inventory that are hosted on datastore DS-1.
- Evacuate the datastore DS-1. For this, migrate all the virtual machines running from datastore DS-1 to datastore DS-2. Storage vMotion operations are performed on these virtual machines.
- Perform one migration at a time to avoid disrupting the performance of the remaining datacenter entities. Keep track of any migration failures and re-trigger the migration for those virtual machines.
- Ensure that datastore DS-1 is empty by listing files on this datastore.
- Unmount datastore DS-1 from all ESX hosts.
- Delete datastore DS-1.
- Create a new datastore with the VMFS 6 filesystem using the same lun. For example, DS-1.
- Trigger a storage rescan operation on all hosts and wait a few minutes for this operation to complete.
- Move all virtual machines back to datastore DS-1 from datastore DS-2 by performing Storage vMotion operations, preferably one virtual machine at a time. Keep track of any migration failures and re-trigger the migration for those virtual machines. A PowerCLI sketch of the whole workflow follows this list.
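Here is a hedged PowerCLI sketch of the workflow above, assuming datastores named DS-1 and DS-2, a host named esxi01.lab.local, an active Connect-VIServer session, and that DS-2 was empty beforehand (templates and orphaned files are not handled):
$src = Get-Datastore -Name "DS-1"
$dst = Get-Datastore -Name "DS-2"
# One Storage vMotion at a time, as recommended above
Get-VM -Datastore $src | ForEach-Object { Move-VM -VM $_ -Datastore $dst }
# Unmount DS-1 from the remaining hosts first as described above, then delete and
# re-create it on the same LUN as VMFS 6 (canonical name is a placeholder)
$vmhost = Get-VMHost -Name "esxi01.lab.local"
Remove-Datastore -Datastore $src -VMHost $vmhost -Confirm:$false
New-Datastore -VMHost $vmhost -Vmfs -FileSystemVersion 6 -Name "DS-1" -Path "naa.60003ff44dc75adc9f2a3b4c5d6e7f80"
# Rescan all hosts, then move the virtual machines back
Get-VMHost | Get-VMHostStorage -RescanAllHba -RescanVmfs | Out-Null
$newDs = Get-Datastore -Name "DS-1"
Get-VM -Datastore $dst | ForEach-Object { Move-VM -VM $_ -Datastore $newDs }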
Differentiate Physical Mode RDMs and Virtual Mode RDMs
An RDM is a mapping file in a separate VMFS volume that acts as a proxy for a raw physical storage device. The RDM allows a virtual machine to directly access and use the storage device. The RDM contains metadata for managing and redirecting disk access to the physical device.
The file gives you some of the advantages of direct access to a physical device while keeping some advantages of a virtual disk in VMFS. As a result, it merges VMFS manageability with raw device access.
RDMs can be described in terms such as mapping a raw device into a datastore, mapping a system LUN, or mapping a disk file to a physical disk volume. All these terms refer to RDMs.
Although VMware recommends that you use VMFS datastores for most virtual disk storage, on certain occasions, you might need to use raw LUNs or logical disks located in a SAN.
For example, you need to use raw LUNs with RDMs in the following situations:
- When SAN snapshot or other layered applications run in the virtual machine. The RDM better enables scalable backup offloading systems by using features inherent to the SAN.
- In any MSCS clustering scenario that spans physical hosts — virtual-to-virtual clusters as well as physical-to-virtual clusters. In this case, cluster data and quorum disks should be configured as RDMs rather than as virtual disks on a shared VMFS.
Think of an RDM as a symbolic link from a VMFS volume to a raw LUN. The mapping makes LUNs appear as files in a VMFS volume. The RDM, not the raw LUN, is referenced in the virtual machine configuration. The RDM contains a reference to the raw LUN.
Using RDMs, you can:
- Use vMotion to migrate virtual machines using raw LUNs.
- Add raw LUNs to virtual machines using the vSphere Web Client.
- Use file system features such as distributed file locking, permissions, and naming.
Two compatibility modes are available for RDMs:
Virtual compatibility mode allows an RDM to act exactly like a virtual disk file including the use of snapshots.
Physical compatibility mode allows direct access of the SCSI device for those applications that need lower level control.
RDM Virtual and Physical Compatibility Modes
You can use RDMs in virtual compatibility or physical compatibility modes. Virtual mode specifies full virtualization of the mapped device. Physical mode specifies minimal SCSI virtualization of the mapped device, allowing the greatest flexibility for SAN management software.
In virtual mode, the VMkernel sends only READ and WRITE to the mapped device. The mapped device appears to the guest operating system exactly the same as a virtual disk file in a VMFS volume. The real hardware characteristics are hidden. If you are using a raw disk in virtual mode, you can realize the benefits of VMFS such as advanced file locking for data protection and snapshots for streamlining development processes. Virtual mode is also more portable across storage hardware than physical mode, presenting the same behavior as a virtual disk file.
In physical mode, the VMkernel passes all SCSI commands to the device, with one exception: the REPORT LUNs command is virtualized so that the VMkernel can isolate the LUN to the owning virtual machine. Otherwise, all physical characteristics of the underlying hardware are exposed. Physical mode is useful to run SAN management agents or other SCSI target-based software in the virtual machine. Physical mode also allows virtual-to-physical clustering for cost-effective high availability.
VMFS5 and VMFS6 support greater than 2 TB disk size for RDMs in virtual and physical modes.
Create a Virtual/Physical Mode RDM
When you give your virtual machine direct access to a raw SAN LUN, you create an RDM disk that resides on a VMFS datastore and points to the LUN. You can create the RDM as an initial disk for a new virtual machine or add it to an existing virtual machine. When creating the RDM, you specify the LUN to be mapped and the datastore on which to put the RDM.
Although the RDM disk file has the same .vmdk extension as a regular virtual disk file, the RDM contains only mapping information. The actual virtual disk data is stored directly on the LUN.
This procedure assumes that you are creating a new virtual machine.
- Right-click any inventory object that is a valid parent object of a virtual machine, such as a data center folder, cluster, resource pool, or host, and select New Virtual Machine.
- Select Create a new virtual machine and click Next.
- Follow the steps required to create a virtual machine.
- On the Customize Hardware page, click the Virtual Hardware tab.
- To delete the default virtual hard disk that the system created for your virtual machine, move your cursor over the disk and click the Remove icon.
- From the New drop-down menu at the bottom of the page, select RDM Disk and click Add.
- From the list of SAN devices or LUNs, select a raw LUN for your virtual machine to access directly and click OK.
The system creates an RDM disk that maps your virtual machine to the target LUN. The RDM disk is shown on the list of virtual devices as a new hard disk.
- Click the New Hard Disk triangle to expand the properties for the RDM disk.
- Select a location for the RDM disk.
You can place the RDM on the same datastore where your virtual machine configuration files reside, or select a different datastore.
- Select a compatibility mode.
Physical
Allows the guest operating system to access the hardware directly. Physical compatibility is useful if you are using SAN-aware applications on the virtual machine. However, a virtual machine with a physical compatibility RDM cannot be cloned, made into a template, or migrated if the migration involves copying the disk.
Virtual
Allows the RDM to behave as if it were a virtual disk, so you can use such features as taking snapshots, cloning, and so on. When you clone the disk or make a template out of it, the contents of the LUN are copied into a .vmdk virtual disk file. When you migrate a virtual compatibility mode RDM, you can migrate the mapping file or copy the contents of the LUN into a virtual disk.
- If you selected virtual compatibility mode, select a disk mode. Disk modes are not available for RDM disks using physical compatibility mode.
Dependent
Dependent disks are included in snapshots.
Independent – Persistent
Disks in persistent mode behave like conventional disks on your physical computer. All data written to a disk in persistent mode is written permanently to the disk.
Independent – Nonpersistent
Changes to disks in nonpersistent mode are discarded when you power off or reset the virtual machine. With nonpersistent mode, you can restart the virtual machine with a virtual disk in the same state every time. Changes to the disk are written to and read from a redo log file that is deleted when you power off or reset.
- Click OK.
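Adding an RDM to an existing virtual machine can also be scripted; a minimal PowerCLI sketch, with the VM name and device path as placeholders (RawPhysical gives physical compatibility mode, RawVirtual gives virtual mode):
$vm = Get-VM -Name "vm01"
# DeviceName is the console device path of the raw LUN to map;
# add -Datastore to choose where the mapping file lives
New-HardDisk -VM $vm -DiskType RawPhysical -DeviceName "/vmfs/devices/disks/naa.60003ff44dc75adc9f2a3b4c5d6e7f80"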
Differentiate NFS 3.x and 4.1 capabilities
NFS 3 uses AUTH_SYS security, a single connection to the NFS server, and client-side locking implemented with ESXi lock files. NFS 4.1 adds Kerberos authentication (krb5), Kerberos with data integrity (krb5i), multipathing to the server mount point through session trunking, and native server-side locking. Because the two clients use different locking mechanisms, never mount the same volume as NFS 3 from one host and NFS 4.1 from another.
Compare and contrast VMFS and NFS datastore properties
VMFS and NFS are very similar in many ways, but there are some differences in the properties of these two datastore types. One difference is that the maximum size of a VMFS datastore is 64 TB, while the maximum size of an NFS datastore is 100 TB.
Another difference is that VMFS uses SCSI queuing and has a default queue depth of 32 outstanding I/Os at a time, while with NFS, each VM gets its own I/O data path. Thus, the density of active virtual machines in a datastore can be twice as high with NFS as with VMFS.
Configure Bus Sharing
You can set the type of SCSI bus sharing for a virtual machine and indicate whether the SCSI bus is shared. Depending on the type of sharing, virtual machines can access the same virtual disk simultaneously if the virtual machines reside on the same ESXi host or on a different host.
- Right-click a virtual machine in the inventory and select Edit Settings.
- On the Virtual Hardware tab, expand SCSI controller, and select the type of sharing in the SCSI Bus Sharing drop-down menu.
None: Virtual disks cannot be shared by other virtual machines.
Virtual: Virtual disks can be shared by virtual machines on the same ESXi host.
Physical: Virtual disks can be shared by virtual machines on any ESXi host.
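The same setting can be changed with PowerCLI while the virtual machine is powered off; a minimal sketch with a placeholder VM name (valid modes are NoSharing, Virtual and Physical):
# Applies Physical bus sharing to every SCSI controller on the VM
Get-ScsiController -VM (Get-VM -Name "vm01") | Set-ScsiController -BusSharingMode Physical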
Configure Multi-writer locking
To enable the multi-writer flag for sharing particular disks, apply one of these options:
- Power off the virtual machine.
- In the .vmx file that defines the virtual machine, add an entry similar to:
scsiX:Y.sharing = "multi-writer"
where X is the controller ID and Y is the disk ID on that controller. The setting screen of a virtual machine shows these values.
- Add this setting for each virtual disk that you want to share. For example, to share four disks, the configuration file entries look like this:
scsi1:0.sharing = "multi-writer"
scsi1:1.sharing = "multi-writer"
scsi1:2.sharing = "multi-writer"
scsi1:3.sharing = "multi-writer"
- Save the .vmx file and power on the virtual machine.
OR
- In the vSphere Client, power off the virtual machine, navigate to Edit Settings > Options > Advanced > General > Configuration Parameters. Add rows for each of the shared disks and set their values to multi-writer.
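The same configuration parameter can also be added from PowerCLI with the virtual machine powered off; a minimal sketch (the VM name and disk address are placeholders):
$vm = Get-VM -Name "vm01"
# Adds scsi1:0.sharing = "multi-writer" to the VM's configuration; repeat per shared disk
New-AdvancedSetting -Entity $vm -Name "scsi1:0.sharing" -Value "multi-writer" -Confirm:$false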
Add this disk to another virtual machine:
- In the vSphere Client inventory, right-click the virtual machine and click Edit Settings.
- Click the Hardware tab and click Add.
- Select Hard Disk and click Next.
- Select Use an Existing Virtual Disk.
Connect an NFS 4.1 datastore using Kerberos
You can use the New Datastore wizard to mount an NFS volume.
Prerequisites
Set up the NFS storage environment.
If you plan to use Kerberos authentication with the NFS 4.1 datastore, make sure to configure the ESXi hosts for Kerberos authentication.
- In the vSphere Web Client navigator, select Global Inventory Lists > Datastores.
- Click the New Datastore icon.
- Type the datastore name and if necessary, select the placement location for the datastore.
The vSphere Web Client enforces a 42 character limit for the datastore name.
- Select NFS as the datastore type.
- Specify an NFS version.
NFS 3
NFS 4.1
- Type the server name or IP address and the mount point folder name.
You can use IPv6 or IPv4 formats.
With NFS 4.1, you can add multiple IP addresses or server names if the NFS server supports trunking. The ESXi host uses these values to achieve multipathing to the NFS server mount point.
- Select Mount NFS read only if the volume is exported as read-only by the NFS server.
- To use Kerberos security with NFS 4.1, enable Kerberos and select an appropriate Kerberos model.
Use Kerberos for authentication only (krb5)
Supports identity verification.
Use Kerberos for authentication and data integrity (krb5i)
In addition to identity verification, provides data integrity services. These services help to protect the NFS traffic from tampering by checking data packets for any potential modifications.
If you do not enable Kerberos, the datastore uses the default AUTH_SYS security.
- If you are creating a datastore at the data center or cluster level, select hosts that mount the datastore.
- Review the configuration options and click Finish.
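A single-host mount can also be scripted; a hedged PowerCLI sketch with placeholder names (the -FileSystemVersion and -Kerberos parameters require a recent PowerCLI release, and the host must already be configured for Kerberos as described above):
$vmhost = Get-VMHost -Name "esxi01.lab.local"
# Multiple -NfsHost values enable trunking/multipathing where the server supports it
New-Datastore -Nfs -VMHost $vmhost -Name "nfs-ds01" -NfsHost "nas01.lab.local" -Path "/export/vol1" -FileSystemVersion "4.1" -Kerberos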
Create/Rename/Delete/Unmount VMFS datastores
Change Datastore Name
You can change the name of an existing datastore.
- In the vSphere Web Client navigator, select Global Inventory Lists > Datastores.
- Right-click the datastore to rename, and select Rename.
- Type a new datastore name.
The vSphere Web Client enforces a 42 character limit for the datastore name.
The new name appears on all hosts that have access to the datastore.
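The rename can also be done in one line of PowerCLI (the names are examples):
Get-Datastore -Name "DS-1" | Set-Datastore -Name "DS-1-gold"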
Unmount Datastores
When you unmount a datastore, it remains intact, but can no longer be seen from the hosts that you specify. The datastore continues to appear on other hosts, where it remains mounted.
Do not perform any configuration operations that might result in I/O to the datastore while the unmount is in progress.
Prerequisites
When appropriate, before unmounting datastores, make sure that the following prerequisites are met:
No virtual machines reside on the datastore.
The datastore is not managed by Storage DRS.
Storage I/O control is disabled for this datastore.
- In the vSphere Web Client navigator, select Global Inventory Lists > Datastores.
- Right-click the datastore to unmount and select Unmount Datastore.
- If the datastore is shared, specify which hosts should no longer access the datastore.
- Confirm that you want to unmount the datastore.
After you unmount a VMFS datastore from all hosts, the datastore is marked as inactive. If you unmount an NFS or a virtual volumes datastore from all hosts, the datastore disappears from the inventory. You can mount the unmounted VMFS datastore. To mount the NFS or virtual volumes datastore that has been removed from the inventory, use the New Datastore wizard.
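There is no dedicated unmount cmdlet in older PowerCLI releases, but the underlying API call can be reached through Get-View; a hedged sketch for unmounting a VMFS datastore from a single host (names are placeholders):
$vmhost = Get-VMHost -Name "esxi01.lab.local"
$ds = Get-Datastore -Name "DS-1"
# UnmountVmfsVolume takes the VMFS UUID and unmounts the volume from this host only
$storSys = Get-View $vmhost.ExtensionData.ConfigManager.StorageSystem
$storSys.UnmountVmfsVolume($ds.ExtensionData.Info.Vmfs.Uuid)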
Remove VMFS Datastores
You can delete any type of VMFS datastore, including copies that you have mounted without resignaturing. When you delete a datastore, it is destroyed and disappears from all hosts that have access to the datastore.
Prerequisites
Remove or migrate all virtual machines from the datastore.
Make sure that no other host is accessing the datastore.
Disable Storage DRS for the datastore.
Disable Storage I/O Control for the datastore.
Make sure that the datastore is not used for vSphere HA heartbeating.
- In the vSphere Web Client navigator, select Global Inventory Lists > Datastores.
- Right-click the datastore to remove, and select Delete Datastore.
- Confirm that you want to remove the datastore.
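In PowerCLI this is Remove-Datastore; a minimal sketch with placeholder names:
# Destroys the datastore; make sure the prerequisites above are met first
Remove-Datastore -Datastore (Get-Datastore -Name "DS-1") -VMHost (Get-VMHost -Name "esxi01.lab.local") -Confirm:$false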
Mount/Unmount an NFS datastore
If you have unmounted an NFS or a virtual volumes datastore from all hosts, the datastore disappears from the inventory. To mount an NFS or virtual volumes datastore that has been removed from the inventory, use the New Datastore wizard.
To unmount an NFS datastore
- In the vSphere Web Client navigator, select Global Inventory Lists> Datastores.
- Right-click the datastore to unmount and select Unmount Datastore.
- If the datastore is shared, select the hosts from which to unmount the datastore.
- Confirm that you want to unmount the datastore.
Extend/Expand VMFS datastores
- In the vSphere Web Client navigator, select Global Inventory Lists > Datastores.
- Select the datastore and click the Increase Datastore Capacity icon.
- Select a device from the list of storage devices.
Your selection depends on whether an expandable storage device is available.
To expand an existing extent
Select the device for which the Expandable column reads YES. A storage device is expandable when it has free space immediately after the extent.
To add a new extent
Select the device for which the Expandable column reads NO.
- Review the Partition Layout to see the available configurations.
- Select a configuration option from the bottom panel.
Depending on the current layout of the disk and on your previous selections, the menu items you see might vary.
Use free space to expand the datastore
Expands an existing extent to a required capacity.
Use free space
Deploys an extent in the remaining free space of the disk. This menu item is available only when you are adding an extent.
Use all available partitions
Dedicates the entire disk to a single extent. This menu item is available only when you are adding an extent and when the disk you are formatting is not blank. The disk is reformatted, and any datastores and data that it contains are erased.
- Set the capacity for the extent.
The minimum extent size is 1.3 GB. By default, the entire free space on the storage device is available.
- Click Next.
- Review the proposed layout and the new configuration of your datastore, and click Finish.
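Expanding an extent can also be driven through the API when free space exists directly after it; a hedged Get-View sketch (names are placeholders, and it simply applies the first expand option returned, so it fails if the device is not expandable):
$ds = Get-Datastore -Name "DS-1"
$vmhost = Get-VMHost -Name "esxi01.lab.local"
$dsSys = Get-View $vmhost.ExtensionData.ConfigManager.DatastoreSystem
# Query the possible expand layouts, then grow the datastore into the adjacent free space
$options = $dsSys.QueryVmfsDatastoreExpandOptions($ds.ExtensionData.MoRef)
$dsSys.ExpandVmfsDatastore($ds.ExtensionData.MoRef, $options[0].Spec)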
Place a VMFS datastore in Maintenance Mode
- In the vSphere Client inventory, right-click a datastore in a datastore cluster and select Enter SDRS Maintenance Mode.
A list of recommendations appears for datastore maintenance mode migration.
- On the Placement Recommendations tab, deselect any recommendations you do not want to apply.
- If necessary, click Apply Recommendations.
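For scripting, the Datastore managed object exposes an equivalent method; a hedged sketch (the datastore name is a placeholder, and if Storage DRS runs in manual mode the returned recommendations still have to be applied):
# Only valid for a datastore in a Storage DRS-enabled datastore cluster
(Get-Datastore -Name "DS-1" | Get-View).DatastoreEnterMaintenanceMode()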
Select the Preferred Path/Disable a Path to a VMFS datastore
- Browse to the host in the vSphere Web Client navigator.
- Click the Manage tab, and click Storage.
- Click Storage Devices or Protocol Endpoints.
- Select the item whose paths you want to change and click the Properties tab.
- Under Multipathing Policies, click Edit Multipathing.
- Select a path policy.
By default, VMware supports the following path selection policies. If you have a third-party PSP installed on your host, its policy also appears on the list.
Fixed (VMware)
Most Recently Used (VMware)
Round Robin (VMware)
- For the fixed policy, specify the preferred path.
- Click OK to save your settings and exit the dialog box.
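The same changes can be made with PowerCLI; a minimal sketch with placeholder names showing the policy change, a preferred path for the Fixed policy, and disabling a path:
$lun = Get-ScsiLun -VmHost (Get-VMHost -Name "esxi01.lab.local") -CanonicalName "naa.60003ff44dc75adc9f2a3b4c5d6e7f80"
Set-ScsiLun -ScsiLun $lun -MultipathPolicy RoundRobin
# For the Fixed policy, nominate a preferred path
$path = Get-ScsiLunPath -ScsiLun $lun | Select-Object -First 1
Set-ScsiLun -ScsiLun $lun -MultipathPolicy Fixed -PreferredPath $path
# Disable a different path to the same LUN
$otherPath = Get-ScsiLunPath -ScsiLun $lun | Select-Object -Last 1
Set-ScsiLunPath -ScsiLunPath $otherPath -Active:$false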
Enable/Disable vStorage API for Array Integration (VAAI)
By default, ESXi Hosts support VAAI hardware acceleration for block devices, which means no configuration is needed for block devices. If a storage device supports T10 SCSI commands, then by default the ESXi Host can use VAAI.
To disable VAAI in ESXi/ESX, you must modify these advanced configuration settings:
- DataMover.HardwareAcceleratedMove (Full Copy)
- DataMover.HardwareAcceleratedInit (Block Zeroing)
- VMFS3.HardwareAcceleratedLocking (Hardware Assisted Locking)
To check the current value of the configuration settings:
Using the vSphere CLI:
vicfg-advcfg connection_options --get OptionName
Using the PowerCLI:
Get-VMHostAdvancedConfiguration -VMHost Hostname -Name OptionName
Using SSH/DCUI:
# esxcfg-advcfg -g OptionName
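To change a value rather than read it, the matching setter for the PowerCLI cmdlet above can be used; a minimal sketch, with the host name as a placeholder (a value of 0 disables the primitive, 1 enables it):
$vmhost = Get-VMHost -Name "esxi01.lab.local"
# Disable the Full Copy primitive; repeat with the other setting names as required
Set-VMHostAdvancedConfiguration -VMHost $vmhost -Name "DataMover.HardwareAcceleratedMove" -Value 0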
Determine a proper use case for multiple VMFS/NFS datastores
Ultimately this is dependent on the requirements of the VMs and the storage hardware you have available.
Datastores sit on backend storage that has physical disks configured in a particular way. If you have a requirement where some applications need more space, or need to be faster than others, creating multiple datastores with different characteristics can answer that requirement.
Disk contention could be a problem; having different datastores allows you to spread those workloads over different physical disks.
HA and resiliency – having multiple datastores allows you to spread your VMs across them. If you lose a datastore, you will not lose all of your VMs, only those located on that particular datastore.