Objective 3.1 – Manage vSphere Integration with Physical Storage.
Here is the start of Objective 3: Objective 3.1 – Manage vSphere Integration with Physical Storage. This is another long post with a great deal of information in it. As always, it has been linked to from the VCP6.5-DCV Blueprint post.
Happy Revision
Simon
Objective 3.1 – Manage vSphere Integration with Physical Storage
Perform NFS v3 and v4.1 configurations
- On the NFS server, configure an NFS volume and export it to be mounted on the ESXi hosts.
- Collect the IP address or DNS name of the NFS server and the full path, or folder name, of the NFS share.
- For NFS 4.1, you can collect multiple IP addresses or DNS names to use the multipathing support that the NFS 4.1 datastore provides.
- On each ESXi host, configure a VMkernel network port for NFS traffic.
- If you plan to use Kerberos authentication with the NFS 4.1 datastore, configure the ESXi hosts for Kerberos authentication.
Discover new storage LUNs
Storage Rescan Operations
When you perform storage management tasks or make changes in the SAN configuration, you might need to rescan your storage.
When you perform VMFS datastore management operations, such as creating a VMFS datastore or RDM, adding an extent, and increasing or deleting a VMFS datastore, your host or the vCenter Server automatically rescans and updates your storage. You can disable the automatic rescan feature by turning off the Host Rescan Filter.
In certain cases, you need to perform a manual rescan. You can rescan all storage available to your host or to all hosts in a folder, cluster, and data center.
If the changes you make are isolated to storage connected through a specific adapter, perform a rescan for this adapter.
Perform the manual rescan each time you make one of the following changes.
- Zone a new disk array on a SAN.
- Create new LUNs on a SAN.
- Change the path masking on a host.
- Reconnect a cable.
- Change CHAP settings (iSCSI only).
- Add or remove discovery or static addresses (iSCSI only).
- Add a single host to vCenter Server after you have edited, or removed from vCenter Server, a datastore shared by the vCenter Server hosts and the single host.
Perform Storage Rescan
When you make changes in your SAN configuration you might need to rescan your storage. You can rescan all storage available to your host, cluster, or data center. If the changes you make are isolated to storage accessed through a specific host, perform the rescan for only this host.
- In the vSphere Web Client object navigator, browse to a host, a cluster, a data center, or a folder that contains hosts.
- From the right-click menu, select Storage > Rescan Storage.
- Specify the extent of the rescan.
- Scan for New Storage Devices
- Rescan all adapters to discover new storage devices. If new devices are discovered, they appear in the device list.
- Scan for New VMFS Volumes
- Rescan all storage devices to discover new datastores that have been added since the last scan. Any new datastores appear in the datastore list.
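For anyone who prefers scripting this step, the same two scan options are exposed through the vSphere API. Below is a rough pyVmomi sketch; the vCenter address, credentials, and host name are placeholders for a lab environment, and RescanAllHba/RescanVmfs correspond to the two options above.

```python
# Rough pyVmomi sketch - vCenter address, credentials, and host name are lab placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect

si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
host = si.RetrieveContent().searchIndex.FindByDnsName(dnsName="esxi01.lab.local",
                                                      vmSearch=False)

storage = host.configManager.storageSystem
storage.RescanAllHba()   # Scan for New Storage Devices (all adapters)
storage.RescanVmfs()     # Scan for New VMFS Volumes
Disconnect(si)
```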
Perform Adapter Rescan
When you make changes in your SAN configuration and these changes are isolated to storage accessed through a specific adapter, perform rescan for only this adapter.
- Browse to the host in the vSphere Web Client navigator.
- Click the Configure tab.
- Under Storage, click Storage Adapters, and select the adapter to rescan from the list.
- Click the Rescan Adapter icon.
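The adapter-level rescan can be scripted in the same way. This is only a sketch; the adapter name vmhba33 is a placeholder, so check the Storage Adapters list for the real device name first.

```python
# Rough pyVmomi sketch - the adapter name vmhba33 is a placeholder.
import ssl
from pyVim.connect import SmartConnect, Disconnect

si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
host = si.RetrieveContent().searchIndex.FindByDnsName(dnsName="esxi01.lab.local",
                                                      vmSearch=False)

host.configManager.storageSystem.RescanHba("vmhba33")   # rescan only this adapter
Disconnect(si)
```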
Configure FC/iSCSI/FCoE LUNs as ESXi boot devices
When you set up your host to boot from a SAN, your host’s boot image is stored on one or more LUNs in the SAN storage system. When the host starts, it boots from the LUN on the SAN rather than from its local disk.
ESXi supports booting through a Fibre Channel host bus adapter (HBA) or a Fibre Channel over Ethernet (FCoE) converged network adapter (CNA).
Boot from SAN Benefits
Boot from SAN can provide numerous benefits to your environment. However, in certain cases, you should not use boot from SAN for ESXi hosts. Before you set up your system for boot from SAN, decide whether it is appropriate for your environment.
If you use boot from SAN, the benefits for your environment will include the following:
- Cheaper servers. Servers can be more dense and run cooler without internal storage.
- Easier server replacement. You can replace servers and have the new server point to the old boot location.
- Less wasted space. Servers without local disks often take up less space.
- Easier backup processes. You can back up the system boot images in the SAN as part of the overall SAN backup procedures. Also, you can use advanced array features such as snapshots on the boot image.
- Improved management. Creating and managing the operating system image is easier and more efficient.
- Better reliability. You can access the boot disk through multiple paths, which protects the disk from being a single point of failure.
Boot from SAN procedures are described on pages 50–57, and on page 103 for iSCSI, of the vsphere-esxi-vcenter-server-65-storage-guide.pdf.
Mount an NFS share for use with vSphere
- On the NFS server, configure an NFS volume and export it to be mounted on the ESXi hosts.
- Collect the IP address or DNS name of the NFS server and the full path, or folder name, of the NFS share.
- For NFS 4.1, you can collect multiple IP addresses or DNS names to use the multipathing support that the NFS 4.1 datastore provides.
- On each ESXi host, configure a VMkernel network port for NFS traffic.
- If you plan to use Kerberos authentication with the NFS 4.1 datastore, configure the ESXi hosts for Kerberos authentication.
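For reference, the actual mount can also be done through the host's datastore system in the API. A rough pyVmomi sketch follows; the NFS server name, export path, and datastore name are placeholders, and the type would be "NFS41" (with remoteHostNames supplied for multipathing) for an NFS 4.1 datastore.

```python
# Rough pyVmomi sketch - NFS server, export path, and datastore name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
host = si.RetrieveContent().searchIndex.FindByDnsName(dnsName="esxi01.lab.local",
                                                      vmSearch=False)

spec = vim.host.NasVolume.Specification(
    remoteHost="nfs01.lab.local",      # NFS server
    remotePath="/export/datastore1",   # exported folder
    localPath="NFS-DS01",              # datastore name as seen by the host
    accessMode="readWrite",
    type="NFS")                        # "NFS41" for an NFS 4.1 datastore
ds = host.configManager.datastoreSystem.CreateNasDatastore(spec)
print("Mounted datastore:", ds.name)
Disconnect(si)
```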
Enable/Configure/Disable vCenter Server storage filters
Storage Filtering
vCenter Server provides storage filters to help you avoid storage device corruption or performance degradation that might be caused by an unsupported use of storage devices. These filters are available by default.
Config.vpxd.filter.vmfsFilter (VMFS Filter)
Filters out storage devices, or LUNs, that are already used by a VMFS datastore on any host managed by vCenter Server. The LUNs do not show up as candidates to be formatted with another VMFS datastore or to be used as an RDM.
Config.vpxd.filter.rdmFilter (RDM Filter)
Filters out LUNs that are already referenced by an RDM on any host managed by vCenter Server. The LUNs do not show up as candidates to be formatted with VMFS or to be used by a different RDM. For your virtual machines to access the same LUN, the virtual machines must share the same RDM mapping files.
Config.vpxd.filter.SameHostsAndTransportsFilter (Same Hosts and Transports Filter)
Filters out LUNs ineligible for use as VMFS datastore extents because of host or storage type incompatibility. Prevents you from adding the following LUNs as extents:
- LUNs not exposed to all hosts that share the original VMFS datastore.
- LUNs that use a storage type different from the one the original VMFS datastore uses.
For example, you cannot add a Fibre Channel extent to a VMFS datastore on a local storage device.
Config.vpxd.filter.hostRescanFilter (Host Rescan Filter)
Automatically rescans and updates VMFS datastores after you perform datastore management operations. The filter helps provide a consistent view of all VMFS datastores on all hosts managed by vCenter Server.
Turn off Storage Filters
When you perform VMFS datastore management operations, vCenter Server uses default storage protection filters. The filters help you to avoid storage corruption by retrieving only the storage devices that can be used for a particular operation. Unsuitable devices are not displayed for selection. You can turn off the filters to view all devices.
Prerequisites
Before you make changes to the device filters, consult with the VMware support team. You can turn off the filters only if you have other methods to prevent device corruption.
- Browse to the vCenter Server in the vSphere Web Client object navigator.
- Click the Configure tab.
- Under Settings, click Advanced Settings, and click Edit.
- Specify the filter to turn off.
- In the Name text box, enter an appropriate filter name.
- config.vpxd.filter.vmfsFilter VMFS Filter
- config.vpxd.filter.rdmFilter RDM Filter
- config.vpxd.filter.SameHostsAndTransportsFilter Same Hosts and Transports Filter
- config.vpxd.filter.hostRescanFilter Host Rescan Filter
If you turn off the Host Rescan Filter, your hosts continue to perform a rescan each time you present a new LUN to a host or a cluster.
- In the Value text box, enter False for the specified key.
- Click Add, and click OK to save your changes.
You are not required to restart the vCenter Server system.
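The same advanced setting can be written through the vCenter Server OptionManager if you want to script it. This is only a sketch with a placeholder vCenter name and credentials; it sets the Host Rescan Filter key to False and reads the value back.

```python
# Rough pyVmomi sketch - vCenter address and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())

opt_mgr = si.RetrieveContent().setting   # vCenter Server advanced settings (OptionManager)
opt_mgr.UpdateOptions(changedValue=[
    vim.option.OptionValue(key="config.vpxd.filter.hostRescanFilter", value="False")])

for opt in opt_mgr.QueryOptions("config.vpxd.filter.hostRescanFilter"):
    print(opt.key, "=", opt.value)
Disconnect(si)
```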
Configure/Edit hardware/dependent hardware initiators
A dependent hardware iSCSI adapter is a third-party adapter that depends on VMware networking, and iSCSI configuration and management interfaces provided by VMware.
An example of a dependent iSCSI adapter is a Broadcom 5709 NIC. When installed on a host, it presents its two components, a standard network adapter and an iSCSI engine, to the same port. The iSCSI engine appears on the list of storage adapters as an iSCSI adapter (vmhba). Although the iSCSI adapter is enabled by default, to make it functional, you must first connect it, through a virtual VMkernel adapter (vmk), to a physical network adapter (vmnic) associated with it. You can then configure the iSCSI adapter.
After you configure the dependent hardware iSCSI adapter, the discovery and authentication data are passed through the network connection, while the iSCSI traffic goes through the iSCSI engine, bypassing the network.
View Dependent Hardware iSCSI Adapters
View a dependent hardware iSCSI adapter to verify that it is correctly loaded. If installed, the dependent hardware iSCSI adapter (vmhba#) appears on the list of storage adapters under a category such as Broadcom iSCSI Adapter. If the dependent hardware adapter does not appear on the list of storage adapters, check whether it needs to be licensed.
- Browse to the host in the vSphere Web Client navigator.
- Click the Configure tab.
- Under Storage, click Storage Adapters.
- Select the adapter (vmhba#) to view.
The default details for the adapter appear, including the iSCSI name, iSCSI alias, and the status.
Although the dependent iSCSI adapter is enabled by default, to make it functional, you must set up networking for the iSCSI traffic and bind the adapter to the appropriate VMkernel iSCSI port. You then configure discovery addresses and CHAP parameters.
Modify General Properties for iSCSI Adapters
You can change the default iSCSI name and alias assigned to your iSCSI adapters. For the independent hardware iSCSI adapters, you can also change the default IP settings.
Prerequisites
Required privilege: Host.configuration.Storage Partition configuration
- Browse to the host in the vSphere Web Client navigator.
- Click the Configure tab.
- Under Storage, click Storage Adapters, and select the adapter (vmhba#) to configure.
- Under Adapter Details, click the Properties tab, and click Edit in the General panel.
- Modify the following general properties.
iSCSI Name
Unique name formed according to iSCSI standards that identifies the iSCSI adapter. If you change the name, make sure that the name you enter is worldwide unique and properly formatted. Otherwise, certain storage devices might not recognize the iSCSI adapter.
iSCSI Alias
A friendly name you use instead of the iSCSI name.
If you change the iSCSI name, it is used for new iSCSI sessions. For existing sessions, the new settings are not used until you log out and log in again.
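If you want to script the same change, the host storage system exposes both fields. A rough pyVmomi sketch follows, with a placeholder adapter name (vmhba65), IQN, and alias.

```python
# Rough pyVmomi sketch - adapter name, IQN, and alias below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect

si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
host = si.RetrieveContent().searchIndex.FindByDnsName(dnsName="esxi01.lab.local",
                                                      vmSearch=False)

storage = host.configManager.storageSystem
storage.UpdateInternetScsiName(iScsiHbaDevice="vmhba65",
                               iScsiName="iqn.1998-01.com.vmware:esxi01-lab")  # must stay unique
storage.UpdateInternetScsiAlias(iScsiHbaDevice="vmhba65", iScsiAlias="esxi01-sw-iscsi")
Disconnect(si)
```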
Determine Association Between iSCSI and Network Adapters
You create network connections to bind dependent iSCSI and physical network adapters. To create the connections correctly, you must determine the name of the physical NIC with which the dependent hardware iSCSI adapter is associated.
- Select the iSCSI adapter (vmhba#) and click the Network Port Binding tab under Adapter Details.
- Click Add.
- The network adapter (vmnic#) that corresponds to the dependent iSCSI adapter is listed in the Physical Network Adapter column.
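You can also pull this mapping from the host's iSCSI manager, which lists candidate and already-bound VMkernel/physical NIC pairs for an adapter. A rough pyVmomi sketch with a placeholder adapter name:

```python
# Rough pyVmomi sketch - vmhba65 is a placeholder dependent/software iSCSI adapter name.
import ssl
from pyVim.connect import SmartConnect, Disconnect

si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
host = si.RetrieveContent().searchIndex.FindByDnsName(dnsName="esxi01.lab.local",
                                                      vmSearch=False)

iscsi_mgr = host.configManager.iscsiManager
for info in iscsi_mgr.QueryCandidateNics(iScsiHbaName="vmhba65"):
    print("candidate:", info.vnicDevice, "->", info.pnicDevice)
for info in iscsi_mgr.QueryBoundVnics(iScsiHbaName="vmhba65"):
    print("bound:    ", info.vnicDevice, "->", info.pnicDevice)
Disconnect(si)
```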
Set Up iSCSI Networking
If you use the software or dependent hardware iSCSI adapters, you must configure connections for the traffic between the iSCSI component and the physical network adapters.
Configuring the network connection involves creating a virtual VMkernel adapter for each physical network adapter. You then associate the VMkernel adapter with an appropriate iSCSI adapter. This process is called port binding.
Set Up Dynamic or Static Discovery for iSCSI
With dynamic discovery, each time the initiator contacts a specified iSCSI storage system, it sends the SendTargets request to the system. The iSCSI system responds by supplying a list of available targets to the initiator. In addition to the dynamic discovery method, you can use static discovery and manually enter information for the targets.
When you set up static or dynamic discovery, you can only add new iSCSI targets. You cannot change any parameters of an existing target. To make changes, remove the existing target and add a new one.
Prerequisites
Required privilege: Host.configuration.Storage Partition configuration
- Browse to the host in the vSphere Web Client navigator.
- Click the Configure tab.
- Under Storage, click Storage Adapters, and select the adapter (vmhba#) to configure.
- Under Adapter Details, click the Targets tab.
- Configure the discovery method.
- Dynamic Discovery
- Click Dynamic Discovery and click Add.
- Type the IP address or DNS name of the storage system and click OK.
- Rescan the iSCSI adapter.
- After establishing the SendTargets session with the iSCSI system, your host populates the Static Discovery list with all newly discovered targets.
- Static Discovery
- Click Static Discovery and click Add.
- Enter the target’s information and click OK.
- Rescan the iSCSI adapter.
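Both discovery methods map to single calls on the host storage system. A rough pyVmomi sketch follows; the adapter name, target address, and target IQN are placeholders.

```python
# Rough pyVmomi sketch - adapter name, target IP, and target IQN are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
host = si.RetrieveContent().searchIndex.FindByDnsName(dnsName="esxi01.lab.local",
                                                      vmSearch=False)

storage = host.configManager.storageSystem
hba = "vmhba65"

# Dynamic discovery: add a SendTargets address.
storage.AddInternetScsiSendTargets(iScsiHbaDevice=hba, targets=[
    vim.host.InternetScsiHba.SendTarget(address="10.0.10.50", port=3260)])

# Static discovery: add a specific target by IQN.
storage.AddInternetScsiStaticTargets(iScsiHbaDevice=hba, targets=[
    vim.host.InternetScsiHba.StaticTarget(address="10.0.10.50", port=3260,
                                          iScsiName="iqn.2000-01.com.example:array.lun0")])

storage.RescanHba(hba)   # rescan the adapter after changing discovery
Disconnect(si)
```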
Enable/Disable software iSCSI initiator
Activate the Software iSCSI Adapter
You must activate your software iSCSI adapter so that your host can use it to access iSCSI storage. You can activate only one software iSCSI adapter.
Prerequisites
Required privilege: Host.configuration.Storage Partition configuration
If you boot from iSCSI using the software iSCSI adapter, the adapter is enabled and the network configuration is created at the first boot. If you disable the adapter, it is reenabled each time you boot the host.
- Browse to the host in the vSphere Web Client navigator.
- Click the Configure tab.
- Under Storage, click Storage Adapters, and click the Add icon.
- Select Software iSCSI Adapter and confirm that you want to add the adapter.
The software iSCSI adapter (vmhba#) is enabled and appears on the list of storage adapters. After enabling the adapter, the host assigns the default iSCSI name to it. If you need to change the default name, follow iSCSI naming conventions.
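Enabling the software adapter is a single call against the host storage system; passing False instead marks it for removal, as described in the next section. A rough pyVmomi sketch (placeholder host name and credentials):

```python
# Rough pyVmomi sketch - host name and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
host = si.RetrieveContent().searchIndex.FindByDnsName(dnsName="esxi01.lab.local",
                                                      vmSearch=False)

host.configManager.storageSystem.UpdateSoftwareInternetScsiEnabled(enabled=True)

# Show the new adapter (vmhba#) and its default IQN.
for hba in host.config.storageDevice.hostBusAdapter:
    if isinstance(hba, vim.host.InternetScsiHba) and hba.isSoftwareBased:
        print("Software iSCSI adapter:", hba.device, hba.iScsiName)
Disconnect(si)
```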
Disable Software iSCSI Adapter
If you do not need the software iSCSI adapter, you can disable it.
Disabling the software iSCSI adapter marks it for removal. The adapter is removed from the host on the next host reboot. After removal, all virtual machines and other data on the storage devices associated with this adapter become inaccessible to the host.
Prerequisites
Required privilege: Host.configuration.Storage Partition configuration
- Browse to the host in the vSphere Web Client navigator.
- Click the Configure tab.
- Under Storage, click Storage Adapters, and select the adapter (vmhba#) to configure.
- Under Adapter Details, click the Properties tab.
- Click Disable and confirm that you want to disable the adapter.
- The status indicates that the adapter is disabled.
- Reboot the host.
After reboot, the adapter no longer appears on the list of storage adapters. The iSCSI software adapter is no longer available and storage devices associated with it are inaccessible. You can later activate the adapter.
Configure/Edit software iSCSI initiator settings
You can change the default iSCSI name and alias assigned to your iSCSI adapters. For the independent hardware iSCSI adapters, you can also change the default IP settings.
Prerequisites
Required privilege: Host.configuration.Storage Partition configuration
- Browse to the host in the vSphere Web Client navigator.
- Click the Configure tab.
- Under Storage, click Storage Adapters, and select the adapter (vmhba#) to configure.
- Under Adapter Details, click the Properties tab, and click Edit in the General panel.
- Modify the following general properties.
iSCSI Name
Unique name formed according to iSCSI standards that identifies the iSCSI adapter. If you change the name, make sure that the name you enter is worldwide unique and properly formatted. Otherwise, certain storage devices might not recognize the iSCSI adapter.
iSCSI Alias
A friendly name you use instead of the iSCSI name.
If you change the iSCSI name, it is used for new iSCSI sessions. For existing sessions, the new settings are not used until you log out and log in again.
Configure iSCSI port binding
iSCSI port binding creates connections for the traffic between the software or dependent hardware iSCSI adapters and the physical network adapters.
The following tasks discuss the iSCSI network configuration with a vSphere standard switch.
You can also use the VMware vSphere® Distributed Switch™ and VMware NSX® Virtual Switch™ in the iSCSI port binding configuration.
If you use a vSphere distributed switch with multiple uplink ports, for port binding, create a separate distributed port group for each physical NIC. Then set the teaming policy so that each distributed port group has only one active uplink port.
Create a Single VMkernel Adapter for iSCSI
Connect the VMkernel, which runs services for iSCSI storage, to a physical network adapter.
- Browse to the host in the vSphere Web Client navigator.
- Click Actions > Add Networking.
- Select VMkernel Network Adapter, and click Next.
- Select New standard switch to create a vSphere standard switch.
- Click the Add adapters icon, and select the network adapter (vmnic#) to use for iSCSI.
- Make sure to assign the adapter to Active Adapters.
- Enter a network label.
- A network label is a friendly name that identifies the VMkernel adapter that you are creating, for example, iSCSI.
- Specify the IP settings
- Review the information and click Finish.
You created the virtual VMkernel adapter (vmk#) for a physical network adapter (vmnic#) on your host.
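The same standard switch, port group, and VMkernel adapter chain can be built through the host network system. This sketch is only illustrative; the switch name, port group label, uplink (vmnic2), and IP settings are placeholders for a lab.

```python
# Rough pyVmomi sketch - switch name, port group, uplink, and IP settings are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
host = si.RetrieveContent().searchIndex.FindByDnsName(dnsName="esxi01.lab.local",
                                                      vmSearch=False)
net = host.configManager.networkSystem

# New standard switch with a single iSCSI uplink.
net.AddVirtualSwitch(vswitchName="vSwitch-iSCSI", spec=vim.host.VirtualSwitch.Specification(
    numPorts=128, bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic2"])))

# Port group that gives the VMkernel adapter its network label.
net.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
    name="iSCSI-1", vlanId=0, vswitchName="vSwitch-iSCSI", policy=vim.host.NetworkPolicy()))

# VMkernel adapter (vmk#) with a static IP on the iSCSI subnet.
vmk = net.AddVirtualNic(portgroup="iSCSI-1", nic=vim.host.VirtualNic.Specification(
    ip=vim.host.IpConfig(dhcp=False, ipAddress="10.0.10.11", subnetMask="255.255.255.0")))
print("Created VMkernel adapter:", vmk)
Disconnect(si)
```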
Create Additional VMkernel Adapters for iSCSI
Use this task if you have two or more physical network adapters for iSCSI and you want to connect all of your physical adapters to a single vSphere standard switch. In this task, you add the physical adapters and VMkernel adapters to an existing vSphere standard switch.
Prerequisites
Create a vSphere standard switch that maps an iSCSI VMkernel adapter to a single physical network adapter designated for iSCSI traffic.
- Browse to the host in the vSphere Web Client navigator.
- Click the Configure tab.
- Under Networking, click Virtual switches, and select the vSphere switch that you want to modify from the list.
- Connect additional network adapters to the switch.
- Click the Add host networking icon.
- Select Physical Network Adapters, and click Next.
- Make sure that you are using the existing switch, and click Next.
- Click the Add adapters icon, and select one or more network adapters (vmnic#) to use for iSCSI.
- With dependent hardware iSCSI adapters, select only those NICs that have a corresponding iSCSI component.
- Complete configuration and click Finish.
Create iSCSI VMkernel adapters for all physical network adapters that you added.
The number of VMkernel interfaces must correspond to the number of physical network adapters on the vSphere standard switch.
- Click the Add host networking icon.
- Select VMkernel Network Adapter, and click Next.
- Make sure that you are using the existing switch, and click Next.
- Complete configuration and click Finish.
Change Network Policy for iSCSI
If you use a single vSphere standard switch to connect multiple VMkernel adapters to multiple network adapters, set up network policy so that only one physical network adapter is active for each VMkernel adapter.
By default, for each VMkernel adapter on the vSphere standard switch, all network adapters appear as active. You must override this setup, so that each VMkernel adapter maps to only one corresponding active physical adapter. For example, vmk1 maps to vmnic1, vmk2 maps to vmnic2, and so on.
Prerequisites
Create a vSphere standard switch that connects VMkernel with physical network adapters designated for iSCSI traffic. The number of VMkernel adapters must correspond to the number of physical adapters on the vSphere standard switch.
- Browse to the host in the vSphere Web Client navigator.
- Click the Configure tab.
- Under Networking, click Virtual switches, and select the vSphere switch that you want to modify from the list.
- On the vSwitch diagram, select the VMkernel adapter and click the Edit settings icon.
- On the Edit Settings wizard, click Teaming and Failover and select Override under Failover Order.
- Designate only one physical adapter as active and move all remaining adapters to the Unused Adapters category.
- Repeat the previous three steps for each iSCSI VMkernel interface on the vSphere standard switch.
Bind iSCSI and VMkernel Adapters
Bind an iSCSI adapter with a VMkernel adapter.
Prerequisites
Create a virtual VMkernel adapter for each physical network adapter on your host. If you use multiple VMkernel adapters, set up the correct network policy.
Required privilege: Host.configuration.Storage Partition configuration
- Browse to the host in the vSphere Web Client navigator.
- Click the Configure tab.
- Under Storage, click Storage Adapters, and select the software or dependent iSCSI adapter to configure from the list.
- Under Adapter Details, click the Network Port Binding tab and click the Add icon.
- Select a VMkernel adapter to bind with the iSCSI adapter.
Make sure that the network policy for the VMkernel adapter is compliant with the binding requirements.
You can bind the software iSCSI adapter to one or more VMkernel adapters. For a dependent hardware iSCSI adapter, only one VMkernel adapter associated with the correct physical NIC is available.
- Click OK.
The network connection appears on the list of VMkernel port bindings for the iSCSI adapter.
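Port binding itself is exposed by the host's iSCSI manager. A rough pyVmomi sketch, with placeholder adapter and VMkernel device names:

```python
# Rough pyVmomi sketch - vmhba65 and vmk1 are placeholder device names.
import ssl
from pyVim.connect import SmartConnect, Disconnect

si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
host = si.RetrieveContent().searchIndex.FindByDnsName(dnsName="esxi01.lab.local",
                                                      vmSearch=False)

iscsi_mgr = host.configManager.iscsiManager
iscsi_mgr.BindVnic(iScsiHbaName="vmhba65", vnicDevice="vmk1")

for info in iscsi_mgr.QueryBoundVnics(iScsiHbaName="vmhba65"):
    print("bound:", info.vnicDevice, "->", info.pnicDevice)
Disconnect(si)
```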
Review Port Binding Details
Review networking details of the VMkernel adapter that is bound to the iSCSI adapter.
- Browse to the host in the vSphere Web Client navigator.
- Click the Configure tab.
- Under Storage, click Storage Adapters, and select the software or dependent iSCSI adapter from the list.
- Under Adapter Details, click the Network Port Binding tab and click the View Details icon.
- Review the VMkernel adapter information by switching between available tabs.
Enable/Configure/Disable iSCSI CHAP
Because the IP networks that the iSCSI technology uses to connect to remote targets do not protect the data they transport, you must ensure security of the connection. One of the protocols that iSCSI implements is the Challenge Handshake Authentication Protocol (CHAP), which verifies the legitimacy of initiators that access targets on the network.
CHAP uses a three-way handshake algorithm to verify the identity of your host and, if applicable, of the iSCSI target when the host and target establish a connection. The verification is based on a predefined private value, or CHAP secret, that the initiator and target share.
ESXi supports CHAP authentication at the adapter level. In this case, all targets receive the same CHAP name and secret from the iSCSI initiator. For software and dependent hardware iSCSI adapters, ESXi also supports per-target CHAP authentication, which allows you to configure different credentials for each target to achieve a greater level of security.
Choosing CHAP Authentication Method
ESXi supports unidirectional CHAP for all types of iSCSI initiators, and bidirectional CHAP for software and dependent hardware iSCSI.
Before configuring CHAP, check whether CHAP is enabled at the iSCSI storage system and check the CHAP authentication method the system supports. If CHAP is enabled, enable it for your initiators, making sure that the CHAP authentication credentials match the credentials on the iSCSI storage.
ESXi supports the following CHAP authentication methods:
Unidirectional CHAP
In unidirectional CHAP authentication, the target authenticates the initiator, but the initiator does not authenticate the target.
Bidirectional CHAP
In bidirectional CHAP authentication, an additional level of security enables the initiator to authenticate the target. VMware supports this method for software and dependent hardware iSCSI adapters only.
For software and dependent hardware iSCSI adapters, you can set unidirectional CHAP and bidirectional CHAP for each adapter or at the target level. Independent hardware iSCSI supports CHAP only at the adapter level.
When you set the CHAP parameters, specify a security level for CHAP.
CHAP Security Level
None
The host does not use CHAP authentication. Select this option to disable authentication if it is currently enabled.
Use unidirectional CHAP if required by target
The host prefers a non-CHAP connection, but can use a CHAP connection if required by the target.
Use unidirectional CHAP unless prohibited by target
The host prefers CHAP, but can use non-CHAP connections if the target does not support CHAP.
Use unidirectional CHAP
The host requires successful CHAP authentication. The connection fails if CHAP negotiation fails.
Use bidirectional CHAP
The host and the target support bidirectional CHAP.
Set Up CHAP for iSCSI Adapter
When you set up CHAP name and secret at the iSCSI adapter level, all targets receive the same parameters from the adapter. By default, all discovery addresses or static targets inherit CHAP parameters that you set up at the adapter level.
The CHAP name should not exceed 511 alphanumeric characters and the CHAP secret should not exceed 255 alphanumeric characters. Some adapters, for example the QLogic adapter, might have lower limits, 255 for the CHAP name and 100 for the CHAP secret.
Prerequisites
Before setting up CHAP parameters for software or dependent hardware iSCSI, determine whether to configure unidirectional or bidirectional CHAP. Independent hardware iSCSI adapters do not support bidirectional CHAP.
Verify CHAP parameters configured on the storage side. Parameters that you configure must match the ones on the storage side.
Required privilege: Host.configuration.Storage Partition configuration
- Display storage adapters and select the iSCSI adapter to configure.
- Under Adapter Details, click the Properties tab and click Edit in the Authentication panel.
- Specify authentication method.
- Specify the outgoing CHAP name.
Make sure that the name you specify matches the name configured on the storage side.
- To set the CHAP name to the iSCSI adapter name, select Use initiator name.
To set the CHAP name to anything other than the iSCSI initiator name, deselect Use initiator name and type a name in the Name text box.
- Enter an outgoing CHAP secret to be used as part of authentication. Use the same secret that you enter on the storage side.
If configuring bidirectional CHAP, specify incoming CHAP credentials. Make sure to use different secrets for the outgoing and incoming CHAP.
- Click OK.
- Rescan the iSCSI adapter.
If you change the CHAP parameters, they are used for new iSCSI sessions. For existing sessions, new settings are not used until you log out and log in again.
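Adapter-level CHAP maps to one authentication-properties update on the host storage system; the same call takes an optional target set for the per-target case in the next section. A rough pyVmomi sketch with a placeholder adapter name, CHAP name, and secret (chapRequired corresponds to "Use unidirectional CHAP"):

```python
# Rough pyVmomi sketch - adapter name, CHAP name, and secret are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
host = si.RetrieveContent().searchIndex.FindByDnsName(dnsName="esxi01.lab.local",
                                                      vmSearch=False)
storage = host.configManager.storageSystem

auth = vim.host.InternetScsiHba.AuthenticationProperties(
    chapAuthEnabled=True,
    chapAuthenticationType="chapRequired",          # "Use unidirectional CHAP"
    chapName="iqn.1998-01.com.vmware:esxi01-lab",   # outgoing CHAP name, must match the array
    chapSecret="Outgoing-Secret-1",                 # outgoing CHAP secret, must match the array
    mutualChapAuthenticationType="chapProhibited")  # no bidirectional CHAP in this sketch

# Omitting targetSet applies the settings at the adapter level; pass a
# vim.host.InternetScsiHba.TargetSet for per-target CHAP instead.
storage.UpdateInternetScsiAuthenticationProperties(iScsiHbaDevice="vmhba65",
                                                   authenticationProperties=auth)
storage.RescanHba("vmhba65")
Disconnect(si)
```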
Set Up CHAP for Target
If you use software and dependent hardware iSCSI adapters, you can configure different CHAP credentials for each discovery address or static target.
The CHAP name should not exceed 511 and the CHAP secret 255 alphanumeric characters.
Prerequisites
Before setting up CHAP parameters for software or dependent hardware iSCSI, determine whether to configure unidirectional or bidirectional CHAP.
Verify CHAP parameters configured on the storage side. Parameters that you configure must match the ones on the storage side.
Access storage adapters.
Required privilege: Host.configuration.Storage Partition configuration
- Select the iSCSI adapter to configure and click the Targets tab under Adapter Details.
- Click either Dynamic Discovery or Static Discovery.
- From the list of available targets, select a target to configure and click Authentication.
- Deselect Inherit settings from parent and specify authentication method.
- Specify the outgoing CHAP name.
Make sure that the name you specify matches the name configured on the storage side.
To set the CHAP name to the iSCSI adapter name, select Use initiator name.
To set the CHAP name to anything other than the iSCSI initiator name, deselect Use initiator name and type a name in the Name text box.
- Enter an outgoing CHAP secret to be used as part of authentication. Use the same secret that you enter on the storage side.
If configuring bidirectional CHAP, specify incoming CHAP credentials. Make sure to use different secrets for the outgoing and incoming CHAP.
- Click OK.
- Rescan the iSCSI adapter.
If you change the CHAP parameters, they are used for new iSCSI sessions. For existing sessions, new settings are not used until you log out and log in again.
Disable CHAP
You can disable CHAP if your storage system does not require it.
If you disable CHAP on a system that requires CHAP authentication, existing iSCSI sessions remain active until you reboot your host, end the session through the command line, or the storage system forces a logout. After the session ends, you can no longer connect to targets that require CHAP.
Required privilege: Host.configuration.Storage Partition configuration
- Open the CHAP Credentials dialog box.
- For software and dependent hardware iSCSI adapters, to disable just the mutual CHAP and leave the one-way CHAP, select Do not use CHAP in the Mutual CHAP area.
- To disable one-way CHAP, select Do not use CHAP in the CHAP area. The mutual CHAP, if set up, automatically turns to Do not use CHAP when you disable the one-way CHAP.
- Click OK.
Determine use cases for Fibre Channel zoning
If you are an ESXi administrator planning to set up hosts to work with SANs, you must have a working knowledge of SAN concepts. You can find information about SANs in print and on the Internet. Because this industry changes constantly, check these resources frequently.
If you are new to SAN technology, familiarize yourself with the basic terminology.
A storage area network (SAN) is a specialized high-speed network that connects computer systems, or host servers, to high performance storage subsystems. The SAN components include host bus adapters (HBAs) in the host servers, switches that help route storage traffic, cables, storage processors (SPs), and storage disk arrays.
A SAN topology with at least one switch present on the network forms a SAN fabric.
To transfer traffic from host servers to shared storage, the SAN uses the Fibre Channel (FC) protocol that packages SCSI commands into Fibre Channel frames.
To restrict server access to storage arrays not allocated to that server, the SAN uses zoning. Typically, zones are created for each group of servers that access a shared group of storage devices and LUNs. Zones define which HBAs can connect to which SPs. Devices outside a zone are not visible to the devices inside the zone.
Zoning is similar to LUN masking, which is commonly used for permission management. LUN masking is a process that makes a LUN available to some hosts and unavailable to other hosts.
When transferring data between the host server and storage, the SAN uses a technique known as multipathing. Multipathing allows you to have more than one physical path from the ESXi host to a LUN on a storage system.
Generally, a single path from a host to a LUN consists of an HBA, switch ports, connecting cables, and the storage controller port. If any component of the path fails, the host selects another available path for I/O. The process of detecting a failed path and switching to another is called path failover.
Compare and contrast array thin provisioning and virtual disk thin provisioning
Virtual Disk Thin Provisioning
When you create a virtual machine, a certain amount of storage space on a datastore is provisioned to virtual disk files.
By default, ESXi offers a traditional storage provisioning method for virtual machines. With this method, you first estimate how much storage the virtual machine will need for its entire life cycle. You then provision a fixed amount of storage space to its virtual disk in advance, for example, 40GB, and have the entire provisioned space committed to the virtual disk. A virtual disk that immediately occupies the entire provisioned space is a thick disk.
ESXi supports thin provisioning for virtual disks. With the disk-level thin provisioning feature, you can create virtual disks in a thin format. For a thin virtual disk, ESXi provisions the entire space required for the disk’s current and future activities, for example 40GB. However, the thin disk uses only as much storage space as the disk needs for its initial operations. In this example, the thin-provisioned disk occupies only 20GB of storage. As the disk requires more space, it can grow into its entire 40GB provisioned space.
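To make the thick/thin contrast concrete, here is a rough pyVmomi sketch that adds a 40GB thin-provisioned disk to an existing VM; the vCenter details and VM name "testvm01" are placeholders, and setting thinProvisioned=False (optionally with eagerlyScrub=True) would create the thick variants instead.

```python
# Rough pyVmomi sketch - vCenter details and the VM name "testvm01" are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "testvm01")
view.Destroy()

# Reuse the VM's existing SCSI controller and pick a free unit number (7 is the controller itself).
ctrl = next(d for d in vm.config.hardware.device
            if isinstance(d, vim.vm.device.VirtualSCSIController))
used = [d.unitNumber for d in vm.config.hardware.device if d.controllerKey == ctrl.key]
unit = next(u for u in range(16) if u != 7 and u not in used)

disk = vim.vm.device.VirtualDisk(
    key=-1, controllerKey=ctrl.key, unitNumber=unit,
    capacityInKB=40 * 1024 * 1024,                   # 40GB provisioned size
    backing=vim.vm.device.VirtualDisk.FlatVer2BackingInfo(
        fileName="", diskMode="persistent", thinProvisioned=True))

spec = vim.vm.ConfigSpec(deviceChange=[vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
    fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
    device=disk)])
vm.ReconfigVM_Task(spec=spec)   # returns a Task; the disk appears once it completes
Disconnect(si)
```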
ESXi and Array Thin Provisioning
You can use thin-provisioned storage arrays with ESXi.
The ESXi host integrates with block-based storage and performs these tasks:
The host can recognize underlying thin-provisioned LUNs and monitor their space usage to avoid running out of physical space. As your VMFS datastore grows or if you use Storage vMotion to migrate virtual machines to a thin-provisioned LUN, the host communicates with the LUN and warns you about breaches in physical space and about out-of-space conditions.
The host can automatically issue the T10 unmap command from VMFS6 and VM guest operating systems to reclaim unused space from the array. VMFS5 supports manual space reclamation.