
8th September 2017

Objective 2.1 – Configure policies, features and verify vSphere networking

OK, there is a huge amount of information in this post.  It contains all the revision material listed in Objective 2.1 – Configure policies, features and verify vSphere networking…  Ask me nicely and I might see about making these PDFs!

VCP6.5 Certification Blueprint

Happy Revision

Simon

 

Create/Delete a vSphere Distributed Switch

Create a vSphere distributed switch on a data center to handle the networking configuration of multiple hosts at a time from a central place.

  • In the vSphere Web Client, navigate to a data center.
  • In the navigator, right-click the data center and select Distributed Switch > New Distributed Switch.
  • On the Name and location page, type a name for the new distributed switch, or accept the generated name, and click Next.
  • On the Select version page, select a distributed switch version and click Next.

Distributed Switch: 6.5.0: Compatible with ESXi 6.5 and later.

Distributed Switch: 6.0.0: Compatible with ESXi 6.0 and later. Features released with later vSphere distributed switch versions are not supported.

Distributed Switch: 5.5.0: Compatible with ESXi 5.5 and later. Features released with later vSphere distributed switch versions are not supported.

Distributed Switch: 5.1.0: Compatible with VMware ESXi 5.1 and later. Features released with later vSphere distributed switch versions are not supported.

Distributed Switch: 5.0.0: Compatible with VMware ESXi 5.0 and later. Features released with later vSphere distributed switch versions are not supported.

  • On the Edit settings page, configure the distributed switch settings
  • Use the arrow buttons to select the Number of uplinks.
  • Uplink ports connect the distributed switch to physical NICs on associated hosts. The number of uplink ports is the maximum number of allowed physical connections to the distributed switch per host.
  • Use the drop-down menu to enable or disable Network I/O Control.
  • By using Network I/O Control you can prioritize the access to network resources for certain types of infrastructure and workload traffic according to the requirements of your deployment. Network I/O Control continuously monitors the I/O load over the network and dynamically allocates available resources.
  • Select the Create a default port group check box to create a new distributed port group with default settings for this switch.
  • To create a default distributed port group, type the port group name in the Port group name text box, or accept the generated name. If your system has custom port group requirements, create distributed port groups that meet those requirements after you add the distributed switch.
  • Click Next.
  • On the Ready to complete page, review the settings you selected and click Finish.
  • Use the Back button to edit any settings

A distributed switch is created on the data center. You can view the features supported on the distributed switch as well as other details by navigating to the new distributed switch and clicking the Summary tab

Add/Remove ESXi hosts from a vSphere Distributed Switch

You can add new hosts to a vSphere Distributed Switch, connect network adapters to the switch, and remove hosts from the switch. In a production environment, you might need to keep the network connectivity up for virtual machines and VMkernel services while you manage host networking on the distributed switch.

Adding Hosts to a vSphere Distributed Switch

Consider preparing your environment before you add new hosts to a distributed switch.

Create distributed port groups for virtual machine networking.  Create distributed port groups for VMkernel services. For example, create distributed port groups for management network, vMotion, and Fault Tolerance.

Configure enough uplinks on the distributed switch for all physical NICs that you want to connect to the switch. For example, if the hosts that you want to connect to the distributed switch have eight physical NICs each, configure eight uplinks on the distributed switch.

Make sure that the configuration of the distributed switch is prepared for services with specific networking requirements. For example, iSCSI has specific requirements for the teaming and failover configuration of the distributed port group where you connect the iSCSI VMkernel adapter.

You can use the Add and Manage Hosts wizard in the vSphere Web Client to add multiple hosts at a time.

Removing Hosts from a vSphere Distributed Switch

Before you remove hosts from a distributed switch, you must migrate the network adapters that are in use to a different switch.

To add hosts to a different distributed switch, you can use the Add and Manage Hosts wizard to migrate the network adapters on the hosts to the new switch all together. You can then remove the hosts safely from their current distributed switch.

To migrate host networking to standard switches, you must migrate the network adapters in stages. For example, remove physical NICs on the hosts from the distributed switch by leaving one physical NIC on every host connected to the switch to keep the network connectivity up. Next, attach the physical NICs to the standard switches and migrate VMkernel adapters and virtual machine network adapters to the switches. Lastly, migrate the physical NIC that you left connected to the distributed switch to the standard switches.
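
If you prefer to script the standard-switch side of this staged migration, the pieces can be created from the ESXi Shell. This is only a minimal sketch; the switch, uplink and port group names below are examples rather than anything defined in this post.

  # Create a standard switch and claim a physical NIC that has been freed from the distributed switch
  esxcli network vswitch standard add --vswitch-name=vSwitch1
  esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1
  # Create a port group to receive the migrated VMkernel or virtual machine networking
  esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name="Management Network"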

Add/Configure/Remove dvPort groups

A distributed port group specifies port configuration options for each member port on a vSphere distributed switch. Distributed port groups define how a connection is made to a network.

Add a Distributed Port Group

Add a distributed port group to a vSphere Distributed Switch to create a distributed switch network for your virtual machines and to associate VMkernel adapters.

  • In the vSphere Web Client, navigate to the distributed switch.
  • Right-click the distributed switch and select Distributed port group > New distributed port group.
  • On the Select name and location page, type the name of the new distributed port group, or accept the generated name, and click Next.
  • On the Configure settings page, set the general properties for the new distributed port group and click Next.

Port binding

Choose when ports are assigned to virtual machines connected to this  distributed port group.

Static binding: Assign a port to a virtual machine when the virtual machine connects to the distributed port group.

Dynamic binding: Assign a port to a virtual machine the first time the virtual machine powers on after it is connected to the distributed port group. Dynamic binding has been deprecated since ESXi 5.0.

Ephemeral – no binding: No port binding. You can assign a virtual machine to a distributed port group with ephemeral port binding even when connecting directly to the host, for example when vCenter Server is unavailable.

Port allocation

Elastic: The default number of ports is eight. When all ports are assigned, a new set of eight ports is created. This is the default.

Fixed: The default number of ports is set to eight. No additional ports are created when all ports are assigned.

Number of ports

Enter the number of ports on the distributed port group.

Network resource pool

Use the drop-down menu to assign the new distributed port group to a user defined network resource pool. If you have not created a network resource pool, this menu is empty.

VLAN

Use the VLAN type drop-down menu to select VLAN options:

None: Do not use VLAN.

VLAN: In the VLAN ID field enter a number between 1 and 4094.

VLAN trunking: Enter a VLAN trunk range.

Private VLAN: Select a private VLAN entry. If you did not create any private VLANs, this menu is empty.

Select the Advanced check box to customize the policy configurations for the new distributed port group.

  • On the Security page, edit the security exceptions and click Next.

Promiscuous mode

Reject. Placing an adapter in promiscuous mode from the guest operating system does not result in receiving frames for other virtual machines.

Accept. If an adapter is placed in promiscuous mode from the guest operating system, the switch allows the guest adapter to receive all frames passed on the switch in compliance with the active VLAN policy for the port where the adapter is connected. Firewalls, port scanners, intrusion detection systems and so on, need to run in promiscuous mode.

MAC address changes

Reject. If you set this option to Reject and the guest operating system changes the MAC address of the adapter to a value different from the address in the .vmx configuration file the switch drops all inbound frames to the virtual machine adapter. If the guest operating system changes the MAC address back, the virtual machine receives frames again.

Accept. If the guest operating system changes the MAC address of a network adapter, the adapter receives frames to its new address.

Forged transmits

Reject. The switch drops any outbound frame with a source MAC address that is different from the one in the .vmx configuration files.

Accept. The switch does not perform filtering and permits all outbound frames.

  • On the Traffic shaping page, enable or disable Ingress or Egress traffic shaping and click Next.

Status

If you enable either Ingress traffic shaping or Egress traffic shaping, you are setting limits on the amount of networking bandwidth allocated for each virtual adapter associated with this particular port group. If you disable the policy, services have a free, clear connection to the physical network by default.

Average bandwidth: Establishes the number of bits per second to allow across a port, averaged over time. This is the allowed average load.

Peak bandwidth: The maximum number of bits per second to allow across a port when it is sending and receiving a burst of traffic. This tops the bandwidth used by a port whenever it is using its burst bonus.

Burst size: The maximum number of bytes to allow in a burst. If this parameter is set, a port might gain a burst bonus when it does not use all its allocated bandwidth. Whenever the port needs more bandwidth than specified by Average bandwidth, it might temporarily transmit data at a higher speed if a burst bonus is available. This parameter tops the number of bytes that might be accumulated in the burst bonus and thus transferred at a higher speed.

  • On the Teaming and failover page, edit the settings and click Next.

Load balancing

Specify how to choose an uplink.

Route based on originating virtual port. Choose an uplink based on the virtual port where the traffic entered the distributed switch.

Route based on IP hash. Choose an uplink based on a hash of the source and destination IP addresses of each packet. For non-IP packets, whatever is at those offsets is used to compute the hash.

Route based on source MAC hash. Choose an uplink based on a hash of the source Ethernet MAC address.

Route based on physical NIC load. Choose an uplink based on the current loads of physical NICs.

Use explicit failover order. Always use the highest order uplink from the list of Active adapters which passes failover detection criteria.

Network failure detection

Specify the method to use for failover detection.

Link status only. Relies solely on the link status that the network adapter provides. This option detects failures such as cable pulls and physical switch power failures, but not configuration errors such as a physical switch port that is blocked by spanning tree or misconfigured to the wrong VLAN, nor cable pulls on the other side of a physical switch.

Beacon probing. Sends out and listens for beacon probes on all NICs in the team and uses this information, in addition to link status, to determine link failure. This detects many of the failures previously mentioned that are not detected by link status alone.

Notify switches

Select Yes or No to notify switches in the case of failover. If you select Yes, whenever a virtual NIC is connected to the distributed switch or whenever that virtual NIC’s traffic would be routed over a different physical NIC in the team because of a failover event, a notification is sent out over the network to update the lookup tables on physical switches. In almost all cases, this process is desirable for the lowest latency of failover occurrences and migrations with vMotion.

Failback

Select Yes or No to disable or enable failback. This option determines how a physical adapter is returned to active duty after recovering from a failure. If failback is set to Yes (default), the adapter is returned to active duty immediately upon recovery, displacing the standby adapter that took over its slot, if any. If failback is set to No, a failed adapter is left inactive even after recovery until another currently active adapter fails, requiring its replacement.

Failover order

Specify how to distribute the work load for uplinks. To use some uplinks but reserve others for emergencies if the uplinks in use fail, set this condition by moving them into different groups:

Active uplinks. Continue to use the uplink when the network adapter connectivity is up and active.

Standby uplinks. Use this uplink if the connectivity of one of the active adapters is down.

Unused uplinks. Do not use this uplink.

  • On the Monitoring page, enable or disable NetFlow and click Next.

Disabled: NetFlow is disabled on the distributed port group.

Enabled: NetFlow is enabled on the distributed port group. NetFlow settings can be configured at the vSphere Distributed Switch level.

  • On the Miscellaneous page, select Yes or No and click Next.

Selecting Yes shuts down all ports in the port group. This action might disrupt the normal network operations of the hosts or virtual machines using the ports.

  • On the Edit additional settings page, add a description of the port group and set any policy overrides per port and click Next.
  • On the Ready to complete page, review your settings and click Finish.
  • Click the Back button to change any settings

Edit General Distributed Port Group Settings

You can edit general distributed port group settings such as the distributed port group name, port settings and network resource pool.

  • Locate a distributed port group in the vSphere Web Client.
  • Select a distributed switch and click the Networks tab.
  • Click Distributed Port Groups.
  • Right-click the distributed port group and select Edit Settings.
    • Each of the settings outlined above can be edited.

Configure Overriding Networking Policies on Port Level

To apply different policies for distributed ports, you configure the per-port overriding of the policies that are set at the port group level. You can also enable the reset of any configuration that is set on per-port level when a distributed port disconnects from a virtual machine.

  • Locate a distributed port group in the vSphere Web Client.
  • Select a distributed switch and click the Networks tab.
  • Click Distributed Port Groups.
  • Right-click the distributed port group and select Edit Settings.
  • Select the Advanced page.
  • Configure reset at disconnect
  • From the drop-down menu, enable or disable reset at disconnect. When a distributed port is disconnected from a virtual machine, the configuration of the distributed port is reset to the distributed port group settings. Any per-port overrides are discarded.
  • Override port policies
  • Select the distributed port group policies to be overridden on a per-port level.
  • Use the policy pages to set overrides for each port policy.
  • Click OK.

Remove a Distributed Port Group

Remove a distributed port group when you no longer need the corresponding labeled network to provide connectivity and configure connection settings for virtual machines or VMkernel networking.

Prerequisites

Verify that all virtual machines connected to the corresponding labelled network are migrated to a different labelled network.

Verify that all VMkernel adapters connected to the distributed port group are migrated to a different port group, or are deleted.

  • Locate a distributed port group in the vSphere Web Client.
  • Select a distributed switch and click the Networks tab.
  • Click Distributed Port Groups.
  • Select the distributed port group.
  • From the Actions menu, select Delete.

Add/Remove uplink adapters to dvUplink groups

For hosts that are associated with a distributed switch, you can assign physical NICs to uplinks on the switch. You can configure physical NICs on the distributed switch for multiple hosts at a time.

For consistent networking configuration throughout all hosts, you can assign the same physical NIC on every host to the same uplink on the distributed switch. For example, you can assign vmnic1 from hosts ESXi A and ESXi B to Uplink 1.

  • In the vSphere Web Client, navigate to the distributed switch.
  • From the Actions menu, select Add and Manage Hosts.
  • In Select task, select Manage host networking and click Next.
  • In Select hosts, click Attached hosts and select from the hosts that are associated with the distributed switch.
  • Click Next.
  • In Select network adapter tasks, select Manage physical adapters and click Next.
  • In Manage physical network adapters, select a physical NIC from the On other switches/unclaimed list.
  • If you select physical NICs that are already assigned to other switches, they are migrated to the current distributed switch.
  • Click Assign uplink.
  • Select an uplink or select Auto-assign.
  • Click Next.
  • Review the impacted services as well as the level of impact.
  • No impact: iSCSI will continue its normal function after the new networking configuration is applied.
  • Important impact: The normal function of iSCSI might be disrupted if the new networking configuration is applied.
  • Critical impact: The normal function of iSCSI will be interrupted if the new networking configuration is applied.
  • If the impact on iSCSI is important or critical, click the iSCSI entry and review the reasons that are displayed in the Analysis details pane.
  • After you troubleshoot the impact on iSCSI, proceed with your networking configurations
  • Click Next and click Finish.
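
To confirm the resulting vmnic-to-uplink mapping on an individual host, the host proxy switch can also be listed from the ESXi Shell. A small sketch, no parameters required:

  # Lists each distributed switch the host is associated with, its MTU, and the vmnics claimed as uplinks
  esxcli network vswitch dvs vmware list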

Configure vSphere Distributed Switch general and dvPort group settings

See earlier section on adding dvPort Groups.

Create/Configure/Remove virtual adapters

Create a VMkernel adapter on hosts associated with a distributed switch to provide network connectivity to the hosts and to handle the traffic for vSphere vMotion, IP storage, Fault Tolerance logging, and Virtual SAN. You can create VMkernel adapters on multiple hosts simultaneously by using the Add and Manage Hosts wizard.

You should dedicate one distributed port group for each VMkernel adapter. One VMkernel adapter should handle only one traffic type.

  • In the vSphere Web Client, navigate to the distributed switch.
  • From the Actions menu, select Add and Manage Hosts.
  • In Select task, select Manage host networking and click Next.
  • In Select hosts, click Attached hosts and select from the hosts that are associated with the distributed switch.
  • Click Next.
  • In Select network adapter tasks, select Manage VMkernel adapters and click Next.
  • Click New adapter. The Add Networking wizard opens.
  • In Select target device, select a distributed port group, and click Next.
  • On the Port properties page, configure the settings for the VMkernel adapter.
  • If you selected the vMotion TCP/IP or the Provisioning stack, click OK in the warning dialog that appears. If a live migration is already initiated, it completes successfully even after the involved VMkernel adapters on the default TCP/IP stack are disabled for vMotion. The same applies to operations that involve VMkernel adapters on the default TCP/IP stack that are set for Provisioning traffic.
  • On the IPv4 settings page, select an option for obtaining IP addresses
  • On the IPv6 settings page, select an option for obtaining IPv6 addresses
  • Review your settings selections on the Ready to complete page and click Finish.
  • Follow the prompts to complete the wizard.
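
For a single host, the equivalent can be sketched from the ESXi Shell, which is useful for checking what the wizard produced. The interface name, distributed switch name, port ID and addresses below are illustrative assumptions, not values from this post.

  # Create a VMkernel adapter on a free port of the host proxy switch
  esxcli network ip interface add --interface-name=vmk2 --dvs-name=dvSwitch01 --dvport-id=10
  # Assign a static IPv4 address to the new adapter
  esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=192.168.50.10 --netmask=255.255.255.0 --type=static
  # Confirm the adapter exists and which switch and port it is attached to
  esxcli network ip interface list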

 Migrate virtual machines to/from a vSphere Distributed Switch

To manage virtual machine networking by using a distributed switch, migrate virtual machine network adapters to labelled networks on the switch.

Prerequisites

Verify that at least one distributed port group intended for virtual machine networking exists on the distributed switch.

In the vSphere Web Client, navigate to the distributed switch.

  • From the Actions menu, select Add and Manage Hosts.
  • In Select task, select Manage host networking and click Next.
  • In Select hosts, click Attached hosts and select from the hosts that are associated with the distributed switch.
  • Click Next.
  • In Select network adapter tasks, select Migrate virtual machine networking and click Next.
  • Configure virtual machine network adapters to the distributed switch.
  • To connect all network adapters of a virtual machine to a distributed port group, select the virtual machine, or select an individual network adapter to connect only that adapter.
  • Click Assign port group.
  • Select a distributed port group from the list and click OK.
  • Click Next and click Finish.

Migrate Virtual Machines to or from a vSphere Distributed Switch

In addition to connecting virtual machines to a distributed switch at the individual virtual machine level, you can migrate a group of virtual machines between a vSphere Distributed Switch network and a vSphere Standard Switch network.

  • In the vSphere Web Client, navigate to a data center.
  • Right-click the data center in the navigator and select Migrate VMs to Another Network.
  • Select a source network.
  • Select Specific network and use the Browse button to select a specific source network.
  • Select No network to migrate all virtual machine network adapters that are not connected to any other network.
  • Use Browse to select a destination network and click Next.
  • Select virtual machines from the list to migrate from the source network to the destination network and click Next.
  • Review your selections and click Finish. Click Back to edit any selections.

 Configure LACP on vDS given design parameters

Create a Link Aggregation Group

To migrate the network traffic of distributed port groups to a link aggregation group (LAG), you create a new LAG on the distributed switch.

  • In the vSphere Web Client, navigate to the distributed switch.
  • On the Configure tab, expand Settings and select LACP.
  • Click the New Link Aggregation Group icon.
  • Name the new LAG.
  • Set the number of ports for the LAG.

Set the same number of ports for the LAG as the number of ports in the LACP port channel on the physical switch. A LAG port has the same function as an uplink on the distributed switch. All LAG ports form a NIC team in the context of the LAG.

  • Select the LACP negotiating mode of the LAG.

Active: All LAG ports are in an Active negotiating mode. The LAG ports initiate negotiations with the LACP port channel on the physical switch by sending LACP packets.

Passive: The LAG ports are in Passive negotiating mode. They respond to LACP packets they receive but do not initiate LACP negotiation. If the LACP-enabled ports on the physical switch are in Active negotiating mode, you can set the LAG ports in Passive mode and the reverse.

  • Select a load balancing mode from the hashing algorithms that LACP defines
  • Set the VLAN and the NetFlow policies for the LAG.

This option is active when overriding the VLAN and NetFlow policies per individual uplink ports is enabled on the uplink port group. If you set the VLAN and NetFlow policies to the LAG, they override the policies set on the uplink port group level.

  • Click OK.

The new LAG is unused in the teaming and failover order of distributed port groups. No physical NICs are assigned to the LAG ports.

As with standalone uplinks, the LAG has a representation on every host that is associated with the distributed switch. For example, if you create LAG1 with two ports on the distributed switch, a LAG1 with two ports is created on every host that is associated with the distributed switch.
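
On a host that is associated with the distributed switch, the LACP negotiation state with the physical switch can be checked from the ESXi Shell. A hedged sketch, assuming an ESXi 6.x build where the lacp namespace is available:

  # Show the LAG configuration the host has received from vCenter Server
  esxcli network vswitch dvs vmware lacp config get
  # Show the negotiation status of each LAG port against the physical switch
  esxcli network vswitch dvs vmware lacp status get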

Describe vDS Security Policies/Settings

Networking security policy provides protection of traffic against MAC address impersonation and unwanted port scanning

The security policy of a standard or distributed switch is implemented in Layer 2 (Data Link Layer) of the network protocol stack. The three elements of the security policy are promiscuous mode, MAC address changes, and forged transmits. See the vSphere Security documentation for information about potential networking threats.
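
On a distributed switch these three settings are edited in the Web Client as shown in the dvPort group section above. For comparison, on a standard switch the same policy can be read and changed from the ESXi Shell; a minimal sketch, assuming a switch named vSwitch0:

  esxcli network vswitch standard policy security get --vswitch-name=vSwitch0
  # Reject promiscuous mode, MAC address changes and forged transmits (the recommended defaults)
  esxcli network vswitch standard policy security set --vswitch-name=vSwitch0 --allow-promiscuous=false --allow-mac-change=false --allow-forged-transmits=false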

Configure dvPort group blocking policies

Port blocking policies allow you to selectively block ports from sending or receiving data.

Edit the Port Blocking Policy for a Distributed Port Group

You can block all ports in a distributed port group. Blocking the ports of a distributed port group might disrupt the normal network operations of the hosts or virtual machines using the ports.

  • In the vSphere Web Client, navigate to the distributed switch.
  • Right-click the distributed switch in the object navigator and select Distributed Port Group > Manage Distributed Port Groups.
  • Select the Miscellaneous check box and click Next.
  • Select one or more distributed port groups to configure and click Next.
  • From the Block all ports drop-down menu, enable or disable port blocking, and click Next.
  • Review your settings and click Finish.

Edit the Blocking Policy for a Distributed Port or Uplink Port

You can block an individual distributed port or uplink port. Blocking the flow through a port might disrupt the normal network operations on the host or virtual machine using the port.

Prerequisites

Enable the port-level overrides.

  • Navigate to a distributed switch and then navigate to a distributed port or an uplink port.
  • To navigate to the distributed ports of the switch, click Networks > Distributed Port Groups, double-click a distributed port group from the list, and click the Ports tab.
  • To navigate to the uplink ports of an uplink port group, click Networks > Uplink Port Groups, double-click an uplink port group from the list, and click the Ports tab.
  • Select a port from the list.
  • Click Edit distributed port settings.
  • In the Miscellaneous section, select the Override check box, and from the drop-down menu enable or disable port blocking.
  • Click OK.

Configure load balancing and failover policies

Configure NIC Teaming, Failover, and Load Balancing on a vSphere Standard Switch or Standard Port Group

Include two or more physical NICs in a team to increase the network capacity of a vSphere Standard Switch or standard port group. Configure failover order to determine how network traffic is rerouted in case of adapter failure. Select a load balancing algorithm to determine how the standard switch distributes the traffic between the physical NICs in a team.

Configure NIC teaming, failover, and load balancing depending on the network configuration on the physical switch and the topology of the standard switch.

If you configure the teaming and failover policy on a standard switch, the policy is propagated to all port groups in the switch. If you configure the policy on a standard port group, it overrides the policy inherited from the switch.

  • In the vSphere Web Client, navigate to the host.
  • On the Configure tab, expand Networking and select Virtual switches.
  • Navigate to the Teaming and Failover policy for the standard switch, or standard port group.
  • Standard Switch
    • Select the switch from the list.
    • Click Edit settings and select Teaming and failover.
  • Standard port group
    • Select the switch where the port group resides.
    • From the switch topology diagram, select the standard port group and click Edit settings.
    • Select Teaming and failover.
  • Select Override next to the policies that you want to override.
  • From the Load balancing drop-down menu, specify how the virtual switch load balances the outgoing traffic between the physical NICs in a team.

Route based on the originating virtual port: Select an uplink based on the virtual port IDs on the switch. After the virtual switch selects an uplink for a virtual machine or a VMkernel adapter, it always forwards traffic through the same uplink for this virtual machine or VMkernel adapter.

Route based on IP hash: Select an uplink based on a hash of the source and destination IP addresses of each packet. For non-IP packets, the switch uses the data at those fields to compute the hash. IP-based teaming requires that the physical switch is configured with EtherChannel.

Route based on source MAC hash: Select an uplink based on a hash of the source Ethernet MAC address.

Use explicit failover order: From the list of active adapters, always use the highest order uplink that passes failover detection criteria. No actual load balancing is performed with this option.

  • From the Network failure detection drop-down menu, select the method that the virtual switch uses for failover detection.

Link status only: Relies only on the link status that the network adapter provides. This option detects failures such as removed cables and physical switch power failures.

Beacon probing: Sends out and listens for beacon probes on all NICs in the team, and uses this information, in addition to link status, to determine link failure. ESXi sends beacon packets every second. The NICs must be in an active/active or active/standby configuration because the NICs in an unused state do not participate in beacon probing.

  • From the Notify switches drop-down menu, select whether the standard or distributed switch notifies the physical switch in case of a failover.
  • From the Failback drop-down menu, select whether a physical adapter is returned to active status after recovering from a failure.

If failback is set to Yes, the default selection, the adapter is returned to active duty immediately upon recovery, displacing the standby adapter that took over its slot, if any.

If failback is set to No for a standard port, a failed adapter is left inactive after recovery until another currently active adapter fails and must be replaced.

  • Specify how the uplinks in a team are used when a failover occurs by configuring the Failover Order list. If you want to use some uplinks but reserve others for emergencies in case the uplinks in use fail, use the up and down arrow keys to move uplinks into different groups.

Active adapters: Continue to use the uplink if the network adapter connectivity is up and active.

Standby adapters: Use this uplink if one of the active physical adapters is down.

Unused adapters: Do not use this uplink.

  • Click OK.
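
The same standard switch policy can also be set from the ESXi Shell. A hedged sketch with example vmnic names; load balancing accepts portid, iphash, mac or explicit, and failure detection accepts link or beacon:

  esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0
  # Route based on originating virtual port, link status detection, two active uplinks and one standby
  esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --load-balancing=portid --failure-detection=link --notify-switches=true --failback=true --active-uplinks=vmnic0,vmnic1 --standby-uplinks=vmnic2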

Configure NIC Teaming, Failover, and Load Balancing on a Distributed Port Group or Distributed Port

Include two or more physical NICs in a team to increase the network capacity of a distributed port group or port. Configure failover order to determine how network traffic is rerouted in case of adapter failure. Select a load balancing algorithm to determine how the distributed switch load balances the traffic between the physical NICs in a team.

Configure NIC teaming, failover, and load balancing in accordance with the network configuration on the physical switch and the topology of the distributed switch.

If you configure the teaming and failover policy for a distributed port group, the policy is propagated to all ports in the group. If you configure the policy for a distributed port, it overrides the policy inherited from the group.

Prerequisites

To override a policy on distributed port level, enable the port-level override option for this policy.

  • In the vSphere Web Client, navigate to the distributed switch.
  • Navigate to the Teaming and Failover policy on the distributed port group or port.
  • Distributed port group
    • From the Actions menu, select Distributed Port Group > Manage Distributed Port Groups.
    • Select Teaming and failover.
    • Select the port group and click Next.
  • Distributed port
    • On the Networks tab, click Distributed Port Groups and double-click a distributed port group.
    • On the Ports tab, select a port and click Edit distributed port settings.
    • Select Teaming and failover.
  • Select Override next to the properties that you want to override.

The options are identical to those for standard switches; refer to the configuration steps and information above.

 Configure VLAN/PVLAN settings for VMs given communication requirements

VLANs let you segment a network into multiple logical broadcast domains at Layer 2 of the network protocol stack.

VLAN Configuration

Virtual LANs (VLANs) enable a single physical LAN segment to be further isolated so that groups of ports are isolated from one another as if they were on physically different segments.

Benefits of Using VLANs in vSphere

The VLAN configuration in a vSphere environment provides certain benefits:

Integrates ESXi hosts into a pre-existing VLAN topology.

Isolates and secures network traffic

Reduces congestion of network traffic

VLAN Tagging Modes

vSphere supports three modes of VLAN tagging in ESXi: External Switch Tagging (EST), Virtual Switch Tagging (VST), and Virtual Guest Tagging (VGT).

EST – VLAN 0

The physical switch performs the VLAN tagging. The host network adapters are connected to access ports on the physical switch.

VST – VLANs Between 1 and 4094

The virtual switch performs the VLAN tagging before the packets leave the host. The host network adapters must be connected to trunk ports on the physical switch.

VGT – VLAN 4095 for a standard switch, and a range of individual VLANs for a distributed switch

The virtual machine performs the VLAN tagging. The virtual switch preserves the VLAN tags when it forwards the packets between the virtual machine networking stack and external switch. The host network adapters must be connected to trunk ports on the physical switch. The vSphere Distributed Switch supports a modification of VGT. For security reasons, you can configure a distributed switch to pass only packets that belong to particular VLANs.
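
On a standard switch the tagging mode maps directly onto the port group VLAN ID (0 for EST, 1–4094 for VST, 4095 for VGT), which can also be set from the ESXi Shell. A short sketch with example port group names:

  # VST: tag traffic for this port group with VLAN 100
  esxcli network vswitch standard portgroup set --portgroup-name="VM Network" --vlan-id=100
  # VGT on a standard switch: VLAN 4095 passes all tags through to the guest
  esxcli network vswitch standard portgroup set --portgroup-name="Trunk PG" --vlan-id=4095
  # Review the VLAN ID of every standard port group on the host
  esxcli network vswitch standard portgroup list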

Configure traffic shaping policies

ESXi lets you shape outbound traffic on standard switches or port groups. The traffic shaper restricts the network bandwidth available to any port, but you can also configure it to temporarily allow bursts of traffic to flow through a port at higher speeds.

The traffic shaping policies that you set at switch or port group level are applied at each individual port that participates in the switch or port group. For example, if you set an average bandwidth of 100000 Kbps on a standard port group, 100000 Kbps averaged over time can pass through each port that is associated with the standard port group.

Below follows the process for a standard switch; the options for a distributed switch are identical.

  • In the vSphere Web Client, navigate to the host.
  • On the Configure tab, expand Networking and select Virtual switches.
  • Navigate to the traffic shaping policy on the standard switch or port group.
  • vSphere Standard Switch
    • Select a standard switch from the list.
    • Click Edit settings.
    • Select Traffic shaping.
  • Standard port group
    • Select the standard switch where the port group resides.
    • In the topology diagram, select a standard port group.
    • Click Edit settings.
  • Select Traffic shaping and select Override next to the options to override.
  • Configure traffic shaping policies.

Status: Enables setting limits on the amount of networking bandwidth allocated for each port that is associated with the standard switch or port group.

Average Bandwidth: Establishes the number of bits per second to allow across a port, averaged over time (the allowed average load).

Peak Bandwidth: The maximum number of bits per second to allow across a port when it is sending a burst of traffic. This setting tops the bandwidth used by a port whenever it is using its burst bonus. This parameter can never be smaller than the average bandwidth.

Burst Size: The maximum number of bytes to allow in a burst. If this parameter is set, a port might gain a burst bonus when it does not use all its allocated bandwidth. Whenever the port needs more bandwidth than the average bandwidth specifies, the port can temporarily transmit data at a higher speed if a burst bonus is available. This parameter tops the number of bytes that can accumulate in the burst bonus and can be transferred at a higher speed.

  • For each traffic shaping policy (Average Bandwidth, Peak Bandwidth, and Burst Size), enter a bandwidth value.
  • Click OK.
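
The same standard switch shaping policy can be applied from the ESXi Shell as well. A hedged sketch, assuming vSwitch0 and purely illustrative values; check the command help for the exact units each parameter expects:

  esxcli network vswitch standard policy shaping set --vswitch-name=vSwitch0 --enabled=true --avg-bandwidth=100000 --peak-bandwidth=200000 --burst-size=102400
  esxcli network vswitch standard policy shaping get --vswitch-name=vSwitch0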

Enable TCP Segmentation Offload support for a virtual machine

Enable or Disable TSO on a Linux Virtual Machine

Enable TSO support on the network adapter of a Linux virtual machine so that the guest operating system redirects TCP packets that need segmentation to the VMkernel.

Prerequisites

Verify that ESXi 6.5 supports the Linux guest operating system. See the VMware Compatibility Guide documentation.

Verify that the network adapter on the Linux virtual machine is VMXNET2 or VMXNET3.

  • In a terminal window on the Linux guest operating system, to enable or disable TSO, run the ethtool command with the -K and tso options.
  • To enable TSO, run the following command:
  • ethtool -K ethY tso on
  • To disable TSO, run the following command:
  • ethtool -K ethY tso off

where Y in ethY is the sequence number of the NIC in the virtual machine.

Enable or Disable TSO on a Windows Virtual Machine

By default, TSO is enabled on a Windows virtual machine on VMXNET2 and VMXNET3 network adapters. For performance reasons, you might want to disable TSO.

Prerequisites

Verify that ESXi 6.5 supports the Windows guest operating system. See the VMware Compatibility Guide documentation.

Verify that the network adapter on the Windows virtual machine is VMXNET2 or VMXNET3.

  • In the Network and Sharing Center on the Windows control panel, click the name of the network adapter. A dialog box displays the status of the adapter.
  • Click Properties, and beneath the network adapter type, click Configure.
  • On the Advanced tab, set the Large Send Offload V2 (IPv4) and Large Send Offload V2 (IPv6) properties to Enabled or Disabled.
  • Click OK.

Restart the virtual machine
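
Before troubleshooting TSO inside the guest, it can be worth confirming that TSO is active on the host itself. A brief sketch using the host-side commands documented for ESXi 6.5:

  # Per physical NIC TSO state
  esxcli network nic tso get
  # VMkernel hardware TSO switch (1 = enabled)
  esxcli system settings advanced list -o /Net/UseHwTSO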

Enable Jumbo Frames support on appropriate components

Jumbo frames let ESXi hosts send larger frames out onto the physical network. The network must support jumbo frames end to end, including physical network adapters, physical switches, and storage devices.

Before enabling jumbo frames, check with your hardware vendor to ensure that your physical network adapter supports jumbo frames.

You can enable jumbo frames on a vSphere distributed switch or vSphere standard switch by changing the maximum transmission unit (MTU) to a value greater than 1500 bytes. 9000 bytes is the maximum frame size that you can configure

  • In the vSphere Web Client, navigate to the vSphere switch.
  • On the Configure tab, expand Settings and select Properties.
  • Click Edit.
  • Click Advanced and set the MTU property to a value greater than 1500 bytes.
  • You cannot set the MTU size to a value greater than 9000 bytes.
  • Click OK.
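
The same MTU change, plus an end-to-end check, can be made from the ESXi Shell. A minimal sketch with example names and addresses; 8972 bytes of ICMP payload plus 28 bytes of headers gives a 9000-byte packet, and -d forbids fragmentation so the ping only succeeds if jumbo frames work along the whole path:

  esxcli network vswitch standard set --vswitch-name=vSwitch0 --mtu=9000
  esxcli network ip interface set --interface-name=vmk1 --mtu=9000
  vmkping -d -s 8972 10.0.0.1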

 Recognize behaviour of vDS Auto-Rollback

By rolling configuration changes back, vSphere protects hosts from losing connection to vCenter Server as a result of misconfiguration of the management network.

In vSphere 5.1 and later, networking rollback is enabled by default. However, you can enable or disable rollbacks at the vCenter Server level.
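
The setting behind this is the config.vpxd.network.rollback advanced setting on the vCenter Server instance; set it to false to disable rollback (a restart of vCenter Server is needed for the change to take effect).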

Host Networking Rollbacks

Host networking rollbacks occur when an invalid change is made to the networking configuration for the connection with vCenter Server. Every network change that disconnects a host also triggers a rollback. The following examples of changes to the host networking configuration might trigger a rollback:

  • Updating the speed or duplex of a physical NIC.
  • Updating DNS and routing settings
  • Updating teaming and failover policies or traffic shaping policies of a standard port group that contains the management VMkernel network adapter.
  • Updating the VLAN of a standard port group that contains the management VMkernel network adapter.
  • Increasing the MTU of management VMkernel network adapter and its switch to values not supported by the physical infrastructure.
  • Changing the IP settings of management VMkernel network adapters.
  • Removing the management VMkernel network adapter from a standard or distributed switch.
  • Removing a physical NIC of a standard or distributed switch containing the management VMkernel network adapter.
  • Migrating the management VMkernel adapter from vSphere standard to distributed switch.

If a network disconnects for any of these reasons, the task fails and the host reverts to the last valid configuration

 vSphere Distributed Switch Rollbacks

Distributed switch rollbacks occur when invalid updates are made to distributed switches, distributed port groups, or distributed ports. The following changes to the distributed switch configuration trigger a rollback:

  • Changing the MTU of a distributed switch.
  • Changing the following settings in the distributed port group of the management VMkernel network adapter:
    • Teaming and failover
    • VLAN
    • Traffic shaping
  • Blocking all ports in the distributed port group containing the management VMkernel network adapter.
  • Overriding the policies at the level of the distributed port for the management VMkernel network adapter.

If a configuration becomes invalid because of any of the changes, one or more hosts might become out of synchronization with the distributed switch.

If you know where the conflicting configuration setting is located, you can manually correct the settings. For example, if you have migrated a management VMkernel network adapter to a new VLAN, the VLAN might not actually be trunked on the physical switch. When you correct the physical switch configuration, the next distributed switch-to-host synchronization will resolve the configuration problem.

If you are not sure where the problem exists, you can restore the state of the distributed switch or distributed port group to an earlier configuration

 Configure vDS across multiple vCenters to support [Long Distance vMotion]

vSphere 6.0 or later lets you migrate virtual machines between vCenter Server instances.

Migration of virtual machines across vCenter Server systems is helpful in certain VM provisioning cases.

  • Balance workloads across clusters and vCenter Server instances.
  • Elastically expand or shrink capacity across resources in different vCenter Server instances in the same site or in another geographical area.
  • Move virtual machines between environments that have different purposes, for example, from a development environment to a production environment.
  • Move virtual machines to meet different Service Level Agreements (SLAs) regarding storage space, performance, and so on.

The following list summarizes the requirements that your system must meet so that you can use migration across vCenter Server instances:

  • The source and destination vCenter Server instances and ESXi hosts must be 6.0 or later.
  • The cross vCenter Server and long-distance vMotion features require an Enterprise Plus license.
  • Both vCenter Server instances must be time-synchronized with each other for correct vCenter Single Sign-On token verification.
  • For migration of compute resources only, both vCenter Server instances must be connected to the shared virtual machine storage.
  • When using the vSphere Web Client, both vCenter Server instances must be in Enhanced Linked Mode and must be in the same vCenter Single Sign-On domain. This lets the source vCenter Server authenticate to the destination vCenter Server.

If the vCenter Server instances exist in separate vCenter Single Sign-On domains, you can use vSphere APIs/SDK to migrate virtual machines.

Migration of VMs between vCenter Server instances moves VMs to new networks. The migration process performs checks to verify that the source and destination networks are similar.

vCenter Server performs network compatibility checks to prevent the following configuration problems:

  • MAC address compatibility on the destination host
  • vMotion from a distributed switch to a standard switch
  • vMotion between distributed switches of different versions
  • vMotion to an internal network, for example, a network without a physical NIC
  • vMotion to a distributed switch that is not working properly

vCenter Server does not perform checks for and notify you about the following problems:

  • If the source and destination distributed switches are not in the same broadcast domain, virtual machines lose network connectivity after migration.
  • If the source and destination distributed switches do not have the same services configured, virtual machines might lose network connectivity after migration.

When you move a virtual machine between vCenter Server instances, the environment specifically handles MAC address migration to avoid address duplication and loss of data in the network.

In an environment with multiple vCenter Server instances, when a virtual machine is migrated, its MAC addresses are transferred to the target vCenter Server. The source vCenter Server adds the MAC addresses to a black list so that it does not assign them to newly created virtual machines

Compare and contrast vSphere Distributed Switch (vDS) capabilities

A vSphere Distributed Switch provides centralized management and monitoring of the networking configuration of all hosts that are associated with the switch. You set up a distributed switch on a vCenter Server system, and its settings are propagated to all hosts that are associated with the switch.

A network switch in vSphere consists of two logical sections that are the data plane and the management plane. The data plane implements the packet switching, filtering, tagging, and so on. The management plane is the control structure that you use to configure the data plane functionality. A vSphere Standard Switch contains both data and management planes, and you configure and maintain each standard switch individually.

A vSphere Distributed Switch separates the data plane and the management plane. The management functionality of the distributed switch resides on the vCenter Server system that lets you administer the networking configuration of your environment on a data center level. The data plane remains locally on every host that is associated with the distributed switch. The data plane section of the distributed switch is called a host proxy switch. The networking configuration that you create on vCenter Server (the management plane) is automatically pushed down to all host proxy switches (the data plane).

The vSphere Distributed Switch introduces two abstractions that you use to create consistent networking configuration for physical NICs, virtual machines, and VMkernel services.

Uplink port group

An uplink port group or dvuplink port group is defined during the creation of the distributed switch and can have one or more uplinks. An uplink is a template that you use to configure physical connections of hosts as well as failover and load balancing policies. You map physical NICs of hosts to uplinks on the distributed switch. At the host level, each physical NIC is connected to an uplink port with a particular ID. You set failover and load balancing policies over uplinks and the policies are automatically propagated to the host proxy switches, or the data plane. In this way you can apply consistent failover and load balancing configuration for the physical NICs of all hosts that are associated with the distributed switch.

Distributed port group

Distributed port groups provide network connectivity to virtual machines and accommodate VMkernel traffic. You identify each distributed port group by using a network label, which must be unique to the current data center. You configure NIC teaming, failover, load balancing, VLAN, security, traffic shaping, and other policies on distributed port groups. The virtual ports that are connected to a distributed port group share the same properties that are configured to the distributed port group. As with uplink port groups, the configuration that you set on distributed port groups on vCenter Server (the management plane) is automatically propagated to all hosts on the distributed switch through their host proxy switches (the data plane). In this way you can configure a group of virtual machines to share the same networking configuration by associating the virtual machines to the same distributed port group.

For example, suppose that you create a vSphere Distributed Switch on your data center and associate two hosts with it. You configure three uplinks to the uplink port group and connect a physical NIC from each host to an uplink. In this way, each uplink has two physical NICs mapped to it, one from each host; for example, Uplink 1 is configured with vmnic0 from Host 1 and vmnic0 from Host 2. Next you create the Production and the VMkernel network distributed port groups for virtual machine networking and VMkernel services. Respectively, a representation of the Production and the VMkernel network port groups is also created on Host 1 and Host 2. All policies that you set to the Production and the VMkernel network port groups are propagated to their representations on Host 1 and Host 2.

To ensure efficient use of host resources, the number of distributed ports of proxy switches is dynamically scaled up and down on hosts running ESXi 5.5 and later. A proxy switch on such a host can expand up to the maximum number of ports supported on the host. The port limit is determined based on the maximum number of virtual machines that the host can handle.

 Configure multiple VMkernel Default Gateways

You might need to override the default gateway for a VMkernel adapter to provide a different gateway for services such as vMotion, Fault Tolerance logging, and vSAN.

Each TCP/IP stack on a host can have only one default gateway. This default gateway is part of the routing table and all services that operate on the TCP/IP stack use it.

For example, the VMkernel adapters vmk0 and vmk1 can be configured on a host.

vmk0 is used for management traffic on the 10.162.10.0/24 subnet, with default gateway 10.162.10.1

vmk1 is used for vMotion traffic on the 172.16.1.0/24 subnet

If you set 172.16.1.1 as the default gateway for vmk1, vMotion uses vmk1 as its egress interface with the gateway 172.16.1.1. The 172.16.1.1 gateway is a part of the vmk1 configuration and is not in the routing table. Only the services that specify vmk1 as an egress interface use this gateway. This provides additional Layer 3 connectivity options for services that need multiple gateways.

You can use the vSphere Web Client or an ESXCLI command to configure the default gateway of a VMkernel adapter.
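
A hedged ESXCLI sketch of the example above, assuming an ESXi 6.5 host where the --gateway option of the IPv4 interface command is available; the addresses follow the vmk1 example:

  esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=172.16.1.10 --netmask=255.255.255.0 --gateway=172.16.1.1 --type=static
  esxcli network ip interface ipv4 get --interface-name=vmk1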

 Configure ERSPAN

Port mirroring allows you to mirror a distributed port’s traffic to other distributed ports or specific physical switch ports.

Port mirroring is used on a switch to send a copy of packets seen on one switch port (or an entire VLAN) to a monitoring connection on another switch port. Port mirroring is used to analyse and debug data or diagnose errors on a network.

Create a port mirroring session by using the vSphere Web Client to mirror vSphere Distributed Switch traffic to ports, uplinks, and remote IP addresses.

Prerequisites

Verify that the vSphere Distributed Switch is version 5.0.0 or later.

  • Select Port Mirroring Session Type. To begin a port mirroring session, you must specify the type of port mirroring session.
  • Specify Port Mirroring Name and Session Details. To continue creating a port mirroring session, specify the name, description, and session details for the new port mirroring session.
  • Select Port Mirroring Sources. To continue creating a port mirroring session, select sources and traffic direction for the new port mirroring session.
  • Select Port Mirroring Destinations and Verify Settings. To complete the creation of a port mirroring session, select ports or uplinks as destinations for the port mirroring session.

Further information is available on pages 215 to 219 of vsphere-esxi-vcenter-server-65-networking-guide.pdf

Create and configure custom TCP/IP Stacks

You can create a custom TCP/IP stack on a host to forward networking traffic through a custom application.

  • Open an SSH connection to the host.
  • Log in as the root user.
  • Run the vSphere CLI command.
  • esxcli network ip netstack add -N="stack_name"

The custom TCP/IP stack is created on the host. You can assign VMkernel adapters to the stack.
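
To make the new stack useful, you typically place a VMkernel adapter on it and give the stack its own default route. A hedged sketch that reuses the stack name from the command above; the port group name, vmk number and gateway are illustrative:

  # Confirm the stack exists
  esxcli network ip netstack list
  # Create a VMkernel adapter on the custom stack (the adapter still needs an IP address afterwards)
  esxcli network ip interface add --interface-name=vmk3 --portgroup-name="CustomApp" --netstack=stack_name
  # Give the custom stack its own default gateway
  esxcli network ip route ipv4 add --netstack=stack_name --network=default --gateway=192.168.100.1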

 Configure Netflow

Analyze virtual machine IP traffic that flows through a vSphere Distributed Switch by sending reports to a NetFlow collector. Version 5.1 and later of vSphere Distributed Switch supports IPFIX (NetFlow version 10).

  • In the vSphere Web Client, navigate to the distributed switch.
  • From the Actions menu, select Settings > Edit Netflow.
  • Type the Collector IP address and Collector port of the NetFlow collector. You can contact the NetFlow collector by IPv4 or IPv6 address.
  • Set an Observation Domain ID that identifies the information related to the switch.

To see the information from the distributed switch in the NetFlow collector under a single network device instead of under a separate device for each host on the switch, type an IPv4 address in the Switch IP address text box.

  • In the Active flow export timeout and Idle flow export timeout text boxes, set the time, in seconds, to wait before sending information after the flow is initiated.
  • To change the portion of data that the switch collects, configure Sampling Rate.

The sampling rate represents the number of packets that NetFlow drops after every collected packet. A sampling rate of x instructs NetFlow to drop packets in a collected packets:dropped packets ratio of 1:x. If the rate is 0, NetFlow samples every packet, that is, it collects one packet and drops none. If the rate is 1, NetFlow samples a packet and drops the next one, and so on.

To collect data on network activity between virtual machines on the same host, enable Process internal flows only. Collect internal flows only if NetFlow is enabled on the physical network device to avoid sending duplicate information from the distributed switch and the physical network device.

  • Click OK.