30th July 2020

NSX-T 3.0 Manual Health Check/ Best Practice Review

Quick Disclaimer – This is not supported by or affiliated with VMware.

Why do a Manual BPR/ Health Check of NSX-T?

Currently, there is no official tool that can be used to survey an NSX-T environment, so I decided to create my own health check.

How did I do it?

1) Firstly, I used the VMware NSX-T Reference Design Guide 2.5, which I condensed into a 29-page Best Practice Review/Health Check.

2) I then ran the NSX-V Healthcheck analyzer tool on an NSX-V test environment, picking out the more general health checks.

3) By taking all of the best practices out of the document I had created and combining it with the NSX-V general healthchecks, I created a spreadsheet with a list of 50 BPRs & healthchecks.

4) Following this, I created columns for how to complete the check, where to find it in the NSX Manager, and a further column for more in-depth information.

5) Finally, I colour coded each row: green (manual checks I needed to do on the environment), blue (more general checks/talking points that need to be interpreted) and orange (questions that need to be asked of the customer, either because it's quicker and easier to ask or because I was unable to check).

The Excel Document

Below, I have linked the Excel document I created. Feel free to use this to conduct your own best practice review!

Each row below gives the recommendation/best practice, how to find it, and what it means.
Recommendation: VMware recommends using the Policy interface going forward.
How to find this: Top right-hand corner of the NSX Manager. If the toggle is not visible, it is most likely a fresh install; to expose it, go to System > User Interface Settings.
What this means: Make sure you use the Policy tab, not the Manager tab, going forward. The toggle is used to swap between Manager and Policy. The Manager tab is mostly used when you have upgraded from a previous version, as a replacement for the Advanced UI. There is no tool yet that can migrate objects from Manager to Policy. The toggle is normally hidden by default but shows if you are upgrading; you can also edit who has access to it / can see it under System > User Interface Settings.
Recommendation: When creating a transport node, the administrator must choose between two types of N-VDS: the standard N-VDS or the N-VDS Enhanced (also referred to as Enhanced Data Path in the GUI). It is not recommended to mix both types of virtual switch.
How to find this: Select System > Fabric > Nodes > Transport Nodes. From the Managed by field, select Standalone Hosts or a compute manager, select the host, and click the N-VDS Visualization tab. Alternatively, create a transport zone with the N-VDS in enhanced data path mode (see Create Transport Zones) and choose enhanced datapath in the host membership criteria.
What this means: Both types of switch can coexist on the same hypervisor, even though they cannot share uplinks or transport zones. It is simply not recommended for common enterprise or cloud use cases.
Recommendation: The default two-tier hierarchical flooding mode is recommended as a best practice, as it typically performs better in terms of physical uplink bandwidth utilization.
How to find this: Networking > Switching > Replication Mode (per segment). Question for the customer: do you use the default two-tier hierarchical mode? (For context: a Tier-0 gateway supports uplink connectivity to your physical infrastructure, connectivity to logical switches in a single-tier topology, and interconnection with Tier-1 LRs; a Tier-1 gateway supports connectivity to logical switches and to the Tier-0 in a multi-tier topology.)
What this means: In this mode, the ESXi host transport nodes are grouped according to their TEP IP subnet. One ESXi host in each subnet is responsible for replication to an ESXi host in another subnet; the receiving host then replicates the traffic to the ESXi hosts in its local subnet. The source transport node knows about the groups based on information it receives from the NSX-T control cluster, and the system can select an arbitrary ESXi transport node as the mediator for the source subnet if the remote mediator node is unavailable. The benefit of two-tier hierarchical mode is that, in the design guide's example, only two tunnel packets are sent between racks (one for each remote group) versus the five sent by head-end mode. This is a significant improvement in inter-rack fabric utilization, where available bandwidth is typically scarcer than within a rack, and the number would be higher still if more transport nodes on the remote racks were interested in the flooded traffic for "S1".
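The packet-count difference between head-end and two-tier hierarchical replication can be sketched with a small model (this is illustrative only, not NSX-T code; the /24 TEP-subnet grouping and the host list are assumptions made for the example):

```python
import ipaddress
from collections import defaultdict

def replication_copies(source_tep, all_teps, prefix=24):
    """Tunnel copies the source host must send for one flooded frame:
    head-end = one copy per remote transport node;
    hierarchical = one copy per local-subnet peer + one per remote subnet
    (each remote mediator then replicates within its own subnet)."""
    subnet = lambda ip: ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
    groups = defaultdict(list)
    for tep in all_teps:
        groups[subnet(tep)].append(tep)
    head_end = len(all_teps) - 1
    local_peers = len(groups[subnet(source_tep)]) - 1
    remote_subnets = sum(1 for net in groups if net != subnet(source_tep))
    return head_end, local_peers + remote_subnets

# Six TEPs across three racks, one /24 transport subnet per rack
teps = ["10.0.0.1", "10.0.0.2", "10.1.0.1", "10.1.0.2", "10.2.0.1", "10.2.0.2"]
print(replication_copies("10.0.0.1", teps))  # (5, 3)
```

With six hosts in three racks, head-end sends five copies from the source host, while hierarchical mode sends only three (one local peer plus two inter-rack packets, one per remote group), matching the guide's description.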
Recommendation: To provide redundancy for centralized services and N-S connectivity, it is recommended to deploy a minimum of two Edge nodes.
How to find this: System > Fabric > Nodes > Edge Transport Nodes.
What this means: A minimum of two Edge node VMs allows bandwidth to scale to 20 Gbps. Further expansion is possible by adding additional Edge node VMs, scaling up to a total of eight Edge VMs. For multi-10Gbps traffic requirements or line-rate stateful services, consider adding a dedicated bare metal Edge cluster for specific service workloads. Alternatively, the design can start with distributed firewall micro-segmentation and eventually move to overlay and other Edge services.
Recommendation: Configuring two-tier routing is not mandatory, but it is recommended. Single-tier routing is an NSX-T Tier-0 gateway that provides both distributed and centralized routing, along with other services such as NAT, DHCP, load balancing and so on. The segments (subnets) in a single-tier topology are connected directly to the NSX-T gateway, and VMs connected to these segments can communicate E-W as well as N-S to the external data center.

Multi-tier (two-tier) routing also provides distributed and centralized routing, with the concept of multi-tenancy built into the routing topology. Here you will find both Tier-0 and Tier-1 gateways; this design gives provider admins (Tier-0) and tenant admins (Tier-1) total control over their respective services and policies. How to find this: Networking > Connectivity > Tier-0 or Tier-1 Gateways.

So the customer might ask: why bother with two-tier routing if I can get everything with single-tier? Multi-tenancy does not just allow for separation of tenants; it also provides control boundaries in terms of who controls what. For instance, tenant administrators can control and configure the network and security policies for their specific tenants. Questions to ask the customer: Do you have multiple tenants that need isolation? Do you want to give provider admins and tenant admins complete control over their services and policies? Do you want to use a Cloud Management Platform like OpenStack to deploy these tenants? Are you leveraging NSX-T to provide networking/security for Kubernetes? If the answer to any of these questions is yes, the advice should be to use multi-tier routing.
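The decision flow behind those questions can be captured in a trivial helper (the question keys are invented for the sketch; the rule itself is the one stated above, "any yes means two-tier"):

```python
def routing_tier_recommendation(answers):
    """If the customer answers yes to any multi-tenancy question,
    recommend two-tier routing; otherwise single-tier suffices."""
    return "two-tier" if any(answers.values()) else "single-tier"

customer = {
    "needs_isolated_tenants": True,       # hypothetical answers for one customer
    "separate_provider_and_tenant_admin": False,
    "uses_cmp_like_openstack": False,
    "nsx_t_for_kubernetes": False,
}
print(routing_tier_recommendation(customer))  # two-tier
```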
Recommendation: If the ToR switch supports BFD, it is recommended to run BFD on both eBGP neighbors for faster failure detection. BFD (Bidirectional Forwarding Detection) is a protocol that can detect forwarding path failures.
How to find this: Select Networking > Networking Settings, click the BFD Profiles tab, click Add BFD Profile, enter a name and values for the heartbeat Interval and Declare Dead Multiple, and click Save. Then select Networking > Tier-0 Gateways; to edit a Tier-0 gateway, click the menu icon (three dots) and select Edit. Click Routing, then Set next to Static Route BFD Peer. Click Add Static Route BFD Peer, select a BFD profile (see Add a BFD Profile), enter the peer IP address and optionally the source addresses, and click Save.
What this means: Top-of-rack switching is a network architecture design in which computing equipment such as servers, appliances and other switches located within the same or an adjacent rack is connected to an in-rack network switch. BFD is a network protocol used to detect faults between two forwarding engines connected by a link.
Recommendation: The same VLAN segment can also be used to connect a Tier-0 gateway to both ToR-Left and ToR-Right; however, this is not recommended because of inter-rack VLAN dependencies that lead to spanning-tree-related convergence issues. Each NSX-T segment is assigned a virtual network identifier (VNI), which is similar to a VLAN ID; logical switches from NSX-V are called "segments" in NSX-T.
How to find this: Networking > Routing, select the Tier-0 logical router, click the Routing tab and select BFD from the drop-down menu. Click Edit to configure BFD, then click the Status toggle button to enable it. You can optionally change the global BFD properties: Receive interval, Transmit interval, and Declare dead interval.
A single N-VDS ("Overlay and External N-VDS") is used in this topology, carrying both overlay and external traffic. Overlay traffic from the different overlay segments/logical switches is pinned to TEP IP1 or TEP IP2 and load balanced across both uplinks, Uplink1 and Uplink2. Notice that both TEP IPs use the same transport VLAN (VLAN 200), which is configured on both top-of-rack switches. Two VLAN segments, "External VLAN Segment 300" and "External VLAN Segment 400", provide northbound connectivity to the Tier-0 gateway.
Recommendation: The "three N-VDS per Edge VM" design is mandatory for NSX-T releases before 2.5; the newer design will not operate properly if adopted on a release older than NSX-T 2.5 (so this entry is only relevant before the 2.5 release).
How to find this: System > Fabric > Nodes > Host Transport Nodes. From the Managed by field, select Standalone Hosts or a compute manager, select the host, and click the N-VDS Visualization tab.
What this means: The multiple-N-VDS-per-Edge-VM design recommendation is valid regardless of the NSX-T release and must be followed if the deployment target is NSX-T 2.4 or older. The design remains completely applicable and viable for Edge VM deployments running NSX-T 2.5. To simplify consumption of the new design recommendation, the design choices for the pre-2.5 releases have been moved to Appendix 5 of the design guide.
Recommendation: Starting with NSX-T release 2.5, single N-VDS deployment mode is recommended for both bare metal and Edge VM.
How to find this: System > Fabric > Nodes > Host Transport Nodes. From the Managed by field, select Standalone Hosts or a compute manager, select the host, and click the N-VDS Visualization tab.
What this means: Key benefits of single N-VDS deployment are: a consistent deployment model for both Edge VM and bare metal Edge, with one N-VDS carrying both overlay and external traffic; load balancing of overlay traffic with multi-TEP configuration; the ability to distribute external traffic to specific ToRs for distinct point-to-point routing adjacencies; and no change in DVPG configuration when new service interfaces (workload VLAN segments) are added.
Recommendation: NAT placement. Tier-0 – recommended for PAS/PKS deployments; E-W routing between different tenants remains completely distributed. Tier-1 – recommended for high-throughput ECMP topologies and for topologies with overlapping IP address space.
How to find this: Networking > Routing > Tier-0 or Tier-1 gateways.
What this means: Users can enable NAT as a network service in NSX-T. This is a centralized service that can be enabled on both Tier-0 and Tier-1 gateways. Source NAT (SNAT) translates the source IP of outbound packets to a known public IP address so that the application can communicate with the outside world without exposing its private IP address; it also keeps track of the reply. Destination NAT (DNAT) allows access to internal private IP addresses from the outside world by translating the destination IP address when inbound communication is initiated; it likewise takes care of the reply. For both SNAT and DNAT, users can apply NAT rules based on 5-tuple match criteria. Reflexive NAT rules are stateless ACLs that must be defined in both directions and do not keep track of the connection; they can be used where stateful NAT cannot be used due to asymmetric paths (e.g., when NAT must be enabled on active/active ECMP routers).
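The stateful SNAT behaviour described above (translate outbound, remember the flow, untranslate the reply) can be illustrated with a toy model — this is not the NSX-T implementation, and the addresses and prefix are example values:

```python
def snat(packet, internal_prefix, public_ip, state):
    """Source NAT: rewrite the private source IP to the public IP on the
    way out, and remember the flow so the reply can be untranslated."""
    if packet["src"].startswith(internal_prefix):
        state[(public_ip, packet["dst"])] = packet["src"]
        packet = {**packet, "src": public_ip}
    return packet

def snat_reply(packet, state):
    """Reverse translation for the reply of a tracked SNAT flow."""
    original = state.get((packet["dst"], packet["src"]))
    return {**packet, "dst": original} if original else packet

state = {}
out = snat({"src": "172.16.10.5", "dst": "8.8.8.8"}, "172.16.", "203.0.113.1", state)
back = snat_reply({"src": "8.8.8.8", "dst": "203.0.113.1"}, state)
print(out["src"], back["dst"])  # 203.0.113.1 172.16.10.5
```

A reflexive NAT rule, by contrast, would perform the same rewrites without the `state` table, which is why it must be defined in both directions.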
Recommendation: Static routing and BGP are supported to exchange routes between two Tier-0 gateways, and full mesh connectivity is recommended for optimal traffic forwarding. A full mesh topology is one where every node has a circuit connecting it to every other node; it is expensive to implement but yields the greatest amount of redundancy.
How to find this: Tier-0 gateway > Routing > BGP > neighbours.
What this means: Only external interfaces should be used to connect a Tier-0 gateway to another Tier-0 gateway. This topology provides high N-S throughput with centralized stateful services running on different Tier-0 gateways. It also provides complete separation of routing tables at the tenant Tier-0 gateway level and allows services that are only available on Tier-0 gateways (like VPN, until the NSX-T 2.4 release) to leverage ECMP northbound. Note that VPN is available on Tier-1 gateways starting with NSX-T 2.5.
Recommendation: It is always recommended to put the most granular policies at the top of the rule table. This ensures that more specific policies are enforced first. The DFW default rule, located at the bottom of the rule table, is a catch-all: packets not matching any other rule are handled by the default rule, which is set to "allow" by default. This ensures that VM-to-VM communication is not broken during staging or migration phases. It is best practice to then change this default rule to a "drop" action and enforce access control through a whitelisting model (i.e., only traffic defined in the firewall policy is allowed onto the network).
How to find this: Security > Distributed Firewall; click the General tab for layer 3 (L3) rules or the Ethernet tab for layer 2 (L2) rules.
What this means: Granular policies can be assigned to specific users, specific groups, or all users, allowing you to configure policies for different groups, such as enforcing stricter security controls for executives. In the data path, the DFW maintains two tables: a rule table and a connection tracker table. The LCP populates the rule table with the configured policy rules, while the connection tracker table is updated dynamically to cache flows permitted by the rule table. NSX-T DFW allows a policy to be stateful or stateless, with section-level granularity in the DFW rule table; the connection tracker table is populated only for stateful policy rules and contains no information on stateless policies. This applies to both ESXi and KVM environments. Rules are processed in top-to-bottom order: each packet is checked against the top rule in the rule table before moving down the subsequent rules. The first rule in the table that matches the traffic parameters is enforced and the search is then terminated, so no subsequent rules are examined or enforced.
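The top-to-bottom, first-match behaviour of the rule table can be sketched as follows (a simplified model; the rules, VM names and match fields are invented for the example):

```python
def evaluate_dfw(rules, packet, default_action="allow"):
    """Rules are checked top to bottom; the first match decides and the
    search stops, so granular rules must sit above broad ones."""
    for rule in rules:
        if all(packet.get(k) == v for k, v in rule["match"].items()):
            return rule["action"]
    return default_action  # the catch-all default rule at the bottom

rules = [
    {"match": {"dst": "db-01", "port": 3306}, "action": "allow"},  # granular, on top
    {"match": {"dst": "db-01"}, "action": "drop"},                 # broader, below
]
print(evaluate_dfw(rules, {"dst": "db-01", "port": 3306}))  # allow
print(evaluate_dfw(rules, {"dst": "db-01", "port": 22}))    # drop
print(evaluate_dfw(rules, {"dst": "web-01"}, default_action="drop"))  # drop
```

If the two rules were swapped, the broad drop would match first and the MySQL traffic would never reach its allow rule, which is exactly why the most granular policies belong at the top.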
Recommendation: A network-centric approach is not recommended in dynamic environments where there is a rapid rate of infrastructure change or VM addition/deletion.
What this means: Determining the appropriate policy and rules after a new application has been developed is often a very time-consuming, manual and error-prone process involving multiple review cycles, and it results in a complex set of rules based on network constructs such as IP addresses and ports that are hard to tie back to applications. Beyond that initial complexity, network-based security policies do not adapt well to changing applications. Network-centric is the traditional approach of grouping based on L2 or L3 elements: grouping can be done on MAC addresses, IP addresses, or a combination of both, and NSX-T supports this approach. The security team needs to be aware of the networking infrastructure to deploy network-based policies, and there is a high probability of security rule sprawl because grouping based on dynamic attributes is not used. This method of grouping works well for migrating existing rules from an existing firewall.
When defining security policy rules for the firewall table, it is recommended to follow these high-level steps:
1) VM Inventory Collection – Identify and organize a list of all hosted virtualized workloads on NSX-T transport nodes. This is dynamically collected and saved by NSX-T Manager as the nodes (ESXi or KVM) are added as NSX-T transport nodes.
2) Tag Workloads – Use the VM inventory to organize VMs with one or more tags. Each designation consists of a scope and tag associating the workload with an application, environment or tenant. For example, a VM tag could be "Scope = Prod, Tag = web" or "Scope = tenant-1, Tag = app-1".
3) Group Workloads – Use the NSX-T logical grouping construct with dynamic or static membership criteria based on VM name, tags, segment, segment port, IPs or other attributes.
4) Define Security Policy – Using the firewall rule table, define the security policy. Use categories and policies to separate and identify emergency, infrastructure, environment and application-specific policy rules based on the rule model.
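Steps 2 and 3 above (tag the workloads, then group on tag membership) amount to something like the following sketch; the inventory, scope and tag names are illustrative:

```python
inventory = [
    {"name": "web-01", "tags": {("Prod", "web"), ("tenant-1", "app-1")}},
    {"name": "app-01", "tags": {("Prod", "app")}},
    {"name": "dev-01", "tags": {("Dev", "web")}},
]

def group_members(inventory, scope, tag):
    """Dynamic group: membership is recomputed from tags, so a newly
    tagged VM joins the group (and inherits its policies) automatically."""
    return sorted(vm["name"] for vm in inventory if (scope, tag) in vm["tags"])

print(group_members(inventory, "Prod", "web"))  # ['web-01']
```

The payoff is in step 4: firewall rules written against the group never need editing when VMs come and go, only the tags change.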
This list provides best practices and recommendations for the NSX-T DFW, which can be used as guidelines while deploying an NSX-T security solution.
How to find this: Security > Distributed Firewall; click the General tab for layer 3 (L3) rules or the Ethernet tab for layer 2 (L2) rules. For tagging, look at Inventory > Virtual Machines, select a VM, then go to Manage Tags.
- Use NSX-T tagging and grouping constructs to group an application or environment to its natural boundaries. This enables simpler policy management.
- Consider the flexibility and simplicity of the policy model for Day-2 operations: it should address ever-changing deployment scenarios rather than simply being part of the initial setup.
- Leverage DFW categories and policies to group and manage policies based on the chosen rule model (e.g., emergency, infrastructure, environment, application).
- Use a whitelist model: create explicit rules for allowed traffic and change the DFW default rule from "allow" to "drop" (blacklist to whitelist).
- For individual NSX-T software releases, always refer to the release notes, compatibility guides, hardening guide and recommended configuration maximums.
- Exclude management components such as vCenter Server and security tools from the DFW policy to avoid lockout. This can be done by adding those VMs to the exclusion list.
- Choose the policy methodology and rule model that enable optimum groupings and policies for micro-segmentation.
Recommendation: Jumbo frame support – the minimum required MTU is 1600, but an MTU of 1700 bytes is recommended to address the full variety of functions and to future-proof the environment for an expanding Geneve header. As the recommended MTU for the N-VDS is 9000, the underlay network should support at least this value, excluding overhead.
How to find this: System > Fabric > Profiles > Uplink Profiles (see MTU sizes); Networking and Security > NSX Edges > (double-check a node) > Manage > Settings > Interfaces > (select the trunk interface) > Edit > Advanced tab.
What this means: A typical deployment carries a 1500-byte MTU for the guest VM. The VM MTU can be increased up to 8800 (a ballpark number that accommodates future header expansion) to improve VM throughput. However, non-TCP traffic (UDP, RTP, ICMP, etc.) and traffic that needs to traverse a firewall or services appliance, a DMZ or the Internet may not work properly, so use caution when changing the VM MTU. Replication VMs, backups and internal-only applications, on the other hand, can certainly benefit from a larger VM MTU.
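The arithmetic behind the 1600-minimum/1700-recommended figures can be sketched as follows. The header sizes are the usual Geneve-over-IPv4 values and are stated here as assumptions, since Geneve option lengths vary:

```python
# Approximate per-header overhead for Geneve-over-IPv4 encapsulation:
INNER_ETH = 14   # inner Ethernet header carried inside the tunnel
GENEVE = 8       # Geneve base header (variable-length options add more)
OUTER_UDP = 8
OUTER_IPV4 = 20

def min_underlay_mtu(vm_mtu=1500, geneve_option_bytes=0):
    """Smallest underlay MTU that fits a full-size inner frame."""
    return vm_mtu + INNER_ETH + GENEVE + OUTER_UDP + OUTER_IPV4 + geneve_option_bytes

print(min_underlay_mtu())            # 1550 -> why 1600 is the bare minimum
print(min_underlay_mtu(1500, 150))   # 1700 with ~150 bytes of option headroom
```

A 1500-byte guest frame needs roughly 1550 bytes on the wire before any Geneve options, so 1600 gives a small margin and 1700 leaves room for an expanding Geneve header.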
Recommendation: It is recommended to spread the NSX-T Manager nodes across separate hypervisors to ensure that the failure of a single host does not cause the loss of a majority of the cluster. For a vSphere-based design, it is recommended to leverage vSphere HA so that a single NSX-T Manager node can recover after the loss of a hypervisor. Question for the customer: do you spread the deployment of the Manager nodes across separate hypervisors?
What this means: Furthermore, NSX-T Manager should be installed on shared storage; vSphere HA requires shared storage so that VMs can be restarted on another host if the original host fails. A similar mechanism is recommended when NSX-T Manager is deployed in a KVM hypervisor environment.
Recommendation: It is recommended to reserve CPU and memory resources according to the appliances' respective requirements.
What this means: NSX-T Data Center uses vSphere resource allocation to reserve resources for NSX Edge appliances. You can tune the CPU and memory reserved for an NSX Edge to ensure optimal use of host resources. For maximum performance, the NSX Edge VM appliance must be assigned 100% of the available resources; if you customize the allocation, turn it back to 100% later to regain maximum performance. For auto-deployed NSX Edge appliances, the resource allocation can be changed from NSX Manager; if an NSX Edge appliance was deployed from vSphere, its resource reservations can only be managed from vSphere. Depending on the resource requirements of the Edge VMs in your environment, there are two ways to manage reservations: default values giving 100% resource reservations, or custom values giving 0–100% reservations. Additional considerations apply to the Management cluster with respect to storage availability and I/O consistency: a datastore failure should not trigger a loss of Manager node majority, and I/O access must not be oversubscribed to the point of causing unpredictable latency, where a Manager node goes into read-only mode due to lack of write access.
Recommendation: The cluster VIP is the preferred and recommended option for achieving high availability of the NSX-T Manager appliance nodes.
How to find this: System > Overview > click Edit next to the Virtual IP field.
What this means: NSX-T Manager availability has improved by an order of magnitude over the previous options; however, it is important to be clear on the distinction between node availability and load balancing. With a cluster VIP, all API and GUI requests go via one particular node, so load balancing of GUI and API traffic is not achieved, and sessions established on a failed node must be re-authenticated and re-established on the new owner of the cluster VIP. The availability mechanism is also designed around critical failures of certain NSX-T Manager services, so failover cannot be guaranteed in certain corner cases. Northbound communication goes via the cluster VIP, while communication among the cluster nodes and to the transport nodes uses the IP address assigned to each Manager node.
Recommendation: It is highly recommended to first adopt the basic option of LB persistence with a single VIP for all access; depending on the situation, you can move to an external LB if needed.
How to find this: Networking > Load Balancer > Profiles > Persistence Profiles > Add > Source IP Persistence.
What this means: If a customer is using the load-balancing services in NSX Manager, they should start with persistence profiles and an application profile on a single load balancer, and scale out as needed. The NSX-T Data Center logical load balancer offers a high-availability service for applications and distributes the network traffic load among multiple servers, in such a way that the load distribution is transparent to users. (Also make sure to explain the use of load balancers and why the customer's environment would benefit from one, then expand on the different types of profiles.)
Recommendation: An alternative method to distribute traffic is LAG, which requires the ESXi hosts to be connected to separate ToRs forming a single logical link. This requires multi-chassis link aggregation on the ToRs and is vendor-specific. This mode is not recommended because it involves multiple vendor-specific implementations, support coordination and limited feature support, and it can suffer from troubleshooting complexity.
How to find this: To configure link aggregation groups, go to System > Fabric > Profiles > Uplink Profiles, enter the desired LAG details (active or passive mode, load-balancing method, number of uplinks, LAG timeout), then add a teaming configuration.
What this means: Many existing compute deployments already carry this type of teaming, and the customer's operational model has often accepted the risk and built the knowledge to operate LAG-based teaming. For those existing deployments, LAG-based teaming can be adopted for compute-only workloads. In other words, if a compute host carries Edge VMs (for North-South traffic requiring peering over LAG), it is highly recommended to decouple the Edge and compute functions with either dedicated Edge hosts or shared Edge and management hosts. Please refer to the specific section that discusses the disadvantages of mixing compute and Edge VMs.
Recommendation: For the VM form factor, it is important to remember that the Edge Bridge will end up sourcing traffic from several different MAC addresses on its VLAN vNIC. This means the uplink vNIC must be connected to a DVPG port group allowing either forged transmit plus MAC learning, or promiscuous mode. Neither capability is supported on the VSS, while both are supported on the VDS, so there is a strong recommendation to use the VDS when deploying Edge nodes. (Form factor is the size, configuration or physical arrangement of a computing device.)
What this means: If the deployment is running vSphere 6.5, where MAC learning is not available, the only other way to run bridging is by enabling promiscuous mode. Typically, promiscuous mode should not be enabled system-wide; either enable it only for the DVPG associated with the bridge vNIC, or consider dedicating an Edge VM to the bridged traffic so that other kinds of traffic to/from the Edge do not suffer from the performance impact related to promiscuous mode.
Recommendation: The recommended BFD configuration for link failure detection is 300/900 ms (hello/dead); ensure the BFD configuration matches on both devices. The recommended BGP timers are either the defaults or values matching the remote BGP peer; without BFD, the recommended BGP timers are 1/3 s (hello/dead).
How to find this: Networking > Routing > Tier-0 or Tier-1 > Routing > BGP > Edit; click Timers and Password and enter a value for the BFD interval.
What this means: BFD is a detection protocol designed to provide fast forwarding-path failure detection for all media types, encapsulations, topologies and routing protocols. In addition to fast failure detection, BFD provides a consistent failure detection method for network administrators: because forwarding path failures are detected at a uniform rate, rather than at the variable rates of different routing-protocol hello mechanisms, network profiling and planning are easier, and reconvergence time is consistent and predictable.
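The 300/900 ms figure is simply the transmit interval multiplied by the declare-dead multiple, which a one-line helper makes explicit:

```python
def bfd_detection_ms(tx_interval_ms, declare_dead_multiple):
    """Worst-case time to declare a BFD peer down:
    interval x multiplier consecutive missed packets."""
    return tx_interval_ms * declare_dead_multiple

print(bfd_detection_ms(300, 3))  # 900 -> the recommended 300/900 ms setting
```

Compare this with the 1/3 s BGP hold-time fallback: BFD at 300/900 ms detects the failure roughly three times faster than BGP keepalives alone.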
Recommendation: Between Edges, by mapping segments to either BP1 or BP2, their bridged traffic in stable conditions is handled by one Edge or the other. This is the recommended method for achieving load balancing. L2 bridging with NSX-T Edge transport nodes uses Bridge Profiles; each profile specifies which Edge cluster to use and which Edge transport node serves as the Active or Passive bridge for the instance.
How to find this: System > Fabric > Profiles > Edge Bridge Profiles.
What this means: Segments that have been configured for guest VLAN tagging can be extended to a VLAN through an Edge bridge. The feature is enabled by configuring a range of VLAN IDs when mapping a segment to a bridge profile. Segment traffic with a VLAN ID in the range is bridged to the VLAN, keeping its VLAN tag; traffic received on the VLAN side of the bridge with a VLAN ID falling in the configured range is bridged into the segment, keeping its VLAN ID as a guest VLAN tag.
Recommendation: The bare metal configuration with more than 2 pNICs is the most practical and recommended design, because a configuration with 4 or more pNICs offers substantially more bandwidth than an equivalent Edge VM configuration at NIC speeds of 25 Gbps or more. The same reasons for choosing bare metal apply as in the 2-pNIC configuration discussed above. Question for the customer: is this a bare metal configuration, and do you have more than 2 physical NICs?
What this means: The configuration guideline with multiple NICs is discussed in Single N-VDS Bare Metal Configuration with Six pNICs. This design again uses a single N-VDS as the baseline configuration and separates overlay and N-S traffic onto different sets of pNICs. The critical piece is to follow the teaming design considerations discussed in that section, where the first two uplinks (uplink 1 and uplink 2) associate with Load Balance Source ID teaming, assigning overlay traffic to the first two pNICs. The N-S peering design remains the same, with a single pNIC in each of the associated uplink profiles.
General recommendation: In a collapsed cluster design, do not use the same DVPG for other types of traffic, in order to maintain configuration consistency.
Recommendation: For bridging services, MAC learning must be enabled on the N-VDS, where it is available natively (unlike on the VDS). In addition, the VLAN transport zone for the bridge must be different from that of the host N-VDS; in this recommendation the dedicated N-VDS-B is used for bridging traffic.
How to find this: Switching > Switching Profiles.
What this means: If the Edge is deployed on a host with NSX-T installed, it can connect to a VLAN logical switch or segment. The logical switch must have a MAC Management profile with MAC Learning enabled; similarly, the segment must have a MAC Discovery profile with MAC Learning enabled.
General recommendation: For multi-rack availability, keep the availability model for Edge node recovery/restoration/redeployment restricted to the rack, avoiding any physical-fabric-related requirement (independent of an L2 or L3 fabric).
What this means: There are two forms of availability to consider for an Edge VM: the availability of the Edge node itself as a VM, and that of the service running inside it. Edge node VM availability typically falls into two models, in-rack versus multi-rack. In-rack availability implies that a minimum of two hosts are available (for both ECMP and stateful services) and that failure of a host triggers either redeployment of the Edge node to an available host or a restart of the Edge VM, depending on the availability of the underlying hypervisor.
Traditional vSphere best practice is to use four ESXi hosts, to allow for host maintenance while maintaining consistent capacity. Question for the customer: do you have four ESXi hosts?
General recommendation: Do not place Edge VMs with compute hosts in a configuration spanning more than two racks of compute, as it will result in suboptimal performance and capacity planning for future growth. Either consider a dedicated Edge node cluster or share with management, with sufficient bandwidth for the Edge VMs. In general, this leads to the common practice of deploying 4-pNIC hosts for Edge VMs regardless of where they are hosted, dedicated or shared.
The bare metal Edge form factor is recommended when a workload requires multi-10Gbps connectivity to and from external networks, usually with active/active ECMP based services enabled. Recommendation. The availability model for bare metal is described in Edge Cluster and may require more than one Edge cluster depending on the number of nodes required to service the bandwidth demand. Additionally, typical enterprise workloads may require services such as NAT, firewall, or load balancer at high performance levels. In these instances, a bare metal Edge can be considered with Tier-0 running in active/standby mode. A multi-tenant design requiring various types of Tier-0 services in different combinations is typically more suited to a VM Edge node, since a given bare metal node can enable only one Tier-0 instance.
The VM Edge form factor is recommended for workloads that do not require line rate performance. Recommendation. It offers flexibility of scaling, both in terms of on-demand addition of bandwidth and speed of service deployment. This form factor also makes the lifecycle of Edge services practical, since it runs on the ESXi hypervisor, and it allows flexible evolution of services and elastic scaling of the number of nodes required based on bandwidth need. A typical deployment starts with four hosts, each hosting Edge VMs, and can scale up to eight nodes. The Edge Node VM section describes physical connectivity with a single Edge node VM in the host, which can be expanded to additional Edge node VMs per host. If multiple Edge VMs deployed in a single host are used for active/standby services, the design will require more than one Edge cluster to avoid single point of failure issues.
For the management cluster, this design recommends a minimum of three KVM servers. Question: do you have three KVM servers?
Enhanced Data Path is not recommended for common datacenter applications and deployments. Recommendation.
The firewall table categories align with the best practice of organizing rules, helping administrators group policies by category. Each firewall category can have one or more policies within it to organize firewall rules under that category. Security > Distributed Firewall (check that the firewall rules align with VMware's best practices).
Verify that the NSX Controller cluster has elected a master. Verification/Check.
On the NSX Controller that is the owner for the VNI, verify that all hosts with workloads using the VNI are listed in the connection table. Log into the NSX Controller CLI and run: show control-cluster logical-switches vni 5000
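The two controller-side checks (master election and the per-VNI connection table) can be run back to back from the Controller CLI. A sketch; the VNI value 5000 is just an example and should be replaced with a VNI in use in your environment:

```shell
# On the NSX Controller CLI:
# 1) Confirm cluster status and that a master has been elected
show control-cluster status

# 2) List the hosts in the connection table for a given VNI (5000 is an example)
show control-cluster logical-switches vni 5000
```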
Confirm that syslog is configured to log to VMware vRealize Log Insight or another syslog server. Question: do you have syslog, and is it configured to use Log Insight? Explain the benefits of forwarding logs to a syslog server.
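Syslog forwarding can be checked and configured from the NSX-T CLI on each NSX Manager and Edge node. A sketch; the Log Insight address below is a placeholder, and command options should be verified against the CLI reference for your version:

```shell
# On the NSX Manager / Edge node CLI:
# Show the currently configured log exporters
get logging-servers

# Forward logs to a vRealize Log Insight (or other syslog) server
# (10.0.0.50 is a placeholder address)
set logging-server 10.0.0.50 proto udp level info
```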
Evaluate NSX Manager system events. vRealize Network Insight, if vRNI is being used; if not, explain the uses of vRNI.
Verify that NSX backups are configured and scheduled for regular occurrence. System > Utilities > Backup. If the NSX Manager becomes inoperable, you can restore it from backup. While the NSX Manager is inoperable, the data plane is not affected, but you cannot make configuration changes. There are three different types of backup, and they can be taken manually or automatically (automated backups are recommended).
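The backup configuration can also be confirmed over the REST API rather than through the UI. A hedged sketch: the hostname and credentials are placeholders, and the endpoints should be verified against the NSX-T API reference for your version.

```shell
# Retrieve the cluster backup configuration (schedule, target server, etc.)
curl -k -u 'admin:PASSWORD' \
  'https://nsx-mgr.example.com/api/v1/cluster/backups/config'

# Review recent backup runs to confirm backups are actually succeeding
curl -k -u 'admin:PASSWORD' \
  'https://nsx-mgr.example.com/api/v1/cluster/backups/history'
```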
Confirm each NSX Controller instance has the same set of IP addresses registered in the startup nodes list. confirm
Verify that IPsec tunnels between NSX Controller instances have been established. Log into the NSX Controller CLI and run: show control-cluster network ipsec status and show control-cluster network ipsec tunnels. You can troubleshoot IPsec VPN tunnel connectivity issues by running IPsec configuration commands from the NSX Edge CLI. You can also use the vSphere Web Client and the NSX Data Center for vSphere REST APIs to determine the causes of tunnel failure and view the tunnel failure messages.
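The two IPsec checks for the controller cluster read more clearly as separate commands:

```shell
# On the NSX Controller CLI:
# Overall IPsec status between controller nodes
show control-cluster network ipsec status

# Per-tunnel detail, to confirm each tunnel has been established
show control-cluster network ipsec tunnels
```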
If the firewall on the ESG is enabled, evaluate firewall advanced parameters, such as TCP/UDP/ICMP timeouts. confirm On the firewall, a number of timeouts for TCP, UDP, and ICMP sessions can be specified to apply to a user-defined subset of virtual machines or vNICs. By default, any virtual machines or vNICs not included in the user-defined timer are included in the global session timer. All of these timeouts are global, meaning they apply to all of the sessions of that type on the host.
Configure a dedicated HA interface.  Networking > Networking Settings > Global Networking Config
You can configure the HA (high availability) mode of a tier-0 gateway to be active-active or active-standby.
Evaluate free available disk space. Verification/Check. A full disk can cause many issues.
Evaluate NSX Manager disk usage. Verification/Check. A full disk can cause many issues.
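Disk usage on the NSX Manager appliance can be checked from its CLI. A sketch; verify the command name against the CLI reference for your version:

```shell
# On the NSX Manager CLI: per-filesystem usage statistics
get filesystem-stats

# Or, from the root shell (if enabled), the standard Linux view:
df -h
```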
Check port mirroring. Tools > Port Mirroring. You can monitor port mirroring sessions for troubleshooting and other purposes. (Port mirroring is the capability of a network switch to send a copy of the network packets seen on one switch port to a network-monitoring device connected to another switch port.)


Breaking it down

A detailed explanation of the Top 10 checks can be found on YouTube here


Once you have collected this data from the environment, the question is then how do you present this back?

You could just give a verbal summary of their environment and what you found, or you could present it back as a report, below is a sample of how this could look.

NSX-T Best Practice Review

Customer Name

Specialist Reviewer: Name

Table of Contents

1.0 NSX Component Summary…………………………………………………………………………….. 3

Table 1 – System Overview…………………………………………………………………………………………….. 3

Table 2 – Inventory Overview…………………………………………………………………………………………… 3

Table 3 – Security Overview…………………………………………………………………………………………….. 3

Table 4 – Networking Overview………………………………………………………………………………………… 3

2.0 Methodology……………………………………………………………………………………………………. 4

Table 5 – Best Practice NCA Review…………………………………………………………………………………. 5

3.0 General Guidance………………………………………………………………………………………….. 22

4.0 References…………………………………………………………………………………………………….. 23

5.0 Next Steps……………………………………………………………………………………………………… 23


1.0 NSX Component Summary


Table 1 – System Overview


System Overview  
System Load  
LDAP Servers  
Active Directory  
vSphere Clusters  
Transport zones  
Host Transport nodes  
Edge Transport nodes  
Host Clusters  
NSX Node Management  


Table 2 – Inventory Overview


Inventory Overview  
Virtual Machines  
Context Profiles  


Table 3 – Security Overview


Security Overview  
Distributed FW Policies  
Gateway Policies  
Endpoint Policies  
Networking Introspection NS policies  
Networking Introspection EW policies  


Table 4 – Networking Overview


Networking Overview  
Tier 0 Gateway  
Tier 1 Gateway  
VPN services  
NAT Rules  
Load Balancers  


2.0 Methodology


This best practice review was completed manually with the aid of the NSX-T 2.5 reference design guide (1) as well as the NSX-V health check output from the health check analyser 5.5 (2).


From these reference artefacts, a list of best practices has been collated. The in-scope environment has then been cross-referenced against these best practices and recommendations. An overview of the environment relative to these has been laid out in section 3.0. This best practice review has been completed for general guidance only.



Table 5 – Best Practice NCA Review

Finding ID Recommendation/Best Practice Customer Review What does this mean?
1     .



3.0 General Guidance


Following an inspection of the customer's environment and cross-referencing it against VMware's best practices and recommendations, an overall view and considerations have been laid out below.


Upon viewing the customer's environment, the customer stated it was new and greenfield; therefore, not everything had been configured to the final architectural design. However, after investigating the environment, it has been concluded that it has so far been set up in accordance with the majority of VMware's best practice guidelines and general recommendations.


Future architectural considerations:


4.0 References


  1. VMware NSX-T Reference Design (September 2019).
  2. Health Check Analyzer 5.5.

5.0 Next Steps


Please contact