In this blog, we will highlight why Big Cloud Fabric (BCF) is the ideal underlay fabric for VMware NSX-T and provide useful insights for admins looking to maximize the benefits of their NSX-T deployments.

NSX-T is the networking and security component of the VMware software-defined data center (SDDC) stack. Using NSX-T Manager, admins can set up logical networks to enable communication between virtual workloads residing on multiple hypervisors. An underlay network is required to interconnect the hypervisors and carry traffic between them.

In the following sections, we will discuss some of the challenges and inefficiencies that arise when the underlay network is a legacy box-by-box network, and how BCF mitigates them.

Challenges when deploying VMware NSX-T on a legacy box-by-box underlay

  1. Host network provisioning: ESXi/KVM nodes need to be connected to the physical fabric, and the corresponding ports on TOR switches need to be configured manually with the appropriate LAG/LACP settings, depending on the teaming policy of the N-VDS uplinks from each node. Each rack can have 20 to 40 ESXi/KVM hosts, resulting in 40-80 interface/LAG/LACP configurations, which significantly increases the time to service enablement as well as the scope for error.

  2. Transport VLAN configuration: In order for GENEVE tunnels to be established between hosts, the underlay network needs to provide connectivity between the Tunnel Endpoints (TEPs) defined on these hosts. Network admins would need to define and trunk transport VLANs on multiple switches, increasing the number of touch points and the possibility of misconfiguration. In the case of VLAN-based (non-GENEVE) logical switches, the number of VLANs that need to be defined and trunked in the underlay grows proportionally, leading to longer setup times and more scope for misconfiguration. (A sketch of this box-by-box toil follows this list.)

  3. Visibility: To get end-to-end visibility, network and virtualization admins would need to gather overlay and underlay connectivity information from multiple consoles and perform underlay/overlay correlation manually, which is extremely time consuming. Getting historical information to isolate a problem is a challenging and cumbersome process, as it requires log scraping on multiple switches. As one might infer, the problem gets worse as the number of switches in the underlay increases.

  4. Troubleshooting: To troubleshoot any underlay connectivity issue, network admins would need to hop box by box to figure out the end-to-end path of the packets and isolate the switch causing the issue. This can significantly increase the time to restore services, leading to a bad customer experience and potential revenue loss.
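To make items 1 and 2 concrete, here is a minimal sketch of what legacy box-by-box provisioning amounts to, using the Netmiko library to push CLI to each TOR switch. The switch IPs, credentials, VLAN ID, port names, and CLI syntax are hypothetical placeholders (NX-OS-style commands are assumed); the point is that every switch and every host port needs this treatment, by hand or by script.

```python
# A hedged sketch of legacy box-by-box provisioning, not BCF. All names,
# addresses, and commands below are hypothetical placeholders; real CLI
# syntax varies by switch vendor and OS (NX-OS-style assumed here).
from netmiko import ConnectHandler

TOR_SWITCHES = ["10.0.0.11", "10.0.0.12"]    # one pair of TORs per rack
HOST_PORTS = ["Ethernet1/1", "Ethernet1/2"]  # host-facing ports (x20-40 hosts)
TRANSPORT_VLAN = 150                         # VLAN carrying GENEVE TEP traffic

for switch_ip in TOR_SWITCHES:
    conn = ConnectHandler(
        device_type="cisco_nxos",   # assumption: NX-OS-style TOR
        host=switch_ip,
        username="admin",           # placeholder credentials
        password="secret",
    )
    # Item 1: bundle each host's uplinks into an LACP LAG, per the
    # N-VDS teaming policy of that host.
    for port in HOST_PORTS:
        conn.send_config_set([
            f"interface {port}",
            "channel-group 10 mode active",
        ])
    # Item 2: define the transport VLAN and trunk it on the LAG so the
    # GENEVE tunnel endpoints (TEPs) can reach each other.
    conn.send_config_set([
        f"vlan {TRANSPORT_VLAN}",
        "interface port-channel 10",
        "switchport mode trunk",
        f"switchport trunk allowed vlan add {TRANSPORT_VLAN}",
    ])
    conn.disconnect()
```

Multiply this by every rack, every host move, and every new logical switch, and the scope for error grows accordingly.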

As highlighted above, a legacy box-by-box underlay can be cumbersome, error-prone, time-consuming and inefficient – it simply does not scale operationally. You need an underlay that can operate at the speed of VMs and containers.

Let's discuss how BCF makes the life of a network admin easier by making the underlay more agile.

Deploying VMware NSX-T on BCF underlay

Host network provisioning

As soon as ESXi/KVM nodes are physically connected to BCF, they are auto-discovered and provisioned per the teaming policy of the N-VDS uplinks from each node, irrespective of the number of hosts connected to the BCF underlay. With Big Cloud Fabric, there is no need to manually configure the switch and interface where the host connects, which simplifies host network provisioning. No hard-wired port mapping is needed -- a server link can be connected to any speed-appropriate switch port, and BCF automatically re-provisions for the new port. Any server can be placed in, or moved to, any rack at any time -- the BCF controller does the heavy lifting of automatic logical-to-physical mapping through SDN intelligence while providing full topology visibility to the network admin.
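For illustration, here is a hedged sketch of verifying what the fabric has auto-discovered by querying the BCF controller's REST API. The endpoint path, authentication, and response fields below are hypothetical placeholders, not the documented BCF API; consult the BCF API reference for the actual resource names.

```python
# Hypothetical sketch: list hosts the BCF controller auto-discovered.
# The endpoint and fields are placeholders, not the documented BCF API.
import requests

BCF_CONTROLLER = "https://bcf-controller.example.com:8443"  # placeholder

resp = requests.get(
    f"{BCF_CONTROLLER}/api/v1/connected-hosts",    # hypothetical endpoint
    headers={"Cookie": "session_cookie=<token>"},  # placeholder auth
    verify=False,  # lab-only; validate certificates in production
)
resp.raise_for_status()

for host in resp.json():
    # e.g. confirm each ESXi/KVM node was provisioned per its N-VDS
    # teaming policy, and on which switch/interface it landed
    print(host["name"], host["switch"], host["interface"], host["lag-mode"])
```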

Transport VLAN provisioning

Just like public clouds use the virtual private cloud (VPC) logical construct to build multi-tenant L2/L3 networks, BCF brings an AWS-style VPC construct on-prem -- called Enterprise VPC (E-VPC) -- to deliver a Cloud-Network-as-a-Service operational experience.

BCF creates an E-VPC for NSX-T, providing logical isolation and delegated administration. Within the E-VPC, BCF automates transport VLAN provisioning and trunks the VLANs on the appropriate host interfaces. Admins need not perform any manual box-by-box configuration, as with the legacy approach.

As more networks are created on NSX-T, BCF automatically adds the corresponding configuration, reducing the wait time for provisioning a new host. Even when a host is moved from one rack to another, no manual network provisioning is required.
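As a sanity check on the NSX-T side, admins can list the logical segments that BCF picks up and mirrors into the underlay. The sketch below uses the NSX-T Policy API's segments listing (GET /policy/api/v1/infra/segments); the manager hostname and credentials are placeholders.

```python
# Minimal sketch: list NSX-T logical segments via the Policy API.
# Hostname and credentials are placeholders.
import requests

NSX_MANAGER = "https://nsx-mgr.example.com"  # placeholder

resp = requests.get(
    f"{NSX_MANAGER}/policy/api/v1/infra/segments",
    auth=("admin", "password"),  # placeholder credentials
    verify=False,                # lab-only; validate certificates in production
)
resp.raise_for_status()

for segment in resp.json().get("results", []):
    # VLAN-backed segments report vlan_ids; overlay segments do not
    print(segment["display_name"], segment.get("vlan_ids", "overlay"))
```

Each segment created here is what BCF detects and provisions automatically within the E-VPC, with no per-switch configuration.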

Visibility 

BCF provides visibility into both the NSX networking environment and the underlay network in a single dashboard, making it easy for network and virtualization admins to correlate overlay and underlay and get an end-to-end picture.

With BCF Fabric Analytics, network admins can see not only events, errors, logs, and performance stats from all the underlay switches in the fabric, but also all the events from NSX Manager and vCenter, in a single console. Network admins can easily visualize the current state of the physical (underlay) and virtual (overlay) network, or go back in time and perform historical analysis, right from a single dashboard.
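To give a feel for what this correlation looks like programmatically, here is a hedged sketch that pulls recent alarms from NSX Manager (GET /api/v1/alarms, available in NSX-T 3.0 and later) and events from the BCF controller, then merges them onto one timeline. The BCF endpoint, the field names, and the assumption that both sides return epoch-millisecond timestamps are placeholders for illustration.

```python
# Hedged sketch of overlay/underlay event correlation. The BCF endpoint
# and field names are hypothetical; epoch-ms timestamps are assumed on
# both sides purely for illustration.
import requests

NSX_MANAGER = "https://nsx-mgr.example.com"            # placeholder
BCF_CONTROLLER = "https://bcf-controller.example.com"  # placeholder

nsx = requests.get(f"{NSX_MANAGER}/api/v1/alarms",
                   auth=("admin", "password"), verify=False).json()
bcf = requests.get(f"{BCF_CONTROLLER}/api/v1/events",  # hypothetical endpoint
                   verify=False).json()

# Tag each record with its source and sort everything onto one timeline
timeline = sorted(
    [("overlay", a.get("_create_time", 0), a.get("description", ""))
     for a in nsx.get("results", [])] +
    [("underlay", e.get("timestamp", 0), e.get("message", ""))
     for e in bcf],
    key=lambda record: record[1],
)
for source, ts, text in timeline:
    print(f"{ts}  [{source:8}]  {text}")
```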

Troubleshooting

With BCF Fabric Trace, admins can trace the end-to-end path of packets between any connected TEPs across the fabric with just one click and get hop-by-hop packet stats, enabling them to restore services much faster than with box-by-box underlay networks.
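Conceptually, a fabric trace boils down to asking the controller for the hop-by-hop path between two TEPs. The sketch below illustrates that idea against a hypothetical REST endpoint; the path, payload, and response fields are placeholders, not the documented BCF API.

```python
# Hypothetical sketch of a fabric trace between two TEPs. Endpoint,
# payload, and response fields are placeholders, not the BCF API.
import requests

BCF_CONTROLLER = "https://bcf-controller.example.com"  # placeholder

resp = requests.post(
    f"{BCF_CONTROLLER}/api/v1/fabric-trace",  # hypothetical endpoint
    json={"src-tep": "172.16.10.11", "dst-tep": "172.16.20.12"},  # sample TEPs
    verify=False,  # lab-only; validate certificates in production
)
resp.raise_for_status()

for hop in resp.json().get("hops", []):
    # hop-by-hop packet stats: which switch, which ports, how many packets
    print(hop["switch"], hop["ingress-port"], hop["egress-port"], hop["pkt-count"])
```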

Summary

BCF, with its underlay orchestration capabilities, enhanced end-to-end visibility, and one-click troubleshooting, provides the perfect underlay for VMware NSX-T.

Thanks,
Sachin Vador
Sr. TME

 

Watch the Demo of BCF integration with VMware NSX-T: http://tiny.cc/u1zebz
Watch the Demo of BCF integration with VMware vSphere: http://tiny.cc/nz1jaz
Watch the Demo of BCF integration with VMware Cloud Foundation: http://tiny.cc/atqiaz