Big Cloud Fabric (BCF) is an ideal underlay fabric for a VMware Cloud Foundation (VCF) driven software-defined data center (SDDC). If you are an admin looking to deploy VMware Cloud Foundation, this blog provides useful insights into how Big Cloud Fabric simplifies underlay operations so you can get the most out of your VCF deployment.

VMware Cloud Foundation is an integrated software stack that bundles and automates the deployment of compute (vSphere), storage (vSAN) and network virtualization (NSX) components. Admins no longer have to worry about verifying compatible versions of each of these products or working through the deployment documents for each of them; this simplifies the SDDC deployment and has the environment ready in a few hours. VCF admin focus can then shift to enabling business-related services and driving IT innovation.

[Figure: Big Cloud Fabric]

Next, let's look at why choosing the right underlay is critical to maximizing your VMware Cloud Foundation deployment experience. The SDDC components (vSphere, vSAN and NSX) are deployed across multiple ESXi nodes, and these nodes need to communicate with each other for Management, vSAN and vMotion traffic. This connectivity is provided by the underlay network in the form of VLANs.

Legacy Box-by-box Network Challenges when deploying VMware Cloud Foundation 

If you choose a typical legacy box-by-box underlay network for VMware Cloud Foundation, here is what needs to happen before you can start the VMware Cloud Foundation deployment.

  1. Box-by-box provisioning complexity: The underlay switches need to be deployed manually, loading the tested and qualified software version and then applying the appropriate configuration on a switch-by-switch basis. This underlay provisioning can take weeks to months, from unboxing switches to successfully adding ESXi nodes for the VCF deployment.

  2. Host network provisioning: The ESXi nodes need to be connected to the physical box-by-box network, and the VDS along with the ToR switches must be configured manually with matching settings for LAGs/LACP, etc. A single rack may include 20-40 ESXi hosts, implying 100+ endpoint configurations to get all the hosts in a rack working. Each of these touch points leaves significant room for error and takes a long time to become operational.

  3. Infrastructure VLAN provisioning: VCF infrastructure VLANs for Management, vMotion and vSAN need to be provisioned manually on each ToR switch to establish the connectivity required for the deployment to succeed (a sketch of this repetitive work follows the list below). Depending on your network architecture, overlay network provisioning might also be required, which can further complicate the deployment process. Only once the VLANs are provisioned can the deployment of VMware Cloud Foundation (Management Domain and SDDC Manager) be initiated.

  4. Workload Domains: As additional workload domains are deployed on VMware Cloud Foundation, network admins need to repeat all the above steps. This additional dependency delays application and service onboarding.
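
To make the repetition in step 3 concrete, here is a minimal sketch of box-by-box VLAN provisioning using the open-source Netmiko library. The switch names, credentials, platform type, VLAN IDs and port ranges are all illustrative assumptions, not taken from any particular deployment:

```python
# Minimal sketch of legacy box-by-box VLAN provisioning.
# All device names, credentials, VLAN IDs and port ranges are hypothetical.
from netmiko import ConnectHandler

# Assumed VCF infrastructure VLANs: Management, vMotion, vSAN.
INFRA_VLANS = {1611: "vcf-management", 1612: "vcf-vmotion", 1613: "vcf-vsan"}

# Assumed ToR switch inventory; a real deployment repeats this for every rack.
TOR_SWITCHES = ["rack1-tor-a", "rack1-tor-b", "rack2-tor-a", "rack2-tor-b"]

for switch in TOR_SWITCHES:
    conn = ConnectHandler(
        device_type="cisco_nxos",   # assumed platform; syntax varies by vendor
        host=switch,
        username="admin",
        password="REPLACE_ME",
    )
    config = []
    for vlan_id, name in INFRA_VLANS.items():
        config += [f"vlan {vlan_id}", f"  name {name}"]
    # Trunk the infrastructure VLANs on the ESXi-facing ports (assumed range).
    config += [
        "interface Ethernet1/1-40",
        "  switchport mode trunk",
        "  switchport trunk allowed vlan add "
        + ",".join(str(v) for v in INFRA_VLANS),
    ]
    print(conn.send_config_set(config))
    conn.disconnect()
```

Multiply this per-switch loop across racks, vendors and software versions, and it is easy to see how inconsistencies and typos creep in.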

With legacy networking, the above process is cumbersome and error-prone, and it can significantly prolong the VMware Cloud Foundation deployment.

Deploying VMware Cloud Foundation on BCF

Let's look in detail at how BCF expedites the VMware Cloud Foundation deployment by simplifying each of the above-mentioned steps. The underlay is fully abstracted, made logical and presented as a service, so that there is no hard dependency on physical network constructs or topology.

Essentially, the BCF underlay becomes “invisible” to the SDDC, enabling the underlay to operate at the speed of VCF.

Underlay provisioning

BCF provides Zero-Touch Fabric operations: once the physical cabling is in place, underlay provisioning is completely automated, with none of the cumbersome, error-prone steps of the legacy box-by-box approach.

The switches in the fabric are auto-discovered and auto-provisioned to form a leaf/spine fabric running the right Switch Light operating system. Network admins need not worry about switch software versions or initial configurations. The same applies if a switch needs to be RMA’ed in the future: all the data center installation admin has to do is remove the old switch and replace it with the new one. They don’t even need to plug the cables into the same ports.

[Figure: Automated Fabric]

Automated fabric-wide hitless software upgrades make future upgrades seamless. Network admins no longer need to worry about moving workloads around, writing complicated traffic-draining policies or scheduling long maintenance windows for the upgrade.

Host Network provisioning

As soon as ESXi nodes are physically connected to BCF, they are auto-discovered and provisioned. In other words, there is no need to manually configure the switch and interface where the host connects, which simplifies host network provisioning. No hard-wired port mapping is needed -- a server link can be connected to any speed-appropriate switch port, and BCF automatically re-provisions for the new port. Any server can also be placed in or moved to any rack at any time -- the BCF controller does the heavy lifting of automatic logical-to-physical mapping through SDN intelligence while providing full topology visibility to the network admin.

[Figure: Host Automation]
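
If you ever want to independently sanity-check where each ESXi uplink is cabled, similar link-layer neighbor information is also visible from the vSphere side. Below is a minimal pyVmomi sketch; it is not part of BCF's own discovery mechanism, and the vCenter hostname and credentials are placeholders:

```python
# Sketch: print the switch/port each ESXi physical NIC reports via CDP hints.
# vCenter address and credentials are placeholders; lab-style TLS handling only.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; verify certs in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="REPLACE_ME", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        net_sys = host.configManager.networkSystem
        for hint in net_sys.QueryNetworkHint():
            cdp = hint.connectedSwitchPort
            if cdp:
                print(f"{host.name} {hint.device}: {cdp.devId} port {cdp.portId}")
finally:
    Disconnect(si)
```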

VLAN provisioning

Just as public clouds use VPCs to build multi-tenant L2/L3 networks, BCF brings an AWS-style virtual private cloud (VPC) construct on-prem to deliver a Cloud-Network-as-a-Service operational experience. Specifically, BCF creates Enterprise VPCs (E-VPCs) on-prem for each of your VMware SDDC components, allowing logical L2/L3 isolation and multi-tenancy.

[Figure: E-VPC]



BCF automates L2 network provisioning (e.g. VLANs) within the E-VPCs, so admins need not configure the VLANs manually box-by-box as with the legacy approach. Even when a host is moved from one rack to another, no network provisioning is required.
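
Conceptually, this means an admin (or an automation pipeline) provisions a logical segment once, fabric-wide, instead of touching every ToR switch. The sketch below is purely illustrative: the controller address, authentication, endpoint paths and payload fields are hypothetical and are not BCF's actual REST API, but it shows the "one call per segment" shape of the workflow:

```python
# Purely illustrative: hypothetical controller endpoints and payload fields,
# NOT the actual BCF REST API. One fabric-wide call per logical segment.
import requests

CONTROLLER = "https://bcf-controller.example.com:8443"  # placeholder address
HEADERS = {"Authorization": "Bearer REPLACE_ME"}        # placeholder auth scheme

# One logical segment per VCF infrastructure network, scoped to an E-VPC (tenant).
SEGMENTS = [
    {"tenant": "vcf-mgmt-domain", "name": "management", "vlan": 1611},
    {"tenant": "vcf-mgmt-domain", "name": "vmotion",    "vlan": 1612},
    {"tenant": "vcf-mgmt-domain", "name": "vsan",       "vlan": 1613},
]

for seg in SEGMENTS:
    # Hypothetical endpoint: a single call provisions the segment across the
    # whole fabric, instead of a CLI session on every ToR switch.
    resp = requests.post(
        f"{CONTROLLER}/api/v1/tenants/{seg['tenant']}/segments",
        json={"name": seg["name"], "vlan-id": seg["vlan"]},
        headers=HEADERS,
        verify=False,  # lab only; verify certificates in production
        timeout=30,
    )
    resp.raise_for_status()
    print(f"provisioned {seg['tenant']}/{seg['name']} (VLAN {seg['vlan']})")
```

Contrast this with the per-switch loop in the legacy sketch earlier: the work scales with the number of logical networks, not with the number of switches or racks.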

[Figure: VLAN Configuration Automation]

Workload Domains

As admins deploy workload domains on VMware Cloud Foundation, an E-VPC is created for each workload domain.

[Figure: VMware SDDC]

As an example, in your VMware Cloud Foundation deployment, the vCenter for each workload domain gets its own E-VPC, allowing multi-tenancy and delegated administration.

[Figure: VMware Cloud Foundation deployment]


Additionally, with E-VPC’s contextual visibility and Fabric Analytics, admins get deep insights into vCenter or NSX networking right from the BCF dashboard. With BCF’s one-click troubleshooting capabilities, end-to-end (fabric-wide) path tracing of a packet quickly resolves the “Is it the app or the network?” conundrum, without tedious, time-consuming box-by-box hopping.

[Figure: E-VPC contextual visibility and Fabric Analytics]

If you're heading to VMworld 2019 and would like to meet with our team on-site to learn more about our solutions for the VMware SDDC, please request a meeting/technical demo here: https://bit.ly/2YVZFoq

Summary

BCF, with its Zero-Touch Fabric capabilities and E-VPC automation, makes the network fast and simple -- essentially invisible. It perfectly complements VMware Cloud Foundation by delivering an ideal underlay for all VMware Cloud Foundation deployments.

Sachin Vador
Senior TME, Big Switch Networks

Watch the Demo of BCF integration with VMware Cloud Foundation: http://tiny.cc/atqiaz
Watch the Demo of BCF integration with VMware NSX-T: http://tiny.cc/030taz
Watch the Demo of BCF integration with VMware vSphere: http://tiny.cc/nz1jaz