To meet the business demand for faster, more flexible and more responsive application infrastructure, the concept of a Software-Defined Data Center (SDDC) has become very appealing to CIOs.  VMware has been at the forefront of the SDDC vision, offering a complete SDDC solution with vSphere (server virtualization), NSX (network virtualization), Virtual SAN (hyper-convergence / storage virtualization) and vRealize (SDDC management and orchestration suite).  Unfortunately, while the goal of an SDDC is to meet the business need for speed, the vast majority of current IT networking architectures suffer from an underlying physical network that is often the slowest, most inflexible and most complex piece of the Data Center puzzle.  So the million-dollar question is – how do you design a physical network that is optimized for SDDC workloads?

To answer this important question, let’s drill-down further on key requirements for the physical network:

  1. Speed/Agility: Physical networks must be provisioned at the speed of VM creation. They must also re-provision dynamically for VM mobility events, such as VMware vMotion or DRS.

  2. Flexibility: Physical network change management (e.g. adding/removing links, switches or servers; carrying out network upgrades) must occur without downtime or impact to SDDC workloads.

  3. Responsiveness: A physical network’s API interactions with SDDC systems must be optimized for scalable, dynamic exchanges.

  4. Visibility: Network admins and VM admins need visibility into each other’s environments to facilitate rapid resolution of issues.  They need troubleshooting tools to check VM-to-VM connectivity across the fabric and minimize application downtime.

Traditional box-by-box, hardware-defined networks have proven to be a complete mismatch for modern SDDC requirements.  Deploying a new workload or application in a typical production network takes weeks, while provisioning a new VM takes minutes.  What is needed is a physical network that is also software-defined, so that an entire pod network of many tens of leaf/spine switches operates as a single logical entity.  Big Switch’s Big Cloud Fabric (BCF) is a leading example of an SDN leaf/spine fabric architected as a logical scale-out switch to meet SDDC workloads’ needs for speed, flexibility and responsiveness, as shown in the table below.
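To make the “provisioned at the speed of VM creation” requirement concrete, the sketch below shows the kind of request an orchestrator could send to an SDN fabric controller when a new VM is created. The endpoint shape, field names and `build_segment_request` helper are invented for illustration; they are not BCF’s actual API.

```python
import json

# Hypothetical payload builder for a fabric controller's northbound REST API.
# Field names ("tenant", "segment", "vlan", "scope") are illustrative only.
def build_segment_request(tenant, segment, vlan_id):
    """Build the JSON body to provision an L2 segment for a new VM."""
    if not 1 <= vlan_id <= 4094:
        raise ValueError("VLAN ID must be in the range 1-4094")
    return {
        "tenant": tenant,
        "segment": segment,
        "vlan": vlan_id,
        # A single controller call programs every leaf/spine switch,
        # which is what lets the fabric act like one logical switch.
        "scope": "fabric-wide",
    }

# The orchestrator would POST this body the moment the VM is created,
# instead of waiting on a box-by-box change ticket.
body = json.dumps(build_segment_request("prod", "web-tier", 100))
```

The point of the sketch is the single fabric-wide call: the controller, not the operator, fans the change out to each physical switch.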



Because SDDC systems include compute, an overlay network and (hyper-converged) storage, it is critical that the physical network be able to interact with all three components.  Additionally, SDDC systems deploy visibility and troubleshooting tools for the VM admin.  Hence, the physical network also needs to extend its own visibility into the SDDC tools for consistent operations and troubleshooting across both the virtualization and network domains.

At Big Switch, our focus has been to offer comprehensive support for VMware SDDC solutions – vSphere, NSX, Virtual SAN and vRealize (see the table below).




  • BCF physical network automation and visibility for vSphere:  

    • Automatic ESXi host discovery and LAG formation to each leaf switch

    • Automatic BCF Layer 2 segment creation and VM learning

    • Network policy migration with vSphere vMotion / Dynamic Resource Scheduler (DRS)

    • Improved VM visibility and troubleshooting for both network and virtualization administrators

    • Resources: Blog, Video, Solution Brief

  • VMware NSX for vSphere overlay visibility and troubleshooting:

    • BCF underlay provides enhanced overlay visibility (VMs, logical switches, VNIs) to network admins as well as VTEP-to-VTEP troubleshooting across the entire leaf-spine-leaf fabric.

    • The VMworld tech preview demo shows further integration of Big Cloud Fabric with the NSX hardware VTEP, enabling bare-metal applications to interact with VMs on the overlay network.

    • Resources: Video

  • Virtual SAN network automation and visibility: The “easy-button” networking for Virtual SAN administrators

  • BCF support for VMware Integrated OpenStack (VIO):

    • Enable enterprises to deploy production-grade OpenStack private clouds on VMware vSphere and NSX-v environments with BCF physical underlay

    • Resources: Blog, Video, Webinar

  • BCF Virtual Pod (vPod) for Multi-Orchestrated SDDC

    • Support integration with multiple vCenters (customer: U2Cloud)

    • Enable isolated SDDC environments across multiple tenants and across multiple software versions of vSphere, VSAN, NSX, and VIO

    • Ideal for multiple use cases: managed private cloud deployments, mergers and migrations, engineering (dev/test) labs

    • Resources: Blog

  • BCF’s GUI plug-in for vCenter for VM admin visibility & troubleshooting:  

    • Single pane of glass network visibility for virtualization admin

    • Visibility to host-to-leaf connectivity

    • VM-to-VM troubleshooting on leaf-spine-leaf fabric  

    • Resources: Video

  • BCF’s Content Pack for vRealize Log Insight:

    • Significant operational benefits for VM admins through visibility & log correlation

    • Centralized log view across multiple BCF pods

    • Pre-configured dashboards for BCF

    • Pre-configured alerts to monitor BCF error conditions

    • Resources: Video
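As a rough illustration of the VM-to-VM troubleshooting described above, the sketch below computes the switch hops between two VMs in a two-tier leaf/spine fabric. The inventory data structures and the `fabric_path` function are hypothetical, invented for this example rather than taken from the BCF vCenter plug-in.

```python
# Hypothetical inventory: which host each VM runs on, and which leaf
# switch each host's LAG connects to. Invented for illustration.
def fabric_path(vm_to_host, host_to_leaf, src_vm, dst_vm):
    """Return the switch hops between two VMs in a 2-tier leaf/spine fabric."""
    src_leaf = host_to_leaf[vm_to_host[src_vm]]
    dst_leaf = host_to_leaf[vm_to_host[dst_vm]]
    if src_leaf == dst_leaf:
        return [src_leaf]                 # same rack: traffic stays on one leaf
    return [src_leaf, "spine", dst_leaf]  # cross-rack: leaf-spine-leaf

inventory_vms = {"web-01": "esxi-01", "db-01": "esxi-02"}
inventory_leaves = {"esxi-01": "leaf-1a", "esxi-02": "leaf-2a"}
print(fabric_path(inventory_vms, inventory_leaves, "web-01", "db-01"))
# → ['leaf-1a', 'spine', 'leaf-2a']
```

A fabric controller with a global view can answer this kind of query directly, which is what gives both network and VM admins a shared picture of where traffic between two VMs actually flows.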


HW Vendor Choice: In addition, Big Cloud Fabric is a pure software solution that works with third-party open networking (whitebox or britebox) switches, including Accton and Dell.  This fully matches SDDC principles, where SDDC software is decoupled from the underlying “server” hardware.  Data Center operators now have complete vendor choice for hardware and software, while retaining a best-of-breed supply chain and support from trusted brands such as Dell.


Ready to test drive BCF at the speed of SDDC?


Prashant Gandhi

VP Products and Strategy


Additional Resources: