7 Signs of DC Network Transformation – Sign #2: Core & Pod Design
The 2nd concrete sign of DC network transformation is a deployment topology that leverages core & pod design principles. A pod is a self-contained unit of compute, network, and storage; in our context, “pod” is used interchangeably with “pod network.” Most traditional network designs are based on a 20-year-old architectural premise: an N-tier hierarchical topology (see diagram below). As data center traffic shifted east-west, the N-tier architecture led to highly oversubscribed networks that impacted application performance, created resiliency nightmares, and resulted in long, complex troubleshooting workflows. Cloud Giants figured out that a different model was needed in order to:

  • Deploy high-performance, non-oversubscribed (full bisection bandwidth) networks
  • Bring operational simplicity and resiliency
  • Ensure scale-out by design
  • Innovate faster, at a per-pod granularity
  • Supercharge vendor choice

Cloud Giants address these challenges by adopting two architectural principles (see diagram below):

  • Pod design based on a leaf/spine (Clos) fabric, which provides strong multipathing to eliminate oversubscription, deliver traffic predictability, and ensure resiliency. A fabric topology scales out naturally, simply by adding a spine or leaf switch (see the sizing sketch after this list)
  • Pod interconnection via a Core (sometimes referred to as the “Spine Core”), allowing DC-wide scale-out simply by adding more pods
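To make the leaf/spine math concrete, here is a minimal sizing sketch in Python. The port counts and speeds are illustrative assumptions, not figures from any specific vendor’s design.

    # Sizing sketch for a leaf/spine (Clos) pod. All port counts and speeds
    # below are illustrative assumptions, not vendor specifications.

    def oversubscription_ratio(server_ports, server_speed_gbps,
                               uplink_ports, uplink_speed_gbps):
        """Downstream (server-facing) bandwidth divided by upstream
        (spine-facing) bandwidth for a single leaf switch."""
        downstream = server_ports * server_speed_gbps
        upstream = uplink_ports * uplink_speed_gbps
        return downstream / upstream

    # A leaf with 32 x 25G server ports and 8 x 100G uplinks to the spines:
    # 800G down / 800G up = 1.0, i.e. non-oversubscribed (full bisection bandwidth).
    print(oversubscription_ratio(32, 25, 8, 100))  # -> 1.0

A ratio above 1.0 means server-facing bandwidth exceeds spine-facing bandwidth, i.e. the leaf is oversubscribed; adding uplinks or spines drives the ratio back toward 1.0, which is exactly the scale-out lever the Clos design provides.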

[Figure: legacy N-tier hierarchical design vs. core & pod design]

Is there an example of a Cloud Giant’s data center with Core & Pod design?

Certainly: Facebook’s Altoona data center. See: Introducing data center fabric, the next-generation Facebook data center network. It leverages a bounded pod design with 48 leaf switches and 4 spine switches, as shown in the diagram below.

[Figure: Facebook Altoona pod design – 48 leaf switches interconnected by 4 spine switches]

Similarly, Google has innovated in multi-tier Clos fabric architectures for over a decade and has disclosed its most recent, 5th-generation Jupiter design; see: ONS-2015 talk and Jupiter Rising SIGCOMM’15 paper. A summary version can be found here.

Does Sign #2 pass the “Principles of Transformation” test?

  • Must be based on a technology/architectural construct: check, it is clearly an architecture
  • Has to be revolutionary: check, it is well established that core & pod is far superior to the legacy n-tier design
  • Needs to have market traction: check, several thousand customers have already adopted this design in production data centers

Are mainstream IT organizations adopting core-and-pod design?

Most, if not all, data center refreshes are moving to core & pod designs. Even new application builds, which could be as few as 4 racks, are often deployed as pods connected to an existing L3 core. Market research firms are also guiding customers toward Cloud Giants’ data center network designs, i.e., making networks more “Googley.” Gartner estimates that 40% of global enterprises will have a web-scale networking initiative by 2020.

How does pod-based design help bound the DC failure domain?

Each pod provides a defined failure domain (or blast radius), so a failure inside one pod does not cause a data-center-wide outage. The size of the pod determines the blast radius. For example, a 16-rack pod would have 32 leaf switches (dual leaves per rack) and 2 to 4 spine switches (depending on the level of oversubscription). This design supports 600+ servers (40 servers per rack) and 18K+ VMs (30 VMs per server); the sketch below works through the arithmetic. Mainstream IT organizations typically prefer a blast radius of 200 to 500 servers, or 5K to 10K VMs.
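A minimal sketch of that arithmetic, using the per-rack figures stated above (dual leaves per rack, 40 servers per rack, 30 VMs per server):

    # Blast-radius sizing for a pod, using the assumptions stated in the text:
    # dual leaf switches per rack, 40 servers per rack, 30 VMs per server.

    def pod_blast_radius(racks, servers_per_rack=40, vms_per_server=30):
        leaves = racks * 2                    # dual leaf switches per rack
        servers = racks * servers_per_rack
        vms = servers * vms_per_server
        return leaves, servers, vms

    # The 16-rack example from the text:
    print(pod_blast_radius(16))  # -> (32, 640, 19200): i.e. 600+ servers, 18K+ VMs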

Can innovation velocity and vendor choice be achieved with core-and-pod design?

Because each pod is a self-contained unit of network, it can be designed with the latest technology available at the time. Different pod versions, potentially built with different networking vendors’ products, can reside in the same data center (see earlier diagram), connected to the same core network. A new pod, built with the innovations of the day, is easily inserted into a brownfield data center. The core-and-pod design also makes it easy to retire a pod without impacting the rest of the data center’s applications. No more 7-year architectural lock-in of legacy n-tier designs, which has led to a massive innovation deficit and made networking a roadblock.

How is Big Switch adopting pod-based architecture?

Big Switch’s data center switching fabric – Big Cloud Fabric (BCF) – is designed from the ground up to be a pod fabric built with open networking switches. The BCF design is part and parcel of Big Switch’s Cloud-First Networking philosophy, where cloud principles are first principles. A BCF deployment can start with a single leaf switch and scale out to a very large pod fabric of 128 leaf switches and 12 spine switches. BCF is also 100% zero touch – simply rack and cable the switches and turn on the power. BCF auto-installs NOS software on all switches in the fabric, auto-forms the L2/L3 fabric, and auto-configures the switches so they are ready to forward traffic. Even SW upgrades of an entire 140-switch fabric can be completed in ~15 minutes in a hitless manner.
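To make the “hitless” idea concrete, here is an illustrative sketch of a rolling upgrade that leans on Clos multipathing. This is a generic pattern, not Big Switch’s actual upgrade procedure; the switch names and the upgrade() placeholder are hypothetical.

    # Illustrative rolling-upgrade sketch (not BCF's actual procedure).
    # Clos multipathing means a redundant peer keeps carrying traffic while
    # one member of each group is being upgraded, so the fabric stays up.

    def upgrade(switch):
        print(f"draining, upgrading, and restoring {switch}")  # placeholder steps

    def rolling_upgrade(spines, leaf_pairs):
        for spine in spines:       # one spine at a time; the rest keep forwarding
            upgrade(spine)
        for a, b in leaf_pairs:    # one leaf per dual-leaf rack pair at a time
            upgrade(a)             # peer leaf b covers the rack ...
            upgrade(b)             # ... then the roles swap

    # Example: a 4-spine, 16-rack (dual-leaf) pod, as in the sizing discussion above.
    spines = [f"spine-{i}" for i in range(4)]
    leaf_pairs = [(f"leaf-{r}a", f"leaf-{r}b") for r in range(16)]
    rolling_upgrade(spines, leaf_pairs)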

Net-net: core-and-pod design is a must for any new network deployment in the data center. It would be hard to justify data center network transformation without it.


Care to guess Sign #3?


Prashant Gandhi
VP & Chief Product Officer

Prashant is responsible for Big Switch's Cloud-First Networking portfolio and strategy, including product management, product marketing, technology partnerships/solutions, and technical marketing. Prashant has been instrumental in the product strategy and development of the Big Cloud Fabric and Big Monitoring Fabric products. Additionally, Prashant is responsible for the Big Switch-led open-source initiative, Open Network Linux (ONL), which accelerates adoption of open networking and HW vendor choice. You can connect with Prashant on LinkedIn.