It’s hard to log in to <insert any social media platform here> these days and not see something about containers. Having worked with quite a few customers on container networking projects over the past year, I wanted to dispel some myths around containers and, more specifically, around container networking.

Myth: Containers exist only in the cloud

Almost every deployment example online focuses on deploying containers in the cloud, which would have you believe that containers are meant only for enterprises that already run a cloud deployment on traditional virtual machines. While running containers on virtual machines is probably unavoidable in the public cloud, for on-premises deployments the best way to get the most out of limited compute and memory resources is to deploy containers on bare metal.

Myth: Containers will solve world hunger

OK, maybe I overstepped a bit, but I’m glad to see you are still with me. There will always be workloads that are better off running in a VM or natively on bare metal, but for most workloads, containers are a great option for building and scaling your application. In my experience, most companies building greenfield private clouds are going with a container-first approach: containerize when you can, use virtual machines when you must. What this implies, however, is that your infrastructure is full of small workloads that constantly appear and disappear, and your security policies and network infrastructure need to be built with that churn in mind.

Myth: Container Networking is tough and complicated

You could make it really complicated, but you don’t have to. First, let’s go through the challenges around container networking. As mentioned above, containers are ephemeral, so there are really only two hard requirements for container networking: providing an IP address for each container and attaching it to the right segment/VLAN in your network.

(1) IP address assignment: In typical data center deployments, a DHCP server takes care of assigning IP addresses. However, since containers are spun up and torn down so quickly, it is better to make this functionality part of your network fabric itself. A controller-based fabric makes this even easier by providing a single point of management that you can interact with through REST APIs.

(2) Assigning the right VLAN and network information: Although both overlays and a combined physical+virtual fabric address this requirement, the speed and agility with which they do so varies greatly. With an overlay approach, the physical fabric is unaware of changes happening at the virtual layer, which is a major hurdle to providing end-to-end visibility. In addition, with overlays, exposing pods/containers externally still requires additional configuration on the physical network fabric. An integrated physical and virtual deployment helps with both of these issues by providing a single pane of glass for visibility as well as configuration changes.
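To make the two requirements above concrete, here is a minimal sketch of the request bodies an IPAM plugin might send to a controller-based fabric over its REST API. The field names, endpoint, and lease semantics are assumptions for illustration, not any real controller’s API.

```python
import json

def ipam_request(container_id, segment):
    """Build a hypothetical body to lease an IP for a new container.

    A short lease makes sense here because, as noted above,
    containers are ephemeral.
    """
    return {
        "container": container_id,
        "segment": segment,       # logical network the container joins
        "lease_seconds": 300,     # short lease for short-lived workloads
    }

def attachment_request(container_id, vlan_id, host_port):
    """Build a hypothetical body that attaches the container's
    interface to the right VLAN/segment on the fabric."""
    return {
        "container": container_id,
        "vlan": vlan_id,
        "port": host_port,        # e.g. the leaf port of the host
    }

ip_req = ipam_request("web-7f4b9", "frontend")
att_req = attachment_request("web-7f4b9", 120, "leaf1:eth12")

# With a single controller, both calls target one endpoint, e.g.
# requests.post(f"{controller}/api/v1/ipam", json=ip_req)  (assumed URL)
print(json.dumps(ip_req))
print(json.dumps(att_req))
```

The point is not the specific fields but that one API surface handles both IP assignment and segment attachment, instead of a DHCP server plus separate switch configuration.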

Myth: Container networking requires overlays

When it comes to container networking, there are two high-level approaches. The first is an overlay/underlay model, where you assume that the underlying network hardware configuration is static and delegate container networking to software that is independent of the underlying infrastructure. Most vendors that have built products in this area have taken this approach because they were solving a problem for customers deployed primarily in the cloud. As containers start to get attention from large enterprises with hybrid or on-premises-only deployments, this doesn’t always make sense: the level of expertise required to troubleshoot and maintain this type of architecture is fairly high.
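One concrete example of the troubleshooting burden mentioned above: VXLAN, the most common overlay encapsulation, wraps every frame in extra headers, and MTU mismatches caused by that overhead are a classic overlay debugging exercise. The arithmetic is simple:

```python
# Per-packet overhead added by VXLAN encapsulation (standard header sizes).
OUTER_ETHERNET = 14  # bytes, outer Ethernet header
OUTER_IP = 20        # bytes, outer IPv4 header
OUTER_UDP = 8        # bytes, outer UDP header
VXLAN_HEADER = 8     # bytes, VXLAN header itself

overhead = OUTER_ETHERNET + OUTER_IP + OUTER_UDP + VXLAN_HEADER
inner_mtu = 1500 - overhead  # payload MTU left on a standard 1500-byte link

print(overhead)   # 50 bytes of encapsulation overhead
print(inner_mtu)  # 1450 bytes left for the container's traffic
```

An operator who forgets to lower the container-side MTU (or raise the underlay MTU) sees large packets silently dropped, which is exactly the kind of cross-layer issue an overlay hides from the physical fabric.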


The other, and arguably better, approach is to deploy a network fabric that can be managed as one, which brings us to a unified P+V (physical+virtual) fabric. This approach uses a single network controller that manages not only your physical network hardware but also all the components required to provide networking for other workloads such as VMs and containers. This gives operators end-to-end visibility and makes it easy to troubleshoot when there is a network outage.

Myth: Containers are not ready for production

While it’s certainly a new concept for some, containers themselves have been around since 2008. Docker made it easy to containerize applications, and Kubernetes made it easy to orchestrate them at scale. As with any software, there will be a learning curve and issues, but a robust open source community and the right team of engineers have gone a long way for enterprises running containers in production. Building an infrastructure that can keep up with any type of workload, whether VMs, bare metal, or containers, should not be a herculean task. After all, containers are just another workload on the network.


Namitha Kumar
Technical Solution Architect at Big Switch Networks