Why OpenFlow Hardware Control Matters in the Data Center
Over the past week, there has been some discussion about whether OpenFlow for physical switches matters in the data center. Some would have you believe that OpenFlow control of hardware switches is not only a bad idea, but also against the interests of users as well as switch vendors: http://bit.ly/TWQyus
If you feel confused, you are not alone.
Over the past year, at Big Switch, we have talked to over a hundred customers who operate data centers. About half of the deployments and pilots we have done include physical switches that support OpenFlow. The majority of network architects within these firms had a very clear message for us: they want a converged Software-Defined Networking solution that supports both physical and hypervisor switches. This is not surprising. Anyone who has run a data center will scoff at the idea of yet another pane of glass to manage the network. If I have to debug high latencies between end hosts, would I rather have two separate consoles or an integrated one? And if I want to program my network to react to latencies and mitigate the issue, would I rather have one API or two? The promise of Software-Defined Networking (SDN) is to operate both physical and hypervisor switches through a single framework.
In the short term, there are other important reasons for hardware support. I recently had a conversation with a customer who wanted to integrate 200 physical firewalls into the private cloud they were deploying in their enterprise. With even very simple OpenFlow support in the top-of-rack (ToR) switches, this is very doable. Without it, it is all but impossible.
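To make the firewall example concrete, here is a minimal sketch of how a controller could steer a tenant's traffic through a physical firewall hanging off a ToR port. The match field name follows OpenFlow 1.0 conventions, but the rule-builder function, the port number, and the subnet are all illustrative assumptions, not a real controller API:

```python
# Hypothetical service-insertion sketch: build an OpenFlow-style flow entry
# that redirects a tenant subnet's traffic out the ToR port where a physical
# firewall is attached. Not tied to any real controller library.

FIREWALL_PORT = 12  # assumed ToR port facing the physical firewall

def firewall_redirect_rule(tenant_subnet: str) -> dict:
    """Flow entry steering packets from tenant_subnet to the firewall port."""
    return {
        "match": {"nw_src": tenant_subnet},               # source-IP match
        "actions": [{"type": "OUTPUT", "port": FIREWALL_PORT}],
        "priority": 100,                                  # above default rules
    }

rule = firewall_redirect_rule("10.1.0.0/16")
print(rule["actions"][0]["port"])
```

With OpenFlow in the ToR, inserting a firewall becomes a matter of pushing rules like this; without it, each insertion means physically re-cabling or re-VLANing the path.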
And then there are organizations that don't virtualize their servers. Many SaaS companies and most of the largest Internet companies don't run multiple VMs per server, and in many cases have no hypervisor switch at all. For these scenarios, OpenFlow control of physical switches is really the only practical solution, and it functions very well in real deployments.
So why are we seeing these arguments? These are, really, tacit debates between software vendors and hardware vendors. Very recently a customer told me that, for their network roadmap, they were evaluating Cisco vs. VMware. That is not synergy, but competition. While it is tempting to imagine a world where all of the logic for networking, and thus all of the value, moves into software, that won't happen any time soon. We still need hardware switches.
And we need hypervisor switch support. This is the one part of the recent assertions that is valid. In any heavily virtualized environment you do need hypervisor switch support; you can't do it with hardware alone. Consider a deployment with 10K VMs per rack, and we have seen more. With just a few thousand TCAM entries available at the top of rack, there is no meaningful way to set per-VM ACLs in hardware. But this doesn't mean hardware switches become unnecessary. For QoS, for separating network types for compliance or performance, for inserting physical devices, and for monitoring or securing virtual topologies, you still need hardware support.
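The TCAM argument is simple arithmetic. The sketch below uses the article's 10K-VMs-per-rack figure; the 4,000-entry TCAM size is an assumed stand-in for "a few thousand entries," not a specific switch datasheet:

```python
# Back-of-the-envelope check: per-VM ACLs cannot fit in a ToR switch TCAM.
# VMS_PER_RACK comes from the article; TCAM_ENTRIES is an assumed example
# of "a few thousand" hardware rule slots.

VMS_PER_RACK = 10_000
TCAM_ENTRIES = 4_000

def acl_entries_needed(vms: int, rules_per_vm: int = 1) -> int:
    """Minimum TCAM entries if every VM gets its own ACL rule(s)."""
    return vms * rules_per_vm

needed = acl_entries_needed(VMS_PER_RACK)
print(needed, needed > TCAM_ENTRIES)  # even one rule per VM overflows the TCAM
```

Even at a single rule per VM the rack needs 10,000 entries against roughly 4,000 slots, which is why fine-grained VM-level policy has to live in the hypervisor switch while the hardware handles the coarser-grained functions listed above.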
The overall trend we are seeing is that for a first small pilot, hypervisor-only solutions are a great way to try out SDN. But as anyone deploying SDN at scale will find out, for large deployments SDN support in physical switches creates tremendous additional value and removes many of the roadblocks to scaling networks in private cloud architectures. As SDN matures and deployments get larger, we expect to see a lot of OpenFlow-enabled hardware topologies.
– Guido Appenzeller