SDN and network virtualization: A reality check

The Software Defined Networking movement is still evolving, but profiles of SDN users are becoming more clear and we're getting a bead on some of the common evaluation criteria companies are using to gauge how to go forward. We also have a sense of when companies expect to start the process in earnest. 

Complicating the analysis, however, is the fact that there are a variety of SDN solutions available and a variety of ways to consume them. Some providers, for example, focus on the dynamic movement of virtual workloads by leveraging the server hypervisor and techniques such as encapsulation and tunneling, while others strive to achieve software control of the network by using the OpenFlow protocol to manipulate the flow tables in switches.
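
To make the flow-table approach concrete, here is a minimal, self-contained sketch in plain Python. It is not tied to any real controller framework, and all class and method names are purely illustrative; it simply shows the basic OpenFlow idea that a centralized controller decides forwarding policy and pushes match/action entries into the flow tables of the switches it manages.

```python
from dataclasses import dataclass, field

@dataclass
class FlowEntry:
    """One match/action rule in a switch's flow table."""
    match: dict          # e.g. {"dst_ip": "10.0.0.2"}
    action: str          # e.g. "output:2" or "drop"
    priority: int = 0

@dataclass
class Switch:
    """A forwarding element whose behavior is defined entirely by its flow table."""
    name: str
    flow_table: list = field(default_factory=list)

    def install(self, entry: FlowEntry) -> None:
        # The switch does not decide policy; it just stores what the controller sends.
        self.flow_table.append(entry)
        self.flow_table.sort(key=lambda e: e.priority, reverse=True)

class Controller:
    """Centralized control plane: computes paths and programs every switch on them."""
    def __init__(self, switches):
        self.switches = switches

    def program_path(self, dst_ip: str, hops) -> None:
        # hops is a list of (switch, out_port) pairs along the chosen path
        for switch, out_port in hops:
            switch.install(FlowEntry(match={"dst_ip": dst_ip},
                                     action=f"output:{out_port}",
                                     priority=10))

# Usage: the controller, not the individual switches, determines how traffic flows.
s1, s2 = Switch("s1"), Switch("s2")
controller = Controller([s1, s2])
controller.program_path("10.0.0.2", [(s1, 2), (s2, 1)])
print(s1.flow_table)
```

The overlay approach works differently: it leaves the flow tables of the physical switches largely untouched and instead builds tunnels between virtual switches in the hypervisors, which is why it pairs naturally with workload mobility.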

VMware and Nuage Networks, for example, are in the first camp, while NEC is an example of a company pursuing the latter approach, which closely resembles the model put forward by the Open Networking Foundation (ONF) with its emphasis on the centralization of network control. HP is an example of a company that is attempting to bridge the gap by integrating its SDN solution with VMware's tools.

Cisco is in a class of its own. The company has a number of solutions that it refers to as SDN, and some of them fit into these two SDN approaches, but Cisco differs in that its Application Centric Infrastructure (ACI) leaves some of the control functionality in the switches and routers and leverages dedicated hardware.

Regardless of the variations on the supply side, four distinct types of SDN consumers are emerging. The first type is hyperscale shops. Google, for example, built its own SDN switches so it could run an SDN backbone to interconnect its data centers. But obviously there is a huge gap between what Google can do and what everybody else can do.

Network service providers, such as Pertino and AT&T, are another class of SDN consumer. Pertino launched a cloud-based Network as a Service product based on SDN more than a year ago, and in September 2013 AT&T announced its Domain 2.0 initiative, a key component of which is a commitment to implement SDN going forward.  Given the interest that providers such as AT&T have shown in SDN, combined with the fact that services such as Pertino's are already available, it is likely that the first taste of SDN that average IT shops get will be as consumers of services.

Large financial firms such as JPMorgan and Goldman Sachs represent a third class of potential SDN consumer. Some of these firms have been active in driving the agenda of the ONF, and many have conducted trials of SDN and related technologies. In March, Matthew Liste, managing director of Core Platform Engineering at Goldman Sachs, said at the Open Networking Summit conference in Santa Clara, Calif., that he is optimistic about SDN but wouldn't divulge the firm's specific plans.

It is likely, however, that these very large enterprise shops will drive the adoption of SDN in the short- to mid-term.  In fact, given the amount of attention some of these financial firms have paid to SDN, it is a safe bet that some will start to deploy SDN in production networks sometime in 2015. 

The remaining class of potential SDN consumers is everybody else: companies of all sizes and in virtually all industries. Market research suggests this class of consumer has deployed very little SDN to date. I was the keynote speaker at the recent Network World Open Network Exchange (ONX) conference in Chicago and tested that belief by asking the 200 attendees how many had at least a modest SDN implementation. Only a handful of hands went up.

Evaluation criteria

Given the great interest in SDN but the lack of deployment to date, here are some key criteria you can use to begin evaluating SDN options.

The key (and most obvious) criterion is to identify the problem or problems IT is attempting to solve. For example, if IT wants to support the dynamic movement of workloads, then a network virtualization overlay is worth considering. (See a description of the differences between the approaches.) However, this approach can't help IT respond to some other challenges, such as being able to centralize the provisioning of physical switches and routers.

Other important criteria to consider are the role of hardware vs. software and the degree to which control is centralized. VMware has been quite clear that it doesn't see a role for specialized hardware in the data center. Cisco is on the other side of this discussion because, as noted, ACI uses specialized hardware.

Regarding where the network control functions should be located, the majority of SDN solutions are based on centralizing control. An exception to that is Cisco's ACI, which leaves some control functionality in the switches and routers. 

The best way for IT organizations to use these two criteria to evaluate SDN solutions is to ask the vendors for the rationale behind how they designed their solutions. For example, what are the advantages of having specialized hardware other than increased performance?  How do IT organizations balance the fact that centralizing control simplifies the management of the switches and routers, but may result in a performance bottleneck?

A key criterion for evaluating any solution, but particularly a solution based on centralized control, is scalability. It is important to understand how many flow set-ups per second a solution can realistically support and how that changes if, instead of a single controller, there is a cluster of controllers.  It is also important to understand how many switches and routers a solution can realistically support. 
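
As a back-of-envelope illustration of why those numbers matter, here is a small sketch with purely hypothetical figures (the switch count, per-switch flow rate, and controller capacity below are made up for the arithmetic, not vendor data). It estimates how many controllers a deployment would need once the aggregate rate of new flows is compared against what a single controller can realistically handle.

```python
import math

def controllers_needed(num_switches: int,
                       new_flows_per_switch_per_sec: float,
                       setups_per_controller_per_sec: float,
                       headroom: float = 0.7) -> int:
    """Estimate cluster size: total flow set-up load divided by the usable
    per-controller capacity (rated capacity derated by a headroom factor)."""
    total_load = num_switches * new_flows_per_switch_per_sec
    usable_capacity = setups_per_controller_per_sec * headroom
    return math.ceil(total_load / usable_capacity)

# Hypothetical example: 500 switches, 200 new flows per second on each,
# and a controller rated at 50,000 flow set-ups per second.
print(controllers_needed(500, 200, 50_000))   # -> 3
```

Keep in mind that clustering controllers adds state-synchronization overhead, so aggregate capacity rarely scales linearly with cluster size, which is exactly the kind of detail worth pressing vendors on.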

A criterion that also gets a lot of attention is whether an SDN solution is open.

Unfortunately, the term open is used in a variety of different ways, so the answers to this question can be confusing.  For example, some vendors refer to their product as being open if it is based on open source software or if it is based on a specification that they themselves developed and published. 

A somewhat more common use of open is support for industry standards such as the OpenFlow protocol, or if the solution is based on a specification that was developed by multiple vendors. 

While I think open is important, what IT organizations really need is interoperability. Interoperability is critical because, in the majority of cases, IT organizations will not get a complete solution, including business applications and L4-L7 functions, from a single vendor. Because of this, you need a high level of assurance that the various components of the solution will interoperate smoothly.

And, again, the criterion that matters most to you will depend on your environment and goals. If your primary objective is to find a way to dynamically move virtual workloads, you'll probably reach for a network virtualization tool that leverages the server hypervisor. But if you want to be able to control all of the virtual and physical elements in your network, you will probably lean toward a model that takes more hardware management into account.

A question of timing

Of course, early adopters in large financial shops have probably worked all of this out by now, but when should other IT organizations start to get on board? When will the technology be ready for what Geoffrey Moore called the "early majority" in his book "Crossing the Chasm"?

Most articles about technology adoption say one indicator that the chasm has been crossed is when 15% of companies are using the new technology. Other things I look for include a high probability that the technology will work as advertised when implemented, without requiring a boatload of intervention, and limited surprises relative to scalability and security. Another key indicator that a technology has crossed the chasm is that IT organizations have roughly the same ability to manage the new technology as they do traditional technologies.

Since few IT organizations will implement SDN in the near term without doing a Proof of Concept (POC), I asked attendees at the recent Network World ONX show how many expected they would do an SDN POC in the next 12 months. About 25% of the attendees raised their hand.

Since the ONX attendees took two days out of their workweek to attend the conference, they likely work for organizations that have more interest in SDN than the average IT organization. And even the faithful at the conference will face shifting priorities, meaning some will undoubtedly not get around to a POC in the next year.

Because of these factors, my best guess is that the percentage of IT organizations that will actually conduct a POC in the next 12 months is around 10%. Of course, those companies that successfully conduct a POC will then move to a limited trial before they implement SDN in a production network.

I followed up the question about POCs by asking attendees a series of questions about the likelihood that their organization will have implemented SDN sometime in the next three years, either in their data center LANs, their WANs or their campus networks. I was impressed by the fact that almost all of the attendees believe that within three years they will have implemented SDN in their data center LANs. Roughly half of the attendees believe that within three years they will either have implemented SDN in their WAN or they will be using a WAN service that leverages SDN, and approximately a third believe they will have implemented SDN in their campus networks. 

Since overlay-based network virtualization solutions are easier to implement than the other two classes of SDN solutions because they are independent of the underlying physical network, I believe this class of SDN solution will likely cross the chasm in 2016. The other two classes will likely cross late in 2016 or sometime in 2017.

This story, "SDN and network virtualization: A reality check" was originally published by Network World.
