The world’s first software-defined network (SDN) is one step closer to being launched, as more than 300 network engineers gather at Stanford University this week to prepare an OpenFlow-based 100G Ethernet SDN within the research network known as Internet2.
Scientists and academics at the Summer 2012 ESCC/Internet2 Joint Techs conference will put the finishing touches on the new network infrastructure, known as the Innovation Platform.
The goal of the Innovation Platform is to provide a research sandbox in which researchers in academia and big data initiatives can experiment in order to improve networking software and protocols for the Internet itself. This fits squarely within the mission of Internet2, which was founded 16 years ago to do exactly that.
Internet2’s purpose is often misunderstood by those in the IT community: Many people think of it as some sort of separate network that exists completely outside the infrastructure of the Internet for the sole use of academics who were less than thrilled about sharing their Internet with commercial interests and the public at large in the 1990s.
Like any stereotype, there is some truth to this description. As traffic on the Internet exploded when the World Wide Web was opened to commercial use, the scientists and academicians who helped build the Internet in the first place soon saw that bandwidth congestion from all this new traffic was seriously going to crimp their data style.
Thus began the integration of new infrastructure within the Internet in 1996, and the beginnings of Internet2, which was to provide big bandwidth to support the research and educational communities, as well as build new technologies to improve the Internet.
While Internet2 utilizes a very high-capacity backbone, the consortium maintains that it is not a physically separate network. The tubes, as it were, are the same as the Internet’s – only the routers and switches are different.
The work being done at this week’s conference is yet another milestone in a long list of achievements for the network, including the deployment of IPv6 in 2000, a 1TB/hour data transfer from Caltech to CERN in 2003, and planned support for 8.8Tbit/sec of capacity in 2013.
Progress for OpenFlow
Specifically, the Innovation Platform relies on OpenFlow, an open standard used by “commercial Ethernet switches, routers and wireless access points [providing] a standardized hook to allow researchers to run experiments, without requiring vendors to expose the internal workings of their network devices,” according to the OpenFlow About page.
When formally launched, the new software-defined network should be able to handle big data transfers much faster, as network scientists will be able to get into the guts of every switch and router in the network to optimize data transfers that right now can bog down big data analysis. Think of it as an elastic network, shifting to meet the demands of the traffic within.
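The key idea behind that elasticity is OpenFlow's flow table: each switch holds a list of prioritized match/action rules that a controller can install or change on the fly, and packets that match no rule are punted to the controller. The sketch below illustrates that matching model in plain Python; it is a conceptual simulation only, not a real controller API, and every name in it (`FlowRule`, `FlowTable`, the field names) is hypothetical.

```python
# Conceptual sketch of OpenFlow-style flow-table matching.
# Not a real OpenFlow or controller API; all names are illustrative.
from dataclasses import dataclass


@dataclass
class FlowRule:
    priority: int   # higher priority wins, as in an OpenFlow flow table
    match: dict     # header fields to match, e.g. {"dst_ip": "10.0.0.2"}
    actions: list   # e.g. ["output:2"] to forward out port 2


class FlowTable:
    def __init__(self):
        self.rules = []

    def add_rule(self, rule: FlowRule):
        # Keep rules sorted so the highest-priority match wins.
        self.rules.append(rule)
        self.rules.sort(key=lambda r: r.priority, reverse=True)

    def lookup(self, packet: dict) -> list:
        for rule in self.rules:
            # A rule matches if every field it names agrees with the packet.
            if all(packet.get(k) == v for k, v in rule.match.items()):
                return rule.actions
        # Table miss: OpenFlow sends the packet to the controller,
        # which can then decide and install a new rule.
        return ["controller"]


table = FlowTable()
table.add_rule(FlowRule(priority=10,
                        match={"dst_ip": "10.0.0.2"},
                        actions=["output:2"]))
table.add_rule(FlowRule(priority=100,
                        match={"dst_ip": "10.0.0.2", "tcp_port": 80},
                        actions=["drop"]))

print(table.lookup({"dst_ip": "10.0.0.2", "tcp_port": 22}))  # ['output:2']
print(table.lookup({"dst_ip": "10.0.0.2", "tcp_port": 80}))  # ['drop']
print(table.lookup({"dst_ip": "10.0.0.9"}))                  # ['controller']
```

Because the rules live in software and can be rewritten by a central controller, the network's forwarding behavior can be reshaped without touching switch firmware, which is what lets researchers "get into the guts" of every device.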
The Innovation Platform will also serve as an important test bed for OpenFlow, which has been criticized for being “still too immature to be usable.”
Widespread deployment of 100G networks across the Internet at large is vital to the success of big data and cloud computing, and the work of the Internet2 consortium will be a big part of making that a reality.