If there were a real-time tag cloud for the RSA conference this year, three words would be in big bold letters: Security (of course), Cloud, and Virtualization. Paul Congdon, from HP's ProCurve Networking group, gave us a view into the not-so-distant future where servers, like good house guests, knock before entering. In this case, it's the link they request, and to get it they will properly announce themselves and their intentions, allowing the host to prepare to accommodate them.
This capability is a linchpin in removing the process bottleneck when provisioning new services in the data center. For most organizations, the network is manually configured. To keep up with the pace of virtual machine provisioning, the network needs to become "plug and play".
Complexity Means Controls
The network is in a unique position: it is both a "pipe" and a "control", where it needs to know which communications go where and plays the role of traffic cop.
This means opening ports between servers, controlling traffic, and setting up monitors to make sure traffic is optimized. When things change, especially when a new service is requested, the configuration changes as well. Today, this is handled by human processes and controls that keep the network in sync with the applications and the servers that host them.
In the future, there is an opportunity to move toward auto-configuration, or even smarter handshakes.
In essence, to oversee this process, a directory of resources, or inventory, would exist that allows the network to "know" what is in place within it. This becomes a new control point for the data center, and a resource to the network.
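As a rough sketch of the idea, such a directory might simply map each server to the network policy it should receive. All of the names and fields below are our own illustrative assumptions, not any shipping product's API:

```python
# Hypothetical sketch of a data-center resource directory: a simple
# registry the network could consult to "know" what is in place.
# Names, fields, and policy values are illustrative assumptions only.

from dataclasses import dataclass
from typing import Optional


@dataclass
class ServerRecord:
    hostname: str
    vlan: int              # network segment the server belongs to
    needs_encryption: bool  # whether the link should be encrypted


class ResourceDirectory:
    """Registry of servers and the network policy each should receive."""

    def __init__(self) -> None:
        self._records = {}

    def register(self, record: ServerRecord) -> None:
        self._records[record.hostname] = record

    def lookup(self, hostname: str) -> Optional[ServerRecord]:
        return self._records.get(hostname)


directory = ResourceDirectory()
directory.register(ServerRecord("web-01", vlan=10, needs_encryption=True))

record = directory.lookup("web-01")
print(record.vlan)  # → 10; the network can configure the port accordingly
```

With a registry like this as the control point, a switch port could look up an attaching server and provision itself without a human in the loop.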
Solutions in Protocols
802.1X is a technology that has been used for WiFi connections. One reason it was useful in that context is that wireless links are expected to drop and reconnect frequently, so every reconnect is a natural checkpoint; that same link-up moment is now seen as an opportunity for wired physical links as well.
The potential upgrades to 802.1X would enable a richer dialog with the server as it brings up its network connection. This would allow the server to announce itself and its requirements (e.g. encryption) and allow the network to respond appropriately (e.g. set an encryption key). This process can become a big win for configuration management: the server comes up on the network and is provisioned according to policy.
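A toy model of that announce-and-respond dialog might look like the following. To be clear, the message formats, field names, and policy values here are our own illustration of the concept, not the actual 802.1X protocol or its proposed extensions:

```python
# Toy model of the richer link-up handshake described above: the server
# announces its requirements, and the network answers each one with a
# provisioning setting. All message fields and policy values are
# illustrative assumptions, not real protocol definitions.

import secrets


def announce(hostname, requirements):
    """Server side: build the announcement sent when the link comes up."""
    return {"hostname": hostname, "requirements": requirements}


def provision(announcement):
    """Network side: answer each announced requirement with a setting."""
    response = {"hostname": announcement["hostname"]}
    if "encryption" in announcement["requirements"]:
        # The network responds by issuing an encryption key for the link.
        response["encryption_key"] = secrets.token_hex(16)
    if "vlan" in announcement["requirements"]:
        response["vlan"] = 10  # assumed policy: place this server on VLAN 10
    return response


msg = announce("web-01", ["encryption", "vlan"])
settings = provision(msg)
print(sorted(settings))  # → ['encryption_key', 'hostname', 'vlan']
```

The point of the sketch is the shape of the exchange: the server states what it needs once, and the policy logic lives on the network side, so provisioning no longer waits on a manual change ticket.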
All of this reminds us of the advantage a company like Apple enjoys. Having the unique opportunity to control the model from end to end means having the ability to make better tools. We wonder whether natural evolution will give multi-vendor shops a solution that covers all of their IT assets.
What will it take to get to a model-driven data center?
Photo credit: orinrobertjohn