The move to virtualization leaves no stone unturned. It touched the public network via EC2 (and now a host of hosts), formed the Cloud, and fueled a new generation of the Internet. Service orientation is also hitting the data center, and that means things like switches, servers, and disk.
At the core of the virtualization movement is freedom from the physical environment: optimize hardware performance and set the workload free. In the process, the promise of cost savings has set off a storm of re-factoring in the data center.
This is the first in a series of posts looking at areas of the data center and how an openness strategy becomes a driver for winning customers by bringing costs down.
We took a look at the storage landscape through the eyes of Hitachi Data Systems (HDS).
We spoke with Hu Yoshida, CTO of HDS. He gave us a practical overview of how the needle on enterprise costs is being moved down, with a focus on reducing operational costs.
One thing he mentioned was that Hitachi's HDS division was able to grow its storage business in this tough climate, which is remarkable considering the industry tends to track economic spending as a whole.
Yoshida attributes part of this to HDS's decision to deliberately disrupt its own "closed" box solution, in which storage and management are sold together. This gives IT shops more choice and lets them decouple from vendors. He said this was a big decision for the company, as it opened a core business up to more competition.
Protocol vs. API
Yoshida said that the team at HDS decided protocol-level standardization was inevitable, and his team felt HDS needed to be a leader in this opportunity. He cited a customer that uses an HDS head as a management function with NetApp storage behind it, a pattern HDS now supports that several years ago would have required a partnership rather than protocol-level support.
Although in this scenario HDS didn't win "all tiers" of the storage solution, it was able to act as a fabric and keep a customer that "loves NetApp" and loves HDS too.
Mr. Yoshida said that his company decided to fully embrace protocol-level integration with the surrounding systems, instead of releasing only APIs, as a way to allow more competition and cooperation in the ecosystem through technology rather than selective partnerships.
Considering the Tiers
An area of storage that is ripe for cost savings is supporting different types of solutions, e.g. production vs. development, and classes of storage based on the application.
In his blog post, "New Considerations for Tiered Storage," Hu examines the reduction of costs.
Looking under the covers, there are a lot of questions to ask about the details of these strategies. Marketing matters in how solutions are perceived, different types of hardware (for example, Seagate vs. HDS) make a difference for buyers, and to be a leader it is key to have answers across the industry ecosystem.
When we look at the decision to move the cost needle down on operations management rather than on hardware savings, it becomes clear that playing nicely pays. HDS is a company that plays on both sides of the storage spectrum (the management layer and the disk), and its partnerships include relationships with HP (as an OEM) and companies like Cisco and Brocade as go-to-market partners. It is tempting to "hardwire" solutions together, but it is a bigger win when they are instead loosely coupled and partner-ready.
Looking at it from the angle of cost reduction, open standards give us a pivot point: the natural tension created by virtualization carries a promise of binding vendors together to optimize their solutions for the plug-and-play data center.
Does an open, protocol-powered data center reduce total costs? IBM, HP, Cisco, NetApp, Oracle... Hitachi thinks so. Do you?