Oracle CEO Larry Ellison likes to walk into a presentation with raw numbers in hand. When he touts Oracle technology as faster, he prefers to say how much… sometimes inflating the number as he goes along. “A factor of five. A factor of 10! A factor of 15!”
What would Ellison have given for the opportunity to tout a factor of 400,000? Evidently not enough. The notion that a database manager would run nearly six orders of magnitude faster if it resided in DRAM rather than on a hard drive is decades old, just waiting for an Oracle or a Microsoft or a Dell to make it happen. Instead, SAP is the company delivering the dramatic database performance boost.
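The order-of-magnitude claim is easy to sanity-check with back-of-the-envelope latency figures. The numbers below are common textbook approximations, not SAP's benchmarks: a random read on a spinning hard drive costs on the order of 10 milliseconds, while a DRAM access costs on the order of 100 nanoseconds.

```python
# Back-of-the-envelope comparison of per-access latency
# (illustrative textbook figures, not SAP's measurements)
disk_seek_s = 10e-3     # ~10 ms: typical random seek on a hard drive
dram_access_s = 100e-9  # ~100 ns: typical DRAM access

speedup = disk_seek_s / dram_access_s
print(f"DRAM is roughly {speedup:,.0f}x faster per random access")
```

That ratio lands around 100,000x — the same neighborhood as the 400,000x figure quoted for full query workloads, where avoiding many disk seeks per query compounds the gain.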
Most database technology today is an upgraded form of the same database technology from yesterday, and it’s usually in the upgrader’s best interest to support its native platform. So it only makes sense that perhaps the most revolutionary database technology of the last quarter-century should come from a company that only recently acquired Sybase and entered the database industry: SAP, known as a leader in enterprise resource planning.
Quietly, SAP has been building momentum behind a technology called HANA. It’s not a particularly sophisticated concept, which is partly why SAP engineers don’t embellish its descriptions with charts. Essentially, it’s a data-organization system based around memory rather than hard-drive storage arrays. HANA may use storage arrays for backup and redundancy and traditional database engines – like SAP’s own Sybase – for managing data prior to its delivery to memory.
“In our customers’ minds, there is now absolutely no doubt that HANA now represents the database architecture for the future,” said Vishal Sikka, SAP’s CTO, in a press conference Tuesday afternoon. “For OLTP and OLAP [transactional and analytical processes], for structured and unstructured data, for legacy applications and new applications, a single database with, as you can see… pretty much unlimited scale.” From there, Sikka proceeded to tell stories of HANA deployments since it became generally available last June. One customer story independently cited the surreal-sounding 400,000x speed improvement; others cited smaller but still remarkable gains.
“When we built our first HANA box internally with commodity hardware, it cost us something like $700,000 to build it,” related SAP Global Solutions President Sanjay Poonen, in an exclusive interview with ReadWriteWeb. “Within six months, the costs of these [same] parts and commodity hardware to assemble and test it, decreased by 30%. The price of hardware is falling so rapidly.”
The trick for SAP is infusing its in-memory platform into the enterprise with a minimum of upheaval. It sees a place for itself in a new space that could reside alongside businesses’ existing database investments, and even alongside their new investments in Hadoop and “big data” stores. Think of a turbocharger that improves whatever car it’s in, but works so well that customers end up replacing most or all of the car over time anyway.
In fact – and this is perhaps the most impressive aspect of the story – SAP has established a pair of giant investment funds to support the effort. It will literally invest its own money in companies and customers who can develop new means for integrating HANA more seamlessly in existing environments.
“The Acceleration Fund will kick in to help customers fund some of that migration, so we can help ease them off that [older platform]… The database vendors, you’d think, are all set in concrete; they never change. That’s not always the case. Part of the trigger factor is, if you look at the amount of spending that companies have on databases – you add the license cost, the maintenance cost and, the biggest part, the DBA service cost – it’s too big. If you’re spending $100 million on database costs, and by consolidating a lot of this onto a platform that includes HANA, we can take that cost down from $100 million to $70 million, or by 30%, would you spend an extra $5 million or $10 million with us, or what have you, to ensure you get the best, next-generation data platform? That’s the proposition… You can keep adding more bells and whistles, bells and whistles, to a Model T, but at some point in time, a Model T is a 1920s car. It’s not going to scale for what your [travel requirements] need to be.”
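The arithmetic behind that pitch can be sketched directly from the numbers in the quote. One plausible reading (the payback framing is ours, not Poonen's) is as a simple return-on-migration calculation:

```python
# The consolidation math from Poonen's pitch, using his own figures
current_db_spend = 100_000_000  # license + maintenance + DBA services, per year
savings_rate = 0.30             # claimed reduction from consolidating onto HANA
migration_cost = 10_000_000     # the extra SAP spend he floats ($5M-$10M)

annual_savings = current_db_spend * savings_rate
payback_years = migration_cost / annual_savings
print(f"Annual savings: ${annual_savings:,.0f}")
print(f"Migration spend recovered in about {payback_years:.1f} years")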
Not since the 1980s has a database manager been a single application. Today’s database platforms are composed of multiple components, many of which come in shiny, metal boxes. Oracle likes to demonstrate that a platform migration can take as little as five hours, but in practice, the actual time is often measured in years. One factor is the lack of one-to-one correspondence between various manufacturers’ components.
If you consider HANA a different platform rather than an extension platform, this issue is magnified. SAP intentionally designed HANA to have fewer components. On Tuesday, the company’s executive vice president, Steve Lucas, tried to spin this into an advantage.
“Our customers asked us for help,” Lucas told the press. “They said, ‘The existing database players are not innovating quickly enough. We’re not getting from them what we need. Please help us!’ We said, ‘Okay.’ Vendors are pushing more, not less.
“Think about that for a minute: We’ve gone from transactional databases… to needing data warehouses, and on top of those data warehouses we need multidimensional stores and OLAP cubes, and we need data marts, and the list goes on and on, because it makes people more money. SAP is focused on helping customers consolidate those systems, solve many things with one solution – one – and save them money at the same time.”
SAP’s Poonen told RWW: “If you want to build an analytical infrastructure today, check out all that a CIO has to buy. If you wanted to buy an enterprise data warehouse, pay $5 [million] or $10 million to Teradata.” For data reporting, he added, you may go to SAP; for planning and budgeting, you might go to Oracle’s Hyperion. The cost for SAP’s BusinessObjects business intelligence (BI) platform may be another million. For predictive analytics, you might go to SAS and pay another $2 million.
With the exception of BusinessObjects, each of these extra components has a database engine, noted Poonen, that would be redundant to the underlying Teradata layer. “If you use Hyperion Planning on top of Teradata, you take the data out of Teradata, you flatten it and put it in a new structure called Essbase – a multidimensional cube structure. That’s completely redundant, it has all kinds of overlap, and you could ask the question, ‘Why isn’t that Essbase database that serves OLAP, multidimensional processing sitting inside Teradata? Why is it different?’ ‘Because they’re separate companies.’ That’s it. There’s no other reason. So when we looked at this architecture, we said, ‘Collapse all those layers into one in-memory database, because now we know what the destination state of that architecture needs to be.’”
So HANA includes capabilities for reporting, planning and predictive analytics. “It really is kind of a better architecture than Teradata, Hyperion and SAS put together. That’s what we mean by collapsing those layers, and by doing that, you save an enormous amount of costs for the average CIO.”
Throwing the Switch
Because these multicomponent architectures and SAP’s HANA architecture share few “seams” with one another, it would appear on the surface that planning a migration is rocket science. Poonen told RWW that SAP got a leg up on such planning early on by mastering the fine art of migrating data between other companies’ platforms, as part of SAP services.
For installations of Oracle databases atop SAP’s NetWeaver Business Warehouse (BW), the SAP president says his company has already constructed procedures that enable the move from Oracle to HANA, which one customer reports was completed in its entirety in two weeks. “You’re talking in weeks, maybe months, but not years. That’s because data structure is something we know about. And there’s 15,000 of them, not just five.”
For the last quarter-century, software architects have asked themselves how much ammunition and firepower would be required to achieve an upheaval in system architecture – not just storage and capacity, as is addressed by Hadoop, but in the way data is processed inside the hardware. Now we’re seeing SAP amassing numbers in the six-digit range. If SAP manages to attach this firepower to the cloud any time soon, Oracle, HP NonStop, IBM and the Dell/Microsoft combo may need to recompute and reload to compete.