It’s perhaps one of the industry’s great ironies that today’s hottest enterprise technology is yesterday’s leftovers at Google. Hadoop, an open-source implementation of Google’s MapReduce technology, is all the rage in the enterprise as a primary tool for tackling Big Data, and it will probably remain so for years to come.
But at Google, MapReduce may already be too slow and not nearly scalable enough.
This isn’t news. Mike Miller, CEO of Cloudant, made this point in 2012, and Bill McColl, CEO of Cloudscale, made it two years before that. As McColl argued in 2010, “the people who really do have cutting edge performance and scalability requirements today have already moved on from the Hadoop model.”
Which is another way of saying Google lives in the future.
I’ve told the story before about a wealthy friend telling me his money lets him “see into the future a few years” by affording expensive things today that will be cheap for everyone tomorrow. In a similar fashion, Google, not to mention other web giants like Facebook and Twitter, is building things today to solve problems of scale and data processing that will likely be commonplace for mainstream enterprises tomorrow.
Today Google’s data and scale problems are almost magical. Tomorrow they will likely be average.
Which means that peering into the future, whether you’re an entrepreneur or a venture capitalist, may be as simple as watching Google. While Facebook releases much of its code as open source, the place to gaze into Google’s soul is its treasure trove of published research. There you’ll find “Efficient spatial sampling of large geographical tables” and more information on “Spanner: Google’s Globally-Distributed Database.”
You will see, in other words, the future of enterprise computing, otherwise known as Google’s leftovers.