Imagine you are walking in London and want to book a trip to New York at the last minute: You hop online to search for a hotel. The smartphone you are holding helps you find an establishment that looks appealing. You tap a few buttons to send your credit card information.
Unbeknownst to you, however, there is only one room left at this hotel for the night you need. You and another traveler, from Brazil perhaps, are vying for the same space. That person is trying to make the reservation from a laptop at his office.
- Could these two transactions be handled at the exact same time?
- Could the hotel see a consistent view of this globally distributed data?
- Could the transactions be handled in a way that lets the hotel owner identify where each request originated?
- Could the hotel determine which device each purchase was made on, along with any other personal information required?
If the two transactions were being handled by a traditional SQL database with a siloed network configuration and little ability to scale on demand or access data in real time, the short answer would be, “No.”
As Jerry Chen, VP of VMware’s Cloud and Application Services, noted recently: “The monolithic database of the past cannot meet the needs of modern applications.” What is needed, he added, are designs that support high-availability, low-latency applications.
The Influence Of Big Data
Your experience with booking a room is a perfect example of how large amounts of important, fast-moving information – commonly known as Big Data – are creating problems and opportunities for customers and the companies that serve them.
While modern technologies provide faster access to higher volumes of both structured and unstructured information at any time of the day, making sense of the bits and pieces of unrelated data has required a radical shift in the way we organize networks and the Internet. It’s a challenge for even seasoned IT developers and database deployers.
In addition, real-world events such as the Arab Spring and the earthquake and tsunami in Japan have challenged engineers and developers to break through existing constraints and keep hyperconnected systems running using non-traditional methods.
Historically, companies sorted customer and transactional data in neat rows and columns of databases. These databases were built and managed with Structured Query Language (SQL) and stored in isolated, or siloed, networks.
But traditional, hard-disk-drive-based database products built around SQL can no longer handle the deluge of Big Data in a timely fashion. The era of Big Data calls for new strategies and new approaches to queries.
The limitations of traditional SQL-based databases have prompted a slew of innovations, starting with NoSQL distributed database models. These software platforms have caught the eye of decision-makers in the financial services, e-commerce, government and R&D communities.
NoSQL has the capacity to meet their need to process Big Data sets as they happen. However, many NoSQL deployments still fail to cope with scenarios like the one detailed above because they cannot provide consistent speed or scale. Additionally, many NoSQL offerings lack maturity when dealing with data that needs to span multiple datacenters across vast distances.
Other Innovations Add Support
To keep up with the true volume and velocity of information from multiple sources, innovations in cloud computing are being paired with high-performance “in-memory” storage on database appliances created to address the obstacles of managing Big Data.
While cloud-based systems have helped develop online computing and software service delivery systems, IT developers and database deployers have adopted the cloud to take advantage of the ability to scale to a globally distributed marketplace. Meanwhile, the addition of solid-state memory drives to storage devices has given companies an extra layer of quickly accessible data. These configurations allow for more rapid database queries and transactions – and thus the ability to deal with much larger data sets in real time.
But the raw technology is only part of the equation. No matter how powerful that technology becomes, IT developers and database deployers will always need tool sets that help them design for their current Big Data database needs (such as our hotel booking example) and prepare for a future where Big Data must be addressed at multiple levels.
The tools must be customizable and easy to use – no matter the developer’s existing training. These offerings must also be easy to manage and easy to connect with new technology trends.
VMware’s GemFire and SQLFire software are designed to address just those needs of developers and deployers.
GemFire is a database software product that allows for data distribution, data replication, caching and data management at the exact moment the information is needed.
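To make that concrete, here is a minimal sketch of what a booking client might look like against GemFire’s Java caching API. The locator host and port, the region name and the key format are assumptions for illustration only; the point is that a region behaves like a distributed, in-memory map whose atomic operations resolve concurrent requests such as the two travelers above.

```java
import com.gemstone.gemfire.cache.Region;
import com.gemstone.gemfire.cache.client.ClientCache;
import com.gemstone.gemfire.cache.client.ClientCacheFactory;
import com.gemstone.gemfire.cache.client.ClientRegionShortcut;

public class GemFireBookingClient {
    public static void main(String[] args) {
        // Connect to the GemFire distributed system through a locator
        // (host and port here are assumptions for this sketch).
        ClientCache cache = new ClientCacheFactory()
                .addPoolLocator("locator.example.com", 10334)
                .create();

        // A region is a distributed key-value data set held in memory on the
        // servers; this client holds a proxy that forwards operations to it.
        Region<String, String> rooms = cache
                .<String, String>createClientRegionFactory(ClientRegionShortcut.PROXY)
                .create("rooms");

        // putIfAbsent is atomic across the distributed system, so only one of
        // two concurrent booking attempts can claim the same room key.
        String previous = rooms.putIfAbsent("hotel-42:room-101:2012-06-15",
                "traveler-from-london");
        System.out.println(previous == null
                ? "Booked"
                : "Room already taken by " + previous);

        cache.close();
    }
}
```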
SQLFire is a distributed data management platform that functions and performs much like GemFire, but offers a user interface and programming framework familiar to developers who are more comfortable working with SQL interfaces and tools.
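As a hedged illustration of that familiarity, the sketch below handles the same booking through plain JDBC and SQL. The connection URL and port, the table definition, and the PARTITION BY / REDUNDANCY clauses are assumptions meant to convey the flavor of SQLFire’s SQL-style clustering extensions, not exact product syntax.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

public class SqlFireBookingExample {
    public static void main(String[] args) throws Exception {
        // The JDBC URL format and port are assumptions for this sketch;
        // adjust them for your own cluster.
        try (Connection conn =
                 DriverManager.getConnection("jdbc:sqlfire://server.example.com:1527/")) {

            try (Statement stmt = conn.createStatement()) {
                // Ordinary SQL DDL; the trailing clauses illustrate how rows
                // could be partitioned and replicated across the cluster.
                stmt.execute("CREATE TABLE bookings ("
                        + " room_id INT PRIMARY KEY,"
                        + " booked_by VARCHAR(64))"
                        + " PARTITION BY COLUMN (room_id) REDUNDANCY 1");
            }

            // The booking itself is plain parameterized SQL, which is the
            // point: developers keep using the SQL skills they already have.
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO bookings (room_id, booked_by) VALUES (?, ?)")) {
                ps.setInt(1, 101);
                ps.setString(2, "traveler-from-london");
                ps.executeUpdate();
            }
        }
    }
}
```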
Both GemFire and SQLFire help companies move beyond concerns with speed and scale and tackle Big Data applications head on.