Big Memory: An Interview with Terracotta CEO Amit Pandey

I did an interview this morning with Terracotta CEO Amit Pandey about the fascinating new dimensions of in-memory data and its use for search. How excited I was when I learned that the public relations person had been transcribing the interview. It felt like a Mad Men moment!

It turned out well enough to include in its entirety. Pandey covers the fast-evolving world of big memory. The term is catchy, isn't it? Amit certainly gives a pitch here, but we should expect that in a format like this. The interview is insightful in that it shows how fast data can now be accessed and what that means for the new world of real-time intelligence.

Alex Williams: So tell me what you’re seeing in the market these days?

Amit Pandey: Our most interesting thing is BigMemory; I had briefed you on that, right? It has surpassed our expectations, which were high, in terms of how useful it is and how many people want it. Everything starts with the name itself, which stirs customers' imaginations. The choice of name was quite good because it resonates with people's desire to use more memory: you're getting servers that are much larger, with more memory, and most CIO offices feel an imperative to make data more accessible in real time. Most people know that really the only way to do that is to put it in memory, where applications can reach it very quickly. The name grabs people's imaginations, they dig in and find it is what they thought it was, they start doing proofs of concept, and we go from there.

What we've seen with BigMemory, now that we have much more data on its performance, is that we can generally achieve about 4-10x the density of other products, and even of our own older, non-BigMemory products, in terms of how much hardware and how many servers it takes to hold the amount of in-memory data customers want.

I'll give you an example. There's a large bank that had its data distributed across 30 servers to achieve the amount of in-memory data they needed. With BigMemory, they were able to bring it down to two servers. That was a huge win for them because the cost of administration suddenly dropped; the complexity of running 30 servers isn't even comparable to the complexity of running two.

Williams: Okay, so that's an example of that 4-10x density?

Pandey: Yeah. In this case it was a 15x increase. But theoretically it’s possible to do that use case on one server. We did tests with Cisco’s UCS server and we’ve been able to run 350 gig on a single UCS server. That’s bigger than a lot of people’s entire database. In the past, I would say 6-8 gigs per application instance was about the most you could do.

Williams: You said that’s bigger than most people’s database?

Pandey: 350 gigs? Sure. There are databases that go into terabytes and petabytes, but I would say the average database is probably around that size or smaller. So because we can get that kind of in-memory storage in one instance of an application, it's a huge win. We're seeing that it's very exciting to people because it simplifies their architecture and makes it a lot more elegant. You may remember that the reason for the 6-8 gigabyte limit was that the Java memory manager is very poor at handling bigger sizes, and BigMemory bypasses that. So that's just a quick recap.
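To make that concrete: BigMemory keeps cache data off the Java heap, in direct memory the garbage collector never scans, which is how it escapes the limit Pandey describes. Below is a minimal sketch of configuring an off-heap tier with the Ehcache 2.x programmatic API; the cache name and sizes are illustrative, and the exact configuration methods vary between Ehcache versions.

```java
import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.config.CacheConfiguration;
import net.sf.ehcache.config.MemoryUnit;

public class OffHeapSketch {
    public static void main(String[] args) {
        // A small on-heap tier that overflows into a large off-heap tier.
        // Off-heap data lives outside the Java heap, so the garbage collector
        // never pauses to scan it -- the limit Pandey describes above.
        CacheConfiguration config = new CacheConfiguration("inventory", 10000)
                .overflowToOffHeap(true)
                .maxBytesLocalOffHeap(100, MemoryUnit.GIGABYTES);

        CacheManager manager = CacheManager.create();
        manager.addCache(new Cache(config));

        // The JVM must also be allowed that much direct memory, e.g.
        // -XX:MaxDirectMemorySize=100g on the command line.
    }
}
```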

In general, the whole concept of doing things in-memory and in real time is big on everyone's agenda: real-time analysis and so on. The natural next step for us was this: if we can put this much data in memory, people will want to do things with it. They'll want to do searches and queries of that data to find out what their customers are doing, what their customers need, and so forth, so real-time analysis of that data is critical. We're releasing a native search capability for Ehcache which basically lets you search as much memory as you would need.
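As a rough sketch of how that native search is wired up in Ehcache 2.x: a cache is declared searchable, with attributes extracted from the cached values. The cache name, attribute names, and the value class implied by the extractor expressions are all hypothetical.

```java
import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.config.CacheConfiguration;
import net.sf.ehcache.config.SearchAttribute;
import net.sf.ehcache.config.Searchable;

public class SearchableCacheSketch {
    public static void main(String[] args) {
        // Index two fields of each cached value so queries can filter
        // and aggregate on them without touching a database.
        Searchable searchable = new Searchable();
        searchable.addSearchAttribute(
                new SearchAttribute().name("item").expression("value.getItem()"));
        searchable.addSearchAttribute(
                new SearchAttribute().name("quantity").expression("value.getQuantity()"));

        CacheConfiguration config = new CacheConfiguration("inventory", 10000);
        config.addSearchable(searchable);

        CacheManager manager = CacheManager.create();
        manager.addCache(new Cache(config));
    }
}
```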

Here's an example of the kinds of searches customers are doing. We have a SaaS customer that does logistics for fast food restaurants. Normally, they ran reports against a database, and there were two issues with that. One is that they weren't getting real-time data, because they would batch updates, write them back to the database, and run their reports every four hours or at the end of the day.

What customers really need is to find out at any given moment how many hamburger buns they have in inventory and how many have been used up, so they know where they stand, can manage in real time, lower costs, and make sure they don't run out.

The other issue is that doing that against the database was very slow, because it meant going across the network to the disk. It also meant the database was very overloaded, and it meant spending tons of money expanding the database to get this done. Working against the database, some of these reports took about 35-40 seconds. BigMemory with Search dropped that from 35-40 seconds down to less than half a second.

Williams: So what were they trying to understand?

Pandey: The search was for inventory items like hamburger buns or other food items, and they were trying to figure out in real time how much has been used right now and how much is left. They needed real-time analysis of the data, and doing real-time analysis on a database was both very slow and very expensive.
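Carrying the earlier sketch forward, that kind of on-the-spot inventory total maps onto the search API's aggregators. This assumes the hypothetical "inventory" cache and attributes declared above; aggregator method names vary slightly across 2.x releases.

```java
import net.sf.ehcache.Cache;
import net.sf.ehcache.search.Attribute;
import net.sf.ehcache.search.Query;
import net.sf.ehcache.search.Results;

public class InventoryTotalSketch {
    // Sum the "quantity" attribute over every cached entry for one item,
    // entirely in memory: no round trip across the network to a database.
    static Object totalOnHand(Cache cache, String itemName) {
        Attribute<String> item = cache.getSearchAttribute("item");
        Attribute<Integer> quantity = cache.getSearchAttribute("quantity");

        Query query = cache.createQuery()
                .addCriteria(item.eq(itemName))
                .includeAggregator(quantity.sum())
                .end();

        Results results = query.execute();
        // With only an aggregator requested, a single result row carries it.
        return results.all().get(0).getAggregatorResults().get(0);
    }
}
```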

Williams: So instead of having to do that against a database, they can now use in-memory to do that a lot more efficiently?

Pandey: Exactly, because the data is right there. You don't need to make a round trip across the network to the database. The cost is also a lot lower, because otherwise they'd have to buy a lot more database licenses to achieve the speeds they need.

Williams: It’s interesting how that can affect the supply chain, too.

Pandey: Sure, it can. Their customers won't tolerate those kinds of search speeds. So the logistics company was looking for another solution. Without BigMemory, this company would probably have had to spend a lot of money upgrading to a really expensive solution like Oracle Exadata or something like that.

Williams: Where is this all going?

Pandey: Our customers use it for all kinds of things: we have travel reservation systems running on Ehcache, websites, online gaming systems, back-end medical patient records. Anywhere you need a quick search and query of what your customers are doing in real time, you can do that. You could do a search and say: how many people are currently logged in and playing my game who are 25 years old and live in Oklahoma, because I want to do a promotion in Oklahoma for those people right now. You can do that with a database, but it would be very slow and very expensive. With in-memory data you can do it really fast and target those people quickly.
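That gaming example translates almost line for line into the search API's criteria chaining. Again a sketch: the cache and the age, state, and loggedIn attributes are hypothetical stand-ins.

```java
import java.util.List;

import net.sf.ehcache.Cache;
import net.sf.ehcache.search.Attribute;
import net.sf.ehcache.search.Query;
import net.sf.ehcache.search.Result;
import net.sf.ehcache.search.Results;

public class PromotionTargetsSketch {
    // Find every player who is currently logged in, is 25 years old,
    // and lives in Oklahoma, straight out of the in-memory data.
    static List<Result> find(Cache cache) {
        Attribute<Integer> age = cache.getSearchAttribute("age");
        Attribute<String> state = cache.getSearchAttribute("state");
        Attribute<Boolean> loggedIn = cache.getSearchAttribute("loggedIn");

        Query query = cache.createQuery()
                .includeKeys()
                .addCriteria(age.eq(25)
                        .and(state.eq("Oklahoma"))
                        .and(loggedIn.eq(true)))
                .end();

        Results results = query.execute();
        return results.all();
    }
}
```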

One thing I do want to make clear, Alex: we’re not saying we are replacing analytics in the database. We’re not providing all the heavyweight reporting capabilities that business intelligence tools offer today for databases and we’re not doing all the analytics. But what we are providing is a very simple, powerful, lightweight search where you can do real-time analysis of customer behavior and things like that. Over time we’ll make it a richer reporting set.

We're working with BI and other vendors to provide hooks so they can run their tools against ours. It will become richer over time. Right now, we provide a simple, lightweight capability that's extremely useful for real-time analysis, but you couldn't really call it a business analytics tool yet; those have been developed over years, and the term "analytics" is loaded. We're very careful to use the term "real-time analysis." Over the next 3 to 5 years, I see this getting richer. You're already seeing companies (SAP, etc.) talking about merging analytical and transactional workloads in one architecture. What we're doing is essentially that; we're taking baby steps toward it.

Williams: With tablets available, you can see this data visually; that has an added impact.

Pandey: Yeah. The great thing is that if you put search capability in the application, it's sort of independent of the platform that uses it: it could be a phone, a tablet, whatever. Your platform can be leveraged by any of these devices. Obviously, mobile devices would be a big part of that.

Williams: Thanks for your time!

(Photo: Amit Pandey, CEO, Terracotta)
