Recommendation Systems: Interview with Satnam Alag

In a recent post, we looked at recommendation systems, briefly reviewing how Amazon and Google have implemented their own systems for recommending products and content to their users.

We had the opportunity to speak with Satnam Alag, author of the recently published Collective Intelligence in Action, about what makes for a good recommendation system, where the technology is heading, and why Netflix is finding it so hard to improve its own system.

Disclosure: I wrote the foreword to ‘Collective Intelligence in Action’; however, I have absolutely no financial interest in the book.

ReadWriteWeb: In our recent post about Netflix, we identified four main approaches to recommendations: Personalized recommendation: based on prior behavior of the user; Social recommendation: based on prior behavior of similar users; Item recommendation: based on the item itself; And a combination of all three. Do you agree with the four approaches we laid out in our article?

Satnam: Those four categories are pretty comprehensive. I present an alternate classification of recommendation systems in my book. I lay out two fundamental approaches. The first approach, item-based analysis, determines items that are related to a particular item. When a user likes a particular item, related ones are recommended. The second approach, user-based analysis, first determines users who are similar to a given user and then recommends items those similar users have liked.

Further, there are two main approaches to finding similar items. In the first, content-based analysis, content associated with the item, especially text, is used to compute similarity. In the second, the collaborative approach, user actions such as ratings, bookmarking, and so forth are used to find similar items. For user-based analysis, a number of approaches have been taken, including ones based on profile information, user actions, and lists of the user’s friends or contacts. Of course, you can combine any of these item/user and content/collaborative approaches to build a recommendation system.
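To make the item-based, collaborative flavor concrete, here is a minimal sketch in Python with a made-up ratings matrix (illustrative only, not code from the book): when a user likes an item, the items whose rating patterns are most similar are recommended.

```python
# Minimal sketch of item-based collaborative filtering (illustrative only,
# not code from "Collective Intelligence in Action").
import numpy as np

# Hypothetical ratings matrix: rows are users, columns are items, 0 = unrated.
ratings = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 0, 5, 4],
], dtype=float)

def item_cosine_similarity(r):
    """Cosine similarity between item columns (collaborative: uses ratings only)."""
    norms = np.linalg.norm(r, axis=0)
    norms[norms == 0] = 1.0            # avoid division by zero for unrated items
    normalized = r / norms
    return normalized.T @ normalized   # items x items similarity matrix

sim = item_cosine_similarity(ratings)

def recommend_similar_items(item_idx, k=2):
    """Items most similar to the one the user just liked."""
    order = np.argsort(-sim[item_idx])
    return [j for j in order if j != item_idx][:k]

print(recommend_similar_items(0))  # items to show next to item 0
```

A user-based variant would transpose the logic: compute similarity between user rows instead of item columns, then recommend what the nearest users liked.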

The dimensions of the particular item and user space are helpful in deciding whether to use an item-based or user-based approach. Typically, an item-based approach is used to bootstrap one’s application when the number of users is small. As the user base grows, the item-based approach is augmented by a user-based approach.

ReadWriteWeb: Other than Amazon and Netflix, which Internet companies have most impressed you in their implementation of recommendation systems?

Satnam: Other than Amazon and Netflix, Google News’ personalization is my personal favorite. Google News is a good example of building a scalable recommendation system for a large number of users (several million unique visitors per month) and a large number of items (several million new stories every two months), with constant item churn. This is different from Amazon, where the rate of item churn is much lower. Google decided to use collaborative filtering for its recommendation system mainly because of its access to the data of its large user base and because this same approach could be applied to other applications, countries, and languages. A content-based recommendation system perhaps could have worked just as well, but may have required language- or location-specific tweaking. Google also wanted to leverage the same collaborative filtering technology to be able to recommend images, videos, and music, for which it’s more difficult to analyze the underlying content.

Among start-ups, my personal favorite is the one we are developing at my current company, NextBio. It’s not available yet but should be launched next month. The key point about this particular recommendation engine is its strong use of an ontology, similar in concept to tags, to develop a common vocabulary for items and users. The system then makes use of profile information and user interactions, both short- and long-term, to provide recommendations. The system leverages both item- and user-based approaches.

ReadWriteWeb: What commercial opportunities do you foresee with recommendation systems over the next few years?

Satnam: A good personalized recommendation system can mean the difference between a successful and a failed website. Given that most applications now invite users to interact and to leverage user-generated content, new content is being generated at a phenomenal rate. Showing the right content to the right user at the right time is key to creating a sticky application. I would be surprised if most successful websites did not leverage recommendation systems to provide personalized experiences to their users.

ReadWriteWeb: Your book includes a discussion of collaborative filtering. Can you tell us a bit about how this fits into the overall picture of recommendation systems?

Satnam: In recent years, an increasing amount of user interaction has provided applications with a large amount of information that can be converted into intelligence. This interaction may be in the form of ratings, blog entries, item tagging, user connections, or shared items of interest. This has led to the problem of information overload. What we need is a system that can recommend items based on the user’s interests and interactions. This is where personalization and recommendation engines come in.

In my book, I take a holistic view of adding intelligence to one’s application, a recommendation engine being one way to do it. The book focuses on both content-based and collaborative approaches to building recommendation systems. It focuses on capturing relevant information about the user, information from both within and outside one’s application, and converting it into recommendations. One of the things you mentioned in your write-up on recommendation systems is that you would like to apply such a system to your website to recommend things to users. Someone reading my book should be able to create such a system using the techniques I demonstrate.

ReadWriteWeb: Netflix is offering $1 million to the team that can improve its recommendation algorithm by 10%. It’s been over two years now, with the leading team at 9.63%. There is some skepticism, though, that 10% will be reached anytime soon, because now the contestants are making only incremental progress. Do you expect the 10% mark to be reached soon?

Satnam: Netflix’s recommendation engine, Cinematch, uses an item-to-item algorithm (similar to Amazon’s) with a number of heuristics. Given that Netflix’s recommendation system has been very successful in the real world, it is pretty impressive that teams have been able to improve on it by as much as 9.63%. Of course, the Netflix competition doesn’t take into account speed of implementation or the scalability of the approach. It simply focuses on the quality of recommendations in terms of closing the gap between user rating and predicted rating. So, it isn’t clear whether Netflix will be able to leverage all of the innovation coming out of this competition. Also, the Netflix data doesn’t contain much information to allow for a content-based approach; it’s for this reason that teams are focusing on collaborative techniques.

The challenges to reaching the 10% mark are:

Skewed data: The data set for the competition consists of more than 100 million anonymous movie ratings, using a scale of one to five stars, made by 480,000 users for 17,770 movies. Note that the user-item data set for this problem is sparsely populated, with nearly 99% of user-item entries being zero. The distribution of movies per user is skewed. The median number of ratings per user is 93. About 10% of users rated 16 or fewer movies, while 25% of users rated 36 or fewer. Two users rated as many as 17,000 movies. Similarly, the ratings per movie are also skewed: almost half the user base rated one popular movie (Miss Congeniality); about 25% of movies had 190 or fewer ratings; and a handful of movies were rated fewer than 10 times.
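The sparsity figure follows directly from the dataset’s dimensions; here is a quick back-of-the-envelope check in Python, using only the numbers quoted above:

```python
# Back-of-the-envelope check of the sparsity quoted above.
users, movies, ratings = 480_000, 17_770, 100_000_000
cells = users * movies                      # possible user-item entries
density = ratings / cells
print(f"{cells:,} cells; {density:.2%} filled; {1 - density:.2%} empty")
# roughly 8.5 billion cells, about 1.2% filled, i.e. nearly 99% empty
```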

The approach: The winning team, BellKor, spent more than 2,000 combined hours poring over data to find the winning solution. The winning solution was a linear combination of 107 sets of predictions. Many of the algorithms involved either the nearest-neighbor method (k-NN) or latent factor models, such as SVD/factorization and Restricted Boltzmann Machines (RBMs).

The winning solution uses k-NN to predict the rating for a user, using both the Pearson-r correlation and cosine methods to compute the similarities, with corrections to remove item-specific and user-specific biases. Latent semantic models are also widely used in the winning solution.
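As a rough illustration of the item-based k-NN idea (a sketch of the general technique, not BellKor’s actual code), the snippet below predicts a missing rating from the user’s ratings of the most similar items, using item-mean-centered ratings as a simple stand-in for the bias corrections described above. The ratings matrix and function names are made up for the example.

```python
# Simplified item-based k-NN rating prediction -- a sketch of the general
# technique described above, not BellKor's actual code.
import numpy as np

R = np.array([                 # hypothetical user x item ratings, nan = unrated
    [5, 3, np.nan, 1],
    [4, np.nan, 4, 1],
    [1, 1, 5, 5],
    [2, np.nan, 5, 4],
])

item_bias = np.nanmean(R, axis=0)   # per-item mean rating
centered = R - item_bias            # crude removal of item-specific bias

def similarity(i, j):
    """Pearson-style correlation between items i and j over co-rated users."""
    mask = ~np.isnan(centered[:, i]) & ~np.isnan(centered[:, j])
    if mask.sum() < 2:
        return 0.0
    a, b = centered[mask, i], centered[mask, j]
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def predict(user, item, k=2):
    """Predict R[user, item] from the k most similar items the user has rated."""
    rated = [j for j in range(R.shape[1]) if j != item and not np.isnan(R[user, j])]
    neighbours = sorted(((similarity(item, j), j) for j in rated), reverse=True)[:k]
    num = sum(s * centered[user, j] for s, j in neighbours)
    den = sum(abs(s) for s, _ in neighbours)
    return item_bias[item] + (num / den if den else 0.0)

print(round(predict(0, 2), 2))   # estimated rating of user 0 for item 2
```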

The BellKor team found it important to use a variety of models that compensated for each other’s shortcomings. No one model alone could have gotten the BellKor team to the top of the competition. The combined set of models achieved an improvement of 8.43% over Cinematch, while the best model — a hybrid of k-NN applied to output from RBMs — improved the result by 6.43%. The biggest improvement by LSI methods was 5.1%, with the best pure k-NN model scoring below that. (K for the k-NN methods was in the range of 20 to 50.) The BellKor team also applied a number of heuristics to further improve the results.
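The payoff from combining complementary models can be shown with a small blending sketch (Python, with synthetic predictions standing in for k-NN, LSI, and RBM outputs; this illustrates the general idea of a linear combination of predictors, not BellKor’s actual procedure): weights are fit on a held-out set, and the blend typically beats any single model.

```python
# Minimal sketch of blending: learn linear weights that combine the predictions
# of several models on a held-out set (illustrative, not BellKor's procedure).
import numpy as np

rng = np.random.default_rng(0)
true_ratings = rng.uniform(1, 5, size=200)          # hypothetical probe ratings

# Hypothetical predictions from three models (e.g. k-NN, SVD, RBM),
# each modeled here as the truth plus its own error.
preds = np.column_stack([
    true_ratings + rng.normal(0, 0.9, 200),
    true_ratings + rng.normal(0, 1.0, 200),
    true_ratings + rng.normal(0, 1.1, 200),
])

# Least-squares weights for the linear combination.
weights, *_ = np.linalg.lstsq(preds, true_ratings, rcond=None)
blend = preds @ weights

rmse = lambda p: np.sqrt(np.mean((p - true_ratings) ** 2))
print("individual RMSEs:", [round(rmse(preds[:, m]), 3) for m in range(3)])
print("blended RMSE:    ", round(rmse(blend), 3))   # typically lower than any single model
```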

The BellKor team demonstrates a number of guidelines for building a winning solution to this kind of competition:

  • Combining complementary models helps improve the overall solution. Note that a linear combination of three models, one each for k-NN, LSI, and RBM, would have yielded fairly good results, an improvement of 7.58%.
  • A principled approach is needed to optimize the solution.
  • The key to winning is building models that predict accurately when there is sufficient data, without overreaching in the absence of adequate data.

The final solution will be along the same lines, combining multiple models with heuristics. Contestants will probably reach the magic 10% mark in the next year or two.

ReadWriteWeb: Some people think the 10% mark can’t be reached with algorithms alone, but that the “human” element is required. For example, ClerkDogs is a service that hires actual former video-store clerks to “create a database that is much richer and deeper than the collaborative filtering engines.” It’s a similar approach to that of Pandora, which has 50 employees who listen to and tag songs. How far do you think algorithms can go in making recommendations?

Satnam: Recommendation systems are not perfect. A number of elements go into making successful ones, including the approach, the speed of computing results, heuristics, the balance between exploration and exploitation, and so on. But it has been shown in the real world that the more personalized you can make recommendations, the higher the click-through rate, the stickier the application, and the lower the bounce rate.

Using humans to form a rich database for recommendations may work for small applications, but it would probably be too expensive to scale. I don’t see them competing against each other, human versus machine. Even with human/expert recommendations, one first needs to find a human/expert with tastes similar to those of the user, especially if you want to go after the long tail.
