In October 2006, online movie rental company Netflix announced a contest called the Netflix Prize: any team that could beat its in-house recommendation engine by 10% at predicting which movies people would like would win a $1 million prize. It was a huge engineering challenge that more than 50,000 teams of computer scientists signed up to take. Today one team, actually a combination of four of the front-running teams, announced that it has built a system that delivers a 10.05% improvement.
If that team withstands the month-long period of scrutiny that begins now, it will mean not only fame and (some) fortune for the team and a big boost for Netflix; it could also signal a key turning point for recommendation technology on the web.
The international team, called BellKor's Pragmatic Chaos, is made up of researchers from AT&T, Yahoo! Research Israel, Commendo Research and Consulting in Austria and Montreal's Pragmatic Theory.
In January of this year, we took an in-depth look at the Netflix Prize, asking if 2009 could be the year that the goal would be met. In that post we also discussed a New York Times profile of the contest, where we learned that the company's existing recommendation engine, called Cinematch, is credited with driving 60% of Netflix's rentals. That system is especially good at predicting "long tail" movies: older, more obscure titles that are less well known but make up 70% of what Netflix customers pick. Improvements in Cinematch's effectiveness plateaued in 2006, and the move to offer a big cash prize to outside innovators has captured the imagination of thousands of engineers and their fans.
How Does it Work?
How do you judge improvements on recommendations? Netflix provided contest participants with huge piles of anonymized data about how certain customers rated movies; the teams then built algorithms to predict how other customer profiles would rate movies, based on past patterns. BellKor's Pragmatic Chaos says it can now guess what people will like with a 10% improvement over Cinematch's success rate.
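The contest's yardstick is root mean squared error (RMSE) on a held-out set of ratings: a "10% improvement" means a 10% lower RMSE than Cinematch achieves on the same data. Here is a minimal sketch of how that comparison works; the ratings and prediction values are made up for illustration and are not Netflix's actual data or code.

```python
import math

def rmse(predicted, actual):
    """Root mean squared error between predicted and actual star ratings."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

# Hypothetical held-out ratings (1-5 stars) from real customers.
actual = [4, 1, 5, 3, 2, 5, 4]

# Hypothetical predictions from a baseline engine and a challenger.
baseline_preds = [3.4, 2.1, 4.2, 3.5, 2.9, 4.1, 3.3]
challenger_preds = [3.8, 1.5, 4.7, 3.2, 2.4, 4.6, 3.9]

baseline = rmse(baseline_preds, actual)
challenger = rmse(challenger_preds, actual)
improvement = 100 * (baseline - challenger) / baseline

print(f"Baseline RMSE:   {baseline:.4f}")
print(f"Challenger RMSE: {challenger:.4f}")
print(f"Improvement:     {improvement:.1f}%")
```

The lower the RMSE, the closer the predictions are to the ratings customers actually gave; the winning system had to drive that number 10% below Cinematch's score on a hidden test set.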
That gets difficult when movies like Napoleon Dynamite, which some people loved and other people hated, get thrown into the mix. It's nearly impossible to predict whether a person will like films like that.
Most of the predictive recommendation systems entered in the Netflix Prize are reported to be quite similar - so we asked in January whether it was going to take a radical breakthrough to top 10% instead of just continued iteration.
That breakthrough may have come when the four teams put their heads together, or it may have been an iterative victory. Time and science will tell.
Some people believe that recommendation as a technology has the potential to be even bigger than search. In our favorite article on the subject, written eighteen months ago now, Dr. Rick Hangartner, Chief Scientist at recommendation engine Strands, puts it like this:
In the near term, search engines will increasingly incorporate simple recommender technologies to handle approximate queries (e.g. "You asked for this, and based on similar queries/behavior by others, you might be looking for this."). But in the long term, the recommender industry will be larger, and recommender technologies will be more pervasive than the search industry and search technology as we know it. [Because there will be recommendation going on all over the web.]