
Prediction Markets at confab.yahoo

A report from the confab.yahoo event on Prediction Markets, held Wed Dec 13 in
Silicon Valley. Written by Nitin Karandikar from The
Software Abstractions Blog
and edited by Richard MacManus. All photos are by David Rout for Yodel Anecdotal.

Can Prediction Markets make it easier to get at the knowledge that is already embedded
within an organization? Are we using the power of collective intelligence to make better
decisions?

These were some of the questions discussed at yesterday’s confab.yahoo session on Prediction
Markets (see also R/WW’s pre-event write-up). For the uninitiated, Prediction Markets are a hot new
concept: a fascinating blend of stock market action, knowledge discovery and
social dynamics. If you extend the idea that “market price reflects all the available
knowledge of traders”, then by deliberately setting up a market based on questions of
interest, you can potentially discover probability distributions for the answers simply by
watching the price action.
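
To make that concrete with a quick illustration of my own (the contracts and prices below are invented): in a binary market where a contract pays $1 if an event happens and nothing otherwise, the trading price can be read directly as the crowd’s implied probability of that event.

```python
# Toy illustration (invented prices): reading implied probabilities from
# binary prediction-market contracts that pay $1.00 if the event happens
# and $0.00 otherwise.

contracts = {
    "Internal tool ships on schedule": 0.18,
    "Feature X reaches 1M users by Q4": 0.70,
}

PAYOUT = 1.00  # each contract pays $1 if the event occurs

for question, price in contracts.items():
    implied_probability = price / PAYOUT  # price of a $1 claim ~ probability
    print(f"{question}: price ${price:.2f} -> "
          f"implied probability {implied_probability:.0%}")
```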

Discussion on Prediction Markets Theory

In the feature panel, moderator James Surowiecki, author of the popular book “The Wisdom of
Crowds”, explained that Prediction Markets make it easier to get at collective,
hidden knowledge within an organization, which leads to improved decision-making. He
described the jelly bean experiment detailed in his book: typically, the average guess
of a large group of people about the number of jelly beans in a jar is
exceptionally good, usually within 3-5% of the real value, and, interestingly, better than
almost all of the individual guesses.
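
Here is a toy simulation of that averaging effect (the true count and the noise model are invented for illustration): individual guesses scatter widely, but their errors partly cancel, so the crowd average lands far closer to the truth than most individuals do.

```python
import random

# Toy simulation of the jelly-bean averaging effect; the true count and the
# noise model below are invented purely for illustration.
random.seed(42)
true_count = 850
# Each guess is the true count distorted by a wide multiplicative error.
guesses = [true_count * random.uniform(0.6, 1.4) for _ in range(200)]

crowd_average = sum(guesses) / len(guesses)
crowd_error = abs(crowd_average - true_count) / true_count
# How many individuals guessed more accurately than the crowd average?
better_individuals = sum(
    1 for g in guesses if abs(g - true_count) < abs(crowd_average - true_count)
)

print(f"True count: {true_count}")
print(f"Crowd average: {crowd_average:.0f} (error {crowd_error:.1%})")
print(f"Individuals beating the crowd average: {better_individuals} of {len(guesses)}")
```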

Robin Hanson of George Mason
University, introduced by Surowiecki as the “father of Prediction Markets”, talked more
about the theory. He said that markets are made up of prices and speculators. Since the
price includes all factors, the market embodies a lot of knowledge about the future and
can be used to create a probability estimate of the likelihood of something happening.

Even if a prediction market is not perfect, Hanson said it beats the alternatives – such as
public opinion, public experts or private experts. Hanson’s most interesting point was
that as long as the market is large enough, the choice of participants is less critical;
choosing bozos does not necessarily imply a big penalty. But the market must be
well-informed.

Eric Zitzewitz of Stanford then talked about Event Studies, and cautioned that analysis of
these markets can sometimes lead to the wrong conclusions, especially if the time period
examined is wrong; e.g. when looking at S&P 500 data, the positive effect of Donald Rumsfeld’s
resignation swamped the negative effect of Democratic congressional control.

Eric offered several reasons why a market is a better way to get information than a
public opinion poll:

  – Opinion polls often ask the wrong question (such as ‘Whom did you vote for?’,
rather than ‘Who do you think will win the election?’)

  – Markets offer a financial or reputation incentive, leading to greater
participation

  – Market participants are self-selected; typically people who think they know
more, trade more

  – A community grows around the market and becomes knowledgeable

  – Incidentally, studies have shown that there is no statistical difference in the
results when play money is used instead of real money

However, there are caveats:

  – An incentive is essential for this type of market to work (the information you
get is not free!)

  – Without adequate participation, the market can become illiquid

  – If bids are not anonymous, there is the risk of an “echo chamber effect” (one
person leads, and the others follow)

  – Traders may collude, rather than acting individually

Corporate forecasting and decision making panel

Participants in the corporate forecasting and decision making panel all
came from big companies. Bo Cowgill from Google (pictured to the right) talked about incentive systems for
participants in Prediction Markets. His main point was that social rewards, such as
status and reputation among your peers, are much more effective incentives than purely
monetary rewards, since financial rewards tend to be too small and are awkward for legal and
regulatory reasons.

Leslie Fine of HP Labs
described the BRAIN software that HP has built for internal prediction markets. It has
the simplicity and robustness of a survey, and is designed to take bias,
manipulation and hierarchy out of the predictions of small groups. She described the
real-world constraints of running a market in a corporate environment: you need an
adequately large population that is knowledgeable about the topic, the players may be too busy,
and the market may become illiquid.

David Pennock of
Yahoo! Research described a case study of a Dynamic Parimutuel Market, used to power the
Yahoo O’Reilly Buzz game: it combines the metaphors of a stock market and a horse race.
It uses a Share-Ratio price function, so that the price goes up or down as something
becomes more or less valuable, respectively. Yahoo! has a public game going on now, based
on newsfutures
software, designed to predict which technologies will become more popular. He also made
some comments about Yootopia – an experiment using an internal Yahoo! currency called
yootles for group decisions, internal auctions, friendly wagers and so on.
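
For the curious, one way to get the share-ratio property Pennock described (the price ratio between outcomes tracks the ratio of shares outstanding) is a pool cost function proportional to the square root of the sum of squared share counts. The sketch below is a simplified two-outcome toy in that spirit, not the actual Buzz game implementation; the constant KAPPA and all function names are my own illustrative assumptions.

```python
import math

# Toy sketch of a share-ratio price function for a two-outcome dynamic
# parimutuel market, in the spirit of Pennock's design. Not the actual
# Buzz game code; KAPPA and all names here are illustrative assumptions.

KAPPA = 1.0  # free scaling constant

def pool_cost(q_yes: float, q_no: float) -> float:
    """Total money paid into the pool with q_yes/q_no shares outstanding."""
    return KAPPA * math.sqrt(q_yes**2 + q_no**2)

def prices(q_yes: float, q_no: float) -> tuple[float, float]:
    """Instantaneous per-share prices; note p_yes/p_no == q_yes/q_no."""
    norm = math.sqrt(q_yes**2 + q_no**2)
    return KAPPA * q_yes / norm, KAPPA * q_no / norm

def buy(q_yes: float, q_no: float, delta_yes: float) -> float:
    """Cost of buying delta_yes additional YES shares at the current state."""
    return pool_cost(q_yes + delta_yes, q_no) - pool_cost(q_yes, q_no)

# Example: as more YES shares are bought, the YES price rises and NO falls.
q_yes, q_no = 100.0, 100.0
print("prices before:", prices(q_yes, q_no))
print("cost of 50 YES shares:", round(buy(q_yes, q_no, 50), 2))
print("prices after:", prices(q_yes + 50, q_no))
```

The key property the sketch demonstrates is the one mentioned in the talk: the price of an outcome rises as traders buy more of it and falls, relatively, as they buy the alternative.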

Todd Proebsting of Microsoft, in his
case study, described a market he created to predict the validity of the testing schedule
for an internal tool, which correctly predicted the extremely low probability of making
the release date! The beauty of prediction markets, according to Todd, is that they can
deliver a message (sometimes an uncomfortable one) that cannot be denied; much of
the value of such a market lies not only in accurately predicting the answer to a specific
question, but also in drawing overall information and knowledge out of the
market to make better decisions. It’s a great way to get rid of the filtering between the
bottom and the top of an organization, and to bring up the topics that no one wants to talk
about.

Chris Hibbert talked about
Zocalo, his open source
prediction market software, which is available without restriction for creating
individual markets. He is trying to consolidate many of the features available from
different prediction market vendors into Zocalo. 

Finally, Adam Siegel of Inkling
described the lessons learned from failed markets he’s observed (he also plugged his new Beta site). He noted:

  – The most important point is that people often do not know exactly what a
Prediction Market is and what it can do

  – The market structure could be wrong (e.g. if it’s set up like an opinion poll,
it may be impossible to cash out)

  – The market’s rules are poorly stated or not iron-clad (or they may be
iron-clad but misinterpreted by most participants, as in the recent
N. Korea missile test controversy)

  – The questions posed may be biased

  – The timeframe may be too long (e.g. how much will college graduates make in
2025?)

  – There may be no information available for traders to make a reasonable
judgement

Takeaways

By this point, everyone was waiting eagerly for pizza, and the questions were minimal.
To me, the most interesting takeaway points from the session (excuse me, confab) were the
following:

1. Prediction Markets are a great mechanism to extract knowledge already present
within the organization and to make better predictions

2. These markets highlight both the collective wisdom that no one person knows
individually, and the common knowledge that no one is willing to talk about openly

3. They work properly only when they have an adequate number of knowledgeable
participants who work individually

4. Participants must have reasonable incentives (financial or social) to make their
efforts worthwhile

5. If the group is large enough, the ratio of experts to amateurs does not have much
impact; often, the real experts are unexpected

6. The results of a Prediction Market are probabilities; they must be confirmed through
other, external, means


