
The Real-Time Web: A Primer, Part 2

This is part 2 of a three-part series on the fundamental characteristics of the real-time Web.

In part 1 we looked at how the real-time Web is a new form of communication and how it creates a new body of content. The immediacy of the Twitter channel is a third fundamental characteristic of the real-time Web and one of its prime currencies, which is not surprising given the name of the space. Because of demand within the ecosystem, considerable effort is going into storing, slicing, dicing, and disseminating information as quickly as possible. The fundamental implication of this activity (without any explicit markers being laid down) is that the velocity of information within the Web has just increased by an order of magnitude.

The pipes are moving data at the same rate: the speed of your data connection has not changed (although it is getting faster through independent efforts by cable companies, telcos, and the like). What has changed is the flow of data from machine to machine on the Web and the processing that happens as information makes its way to users. Companies are making use of data seconds after it is published to the Web, as opposed to minutes or hours. Years ago, pages might have been crawled by search engines daily. With the advent of RSS, new posts would flow through the system within hours. With Twitter, the flow propagates from company to company to user in real time.

As Eric Marcoullier of Gnip Central points out, this is not unlike how stock and options trading has been conducted for years, where microseconds gained in receiving and processing data confer a competitive advantage. The difference here is that, instead of real-time trading data, we have real-time social Web data: data from individuals and companies about events, theories, products, people, articles, videos, and other things and ideas, all getting passed around and publicly available.

This facet of the real-time stream is having a profound impact on the infrastructure of the Web. New storage and retrieval methods are being developed to overcome the time lags of writing not just to disks but to traditional databases. Adaptations to traditional structured query languages are being made to index items directly from the stream. Search engines and search capabilities are being modified to make use of real-time inputs to influence the search results. This isn’t just a Twitter effect. This is an effect across all uses of the Web, because the expectation of access to real-time information is now permeating all websites and the infrastructure of the Web itself.
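
To make the indexing shift described above concrete, here is a minimal sketch in Python, with invented data, of an in-memory inverted index that makes items searchable the moment they arrive from a stream rather than after a batch crawl. It illustrates the technique only, not any particular company's implementation, and eviction of stale postings is omitted for brevity.

    from collections import defaultdict, deque

    class StreamIndex:
        """Minimal in-memory inverted index updated per message, so items
        become searchable seconds after arrival instead of after a crawl."""

        def __init__(self, max_items=100_000):
            self.items = deque(maxlen=max_items)  # bounded buffer of recent items
            self.postings = defaultdict(list)     # term -> item ids, oldest first

        def ingest(self, item_id, text):
            # Index the item the moment it arrives from the stream.
            self.items.append((item_id, text))
            for term in set(text.lower().split()):
                self.postings[term].append(item_id)

        def search(self, term):
            # Newest matches first: recency is the prime currency here.
            return list(reversed(self.postings.get(term.lower(), [])))

    index = StreamIndex()
    index.ingest(1, "Quake reported downtown")
    index.ingest(2, "Downtown traffic snarled after the quake")
    print(index.search("quake"))  # [2, 1]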

Unintended Consequences

Overlaying real-time commentary and reaction onto news and events is bound to have many useful benefits, as well as interesting and perhaps adverse side effects. Several news outlets have been quick to point out how a mob mentality can take hold when opinions on emotionally charged topics are instantly disseminated. Additionally, many attribute the severity of the 1987 stock market crash to the lack of safeguards that would have kept automated trading systems from reacting to ongoing market conditions in a tight feedback loop. Even now, more than 20 years later, unintended and unforeseen events continue to happen when derivatives and automation come into play.

For those prone to theorizing, there are many fascinating questions to ponder. For example, the uncertainty principle states that the more precisely a particle's position is measured, the less precisely its momentum can be known, and vice versa. If the analogy holds, does the veracity of news become less certain as the velocity of interest in it becomes more measurable? Likewise, what effects will the integration of the real-time stream have on the outcome of events, and how can conditions be influenced to ensure specific outcomes?

Public Conversations with Explicit Social Graphs Attached

Another characteristic of the real-time Web is that, unlike other real-time communication streams such as instant messaging, email, and the telephone, it is largely public. Also unlike these other channels, conversations within the real-time stream carry with them an explicit social graph. The audience of someone who publishes information on the real-time Web is not unknown, as might be the case in the blogging world. Each person (or company or organization) communicating on Twitter has followers, who in turn themselves have followers. Each message thus has a social graph attached to it, as does each echo or retweet of that message. Messages and message flow are for public consumption.
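
As a rough illustration of what a social graph "attached" to each message means, the toy Python sketch below (using a hypothetical follower graph) computes the audience of a message as the author's followers plus the followers of everyone who retweets it. This is exactly the information that is public on the real-time Web and hidden in email or instant messaging.

    # Hypothetical follower graph: user -> set of that user's followers.
    followers = {
        "alice": {"bob", "carol"},
        "bob":   {"dave"},
        "carol": {"dave", "erin"},
        "dave":  set(),
        "erin":  set(),
    }

    def audience(author, retweeters=()):
        """Everyone who sees a message: the author's followers plus the
        followers of each user who echoes (retweets) it."""
        seen = set(followers.get(author, set()))
        for user in retweeters:
            seen |= followers.get(user, set())
        seen.discard(author)
        return seen

    # alice posts; bob and carol retweet.
    print(sorted(audience("alice", retweeters=["bob", "carol"])))
    # ['bob', 'carol', 'dave', 'erin']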

These social graphs also contain a fair amount of information identifying each user within the graph. The majority of Twitter profiles include a name, website, and short description. Additionally, third-party directories contain self-tagged categories, roles, interests, and specialties. (Profiles carry this identifying information because followers need enough of it to recognize users within the public space.)

These characteristics (the openness of the channel, the availability of rich metadata, and the explicitness of the social graph), along with the value derived from the content and interactivity, are why so many people and companies are interested in developing on top of the Twitter platform and for the real-time Web in general. The value they get is in being able to monitor these streams and produce derivative value for Twitter users as well as for news organizations, brands, retailers, politicians, and others who have an interest in what’s being said, who hears it, what they do with it, and what others do with that.

In a strange twist, unlike the unregulated derivatives markets on Wall Street, which have run into skepticism and calls for greater regulation, the derivatives markets in technology and social Web circles operate freely and are booming.

Social Graphs, Reputation, and Trust

Social graphs provide mechanisms by which to infer reputation and trust. Because the graphs of followers on Twitter are public, Twitter and third parties can employ algorithms to identify which profiles are legitimate and which belong to spammers or con artists.

Algorithms modeled on Google's PageRank, for example, can be used to rate users not just on the number of their followers but also on the strength of those followers. It’s almost a given that Twitter and other players in this space will face serious challenges in dealing with spammers and other disreputable users, but public social graphs are a great advantage in defending against these threats.
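
As a sketch of the idea, and emphatically not Twitter's or Google's actual algorithm, the toy Python below runs a simplified PageRank over a hypothetical follower graph. A user's score depends on how highly rated their followers are, not merely on how many they have, so an account with no reputable followers sinks toward the baseline no matter how many accounts it follows.

    def follower_rank(followers, damping=0.85, iterations=30):
        """Simplified PageRank: each user's score is fed by their followers,
        with each follower's influence split across everyone they follow."""
        users = list(followers)
        # Invert the graph: following[u] = accounts that u follows.
        following = {u: [v for v in users if u in followers[v]] for u in users}
        rank = {u: 1.0 / len(users) for u in users}
        for _ in range(iterations):
            rank = {
                u: (1 - damping) / len(users)
                   + damping * sum(rank[f] / max(len(following[f]), 1)
                                   for f in followers[u])
                for u in users
            }
        return rank

    # Hypothetical graph: "spambot" follows others but has no followers.
    followers = {
        "alice":   {"bob", "carol", "spambot"},
        "bob":     {"carol"},
        "carol":   {"alice"},
        "spambot": set(),
    }
    for user, score in sorted(follower_rank(followers).items(),
                              key=lambda kv: -kv[1]):
        print(f"{user:8s} {score:.3f}")  # spambot ends up at the floor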

Read part 1 of this series, and stay tuned for part 3.

Guest author: Ken Fromm is a serial entrepreneur who has been active during both the Internet and Web 2.0 innovation cycles. He co-founded two companies, Vivid Studios, one of the first interactive agencies, and Loomia, one of the top recommendation, discovery, and personalization companies. He has worked at the leading edge of recommendations and personalization, interactive development, e-commerce and online advertising, semantic technologies and information interoperability, digital publishing, and digital telephony. He is currently advising a number of startups and looking at the next big thing in Web 3.0. He can be found on Twitter at @frommww.
