The Real-Time Web: A Primer, Part 3

This is part 3 of a three-part series on the fundamental characteristics of the real-time Web.

In part 1 and part 2, we looked at how the real-time Web is a new form of communication, creates a new body of content, is real time, is public, and has an explicit social graph associated with it. A final characteristic of the real-time Web is that it carries with it an implicit model of federation.

A number of sources both generate and consume real-time streams. As a result, many of these new companies are becoming communication carriers, passing their users’ real-time threads through their networks to other networks. This is more than simply being open (i.e. more than allowing data to be imported and exported). Just as in shipping, transportation, and the communication industries before it (telephone, Internet packets, and email, to name a few), the real-time Web is developing a federated model of transmission whereby companies formally or tacitly agree to facilitate transmission and perform actions on behalf of end users within the ecosystem.

It’s hard to say whether this model has arisen because of a conscious strategic effort to build a new industry, or because building a fully closed world would have required just too many resources, or because of a collective effort among business friends and acquaintances to develop open products and open interactions so that cool new things could be created. It’s probably a combination of all three, but considering the history of the people at Twitter and FriendFeed (Paul Buchheit, one of FriendFeed’s founders, is credited with coining Google’s unofficial “Don’t be evil” slogan), the open and cool factors are probably a big part of the equation.

At this point, there seems to be a general willingness to accept and transmit messages from outside sources (carrying costs are insignificant and transmission is automated via APIs, so overhead is minimal). That said, infrastructure costs are bound to increase, competition will heat up, illegitimate companies will spot opportunities, and monetization strategies will be devised, all of which will strain this truly open exchange.

As in the past, formal carrier agreements could be set down, governments could decide to regulate markets, or other forces could come into play that would transform what is now essentially a free-for-all bazaar into a marketplace with hierarchy. All the same, the expectation of openness and transparent transmission will be difficult to counteract or stop. So, new companies that enter the space, even bigger and better funded ones, will have to adhere to the same model of federation that these pioneering companies have established.

Summary

Whether Twitter will remain the focal point of the real-time Web or be supplanted by one or more other companies (as happened in the social network space, first with Friendster, then MySpace, and now LinkedIn and Facebook) is unclear. The underlying characteristics of the real-time Web, however, are defining the next major stage of the Internet and will spread throughout its infrastructure in years to come.

Broader trends on the Web point to users having discrete data and services follow them as they move around the Web. Fred Wilson, a principal of Union Square Ventures, has called this the “de-portalization of the Web,” and John Borthwick, CEO of betaworks, has co-opted David Weinberger’s phrase “small pieces, loosely joined” to describe the fast-moving, risk-taking small companies that work in the space. Both individuals are leading investors in Twitter and other real-time Web companies.

The Internet is shifting from discrete units of websites and Web pages to discrete units of information (e.g. people, organizations, articles and videos, product offerings, store listings, and blog posts) and associated metadata (e.g. images, addresses, reviews, ratings) that move seamlessly around the Web, being slotted in where appropriate. These units of information can be organized in ways that are relevant and personal to each individual, using data gleaned from social graphs as well as recommendation and personalization services that allow users to set their preferences.

In some cases, locations are integrated into these units as supplementary information. For example, Google and Yahoo now include map locations and reviews as part of their search listings. Their search engine algorithms read markup formats, in the form of microformats and RDFa, that are embedded on Web pages. These formats contain tags denoting names of people and organizations, geo-locations, and ratings and reviews. Both companies report strong results from the inclusion of this data, in the form of increased click-through rates and reduced bounce rates. Support for other structured data is almost sure to follow. Reading tags on a page and doing something useful with them in a search result is not a novel concept, but the rapidly growing support of these tags across the Web is a clear sign that data is becoming much more identifiable and actionable.
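To make this concrete, here is a minimal sketch of what such embedded markup looks like, using class names from the hReview microformat draft. The business name, rating, and reviewer are invented for illustration; a search engine crawling this page could extract a machine-readable review without any separate data feed.

```html
<!-- Hypothetical review; class names follow the hReview microformat draft -->
<div class="hreview">
  <span class="item"><span class="fn">Joe's Pizza</span></span>
  rated <span class="rating">4.5</span> out of 5 by
  <span class="reviewer vcard"><span class="fn">Jane Example</span></span>.
  <span class="description">Great crust, friendly service.</span>
</div>
```

RDFa accomplishes the same goal with attributes (such as `property` and `typeof`) drawn from formal vocabularies rather than CSS class names, but the principle is identical: the human-readable page and the machine-readable data are one and the same.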

This trend towards open and accessible data is even more obvious when you consider the real-time stream, for all of the reasons mentioned above: atomic real-time messages, public accessibility, attached social graphs. In a sense, this is similar to the vision of the semantic Web. Tim Berners-Lee said at the TED conference in early 2009, “Twenty years ago, I asked everyone to put their documents on this Web thing… Now I want you to put your data on the Web.” The difference is that the effort to make data accessible and more actionable on the real-time Web is being made through methods and interactions not necessarily prescribed by the W3C.

Tim Berners-Lee and the W3C use the term “linked data” to refer to the W3C’s initiative to expose data and make it accessible. “Actionable data” might be a better term for the real-time Web because it doesn’t imply a particular approach but merely refers to the concept of making data more identifiable and independent. Linked data refers specifically to using RDF and other W3C protocols to link important concepts, a prescription that is overly complex and not likely to address many of the use cases on the Web.

The real-time stream is a massive body of continuously created and authentic content that by itself would be significant. But when it is added to and integrated with other information on other sites, and when derivatives can be created along a number of dimensions, this concept of actionable data reaches the tipping point. In non-Silicon Valley business circles, Twitter is criticized for not having a solid revenue model. Those on the inside (investors and advisers), however, believe the criticism is short-sighted. As with most communication platforms, the value of the network increases exponentially as the size of the network increases.

By having a low barrier to adoption, the network is able to grow quickly. Only after a critical mass has been reached, and after other companies and communities of interest have helped shape how the platform is used, will it become clear what people are willing to pay for. While they may not have a solid grasp yet of exactly how to make money, those who are building companies and investing in the space do know there will be opportunities. In their minds, the real-time stream is at an early stage in its cycle, one that will likely last 5 to 7 years.

If the real-time Web and its fundamental characteristics are widely understood, its benefits and opportunities can extend throughout the Internet and across all industries.

Read part 1 and part 2 of this series.

Guest author: Ken Fromm is a serial entrepreneur who has been active during both the Internet and Web 2.0 innovation cycles. He co-founded two companies, Vivid Studios, one of the first interactive agencies, and Loomia, one of the top recommendation, discovery, and personalization companies. He has worked at the leading edge of recommendations and personalization, interactive development, e-commerce and online advertising, semantic technologies and information interoperability, digital publishing, and digital telephony. He is currently advising a number of startups and looking at the next big thing in Web 3.0. He can be found on Twitter at @frommww.
