Recently Tim O’Reilly and John Battelle released a white paper entitled Web Squared: Web 2.0 Five Years On. It focuses squarely (pardon the pun) on the intersection of social web technologies with the emerging Internet of Things: real-world objects connected to the Internet.
The ‘web squared’ moniker is, commercially speaking, a none-too-subtle attempt to re-brand web 2.0. This had to be done so that the conference series of that name, which O’Reilly and Battelle run jointly with the company TechWeb, remains relevant. Less cynically, though, the report also nicely applies web 2.0 principles to the emerging Internet of Things.
The term ‘web squared’ is defined in the report as “web meets world.” The ‘squared’ part also refers to the authors’ claim that “the Web opportunity is no longer growing arithmetically; it’s growing exponentially.”
Collective Intelligence 2.0
The report starts by noting what O’Reilly and Battelle believe was the core proposition of ‘web 2.0’ back in 2004: “Web 2.0 is all about harnessing collective intelligence.” The pair go on to say that web 2.0 is now being applied to areas they hadn’t predicted in ’04, such as mobile devices and internet-connected objects.
Specifically, sensors are providing a new source of data for web 2.0 techniques. As the report puts it, “collective intelligence applications are no longer being driven solely by humans typing on keyboards but, increasingly, by sensors.”
Where the report differs from the traditional view of the Internet of Things is that it doesn’t treat sensor data as just mechanical data from RFID tags and other non-human sources. The authors argue that humans are producing sensor data of their own, in particular via their mobile phones. They note that today’s smartphones “contain microphones, cameras, motion sensors, proximity sensors, and location sensors (GPS, cell-tower triangulation, and even in some cases, a compass).”
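To make the ‘phone as sensor’ idea concrete, here is a minimal sketch of reading one of those sensors (location) via the standard browser Geolocation API and passing the reading on for aggregation. It assumes a browser context, and the /api/sightings endpoint is purely hypothetical.

```typescript
// Minimal sketch: the phone as a human-carried location sensor.
// Assumes a browser context; the /api/sightings endpoint is hypothetical.
function reportLocation(): void {
  if (!("geolocation" in navigator)) {
    console.warn("Geolocation is not available on this device");
    return;
  }
  navigator.geolocation.getCurrentPosition(
    (position: GeolocationPosition) => {
      const reading = {
        lat: position.coords.latitude,
        lon: position.coords.longitude,
        accuracy: position.coords.accuracy, // metres
        timestamp: position.timestamp,
      };
      // Ship the reading off to a (hypothetical) aggregation service.
      fetch("/api/sightings", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(reading),
      });
    },
    (err: GeolocationPositionError) => console.warn("Location error:", err.message),
    { enableHighAccuracy: true, timeout: 10_000 }
  );
}
```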
Whatever the source of the sensor data, once it has been gathered, collective intelligence can be applied to it. The authors term this a “virtuous feedback loop,” whereby sensor-based applications get better the more people use them.
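As a rough illustration of that loop (a sketch of the idea, not a method described in the report), pooled readings from many users converge on a better estimate than any single phone can provide:

```typescript
// Sketch of the "virtuous feedback loop": the more readings users
// contribute, the better the shared estimate gets. Names are hypothetical.
interface Reading {
  lat: number;
  lon: number;
}

class PlaceEstimate {
  private sumLat = 0;
  private sumLon = 0;
  private count = 0;

  // Each new user's reading refines the collective estimate.
  add(reading: Reading): void {
    this.sumLat += reading.lat;
    this.sumLon += reading.lon;
    this.count += 1;
  }

  current(): Reading | null {
    if (this.count === 0) return null;
    return { lat: this.sumLat / this.count, lon: this.sumLon / this.count };
  }
}

// Usage: three noisy phone readings of the same cafe.
const cafe = new PlaceEstimate();
const readings: Reading[] = [
  { lat: 37.7824, lon: -122.4078 },
  { lat: 37.7821, lon: -122.4081 },
  { lat: 37.7826, lon: -122.4076 },
];
readings.forEach((r) => cafe.add(r));
console.log(cafe.current()); // averaged position, better than any single reading
```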
Information Shadows
Another key point is that, much as with web 2.0 apps, there is an entire ecosystem that uses and builds on the data. Real-world objects have “information shadows” on the Web (a term originally coined by Mike Kuniavsky of ThingM).
The example in the report is a book, which has information shadows “on Amazon, on Google Book Search, on Goodreads, Shelfari, and LibraryThing, on eBay and on BookMooch, on Twitter, and in a thousand blogs.”
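One way to picture an information shadow is simply as the set of web resources that describe a single physical object. The structure below is a hypothetical illustration, not anything specified in the report, and the URLs are placeholders.

```typescript
// Hypothetical sketch of an "information shadow": the web resources
// that describe one physical object (here, a book). URLs are placeholders.
interface InformationShadow {
  isbn?: string; // formal identifier, when one exists
  title: string;
  author: string;
  shadows: Record<string, string>; // service name -> URL for this object
}

const someBook: InformationShadow = {
  title: "An Example Book",
  author: "An Example Author",
  shadows: {
    amazon: "https://www.amazon.com/dp/...",
    googleBookSearch: "https://books.google.com/books?id=...",
    goodreads: "https://www.goodreads.com/book/show/...",
    librarything: "https://www.librarything.com/work/...",
  },
};
```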
Do We Need RFID? It’d Be Nice…
One contentious point in the report is its questioning of whether RFID is actually required to make an Internet of Things. The authors argue that it isn’t:
“A bottle of wine on your supermarket shelf (or any other object) needn’t have an RFID tag to join the Internet of Things, it simply needs you to take a picture of its label. Your mobile phone, image recognition, search, and the sentient web will do the rest. We don’t have to wait until each item in the supermarket has a unique machine-readable ID. Instead, we can make do with bar codes, tags on photos, and other “hacks” that are simply ways of brute-forcing identity out of reality.”
This line of thought parallels the argument usually put forth by web 2.0 proponents against the top-down Semantic Web: that it isn’t practical to expect publishers to add metadata to their content; instead, let meaning bubble up through a mix of collective intelligence and machine processing.
To hammer home this point, the report claims that “evidence shows that formal systems for adding a priori meaning to digital data are actually less powerful than informal systems that extract that meaning by feature recognition.” They use the example of a book: “an ISBN provides a unique identifier for a book, but a title + author gets you close enough.”
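A rough sketch of what that ‘close enough’ identification might look like in practice; the normalization scheme here is an illustrative choice, not anything prescribed in the report:

```typescript
// Sketch: identifying a book without a formal ID by normalizing
// title + author into an informal key. Details are illustrative only.
function normalize(s: string): string {
  return s
    .toLowerCase()
    .replace(/[^a-z0-9 ]/g, "") // drop punctuation
    .replace(/\s+/g, " ")
    .trim();
}

function informalKey(title: string, author: string): string {
  return `${normalize(title)}|${normalize(author)}`;
}

// A formal lookup needs the exact identifier (dummy ISBN shown)...
const byIsbn = new Map<string, string>([["9780000000000", "record-42"]]);
console.log(byIsbn.get("9780000000000")); // exact match or nothing

// ...whereas the informal key "gets you close enough" even from messy input.
const byTitleAuthor = new Map<string, string>([
  [informalKey("Web Squared: Web 2.0 Five Years On", "O'Reilly & Battelle"), "record-42"],
]);

const hit = byTitleAuthor.get(
  informalKey("web squared -- web 2.0 five years on", "o'reilly & battelle")
);
console.log(hit); // "record-42"
```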
Good enough has always been a design principle on the Web, so this makes sense. However, much like the battles back in ’04-’05 to define web 2.0 (or to dispute its very existence), ultimately it’s a moot point. RFID tags will become more commonplace; it’s just a matter of time.
Let’s face it: a ‘smart’ RFID chip on a bottle of wine (one that knows its production and travel history, its temperature, its price relative to similar bottles, and so on) will beat human hacking any time. But, as the report rightly notes, don’t expect that level of automation via RFID any time soon. Our recent post examining the current state of RFID clearly showed that it’s years away.
Conclusion
To say that sensor data can be both machine-generated (e.g. by RFID chips) and human-generated is perhaps trying too hard to force the web 2.0 world onto the emerging Internet of Things. But that’s neither here nor there. Where the ‘web squared’ report is spot on is in its point that applying collective intelligence to sensor data will be a rich vein of opportunity in the coming years.
Clearly the web 2.0 philosophy can and will merge with the Internet of Things. The report by O’Reilly and Battelle is a great primer for that convergence.