Tagwhat, an augmented reality creation and distribution system, publicly launched this week, bringing augmented reality into the realm of Web 2.0.
While other augmented reality browsers, such as Wikitude and Layar, overlay information on live video, Tagwhat lets users create those overlays themselves.
According to the company, its mobile and web application represents a paradigm shift in augmented reality.
In the company's telling, Tagwhat also marks an important milestone in the evolution of AR technology, representing a shift from the static, Web 1.0 world of read-only AR browsers to the participatory interaction of Web 2.0. Tagwhat is ‘create-and-share’ mobile AR, and claims to be the first mobile augmented reality distribution system in which anyone, not just developers, can create AR content and share it with friends anywhere in the world, in seconds, for free.
The app, which is currently available for Android and coming soon to the iPhone, integrates with Twitter, Facebook and YouTube, and lets users create location-based content. That content is then viewable both in a Google Maps mashup on the website and in the live-video overlay browser.
The primary difference between Tagwhat and other AR browsers is the ability to create this location-based content from within the app itself. Layar and Wikitude, by contrast, surface external content, such as nearby tweets, Wikipedia entries and Gowalla spots.
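Under the hood, a geo-tagged AR browser of the kind described here typically decides what to draw by comparing the phone's GPS fix with each tag's stored coordinates. Here's a minimal sketch of that idea; the function names and tag format are illustrative assumptions, not Tagwhat's or Layar's actual API:

```python
# Toy sketch of proximity filtering for geo-tagged AR content.
# None of these names come from a real AR SDK.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres


def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = (sin(dlat / 2) ** 2
         + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2)
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))


def visible_tags(viewer_lat, viewer_lon, tags, radius_m=500):
    """Return the tags within radius_m of the viewer, nearest first."""
    measured = [(haversine_m(viewer_lat, viewer_lon, t["lat"], t["lon"]), t)
                for t in tags]
    return [t for d, t in sorted(measured, key=lambda p: p[0]) if d <= radius_m]
```

In a real browser, the surviving tags would then be projected onto the camera frame using the compass and accelerometer, but the visibility decision itself is just this kind of distance cutoff.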
That’s not to say that Tagwhat won’t also have this content. At launch, Tagwhat will include “opt-in free and premium subscription channels and ‘smart’ advertising”, including channels for restaurants, pubs and nightlife, Wikipedia articles, and a “fully functional implementation” of Foursquare.
The addition of creation is indeed an important shift in augmented reality, but we can’t wait until tags can be anchored to content visually, rather than solely through GPS data. That is the next difficult step, and we can already see hints of it in apps like oMoby, a visual search application that uses your smartphone’s camera to photograph an object and then attempts to identify it. That sort of visual search technology would make it possible to tag your specific car, for example; anyone viewing that car through the same application would then see your “tag” as well. Think RoboCop or Terminator and you get the gist.
So while we like what Tagwhat is doing and hope to try it when it arrives on the iPhone (the company says the app has been submitted to the App Store and should be available in the near future), for now it does little more than pair standard geo-tagging with the AR video overlays we already see in existing browsers. The shift from simple location tagging to visual tagging is the next big step.