Both iOS and Android have voice-controlled assistants that can hook into the main apps and help the user out. Who has the technological advantage in the long term: Apple’s Siri or Google Now? I think the answer is simply “Yes.”
Kontra, a pseudonymous tech thinker, published a compelling post on the question Monday, pointing to some of Apple’s advantages. For an artificially intelligent assistant, context is everything. Kontra argues that Apple’s ecosystem of native apps has the potential to provide more context about the user than Google’s Web-backed environment can.
Siri has access to much more sensitive, personal information than Google does, because Apple is directly involved in all kinds of transactions. As Siri gains access to more third-party applications, it will be able to fine-tune the context of what you're asking, because ambient information about who you are and what you do is right there on the phone.
Kontra's example of what a user means by a "nice restaurant" is instructive. The highly subjective word "nice" can be approximated by looking at online reviews, but what it really requires is information about your budget, spending habits, diet and so on. Google can see some of what you're interested in and shopping for, but it doesn't have your Passbook or OpenTable purchase history to peruse directly.
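To make the "nice restaurant" problem concrete, here's a minimal sketch of how an assistant might re-rank restaurants once it holds personal signals like budget and diet. Every field name, weight and data structure here is a hypothetical illustration, not anything Siri or Google actually runs:

```python
# Hypothetical sketch: re-ranking results for a subjective query like
# "nice restaurant" by blending public review data with private,
# on-device context. All names and weights are illustrative assumptions.

def score(restaurant, user):
    """Combine a public review rating with private, on-device signals."""
    s = restaurant["rating"]                      # public signal: review sites
    if restaurant["avg_price"] > user["budget"]:  # private signal: spending history
        s -= 2.0                                  # penalize places over budget
    if user["diet"] in restaurant["menus"]:       # private signal: dietary habits
        s += 1.0                                  # reward compatible menus
    return s

def rank(restaurants, user):
    """Order restaurants best-first for this particular user."""
    return sorted(restaurants, key=lambda r: score(r, user), reverse=True)

user = {"budget": 30, "diet": "vegetarian"}
restaurants = [
    {"name": "Chez Cher", "rating": 4.8, "avg_price": 80, "menus": []},
    {"name": "Green Fork", "rating": 4.2, "avg_price": 25, "menus": ["vegetarian"]},
]
print([r["name"] for r in rank(restaurants, user)])
# The lower-rated but affordable, diet-compatible spot wins for this user.
```

The point of the toy example is Kontra's point: the review rating is the only input Google can see from the open Web, while the signals that flip the ranking live in purchase and preference data on the phone.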
There’s An App For Us
Kontra thinks Apple has the advantage here because we readily store that kind of information in native apps but don't display it proudly on the Web. That's why Google has to resort to tracking our browsing habits to extrapolate context. That approach is more complicated than our straightforward relationships with native apps, so it's more likely to fail.
More importantly, Google loses access to that information when people use iOS devices. Everything someone does with a native app is something he or she doesn’t do with Google on the Web. Google can still try to track iOS users when they’re on the Web, but that gives it an incomplete picture, making it more likely that Google will be wrong if it tries to guess.
Apple doesn’t have to do that. With Siri integration into apps, the sensitive transactions will happen on your own device. They won’t be shared with other services via the Web.
How Google Sees The World
But Google has enormous advantages over Apple that we can’t overlook. Its scale gives it enough data to be able to freely translate inputs between languages, automatically scan millions of videos for content, recognize objects in images, respond to world events in real time, and provide better mail, contacts and calendar support than Apple.
Kontra's right that Google is playing catch-up on the services this kind of assistant will need. Google+ Local is a poor substitute for Yelp (or Foursquare, which Apple should buy). Google Wallet has found no purchase, while major companies are already signing on to Passbook. Google Play is working hard to match iTunes, but it looks hopeless. And in businesses like travel, hotels and daily deals, Google has to buy expensive companies to stay in the game. Apple just has to keep the lights on in the App Store.
But there’s one technological point that trumps all these business ones: Voice control is an interface. People will use the best interface to do what they need to do, period. Right now, slabs of glass with sensors and microphones are the best we’ve got, and Apple does seem to be better able to monetize those.
But Google maps the entire world and teaches cars to drive in it. Apple may be able to learn more about its customers, but Google knows more about the world and how people move through it. Its research with Project Glass is all about vision as an interface, overlaying Google's rich layers of data directly on the world to guide us through it. There may be conveniences there that Apple can never match.