Apple today finally announced the availability of Siri, the voice-command personal assistant app it paid $200m for. The military spin-off technology was both widely loved and often panned when it was available independently; it was either lovable Skynet or a fish on a bicycle, depending on who you asked. I tended towards the latter view, myself.
Just as Augmented Reality “Heads Up Displays” have gained traction slowly, in part because people feel like dorks looking at strangers around them through their camera phones, so too do many iPhone owners, trained not to talk on their AT&T-crippled devices, seem unlikely to take up the regular practice of talking to their phones. The first killer input interface that Apple really shook things up with was the multi-touch screen. The next ought to continue in that direction, not away from it. It should have been Swype for iOS announced today, not Siri.
Siri allows users to perform simple tasks like looking up nearby restaurants and then, using APIs from third-party services, doing things like making reservations at those restaurants. That might seem like magic – and in some ways it is – but it’s really just a speech-to-text interface collecting data for a search API.
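That pipeline can be sketched in a few lines. This is a toy illustration of the idea, not Apple's implementation; the function names, the fixed transcript, and the cuisine list are all my own hypothetical stand-ins.

```python
# Toy sketch of a Siri-style pipeline: speech-to-text output is mined
# for keywords, which become parameters for a third-party search API.
# All names here are hypothetical, not from any real Apple API.

def speech_to_text(audio: bytes) -> str:
    # Stand-in for a real recognizer; returns a canned transcript.
    return "find me a greek restaurant nearby"

def build_search_query(transcript: str) -> dict:
    # Naive intent extraction: spot a cuisine keyword and assume a
    # restaurant search. Real systems use trained language models.
    cuisines = {"greek", "thai", "italian", "mexican"}
    words = transcript.lower().split()
    cuisine = next((w for w in words if w in cuisines), None)
    return {"category": "restaurant",
            "cuisine": cuisine,
            "near": "current_location"}

query = build_search_query(speech_to_text(b""))
print(query["cuisine"])  # -> greek
```

The "magic" is in the recognizer and the intent model; everything downstream is an ordinary parameterized API call.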
Most of the commands demonstrated are things that users don’t really need to do with voice. What’s the weather going to be like today? I’d much rather click a button or two than speak out loud to no one in particular and hope my phone understands me.
Getting a stock quote by spoken request is just a gimmick. If people really wanted to do things like that, the super-hyped TellMe would have taken the consumer world by storm when it allowed dumb-phone users to call in search requests and get automated replies back at the turn of the century. Instead, that company pivoted into a voice platform in the cloud for enterprises – where use cases were clear – and then sold to Microsoft.
Voice is still unproven in the consumer space. It has some potential on the desktop for people who don’t type well, as a casual input interface that sends boatloads of data up to servers in the sky at Google, Microsoft, perhaps Facebook in the future, and maybe now Apple. Those companies then learn from the way people put words together in natural sentences, in the voice mail they leave or the search queries they say out loud – and that data is used to educate giant Artificial Intelligence brains that then serve up recommended content and targeted advertising. That’s great for them; we’ll see how it is for you and me.
But what do I want as a user – on my iPhone? I want Swype! Swype is a keyboard program available on almost every smartphone in the world except the iPhone. It lets you put your finger on the keyboard and zip around from key to key with uninterrupted touch and speed. No picking up your finger, just zip zip making shapes. If you’ve ever typed the word you’re gesturing before, Swype will know what it is and autocomplete it.
It’s the fastest way to provide input on a mobile device. It’s fabulous, and it’s incredible that Swype isn’t on iOS yet. I assume that’s because of Apple’s strict control over interface design and its unwillingness to offer alternatives. There are other options too; many Android users say they like SwiftKey X even better.
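The core trick behind a gesture keyboard can be sketched simply: reduce the finger's path to the sequence of keys it crossed, then find dictionary words whose letters appear in order along that trace. This is a toy sketch under that assumption; real engines like Swype and SwiftKey also use path geometry and language-model rankings.

```python
# Toy gesture-keyboard disambiguation. Assumes the swipe trace has
# already been reduced to the keys the finger passed over, in order.

def matches_trace(word: str, trace: str) -> bool:
    # A word is a candidate if it starts and ends on the trace's
    # endpoints and its letters appear in order within the trace.
    if not word or word[0] != trace[0] or word[-1] != trace[-1]:
        return False
    i = 0
    for ch in trace:
        if i < len(word) and ch == word[i]:
            i += 1
    return i == len(word)

dictionary = ["greek", "geek", "grew", "green"]
trace = "gfrtreeek"  # keys crossed swiping from 'g' to 'k'
candidates = [w for w in dictionary if matches_trace(w, trace)]
print(candidates)  # -> ['greek', 'geek']
```

A language model would then pick the likelier of the surviving candidates – which is why the autocomplete feels like it "knows" words you've gestured before.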
But Swype is a much better mobile input interface than Siri and the spoken word. Just as reading is a much faster way to ingest information than listening to someone speak, gesture input carries far less overhead than speech.
It’s absurd that there’s no Swype on the iPhone (though you can jailbreak your phone and get it), but it’s not absurd to say there’s no compelling reason to use Siri.
All the Artificial Intelligence stuff could have been implemented at the same time. I could Swype “Greek” and iOS could figure out that I was looking for nearby Greek food. You don’t have to talk to do that.
Time will tell, but I don’t think Siri is going to be a killer app on the iPhone. Will it be used more than the current iPhone voice control? We’ll see.