The 2002 movie Minority Report committed one unforgivable sin: it made UI designers want to design interfaces like Minority Report.
The most recent example is the SpaceTop 3D desktop, which MIT grad student and former Microsoft intern Jinha Lee unveiled at the TED conference this week in Long Beach. The user interface, which probably falls under the umbrella of beautiful but unnecessary technology, uses a series of cameras and projected images to create a “true” three-dimensional display: users reach “into” their computer monitor to “grab” files and other documents and move them around. Gestures handle more complex actions, essentially serving as macros. (See a video of Lee’s desktop below.)
An awesome use of technology? Undeniably. Impractical? In a normal computer environment, almost certainly. And the fatal flaw? The keyboard.
Touch: Either Superfluous Or Arbitrary
For the last few days, I’ve continued to use the Chromebook Pixel, Google’s marvelous yet massively overpriced Chromebook. It’s clear what Google has done here: create a so-called “aspirational” Nexus-style device that ChromeOS developers will want to own, as well as develop for. But one of the routes Google has chosen is touch, a technology that adds yet another element of cost to its “Companion PC.”
At this point in its development cycle, there is absolutely no reason the Pixel should have included a touchscreen. Everything you would want to do via touch – moving windows around, repositioning the cursor, switching tabs – can be done as quickly, and with more precision, using the touchpad or a connected mouse. As I’ve noted before, touch-enabled sites like Microsoft’s Contre Jour or the new ExploreTouch.ie (which require Windows 8, so they won’t work on the Pixel) provide the best showcase of touch on the Web. Google has done nothing to enable them. (Is there a potential partnership here?)
What the Pixel has done, in my mind, is brilliantly demonstrate just how arbitrary Windows 8’s emphasis on touch actually is. Touch redefined how Microsoft had to lay out its user interface: pack the elements of the screen too closely together, and fat-fingered users become frustrated trying to distinguish between them. Instead, everything has to be spaced out, which has the unfortunate side effect of increasing unused space.
(This isn’t necessarily a bad thing. I’ve accidentally closed tabs in Chrome by clicking the “X” when I meant to “grab” the title and shift them around.)
And then there’s the cost. As Microsoft’s Tami Reller noted, the industry was caught short by the demand for touchscreen notebooks, which – and I’d agree with her – are the best way to experience Windows 8. Prices are coming down, but these are still premium devices that cost a bit more. The true hidden cost of touch comes when you get home, sit down at your desk, and realize that the lovely desktop monitor you invested in for Windows 7 has been rendered obsolete. Replacing it will cost about $300 more.
From an ergonomic standpoint, however, a touchscreen display is a mixed bag – at least when you’re at a desk, where reaching over the keyboard to a vertical screen quickly grows tiring. To its credit, Microsoft designed Windows 8 so that touch feels natural; swiping left and right with a touchpad simply doesn’t reproduce the experience. All-in-one PCs, which already implement touch, are perfect for Windows 8.
Otherwise, a keyboard still remains the superlative form of input for a PC. Many of you can type 60 words per minute or more, and every time your fingers leave the keyboard, in one sense, you lose productivity. Swipe. Type. Swipe, type. You get the idea. That’s precisely why I fell in love with Tobii, an excellent eye-tracking technology that replaces your mouse with your gaze and integrates neatly with the keyboard.
Shackled To The Keyboard
The problem is that the keyboard is a boat anchor weighing down the true vision of Windows 8.
At about 3:48 into Minority Report, we get our first glimpse of John Anderton’s computer: a curved sheet of glass overlaying a video wall. Tom Cruise’s character swipes through various video windows and data feeds while presenting the “evidence” to a judge and a witness. No keyboard is present.
Instead, Anderton communicates by voice, asking the computer or his assistants for relevant information. The result is a harmony of man and machine, embodied by the classical music Anderton uses as a backdrop.
Whether it comes from Windows 8 or something from Google, a true Minority Report future won’t arrive until we can replace the keyboard with genuine speech recognition. This isn’t impossible, technically – but it may be, culturally.
We know that both Apple’s Siri and Android’s own cloud-based speech recognition aren’t perfect, but they’re constantly improving. Google is also feverishly polishing Google Glass, trying to make voice commands and dictation part of the interface.
We also know that we’re increasingly forced to work on top of one another. A purely speech-driven interface is problematic both in the crowded environs of a trading floor and in the library-quiet culture of my previous employer’s sales staff. As cubes shrink – and as employers like Yahoo require workers to collaborate on-site – the problem will only worsen. And I can’t even begin to imagine how you’d write software code via voice command.
The answer may be a further loss of privacy. In San Francisco, you can play a fun little game: is that person mentally deranged and talking to himself, or just using Bluetooth? We’ve grown used to it on the street; within the office, things are different.
The alternative, though, may push even further into science fiction: subvocalization, in which a sensor embedded at the larynx lets you quietly issue commands and dictation. Research in that direction will have to anticipate a voice-enabled future, too – one that, unfortunately, I’m pretty sure I don’t ever want to see.
Here’s that video of Jinha Lee’s transparent 3D desktop:
Lead image via leejinha.com