In a year’s time, presumably, we’ll all be living in the touch-screen world of Windows 8. But what happens to the millions of non-touch-enabled monitors already on laptops and desktops around the world? Three words: Kinect for Windows.
Microsoft said this week that Kinect for Windows - a slightly different version of the Xbox peripheral that lets you control the action with body movements alone - will be supported by both Windows 8 and Microsoft’s development tools, Microsoft .NET 4.5 and Microsoft Visual Studio 2012, in a new release scheduled for Oct. 8. Users can buy Kinect for Windows for $149 direct from Microsoft ($100 off the regular price) - but you’ll have to wait until the .NET/VS app framework Microsoft is seeding actually produces some usable Kinect software.
Kinect is a leading part of what Microsoft refers to as NUI, or Natural User Interface, “where the technology kind of disappears,” according to Microsoft CEO Steve Ballmer.
Many consumers - and virtually all gamers - are familiar with the Kinect peripheral for the Xbox, where a combination of infrared cameras and noise-cancelling microphones “sees” players, and can interpret hand-waving as the flapping of wings or a kick as a shot on goal. The Kinect can then decipher where the user actually “is,” either interpreting gestures as controls or even “transforming” him into a household object, like a chair.
But while Kinect for Xbox senses motion at distances of several feet, Kinect for Windows adds a “near mode” that can sense a user waving his hands, pointing, or otherwise gesturing from as close as 40 centimeters to the screen. That changes the game.
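To make the difference concrete, here is a minimal sketch of which range mode can track a user at a given distance. The ranges follow Microsoft’s published specs for the sensor (default mode roughly 0.8–4.0 meters, near mode roughly 0.4–3.0 meters); the function itself is purely illustrative and not part of any Kinect SDK.

```python
# Illustrative: which Kinect range mode can "see" a user at a given distance.
# Ranges (meters) per Microsoft's published specs; the helper is hypothetical.

DEFAULT_RANGE = (0.8, 4.0)  # default mode (the living-room Xbox scenario)
NEAR_RANGE = (0.4, 3.0)     # near mode, added in Kinect for Windows

def usable_modes(distance_m: float) -> list:
    """Return the sensor modes that can track a user at this distance."""
    modes = []
    if NEAR_RANGE[0] <= distance_m <= NEAR_RANGE[1]:
        modes.append("near")
    if DEFAULT_RANGE[0] <= distance_m <= DEFAULT_RANGE[1]:
        modes.append("default")
    return modes

print(usable_modes(0.5))  # 50 cm from the screen: only near mode works
print(usable_modes(2.0))  # living-room distance: both modes work
```

At desk distance (under 80 cm), only near mode tracks the user - which is exactly what makes a Kinect sensor above a monitor plausible.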
Kinect For The PC
“So suddenly we have something that makes sense in a desktop scenario, and we get closer to [the type of interface in] Minority Report, where windows are being slid across the screen,” said James Ashley, a presentation-layer architect for the Emerging Experiences Group at Razorfish, who has helped develop four projects based on Kinect for Windows. “That unlocked the possibility of having Kinect at work, and putting a Kinect sensor above your monitor.”
Ashley said multimonitor setups will benefit from being able to slide objects and windows around easily, and workers making a presentation will be able to easily interact with virtual objects.
But why do we need Kinect for Windows, when Microsoft’s Windows 8 is based upon touch? Utility and aesthetics.
Gestures vs. Touch
“Basically, there’s two different ways of interacting with the world,” Ashley explained. “What we know with the touch interface is that it doesn’t really do away with the keyboard and mouse. Obviously data entry is something we can’t do easily with touch interfaces. And there’s the same sort of thing, where gestures will make more sense than just doing touch.”
“Besides the data entry thing, you’re constantly dirtying up your monitor,” with the Windows 8 touch interface, Ashley added. “And that’s just annoying. Why do we have this amazing new interface, where, if you’re using it correctly, you’re constantly smudging your screen? So the gestural interfaces won’t necessarily introduce something superior, just something new: one more way of interacting with computers.”
One aspect of Kinect that’s held it back, according to Evan Lang, research director for “interactive experience” developer IdentityMine, is its poor resolution: at normal distance, the sensor can tell whether it’s a man or a woman with about 90% confidence. “For things like whether someone’s happy, or judge their facial expression, people usually stand too far away,” Lang said.
But we’re still waiting for the killer app that will take Kinect for Windows out of the development sphere and make it a consumer device. According to Ashley, who wrote a book on developing for the Kinect, Microsoft tried (and failed) to work gestures into Office, but has since decided to let the ingenuity of developers lead the way. And the first place they’re going is advertising.
Kinect For Windows' First Market? Interactive Ads
Microsoft launched Kinect for Windows at the Consumer Electronics Show in January, where Ballmer announced that the company would partner with United Health Group, Toyota, Telefonica, Mattel and American Express. No, that doesn’t mean you’ll use it in your Camry. Instead, advertisers apparently love it.
To date, interactive advertising via Kinect for Windows is virtually the only application that Microsoft or its partners have publicized. “Touch-free interaction really lends itself well to marketing,” IdentityMine’s Lang said. IdentityMine worked with Nissan to develop an interactive Nissan Pathfinder exhibit; an IdentityMine spokeswoman noted that the company has worked with Kinect on numerous occasions, using both the Xbox and Kinect for Windows sensors.
At the Chicago Auto Show, IdentityMine built an interactive demonstration using a Kinect for Windows sensor, a monitor and a computer running the Pathfinder application built with the Kinect for Windows SDK. The app, since deployed to 16 dealerships, was designed to show off the new Nissan Pathfinder SUV - before the car was physically available. The Kinect app (demonstrated in the video below) lets potential buyers virtually enter the vehicle, fold seats up and down, and open the tailgate, among other actions.
“The Pathfinder application using Kinect for Windows is a game changer in terms of the way we can engage with consumers,” said John Brancheau, vice president of marketing at Nissan North America, in a statement. “It’s a powerful pre-sales tool that has the potential to revolutionize the dealer experience.”
Razorfish’s Ashley, who has worked with clients on four Kinect for Windows projects himself, said that the Kinect for Windows technology still needs three key enhancements in both the PC and retail environments:
1. Fine finger detection (which rival startup Leap Motion says it can achieve);
2. Facial recognition, to determine if a user smiles or frowns while viewing the ad;
3. A commonly understood set of gestures, like the “pinch to zoom” interface of the tablet.
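The third item - a shared gesture vocabulary - can be sketched in code. Below is a hypothetical detector for one such gesture, a horizontal “swipe,” classified from a short window of hand-position samples (the kind of skeletal data the Kinect for Windows SDK exposes). The thresholds and function are illustrative assumptions, not part of any SDK.

```python
# Hypothetical "swipe" detector over hand x-coordinates (meters), assuming
# positions sampled from skeletal tracking. Thresholds are illustrative.

def detect_swipe(hand_x, min_travel=0.3, max_backtrack=0.05):
    """Classify a sequence of hand x-positions as a swipe.

    Returns "swipe-right", "swipe-left", or None. A swipe must travel at
    least `min_travel` meters with no reversal larger than `max_backtrack`.
    """
    if len(hand_x) < 2:
        return None
    travel = hand_x[-1] - hand_x[0]
    direction = 1 if travel > 0 else -1
    # Reject jittery paths that double back on themselves.
    for prev, cur in zip(hand_x, hand_x[1:]):
        if (cur - prev) * direction < -max_backtrack:
            return None
    if abs(travel) >= min_travel:
        return "swipe-right" if direction > 0 else "swipe-left"
    return None

print(detect_swipe([0.0, 0.1, 0.25, 0.4]))  # steady rightward motion
print(detect_swipe([0.0, 0.2, 0.0]))        # doubles back: not a swipe
```

Until sensor makers and OS vendors agree on conventions like these - the gestural equivalent of pinch-to-zoom - every app will pick its own thresholds and motions, and users will have to relearn each one.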
The far future of Kinect, Ashley believes, is the interactive mall, where augmented reality and gesture detection merge. There, users will receive information on their smartphones or Google Glass, and interact with storefronts whose displays contain embedded Kinect sensors.
Lang pointed to the possibility of a virtual dressing room, with realistic virtual clothes - a problem that many have tried and failed to solve.
“With enough stores, you’ll interact with complete malls,” Ashley said. “With enough malls, the technology will start moving beyond the mall.”
Perhaps, finally, onto your desktop PC.