In just a few days at the SIGGRAPH Asia Conference, MIT’s Media Lab will present a revolutionary interface that lets users manipulate on-screen images with the wave of a hand. While we’ve seen gestural interfaces via the accelerometers in our smartphones and gaming devices, this system is different. MIT’s bidirectional (BiDi) screen captures both touch and off-screen gestures using optical sensors embedded in the display itself.
According to the project team, “The BiDi Screen uses a sensor layer, separated a small distance from a normal LCD display. A mask image is then displayed on the LCD. When the bare sensor layer views the world through the mask, information about the distance to objects in front of the screen can be captured and decoded by a computer.”
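That description compresses a neat trick: with a known mask between the scene and the sensor layer, each opening in the mask acts like a tiny camera, and the parallax between neighboring views encodes distance. Below is a minimal Python sketch of that idea under a simplifying assumption, namely that the mask is a plain pinhole array rather than the more light-efficient coded pattern the MIT team displays; every name and number here is illustrative, not taken from the BiDi implementation.

```python
import numpy as np

# Hypothetical geometry, for illustration only (not the BiDi prototype's numbers):
PINHOLE_SPACING_MM = 5.0   # distance between adjacent pinholes in the mask
MASK_SENSOR_GAP_MM = 2.5   # gap between the LCD mask and the sensor layer
SENSOR_PITCH_MM = 0.05     # width of one sensor pixel

def disparity_px(sub_a: np.ndarray, sub_b: np.ndarray) -> int:
    """Shift (in pixels) that best aligns two adjacent pinhole sub-images,
    found by 1-D cross-correlation of their intensity profiles."""
    a = sub_a - sub_a.mean()
    b = sub_b - sub_b.mean()
    corr = np.correlate(a, b, mode="full")
    return int(np.argmax(corr)) - (len(b) - 1)

def depth_mm(shift_px: int) -> float:
    """Triangulate distance from the mask to the object: a point at depth z
    appears displaced by (spacing * gap / z) on the sensor between
    neighboring sub-images, so z = spacing * gap / measured shift."""
    shift_mm = abs(shift_px) * SENSOR_PITCH_MM
    return PINHOLE_SPACING_MM * MASK_SENSOR_GAP_MM / shift_mm

# Toy input: the same 1-D edge profile seen through two adjacent pinholes,
# displaced by 5 sensor pixels of parallax.
profile = np.zeros(200)
profile[80:120] = 1.0
neighbor = np.roll(profile, 5)

d = disparity_px(profile, neighbor)
print(f"disparity: {abs(d)} px -> estimated depth: {depth_mm(d):.0f} mm")
```

A hand close to the screen produces large parallax between sub-images, while a hand farther away produces small parallax, which is how a flat sensor layer can recover off-screen gestures in depth. The actual BiDi decoding reportedly operates on a full light field captured through a coded mask, so treat this strictly as a sketch of the geometric principle.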
In the past, ReadWriteWeb has covered Pattie Maes’ presentation of what she calls “SixthSense,” a wearable interface in which users interact through a camera, a mirror, and colored finger caps. We’ve also looked at other gesture-based interfaces, like Microsoft’s Project Natal, which combines camera-based sensing with voice recognition. The BiDi screen, however, takes a different approach to spatial tracking: the system can be incorporated into a “thin LCD device” like a cellphone, and it requires no external cameras, lenses, projectors, or special gloves.
For complete BiDi project specifications, or a look at some of MIT’s video demos, check out the project website.