“What are those glasses you’re wearing?”
“It’s called Google Glass. It can search the web, send email, take pictures and more.”
Thus begins a conversation I’ve had hundreds of times. I’ve never seen so much fascination with a new device as I have with Google Glass. The original iPhone, which I bought the first day it was available, doesn’t even come close.
How did I get Glass? I was in the right place at the right time: Google I/O, the company’s annual developer event, on June 27, 2012. That day, U.S.-based developers got a chance to register for the Glass Explorer program, which gave them the option to plunk down $1,500 for a prototype unit when the devices became available in the spring of 2013.
My First Encounters with Glass
In January 2013, four months before any devices were shipped, Google invited me and about 80 other developers to the first-ever Glass hackathon in San Francisco. Most of what happened during that event is still covered by a non-disclosure agreement, but it should be obvious that many of the hackers there were insanely smart. My team (Joseph Su and Andrew Lamonica) built a crude product barcode price checker, the first that I know of for Google Glass.
My time to pay up finally came on May 2nd, 2013. At the Google campus, I met with some folks from the Glass team to choose a color (“Cotton”) and get fitted. I’ve worn Google Glass every day since.
Many articles have already covered the Glass hardware; simply put, Google has done a good job designing the device. Glass is light and unobtrusive. The screen looks great, even in broad daylight, and the camera and processor are excellent for such a small form factor. Battery life, though, is the bottleneck: with heavy usage, I can discharge the entire battery in half an hour. I expect this to improve over time; it has to, for Glass to be viable.
Google has stumbled by not allowing prescription lenses with Glass out of the gate. Effectively, Google is saying that people who need glasses can’t use Glass, while people with perfect vision have to get used to wearing glasses.
Glass In The Real World
Meeting people is definitely high on the list of Glass features. The device gets attention everywhere it goes—supermarkets, shopping malls, even traffic signals. Old and young, rich and poor, technophiles and technophobes. Those who have never heard of Glass ask what it is, and those who have ask to try it on.
I’m happy to oblige; more than 300 people have used my Google Glass in the past 100 days. Most of these trials begin with “WOW!” followed by two minutes of confusion. The frustrating truth is that Glass is not intuitive for most first-time users. A three-year-old can pick up a smartphone or tablet, see rows of colored icons, and start performing actions. Glass requires considerably more practice.
As for the attention? It was great at first, but it can get tiring. More often than not, I carry Glass around my neck so fewer people will notice it.
There’s really no established etiquette for wearing a computer on your head, but after a few weeks, the rules became clear: in a restaurant, in a meeting, in a conversation, Glass has to come off. It’s like headphones for your eyes: even when they’re not in use, wearing them is a sign of inattentiveness, even disrespect.
Most owners know when to take Glass off, but I see some with the device glued to their faces. Simply put, there are no Glassholes, only assholes who happen to have Glass.
What Is Glass Really Good For?
Photography has been my most common use for Glass. As soon as it became available, I installed Mike DiGiovanni’s Winky, which enables the wearer to take a photo by winking. This dramatically increased my photo-taking—in fact, I created a photo log of my life. Video is also great; since I don’t have to hold a camera up in front of me, I can record effortlessly for as long as I want.
As one might expect, searching Google is simple—just press and hold the touch pad. How tall is the Empire State Building? (1,454 feet.) Who was the first person in space? (Yuri Gagarin.) When were the Pyramids built? (2584 B.C.) If the answer is in Google’s knowledge graph, getting it is effortless.
In the Glass XE7 update, released in early July, Google added a web browser. I use it to read the news and look up movie ratings on IMDb. I stick to a handful of websites that are mobile-optimized; otherwise, reading text is too much effort. While browsing the web on Glass is useful, it’s one of the fastest ways to empty the battery.
Finally, Glass’ translation capabilities are impressive. Ask Google to translate any phrase into another language, and you’ll hear the spoken translation along with an English phonetic spelling on screen. Unfortunately, the time required to speak my desired phrase into Glass makes this feature impractical in many real-world settings; on a two-week trip to Japan, I didn’t use translation once. But it makes for an impressive demo.
Using The Glass UI
I like touch screens, because instead of acting through a representation of myself—like a mouse cursor—I can actually press, swipe and pinch on the objects I want to manipulate. That’s much closer to how the physical world works.
Glass is a strange hybrid of both approaches. The user taps and swipes on the side of the device to navigate and make selections on the screen, but there are very few visible cues for which gesture to use. I often see first-time users get stuck when trying to go up the hierarchy one level—say, from search results back to the home screen. (The correct gesture, not so intuitively, is to swipe down.)
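For the developers reading: Glass runs Android, and Google has promised a native Glass Development Kit (GDK), so here’s a minimal sketch of how that tap-and-swipe navigation might map to code. The GestureDetector and Gesture names are my assumption of a GDK-style touchpad API (today you’d have to work with raw Android MotionEvents on a sideloaded APK), and the two helper methods at the bottom are hypothetical placeholders, not anything Google ships.

```java
import android.app.Activity;
import android.os.Bundle;
import android.view.MotionEvent;

import com.google.android.glass.touchpad.Gesture;
import com.google.android.glass.touchpad.GestureDetector;

// A bare-bones Glass activity that maps touchpad gestures to navigation.
public class NavigationDemoActivity extends Activity {

    private GestureDetector gestureDetector;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        gestureDetector = new GestureDetector(this)
                .setBaseListener(new GestureDetector.BaseListener() {
                    @Override
                    public boolean onGesture(Gesture gesture) {
                        switch (gesture) {
                            case TAP:
                                // Tap selects the focused card or menu item.
                                openSelectedItem();
                                return true;
                            case SWIPE_LEFT:
                            case SWIPE_RIGHT:
                                // Horizontal swipes move between sibling cards.
                                scrollToNeighbor(gesture == Gesture.SWIPE_RIGHT);
                                return true;
                            case SWIPE_DOWN:
                                // Swiping down goes *up* one level in the hierarchy,
                                // the gesture that trips up most first-time users.
                                finish();
                                return true;
                            default:
                                return false;
                        }
                    }
                });
    }

    // Touchpad input arrives as generic motion events; forward them
    // to the detector so the listener above receives clean gestures.
    @Override
    public boolean onGenericMotionEvent(MotionEvent event) {
        return gestureDetector.onMotionEvent(event);
    }

    private void openSelectedItem() { /* app-specific: hypothetical */ }

    private void scrollToNeighbor(boolean forward) { /* app-specific: hypothetical */ }
}
```

Notice that none of this is discoverable from the screen itself; the mapping lives entirely in the wearer’s head, which is exactly the problem.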
Compared to a touchscreen, it takes longer to mentally construct a model of Glass’ menu hierarchy and map it to the gestures needed to navigate it. First-time users can’t build this mental model fast enough; that’s why they get confused. Some of this confusion could be solved by software, but part of it seems inherent to the device’s form factor. (I’m told that Apple has glasses-based prototypes of its own, but ultimately opted for a different form factor—the watch.)
The interface that will really make Glass work is mind control, formally known as a brain-computer interface. There are a number of promising technologies in this space, but none yet that would enable Glass. Once Glass has a good thought-controlled interface, I can imagine it being very successful.
The Problems With Glass
Glass is built to do many of the things my phone can do, but it does them half as well. It can search Google, but it’s cumbersome for reading web pages. It can send messages, but relies on imperfect voice transcription. It can post to Facebook, but there’s no way to read my friends’ updates. The problem with Glass is that it doesn’t do much of anything that takes advantage of its unique form factor.
Actually, all Glass needs to be is a platform for augmented reality. When I see text in a foreign language, translate it. When I look at a house for sale, tell me the asking price. When I look at a product, scan the barcode and tell me if it’s cheaper online. When I’m standing in a public place, let me travel backwards through time using Street View. When I look at a person, show me his or her professional history. Creepy? Absolutely. But really useful products have a way of succeeding despite their creepiness.
If Google released Glass today, it would fail. The current product is an order of magnitude short in capability, battery life and ease of use; even a 2014 release date seems too early to me. Google has called Glass a ten-year commitment. They’re going to need every minute.
So is Glass the future? Yes and no. Glass is the future like Windows XP Tablet Edition was the future. In both cases, it’s about a company trying to will a market into existence, but missing the humanity to make the product a success. Somebody, someday, will get this product right. It may be Google, or it may not. The future is up for grabs.