ReadWriteBuilders is a new series of interviews with developers, designers and other architects of the programmable future.
Today, we know what the future looks like—or we have a pretty good cheat sheet, anyway. More than half of the people in the U.S. are carrying around computers smaller than a deck of cards. Devices have started studying their owners like an alien race, jotting down observations about their habits, crunching the numbers. At the stuttering dawn of the Internet of Things, those same devices are talking to each other and comparing notes. But why aren’t they doing it better?
Ask David Lieb, the founder of Bump, one of the top 25 best-selling iPhone apps of all time and, coincidentally, one that illustrates just how an Internet of Things could really work for us.
Lieb, a now-rusty programmer, has been getting it for years now, four of them, to be exact. In the early days of iOS, Bump was a way to swap contact info through a literal “bump” between two iPhones. Run into someone you might like to get in touch with in the future, swap info effortlessly and be on your way.
Now Bump has blossomed into hands-down the most seamless service I’ve ever used for sharing files between devices—and as it turns out, we smartphone users need to do that a lot. The app doesn’t discriminate. These days you can share any kind of file—photos, videos, text files, MP3s, you name it—between any two devices by “bumping” them together. You can literally whack your laptop with your smartphone and have a whole album of photos appear on the screen. I do this a lot. I’ve perfected the “Bump bump.”
The thing is, Bump doesn’t use Bluetooth or NFC or any other irritating proprietary wireless standard. Instead it aggregates a cascade of mobile sensor data from the devices in question, all triggered by a jolt that gives, for instance, a phone’s accelerometer its cue to kick off a file transfer up to the cloud and then down to an opposite device.
No muss, no fuss—and most important, no need to fiddle with settings to tell the devices exactly where to send the data or how. They just know. Smart. And it’s a perfect example of how the Internet of Things should work.
Lieb’s Y-Combinator-fueled pet project was conceived as many great ideas are: to fix something that was both really specific and really, really annoying. Namely, how to make moving a chunk of data from point A to point B dead simple. Lieb’s app works like everything should, so when he talks about the future, I listen. I want to know how his brain works—and I wouldn’t mind learning the secret recipe behind his app’s killer formula while I’m at it.
ReadWrite: At a conference last year, I sat down to dinner at a table with about 10 other tech journalists from the Wall Street Journal, PC World, et cetera. In conversation, Bump came up and the group couldn’t even figure out how the app works.
We even tried to mess the process up by all “bumping” our phones together at once like crazy people. But, much to our chagrin, it still transferred files perfectly. So, um, how does Bump work?
David Lieb: Bluetooth is available, but it’s really hard to get it to work between devices. There’s wi-fi but everyone’s got to be connected to the wi-fi network. So we basically have built this really sophisticated system to make the user experience what we wanted. And the sophisticated system is that we monitor a bunch of sensors on the phone: where you are, the accelerometer … there’s this laundry list of six or seven other sensors or [kinds of] data that we can get our hands on.
[Taken alone] any one of those isn’t enough, but if you group them all together, the full set of sensor data is enough [to uniquely identify the devices involved in any given transfer]. And it’s dynamic. So we use whatever we need to know to get really good confidence. And it works really well, as it turns out. Very rarely do we mess up.

RW: Aha, so the data set is more than the sum of its parts. Very cool.
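Lieb doesn’t reveal Bump’s actual matching algorithm, but the idea he describes—pairing near-simultaneous “bump” events on a server by combining several sensor signals into a confidence score—can be sketched roughly. Everything below is an invented simplification for illustration: the names, the thresholds, and the use of only two signals (time and location) where the real system reportedly fuses six or seven.

```python
from dataclasses import dataclass
import math

@dataclass
class BumpEvent:
    """One device's report of a physical bump (hypothetical schema)."""
    device_id: str
    timestamp: float   # seconds, taken at the accelerometer jolt
    lat: float
    lon: float

def distance_m(a: BumpEvent, b: BumpEvent) -> float:
    # Equirectangular approximation; fine for the short distances involved.
    dlat = math.radians(b.lat - a.lat)
    dlon = math.radians(b.lon - a.lon) * math.cos(math.radians(a.lat))
    return 6371000 * math.hypot(dlat, dlon)

def match_score(a: BumpEvent, b: BumpEvent,
                max_dt: float = 0.5, max_dist: float = 50.0) -> float:
    """Score 0..1: how likely two events describe the same physical bump."""
    dt = abs(a.timestamp - b.timestamp)
    dist = distance_m(a, b)
    if dt > max_dt or dist > max_dist:
        return 0.0
    # Closer in time and space -> higher confidence.
    return (1 - dt / max_dt) * (1 - dist / max_dist)

def pair_bumps(events: list, threshold: float = 0.25) -> list:
    """Greedily pair each pending event with its highest-scoring partner."""
    pairs = []
    unmatched = list(events)
    while len(unmatched) > 1:
        a = unmatched.pop(0)
        best, best_score = None, threshold
        for b in unmatched:
            s = match_score(a, b)
            if s > best_score:
                best, best_score = b, s
        if best is not None:
            unmatched.remove(best)
            pairs.append((a.device_id, best.device_id))
    return pairs
```

The point of the multiplicative score is the one Lieb makes: no single signal is decisive, but requiring agreement across several makes a false match (two strangers bumping at the same instant in the same place) vanishingly unlikely.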
When it comes to wireless [peer-to-peer] connectivity, we just keep waiting for “the one.” And of course, a couple of years ago, we thought that NFC was the one….
DL: As a technology [NFC] is great. But as a solution to a problem it doesn’t make any sense. As it turns out, you need a lot more than a chip in a phone to make something work really well. You need design, you need user experience, you need technology that achieves the goal rather than technology for technology’s sake.
The Near (And Far) Future
RW: So what’s next?
DL: The biggest shift that we’re going to see is just starting now. It’s going to be way bigger than touchscreens and way bigger than apps.
If you look at the history of computing for the last 50 years … the computer is this tool that you can use. It’s like a screwdriver. You need to do something so you grab your tool. As we go forward, phones know so much about us that they can predict what we want to do before we even think about it. This is just the beginning of what I call ‘the age of inference,’ where every app is going to flip from an intent-based app where I open it and say ‘I need to do these things now’ and turn into an inference-based app where it tells me ‘hey, you need to do this now’ or ‘here are some options of things you [might] want to do.’
And that will allow us to do so much more. Because the human brain can only hold so many things at once.
RW: How do you think this “age of inference” is coming along? Inference and prediction sound a lot like what Google’s up to with Google Now. And of course Google Glass is a hardware vessel for that kind of data-delivery prescience.
DL: I think that Google is doing interesting things with Google Now—I would summarize their work so far as “predicting what information I may want to see” as a supplement or replacement to existing behavior of searching for that information. But I see that as just the beginning—there is far more that computers can do beyond predict information to display to me. What about interactions with services, other people, or places or businesses? I think we’ll start to see predictive computing or “inference” emerge more in these areas over the next several years.
Google Glass is a nice way to deliver information in a more contextual way, so I see it as an enabler for some of these higher level ideas on the future of computing. But to me the bigger shift is that from “I command the computer to do something for me” to “the computer infers what actions I’d like it to do for me.” I see that as the biggest shift in computing since the abacus.
In the last couple of months, both Google and Apple have announced more sophisticated API support for developers for contextual awareness in apps. Phone platforms are now beginning to provide classifications such as “the user is riding a bike” and “the user is walking,” which will greatly improve developers’ ability to focus on the application layer, rather than the tech layer.
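To make the “application layer versus tech layer” distinction concrete: once the platform hands an app a ready-made activity label plus a confidence value (as Android’s activity recognition and iOS’s motion-activity APIs do), the app’s job shrinks to deciding what to do with it. The sketch below is a toy, platform-agnostic illustration; the labels, action names, and threshold are all invented, and a real app would receive these classifications from the platform API rather than as plain strings.

```python
def choose_action(activity: str, confidence: float,
                  min_confidence: float = 0.7) -> str:
    """Map a platform-provided activity label to an app-layer behavior.

    Toy example: labels and actions are hypothetical, not a real API.
    """
    if confidence < min_confidence:
        return "do-nothing"  # don't act on a weak classification
    actions = {
        "walking": "suggest-nearby-places",
        "cycling": "start-ride-tracking",
        "driving": "enable-do-not-disturb",
        "still":   "sync-photos-over-wifi",
    }
    return actions.get(activity, "do-nothing")
```

This is the inference-based flip Lieb describes: instead of the user opening an app to declare an intent, the app reacts to what the device already believes is happening.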
RW: So what do you think is next next?
DL: Well, the future is always a bit hazy, but there are a few trends happening now that I expect to continue.
- More devices: 30 years ago, people shared access to a single computer. 20 years ago people started getting their own family computers. 10 years ago everyone got their own computer. 5 years ago people got a mobile device in addition to a computer. 3 years ago people got a tablet too. In the future, I think the number of computing devices people have will continue to grow.
- More intimacy with those devices: 30 years ago you wouldn’t dare leave any personal info on a computer. 20 years ago you started saving your documents and files on them. 10 years ago people began keeping all their photos and videos on their computers. 5 years ago people began carrying their phones with them and using them for all of their communications and personal data.
I expect this to continue, to the point that our devices will know (and analyze) all the intimate details of our lives—who our friends are, who we should connect with, who we should meet, how healthy we are, etc.
Beyond Bump: Meet Flock
Lieb touts his second app as a model for this next era, the one in which devices make correct guesses about their users. For his next trick, Lieb created Flock, an app that addresses the Sisyphean task of regularly sharing the mobile photos we take. (I for one am guilty of snapping photos with my phone constantly … and then watching them accrue virtual dust.) Flock was designed to nudge people into sharing those photos that didn’t quite make it into a #latergram with the people who will actually care.
I’ve had Flock since I hung out with Lieb back in January at CES. The app is way rougher around the edges than Bump, unfortunately, but its conceptual purity remains. Like Google Now, a bold herald of the anticipatory systems trend, Flock wants to know you better than you know you.
When Lieb launched the photo sharing app, everyone likened it to Color (remember that hype?). But Lieb thinks Color, unlike Flock, never had a core user issue that it was trying to solve—and that was its downfall.
DL: An app has to have one brand for a user. If Bump did like three other things, people would be like “oh what is that app I wanted to use?” Keeping things really focused, especially on mobile, is super important.
I think one of the most important things that Apple did when they came up with the iPhone [user interface] was that they had this spatial grid of icons. It allows the user to spatially recognize ‘when I want to do this, I go here.’ Your thumb is going to that place before you even think about where it is.
RW: So you “bump” with Bump and you “flock” with Flock.
DL: It’s really interesting, because there are more seamless ways to do it. But because the cognitive hurdle is lower with Bump—I don’t have to think about anything, I just bump the phones together and it works—that’s what’s allowed Bump to get so big. It’s just dead simple. “I don’t want to think about how it’s working. I just want it to do what I want.”
Bumping Our Way To The Internet Of Things
Exactly.
As it stands, we’re still wrestling with what should be relics of a bygone technological era: battery life woes, wired connections, wi-fi deadzones, dropped calls—the list of would-be anachronisms goes on. How do we let our imaginations expand infinitely when we still can’t move information from point A to B without wrangling with USB cables and flickering signals?
If devices are all going to be chattering away to us and each other everywhere we go, then we better put our heads down and get to work. Because it’s tough to build the future without a foundation of solutions to the problems plaguing us now. And those solutions, elegant workarounds like the ones Lieb builds, will pave the way.