With New Photos, Search, and Maps, Google’s Cloud Gets Smart

A few years ago, Google’s cloud services focused on simply storing and managing objects: email, documents, music, and movies. The 2013 version of Google is now using the cloud to connect and build relationships between them, responding to and anticipating the desires of its users.

Google used its I/O keynote to describe how its vast array of servers is now applying intelligence to everything from music to maps. Google drew cheers when it launched a suite of photo-enhancement apps, including tools to automatically pull out the best pictures from a camera roll, enhance them, and feature them in a curated list of highlights.

Google Maps will now automatically generate recommendations for preferred restaurants and destinations, and dynamically reroute users around traffic. Google will even read your Google+ posts — if you allow it — and analyze their content, providing a hashtag that lets your readers explore the topic of your post even further.

Wow. All this makes Apple’s Genius music recommendation engine look positively ancient.

The Next Step: Putting Your Data To Work For You

Google’s currency has always been user data, and the transaction has always been a simple one: users contribute data, Google sells ads against it, and both sides prosper. Recent Google I/O conferences have placed a strong emphasis on devices as entry points for that data, especially photos and location.

This year, Google executives appeared to be ready to take the next step.

“We have almost every sensor we’ve ever come up with” right in your smartphone, CEO Larry Page told attendees. Devices are used interchangeably, Page said, implying that data, and how it’s interpreted, should move between them just as fluidly.

“Technology should do the hard work, so people can get on with doing things that make them happiest in life,” Page said.

Photos

Vic Gundotra, the senior vice president responsible for engineering at Google, introduced the new Photos experience. Google said earlier this week that it will now share 15 GB of storage across a user’s Photos, Drive, and Gmail. But the new Photos experience will make “Google’s servers your new darkroom,” Gundotra said.

Specifically, Photos will now intelligently scan your photos and pull out the best ones, supposedly eliminating blurry and duplicate images. Enhancements like skin softening aim to smooth out wrinkles, while red-eye reduction and noise filters will help sharpen photos automatically. Google will hunt for and display images that include smiles, not frowns. And an “auto awesome” feature will automatically pull out a series of photos and stitch them together into an animated GIF.
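
Google didn’t say how any of this works under the hood, but the blur-detection step, at least, maps onto a well-known heuristic: the variance of the Laplacian, where sharp images produce strong edge responses and blurry ones don’t. A minimal sketch, assuming OpenCV; the threshold and file names are illustrative guesses, not anything Google disclosed:

```python
import cv2

def is_blurry(path: str, threshold: float = 100.0) -> bool:
    """Flag an image as blurry when its edge response is weak.

    The Laplacian highlights edges; a sharp photo yields a high-variance
    response, while a blurry one yields a flat, low-variance map. The
    threshold here is an illustrative guess, not a value Google disclosed.
    """
    image = cv2.imread(path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var() < threshold

# Keep only the shots that pass the sharpness check.
camera_roll = ["img_001.jpg", "img_002.jpg", "img_003.jpg"]
keepers = [p for p in camera_roll if not is_blurry(p)]
```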

For years, Google’s servers were used only for storage. Now, their computing power is being applied to the digital objects they hold. Artists may dispute the results – shouldn’t I be able to take pictures of scowling children? – but enhancing user photos boosts Google+ and gives users another reason to upload their photos to Google.

Search 

Google, Microsoft, and Wolfram Alpha have waged an ongoing war over search for years, with Google jumping out to an early, enormous lead. Wolfram shifted the struggle from results to answers. Microsoft’s point of attack is social. On Wednesday, Google identified anticipatory search as the next frontier.

What is anticipatory search? It’s the sort of back-end data processing that would allow Google to answer the question “What time does my flight leave?” because it knows what flight you’re on based on your email, when the flight leaves thanks to the airline’s flight-status API, and how long you’ll need to get to the airport based on your location, traffic, real-time transit schedules and the like.
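
None of Google’s plumbing is public, but the fusion step is easy to picture: pull a flight number from email, a departure time from a status feed, a travel estimate from traffic data, and subtract. A minimal sketch, in which every data source is a hypothetical stand-in rather than a real Google API:

```python
from datetime import datetime, timedelta

# Every input below is a hypothetical stand-in for the kinds of
# signals described above; none of these are real Google APIs.

def when_should_i_leave(flight_number: str,
                        flight_status: dict,
                        travel_minutes: int,
                        buffer_minutes: int = 90) -> datetime:
    """Fuse three independent signals into a single answer.

    flight_number  -- scraped from a confirmation email in the inbox
    flight_status  -- live departure info from an airline status feed
    travel_minutes -- drive time from the current location, given traffic
    """
    departure = flight_status["departure_time"]  # a datetime object
    # Leave early enough to drive there and clear security.
    leave_at = departure - timedelta(minutes=travel_minutes + buffer_minutes)
    print(f"Flight {flight_number} departs {departure:%H:%M}; "
          f"leave by {leave_at:%H:%M}.")
    return leave_at

when_should_i_leave(
    flight_number="UA 1549",
    flight_status={"departure_time": datetime(2013, 5, 17, 14, 30)},
    travel_minutes=45,
)
```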

Google first introduced that capability with Google Now, the “cards” feature that shipped with Android 4.1. But the new cards significantly broaden the scope of Google’s vision, adding elements like music, games, and public transportation, and drawing further connections among them. Being able to command Google to “show me all my photos from New York” also takes Facebook’s Graph Search and adds a personal, Google-esque twist.

Borrowing a feature from Google Glass – voice-triggered actions – Google also announced that a future version of search will “listen” for you to say “OK, Google” and then automatically trigger a search.

Music

Google’s least important announcement of the day involved its new All Access subscription, which lets users stream millions of tracks from the Google Play library for $9.99 per month. Quite frankly, most of what Google announced has already been done by companies like Pandora, which auto-generates a stream of music from a seed song or artist.

But Google Play’s new Listen Now capability will auto-suggest music based on the tracks a user already owns and on what it knows about genre, artist, tempo, and other attributes. Yes, it seems like an afterthought – and that’s sort of the point.
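
That kind of suggestion engine is classic content-based filtering: represent each track as a feature vector and rank the catalog by similarity to what the user already owns. A toy sketch with invented tracks and features – the general technique, not Google’s implementation:

```python
import math

# Toy feature vectors: (tempo normalized to 0-1, energy, acousticness).
# The tracks, features, and values are invented for illustration.
library = {"owned_track": (0.8, 0.9, 0.1)}
catalog = {
    "candidate_a": (0.75, 0.85, 0.15),
    "candidate_b": (0.2, 0.1, 0.9),
}

def cosine(a, b):
    """Cosine similarity: 1.0 means identical direction in feature space."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

seed = library["owned_track"]
# Rank the catalog by similarity to the seed track.
suggestions = sorted(catalog, key=lambda t: cosine(seed, catalog[t]), reverse=True)
print(suggestions)  # candidate_a first: closest in tempo and energy
```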

Maps

Google also unveiled a rethinking of its Maps application, in which Google no longer just provides directions, it directs: to places the user frequently visits, to restaurants and other destinations that other users or reviewers recommend, and to locations that Google attempts to personalize in other ways.

You might argue that offering directions itself applies intelligence, sorting through numerous routes to find the best one. But the new Maps experience takes it to another level.
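
Google’s ranking logic is private, but the shape of such personalization is familiar: blend the user’s own visit history with other people’s ratings into a single score. A toy sketch with invented data and weights:

```python
# Invented data and weights, purely to illustrate blended scoring.
visit_counts = {"Cafe Luna": 12, "Thai Palace": 3, "New Sushi Bar": 0}
avg_ratings = {"Cafe Luna": 4.1, "Thai Palace": 4.6, "New Sushi Bar": 4.8}

def score(place, w_history=0.6, w_rating=0.4):
    # Normalize visits against the user's most-visited place,
    # then mix in the crowd's rating on a 0-1 scale.
    familiarity = visit_counts[place] / max(visit_counts.values())
    return w_history * familiarity + w_rating * (avg_ratings[place] / 5.0)

ranked = sorted(visit_counts, key=score, reverse=True)
print(ranked)  # Cafe Luna first: familiarity outweighs the newcomer's rating
```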

Basically, here’s what it all means: data isn’t necessarily being devalued in the new computing landscape, but drawing relationships between its disparate elements has become increasingly important. From a consumer perspective, users should expect Google to ask for more and more data, fusing it together and increasingly adding context to it all.

That, increasingly, is becoming the business model of today’s Web. Google is just doing it as well or better than anyone else.
