With Google’s Social Search experiment, Bing’s integration with Twitter and Yahoo!’s partnership with OneRiot, social search clearly has both potential and momentum. But what will social search look like, and will it help us search better? And if so, how?

I’ve written previously about how social search won’t replace traditional search, how social relevancy rank can be used to deliver good results, and why the concept of social search is a return to a familiar state rather than something to fear. Today, I’ll get more specific about the three flavors of social search that will improve user search experiences.

This guest post was written by Brynn Evans.

Collective Social Search

“Collective social search” is similar in concept to the wisdom of crowds: search is augmented by trends shared on a network (a la Twitter Trends), or results are ranked against the real-time buzz of a group. Why might this be useful? In some instances we can’t immediately find the information we’re looking for, and pooled, aggregated data from the collective can point us to new avenues that expand our discovery process.
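To make the ranking idea concrete, here is a minimal sketch of blending a real-time buzz signal into a traditional relevance score. Every name and weight here (`base_score`, `buzz_count`, `alpha`) is an illustrative assumption, not a description of how any actual engine works.

```python
# Hypothetical sketch: blending a collective "buzz" signal into ranking.
# The weighting scheme is purely illustrative.

def blended_score(base_score, buzz_count, max_buzz, alpha=0.2):
    """Combine a traditional relevance score with a collective buzz signal.

    base_score: relevance from the traditional algorithm (0..1)
    buzz_count: how often the result's topic appears in the real-time stream
    max_buzz:   largest buzz count in the candidate set, for normalization
    alpha:      how much weight the collective signal gets
    """
    buzz = buzz_count / max_buzz if max_buzz else 0.0
    return (1 - alpha) * base_score + alpha * buzz

results = [
    {"url": "a", "base": 0.9, "buzz": 5},
    {"url": "b", "base": 0.6, "buzz": 120},
]
max_buzz = max(r["buzz"] for r in results)
ranked = sorted(results,
                key=lambda r: blended_score(r["base"], r["buzz"], max_buzz),
                reverse=True)
```

With a small `alpha`, buzz nudges the ordering without letting a momentary trend swamp a strongly relevant result.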

As yet, no major search system is doing this very well – and we don’t know what type of interface would be optimal for sharing this information. The Cloudlet plugin inserts tag clouds (based on keywords) into search results, but tag clouds are known to be more of a distraction than a utility. BingTweets has been touted as such a resource, but it really only offers Twitter and Bing results on two separate pages. OneRiot shows only collective data from the real-time stream, although it may be integrated with Yahoo! results soon. And we are still waiting to see how Google and Bing will integrate the Twitter firehose into their traditional search results – as opposed to merely including tweets as additional document-like resources.

Equally important will be understanding when collective social data should be shared with users: during the search or after it? And for which types of searches?

My research on search strategies begins to address this question. Collective guidance may be useful when users are exploring a search space, either because the search domain is unfamiliar to them (i.e. they lack the knowledge to drill down to an answer) or because they are passively exploring a problem. I find myself doing this all the time when I’m deciding what to cook. I want to browse recipes from many different sources before I decide what my own recipe will consist of. I don’t have a specific recipe in mind (it’s not an urgent, active request), so I don’t necessarily know when I’ve found what I’m looking for.

That said, it’s hard to determine from keyword strings how active or passive a user’s search is; that is, it may be quite difficult to tell what type of search they’re performing or how far along they are in their search process (“exploring” or “narrowing”?). Furthermore, the utility of collective social data for mainstream consumers will be limited, mainly because it doesn’t come from trusted sources, unlike “friend-filtered social search” (see next section).

Friend-Filtered Social Search

Friend-filtered social search is approximately what Google is doing with its social search experiment: providing social data that your peers, friends of friends and wider “social circle” have shared. This data could appear alongside traditional search results (as with Google) or be exclusive results from within your peer network (as with TuneIn).

This is useful if your friends have shared relevant links, blog posts or tweets about a topic that you’re searching for. If you were gathering ideas about, say, “the future of the desktop,” you would see thought pieces, write-ups and links to projects from the main search algorithm, as well as stuff your friends are saying about applications they’ve encountered recently. If you trust your friends, they may serve as reliable filters, pointing you to relevant information.

The three major limitations of this approach are:

  1. Your friends may have no archived social content that’s relevant (or available) to your query. Searching within your Facebook network quickly demonstrates this problem. For this reason, it may be better to augment traditional algorithms with friend-filtered social data rather than rely exclusively on one person’s small network.
  2. Current implementations are limited to keyword matching. Searches that retrieve related posts by topic, theme or timeframe might expose a wider set of results and combat the niche-social-network problem, but this approach would be computationally harder than keywords alone, and exposing enough of the appropriate context remains a problem (see next item).
  3. Understanding the context in which a post or link was shared is important. Without this, keyword- and even topic-matching might not convey to the user the relevance of a search result. Google provides limited context at the moment (showing only how you know a user, the source of the post and a short snippet). More testing is needed to learn how much and what kind of context is appropriate for social search content.
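The augmentation suggested in the first point can be sketched roughly as follows. The data structures, the flat “trust boost” and the context string are all hypothetical; a real system would need far more nuance, but the shape of the idea is simple: annotate and gently promote results your friends have already shared, while keeping the traditional ranking as the backbone.

```python
# Illustrative sketch: augmenting traditional results with friend-shared
# links rather than searching the friend network alone. All structures
# and the 0.1 boost are assumptions for the example.

def augment_with_friends(results, friend_shares):
    """Annotate each result with any friend who shared the same URL.

    results:       list of {"url", "score"} from the traditional algorithm
    friend_shares: dict mapping url -> {"friend", "source", "snippet"}
    """
    augmented = []
    for r in results:
        share = friend_shares.get(r["url"])
        annotated = dict(r)
        if share:
            # Keep some context: who shared it, where, and a short snippet.
            annotated["shared_by"] = share["friend"]
            annotated["context"] = share["source"] + ": " + share["snippet"]
            annotated["score"] = r["score"] + 0.1  # small trust boost
        augmented.append(annotated)
    return sorted(augmented, key=lambda x: x["score"], reverse=True)
```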

Similarly, there is the question of when friend-filtered social search would be relevant during a search. My instinct is that it will be useful throughout a search and for many types of searches (it is, after all, just another type of search result). This is critically different from collective social search and collaborative search.

Collaborative Search (a.k.a. Question-Answering)

“Collaborative search” is when two or more users work together to find the answer to a problem. This could look like IM-based question-answering (a la Aardvark), Yahoo! Answers (which is relatively passive and asynchronous) or over-the-shoulder two-person search. In all of these cases, people speak to each other using natural language, which is incredibly useful for open-ended queries (e.g. “What is ‘design thinking’?”) or queries about unfamiliar domains (e.g. law, health, business, depending on your background). Such conversations, even not real-time ones, can assist people who don’t know the right keywords to use (what’s known as the “vocabulary problem”).

My research has looked at the benefits of question-answering and at people’s processes and preferences during search. Many users report that they want to attempt a search on their own first, or don’t wish to interrupt their colleagues before giving it a shot independently. This suggests that early social support should be passive (as with presenting collective or friend-filtered social data).

But later in the process, if the searcher gets stuck on a problem, they often turn to a colleague for help. If systems had a way of identifying difficult queries or search-process inefficiencies, they could offer more explicit social support to searchers. Perhaps the system could identify a domain-specific expert from the user’s extended social circle. Information that this person has shared could be presented to the user, or this person could be suggested as a resource to chat with or email (depending on availability and preferences).
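A system could only offer this kind of explicit help if it can guess when a searcher is stuck. One naive, purely assumed heuristic: many query reformulations with few result clicks. A sketch:

```python
# Hypothetical heuristic for spotting a "stuck" search session. The
# thresholds and event format are illustrative assumptions only.

def looks_stuck(session, min_queries=4, max_click_rate=0.25):
    """session: ordered events, each ("query", text) or ("click", url).

    Returns True when the user has reformulated repeatedly but clicked
    on few results -- a plausible (if crude) signal of difficulty.
    """
    queries = [e for e in session if e[0] == "query"]
    clicks = [e for e in session if e[0] == "click"]
    if len(queries) < min_queries:
        return False  # too early to judge; keep social support passive
    return len(clicks) / len(queries) <= max_click_rate

session = [("query", "patent law basics"),
           ("query", "how to file a patent"),
           ("click", "example.com/guide"),
           ("query", "provisional patent"),
           ("query", "patent attorney vs agent")]
```

At that point, rather than interrupting, the system might surface an expert from the user’s extended circle as an optional resource.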

It should be clear by now that these three flavors of social search are complementary. Each has its pros and cons and is appropriate for different kinds of searches and during different stages of the search process. A powerful “social search engine” would be “smart” by making use of all three, while also exploiting the value of traditional algorithms.

Photos by: Who Wants to Be?, Claudia Lim and brewbooks.

Guest author: Brynn Evans is a PhD student in Cognitive Science at UC San Diego who uses digital anthropology to study and better understand social search.