When Facebook announced changes to how it will conduct online research, there was one glaring omission in its new guidelines: There’s no mention of how the social network will treat its users moving forward.
Facebook faced quite the backlash from its emotional manipulation study published earlier this summer, in which it deliberately showed some users more positive or negative posts to see how they affected mood. In an effort to placate its critics with more transparency, the company issued new guidelines on Thursday to help it conduct online experiments more responsibly.
The framework includes a more thorough vetting process for research proposals; a review panel that includes “senior subject-area researchers” and members of multiple teams at Facebook; a six-week training program to educate employees on research practices; and a new research website to publish Facebook’s academic studies.
Facebook wants you to blindly trust it to be better, and not to worry about potentially becoming a participant in an experiment you didn’t sign up for. But Thursday’s blog post doesn’t instill that much confidence.
“What’s glaringly missing in this statement is the word ‘ethics’,” said Reynol Junco, an Iowa State professor and faculty associate at Harvard’s Berkman Center, in an interview. “There’s really no discussion of how they’re going to address the ethical concerns, and who their ethical experts are going to be, and what their ethical review process looks like.”
I spoke with Junco earlier this year, and he said the problem with the Facebook study—and what made it different from the research other companies conduct as a form of A/B testing—was the potential for harm in its experiments. As he said at the time:
Is what you get from the research worth doing the intervention, and if the answer is yes, what are you going to do to minimize the effects?
Facebook is silent in this regard.
When Facebook first published the emotional contagion study, one of the biggest concerns was that the company did not get informed consent from users—meaning people had no idea they were part of an experiment. Facebook manipulated people psychologically without asking them first.
The mood manipulation study may have been legal, but perhaps not ethical. According to The Atlantic, the experiments took place before any of the researchers consulted an institutional review board, a body that exists primarily to ensure the protection of human research subjects. Facebook’s recent blog post says it will engage with the academic community, but doesn’t say whether it will seek approval from review boards before doing similar research.
The Electronic Privacy Information Center, a privacy watchdog organization, filed a complaint with the Federal Trade Commission claiming Facebook broke the law when it ran the experiment. That’s because the social network didn’t state specifically in its data policy that user information could be used in research.
Facebook has since revised its policy, although it’s not yet clear whether that change constitutes sufficient “informed consent” for future research purposes.
“The devil’s in the details—it’s a nice statement, but how is this going to work in practice?” Junco said. “I don’t see any talk about how … strong the user protections are going to be. They don’t really say how this isn’t going to happen again—is it just going to happen again, and they’ll say, look, we have clear guidelines and we have a panel?”
The guidelines are a good start, though, and the increased transparency is at least a somewhat promising sign. Facebook plans to apply the guidelines to both internal and public-facing experiments, for what that’s worth.
Lead photo by Robert Scoble