Dear WeatherCat Netizens,
I'm sure most of you have heard about the study, run in 2012 and published in 2014, in which Facebook and academic researchers (including collaborators at Cornell and the University of California, San Francisco) deliberately manipulated the news feeds of almost 700,000 Facebook users. The news has generated considerable controversy, but careful thought about the matter has been more difficult to come by.
A good example of the response from Internet culture is this piece on "Good Morning Silicon Valley":
How do you Like it now? Facebook may have messed with your mind.
While the rest of the article isn't quite so naive, it starts with a very puzzling assertion:
It could've happened to you: Facebook experimented with manipulating users' emotions through their News Feeds. The research, which found that emotions can be contagious, involved almost 700,000 people, a small percentage of the social network's more than 1 billion users. Among the biggest questions: Who cares?
There are two more balanced articles to be found. The first is a report by BusinessWeek:
Facebook's Emotional Manipulation Test Was Unethical - and So Is the Rest of Social Media
This piece argues that manipulating the users of Facebook is basically unavoidable given the Facebook profit model. In order to demonstrate that ads on the system work, the Facebook staff need to try to extract the greatest effect from the information users experience.
This second piece, from The Register, makes a persuasive argument that the researchers in this case did not have a legitimate claim of implied consent:
FACEBOOK 'manipulated emotions' in SECRET EXPERIMENT on users
I wanted to offer some perspective on this as someone who had to file a human subjects protocol as part of PhD work. Whenever researchers expect to interact with human subjects beyond purely passive observation in public settings, they are expected to produce a document that is reviewed by a human subjects committee to ensure that the work won't be destructive.
Human subjects protocols basically consist of two thrusts: consent and harm management. The second component is reasonably clear, but keep in mind that some research is extremely invasive and risky. Testing new medications to cure cancer might kill the patient. Researchers doing this sort of work are expected to monitor the patients medically and provide medical aid if the "cure" turns out to be potentially lethal.
That brings us back to the first thrust: consent. The example of the new cancer drug is a good way to imagine this aspect of a human subjects protocol. It isn't enough to inform a patient that they are taking a new drug; the researchers are expected to go through all the risks. Consent means the research subject understands what they are about to go through, the risks involved, and the options for dealing with those risks as the experiment proceeds. Consent isn't merely about asking for permission; it is about informing the subject thoroughly about what might happen, AND about allowing them a way out if they don't like what is being proposed.
Whether or not the legalese of the Facebook user contract opens the door to such work from a legal standpoint, it is beyond any doubt that the researchers in this case did not obtain legitimate consent from these users from a research-protocol point of view. Nothing in the Facebook user contract warns users, including those with mental conditions, that they may be made to cope with circumstances more difficult than reality, circumstances that have been deliberately exaggerated or are outright fictitious.
According to the CDC, 1 in 10 adults in the United States suffers from depression, so something on the order of 70,000 of the individuals in this study were suffering from depression at the time. Not only did the researchers have no control over how depressed Facebook users would react to this manipulation; they couldn't even identify the potential sufferers. It should be clear that this is an extremely dangerous research practice, and I do hope these researchers will face disciplinary action for their part in it.
Unfortunately, there is no academic committee overseeing Facebook's own research and development staff. As the BusinessWeek article points out, Facebook conducts experiments of this type all the time, and for a reason that is difficult to refute: without revenue, the company will die. While the need to survive cannot be easily dismissed, the point about depressed people (and sufferers of every other sort of mental illness) still applies. Could Facebook be exacerbating the health conditions of its users without even being in a position to detect it?
Mark Zuckerberg may well be headed to replace Bill Gates as the cyber-CEO everybody loves to hate, but in this case I think the problem goes beyond any one company and raises questions that have loomed since the dot-com crash. In short:
"Is it reasonable to expect so much from the Internet for free?"When Good Morning Silicon Valley asks:
Among the biggest questions: Who cares?, we all should get worried. Alas, the people who should get most worried may not be able to: those who are
"addicted" to the Facebook way of interacting with the Internet. Once you have invested too much into a particular sort of Internet infrastructure - it may be all but impossible to extricate yourself from it - no matter how evil it might become. Worse still, is Facebook evil for trying to survive? The root of this evil lies with the users themselves who insist they can have their Facebook for free when everybody knows:
"there is no such thing as a free lunch."Sincerely, Edouard