There were some good arguments on this topic, ranging from aesthetic rebuttals to logical deconstructions. Here are four I liked:
1. Tal Yarkoni, Director of the Psychoinformatics Lab at the University of Texas at Austin, writes on his blog,
“… it’s worth keeping in mind that there’s nothing intrinsically evil about the idea that large corporations might be trying to manipulate your experience and behavior. Everybody you interact with–including every one of your friends, family, and colleagues–is constantly trying to manipulate your behavior in various ways. Your mother wants you to eat more broccoli; your friends want you to come get smashed with them at a bar; your boss wants you to stay at work longer and take fewer breaks. We are always trying to get other people to feel, think, and do certain things that they would not otherwise have felt, thought, or done. So the meaningful question is not whether people are trying to manipulate your experience and behavior, but whether they’re trying to manipulate you in a way that aligns with or contradicts your own best interests. The mere fact that Facebook, Google, and Amazon run experiments intended to alter your emotional experience in a revenue-increasing way is not necessarily a bad thing if in the process of making more money off you, those companies also improve your quality of life. I’m not taking a stand one way or the other, mind you, but simply pointing out that without controlled experimentation, the user experience on Facebook, Google, Twitter, etc. would probably be very, very different–and most likely less pleasant.”
2. Yarkoni’s argument brings us to these tweets.
Didn’t get it? Chris Dixon explains.
If you a/b test to make more money that's fine, but if you a/b test to advance science that's bad? I don't get it.
— Chris Dixon (@cdixon) June 29, 2014
I didn’t spot these tweets. TechCrunch did, and it brings up the relevant comparison with A/B testing. A/B testing is a technique whereby web designers optimize user experience by showing different layouts to different user groups, then settle on the best layout based on how each group responded. As Dixon asks, is it okay if it’s done all the time on sites that want to make money by giving you a good time?
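The mechanics of an A/B test are simple enough to sketch. Below is a minimal, hypothetical version in Python – simulated users, two layouts, a deterministic bucketing rule, and a click-rate comparison. The layout names and click probabilities are invented for illustration, not taken from any real experiment.

```python
import random

def assign_variant(user_id: int) -> str:
    """Deterministically bucket each user into layout A or B by id parity."""
    return "A" if user_id % 2 == 0 else "B"

def run_ab_test(n_users: int, click_prob: dict, seed: int = 0) -> dict:
    """Simulate showing each user their assigned layout and recording clicks.

    click_prob maps variant name -> hypothetical probability a user clicks.
    Returns the observed click-through rate per variant.
    """
    rng = random.Random(seed)  # fixed seed so the simulation is reproducible
    shown = {"A": 0, "B": 0}
    clicks = {"A": 0, "B": 0}
    for uid in range(n_users):
        variant = assign_variant(uid)
        shown[variant] += 1
        if rng.random() < click_prob[variant]:
            clicks[variant] += 1
    return {v: clicks[v] / shown[v] for v in ("A", "B")}

# Hypothetical scenario: layout B converts slightly better than layout A.
rates = run_ab_test(10_000, {"A": 0.10, "B": 0.12})
best = max(rates, key=rates.get)
print(rates, "winner:", best)
```

The point of the comparison: whether the manipulated variable is a page layout or an emotional tone, the experimental structure – randomly split users, vary one thing, measure the behavioral response – is the same.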
You could argue that we signed up to be manipulated like that, not like this – see #4. Or you could argue that this was different because Facebook was just being Facebook – but the social scientists weren’t being ethical. This is true. To quote from the TechCrunch piece,
A source tells Forbes’ Kashmir Hill it was not submitted for pre-approval by the Institutional Review Board, an independent ethics committee that requires scientific experiments to meet stern safety and consent standards to ensure the welfare of their subjects. I was IRB certified for an experiment I developed in college, and can attest that the study would likely fail to meet many of the pre-requisites.
3. The study itself, which appeared in the Proceedings of the National Academy of Sciences, and which it appears not many have read. It reports a statistically significant result: emotions are contagious over Facebook. But as Yarkoni demonstrates, its practical significance is minuscule:
… the manipulation had a negligible real-world impact on users’ behavior. To put it in intuitive terms, the effect of condition in the Facebook study is roughly comparable to a hypothetical treatment that increased the average height of the male population in the United States by about one twentieth of an inch (given a standard deviation of ~2.8 inches).
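To unpack the arithmetic behind that analogy: a standardized effect size (Cohen's d) converts to real-world units by multiplying by the population standard deviation. The d below is a hypothetical value chosen to reproduce Yarkoni's "one twentieth of an inch" figure, not a number quoted from the paper.

```python
# Convert a standardized effect size (Cohen's d) into real-world units
# by multiplying by the population standard deviation.
HEIGHT_SD_INCHES = 2.8   # SD of US adult male height, per the quote above
d = 0.018                # illustrative standardized mean difference

shift_inches = d * HEIGHT_SD_INCHES
print(round(shift_inches, 3))  # ~0.05 inches, i.e. about 1/20 of an inch
```

In other words, a "statistically significant" effect at Facebook's sample sizes can correspond to a shift far too small to notice in any individual.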
4. Facebook’s Terms of Service – to quote:
We use the information we receive about you in connection with the services and features we provide to you and other users like your friends, our partners, the advertisers that purchase ads on the site, and the developers that build the games, applications, and websites you use. For example, in addition to helping people see and find things that you do and share, we may use the information we receive about you:
… for internal operations, including troubleshooting, data analysis, testing, research and service improvement.
IMO, the problems appear to be:
- The social scientists didn’t get informed consent from the subjects of their experiments.
- Facebook’s ToS doesn’t clearly define what counts as a scientific experiment – and defining such a thing will prove very difficult and is unlikely ever to be implemented.
To-do: Find out more about the IRB and its opinions on this experiment.