If you were able to get away from your computer over the weekend, you might have missed the Internet uproar over an academic study that was done in coordination with Facebook.
The study sought to find out how people would respond to negative or positive streams of commentary in their news feeds. Finding out, of course, required users to actually see that sort of upbeat or negative commentary. Facebook worked in conjunction with researchers from Cornell University and UC San Francisco to make that happen.
Over the course of about a week in early 2012, nearly 700,000 users were subjected to experimentation, apparently unwittingly. (All users agree that their data can be used for research purposes when they sign Facebook’s terms and conditions.) The researchers reduced the number of posts using words with positive or negative emotional implications in those users’ feeds, and then tested whether the users’ own posts reflected what they had seen during that week of drummed-up positivity or negativity.
“People who had positive content experimentally reduced on their Facebook news feed, for one week, used more negative words in their status updates. When news feed negativity was reduced, the opposite pattern occurred: Significantly more positive words were used in peoples’ status updates,” one researcher said in a press release.
The situation is obviously ethically iffy (at best) and has a whole lot of users upset. The term “emotional manipulation” rarely comes across well.
But that Facebook would do something like this can’t really be all that surprising to anybody, can it?
Over the weekend, one of Facebook’s data scientists admitted the study was somewhat creepy and suggested the company’s internal policies have changed since the study was conducted in 2012. There’s no obvious reason not to take him at his word; the company probably has tightened up some. But we’re still talking about an organization that derives nearly all of its revenue from advertising. Manipulating your news feed is part and parcel of that process.
Let’s just shoot back to late last year. That’s when we started to hear about the plight of small business owners, who said they saw their “reach”—the number of news feeds their posts show up in—cut down significantly. If you, as a consumer, followed your neighborhood restaurant, it suddenly became less likely that you would see that restaurant’s posts announcing the night’s specials or reminding you about trivia night. Facebook’s motivation was to get more businesses to pay to reach its audience. The end result for consumers was fewer updates from the businesses they chose to follow, and more from those they did not choose to follow but who paid to reach them. That is one recent, high-profile example of Facebook toying with the news feed in a way that directly affects what its end users see.
Anyway, none of that is meant as a condemnation of Facebook as a service, nor is it meant as an endorsement of the practice at hand. No question, directly adjusting the emotional connotation of the content its users are exposed to for research purposes comes off a little more mad scientist than pushing ads into your news feed. But the incident should serve as a gentle reminder: your news feed is Facebook’s to study, and its advertisers’ to benefit from. It’s overly simplistic to rely on the old cliché that if you’re not paying for a service, you’re the product. Facebook obviously needs to placate its users lest it lose them. But at the end of the day, your news feed is part of the assembly line that leads to advertisements meeting your eyeballs, not the other way around.