When it comes to dealing with violent and potentially offensive content, Facebook has made plenty of missteps. Now, the world's biggest social network is trying to find a middle ground between a completely hands-off approach and the kind of stifling gatekeeping that would elicit (and, in the past, has elicited) cries of censorship.
Facebook is beginning to show warnings on top of content flagged as graphic, requiring users to click through before they can view it. The company is also looking to hide such content entirely from its younger users (ages 13 to 17).
Over the past couple of years, Facebook has flip-flopped on how it wants to handle graphic content. In 2013, the company bowed to public outrage, online petitions, and harsh criticism from family groups and banned a graphic beheading video that had been circulating on the site.
Fast forward a few months, and Facebook was singing a different tune. The company reversed the ban on the video, and in doing so instituted a new policy to govern similar content.
Soon after, Facebook made an official change to its community standards. Here’s Facebook’s current stance on graphic content:
Facebook has long been a place where people turn to share their experiences and raise awareness about issues important to them. Sometimes, those experiences and issues involve graphic content that is of public interest or concern, such as human rights abuses or acts of terrorism. In many instances, when people share this type of content, it is to condemn it. However, graphic images shared for sadistic effect or to celebrate or glorify violence have no place on our site.
When people share any content, we expect that they will share in a responsible manner. That includes choosing carefully the audience for the content. For graphic videos, people should warn their audience about the nature of the content in the video so that their audience can make an informed choice about whether to watch it.
But here’s the thing – expecting people to share content in a responsible manner and hoping that they’ll warn people that they’re about to see someone’s head being chopped off is naive at best. Facebook isn’t naive about these sorts of things – not really. That’s why the company laid the groundwork for this latest move way back in 2013.
“First, when we review content that is reported to us, we will take a more holistic look at the context surrounding a violent image or video, and will remove content that celebrates violence. Second, we will consider whether the person posting the content is sharing it responsibly, such as accompanying the video or image with a warning and sharing it with an age-appropriate audience,” said Facebook at the time. And the company did experiment with warnings for graphic content – but they never went wide.
Now, it appears they are. A new warning is reportedly appearing for some users on top of a video of the death of policeman Ahmed Merabet, who was killed in Paris by a terrorist involved in the Charlie Hebdo attacks.
“Why am I seeing a warning before I can view a photo or video?” asks a recently posted question on Facebook’s help page.
“People come to Facebook to share their experiences and raise awareness about issues that are important to them. To help people share responsibly, we may limit the visibility of photos and videos that contain graphic content. A photo or video containing graphic content may appear with a warning to let people know about the content before they view it, and may only be visible to people older than 18,” says Facebook.
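To put that policy in concrete terms, here is a minimal sketch of the viewing rules it describes: flagged content is hidden from under-18 viewers, and adults see it only after clicking through a warning. This is entirely hypothetical TypeScript for illustration; none of the names or structures come from Facebook's actual systems.

```typescript
// Hypothetical sketch of the viewing rules described in Facebook's help answer.
// All names and types here are assumptions, not Facebook's real code.

interface Viewer {
  age: number;
  acknowledgedWarnings: Set<string>; // post IDs this viewer has clicked through
}

interface Post {
  id: string;
  isGraphic: boolean; // set by the content review process
}

type Visibility = "show" | "warn" | "hide";

function visibilityFor(viewer: Viewer, post: Post): Visibility {
  if (!post.isGraphic) return "show";
  if (viewer.age < 18) return "hide"; // minors never see flagged content
  return viewer.acknowledgedWarnings.has(post.id)
    ? "show"  // adult already clicked through the warning
    : "warn"; // show the interstitial warning first
}

// Example: a 16-year-old never reaches the warning; a 25-year-old must click through once.
const teen: Viewer = { age: 16, acknowledgedWarnings: new Set() };
const adult: Viewer = { age: 25, acknowledgedWarnings: new Set() };
const video: Post = { id: "example-flagged-video", isGraphic: true };

console.log(visibilityFor(teen, video));  // "hide"
console.log(visibilityFor(adult, video)); // "warn"
adult.acknowledgedWarnings.add(video.id);
console.log(visibilityFor(adult, video)); // "show"
```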
A Facebook spokesperson told the BBC that “the firm’s engineers were still looking to further improve the scheme,” which could “include adding warnings to relevant YouTube videos.”
Apparently, Facebook was pressured both externally and internally, including by its safety advisory board, to do more to protect users (especially kids) from graphic content.
Facebook: a warning before watching a potentially shocking video http://t.co/utxAzb92G6 pic.twitter.com/kZraYoRAYn
— Thomas Coëffé (@Thom404) January 14, 2015
Of course, there’s a whole other group of people that Facebook is worried about protecting.
Video is a-boomin’ on Facebook. The network serves, on average, over a billion video views per day (almost one per user), and in the past year the number of video posts per person has increased 75% globally and 94% in the US. That matters to advertisers. What also matters to advertisers? That their smoothie ads aren’t running up against beheading videos.
Adding warnings to graphic content is a smart move. Not only does it let Facebook keep the content on the site and thus dodge the “free speech!” cries, it lets advertisers feel safer about advertising there. It also puts the onus on users: hey, we told you it was bad, but you clicked anyway … your choice!
Remember, Facebook isn’t a haven for free speech. It never will be. Facebook doesn’t owe you free expression. The company can do whatever it wants and censor as much content as it pleases. Considering that, a little warning before graphic content is better than no content at all, right?
Image via Mark Zuckerberg, Facebook