Social media platforms have developed into incredibly useful resources for professional and citizen journalists, and have allowed people to learn about and read stories that may never have been published in traditional media. Sharing on just one of a few large platforms like Facebook, Twitter, and YouTube may mean the difference between a story being read by a few hundred versus tens of thousands of people.
Unfortunately, these same platforms have taken on the role of censor. They have created moderation policies meant to encourage civil speech on their platforms, but simply put: they are not very good at enforcing them. These moderation policies are applied unevenly, often without an appeal process, sometimes relying on artificial intelligence to flag content, and usually without transparency into the decision-making process. The result is the censorship and blocking of content of all types.
Globally, these content takedown processes often ignore the important evidentiary and journalistic roles content can play in countries where sharing certain information carries consequences far beyond those in the U.S. We recommend that any intermediary takedown practice include due process and be transparent, as recommended in our Manila Principles. Yet, as these examples demonstrate, social media platforms often make censorship decisions without due process, without transparency, and with end results that would leave most people scratching their heads.
We’re regularly documenting censorship and content takedowns like these on Onlinecensorship.org, a platform to document the who, what, and why of content takedowns on social media sites. Onlinecensorship.org is a project of the Electronic Frontier Foundation (EFF) and Visualizing Impact.
While there are hundreds, possibly thousands, of examples, here are seven of the most egregious instances of social media platforms censoring journalism in 2017.
Social media platforms can host video or photographic evidence that can be used to build human rights abuse cases, especially in situations where keeping the videos or photos on a local hard drive is unsafe due to potential loss or retaliation, or in instances where larger organizations have been blocked. But first-hand accounts like these are at constant risk on platforms like YouTube and Facebook. YouTube in particular has implemented artificial intelligence systems to identify and remove violent content that may be extremist propaganda or disturbing to viewers, and, according to a report in The Intercept, removed documentation of the civil war in Syria. Facebook, meanwhile, removed photos and images of abuses by the Myanmar government against the Rohingya ethnic minority.
In November, Katie Notopoulos, a journalist for Buzzfeed, was banned from Twitter after a six-year-old tweet was "mass-reported": subjected to a coordinated campaign in which many people reported it at once. The 2011 tweet read "Kill All White People." Her account was locked until the offending tweet was removed. Twitter's inconsistent content policies allow for this sort of targeted harassment, while making it difficult to know what is and what is not "acceptable" on the platform.
In December, Facebook banned all links and all publications from independent Ukrainian news website Liga.net. It has since restored the links and posts, and is completing an internal investigation. According to Liga, Facebook told them they were banned because of "nudity." A Facebook representative told us that they were blocked because they had "triggered a malicious ad rule." Murky moderation policies mean organizations can be banned and then given confusing, contradictory answers about why it's happening and what they can do about it. A single platform with this lack of transparency should not be able to flip a switch and cut off a majority of the traffic to an entire domain without offering a concrete explanation to affected users.
In August, the Indian government asked Twitter to suspend over two dozen Twitter accounts and remove over 100 tweets—some belonging to journalists and activists—that talked about the conflict in Kashmir, or showed sympathy for Kashmiri independence movements. The Indian government claimed the tweets violated Section 69A of India’s Information Technology Act, which allows the government to block online content when it believes the content threatens the security, sovereignty, integrity, or defense of the country.
The Indian government reported the tweets and Twitter accounts, and Twitter contacted the users explaining they would be censored. There were no individual explanations given for why these tweets or accounts were chosen, beyond highlighting the conflict in Kashmir.
Pulitzer prize-winning journalist Matthew Caruana Galizia was locked out of his Facebook account after sharing four posts that Facebook deleted for violating the social network's community standards. The four posts contained allegations against Malta's prime minister, his chief of staff, and his minister of energy. The posts included images of documents from the Panama Papers leak, a collection of 11.5 million files put together by the International Consortium of Investigative Journalists, of which he is a member.
It’s unclear what community standard Facebook applied to delete the photos and lock the account, although it seems that it was due to the materials containing private information about individuals. Facebook has since announced that material that would otherwise violate its standards would be allowed if it was found to be “newsworthy, significant, or important to the public interest.” However, the expectation that Facebook moderators should decide what is newsworthy or important is part of the problem: the platform itself, through an undisclosed process, continues to be the gatekeeper for journalistic content.
Alex Zaragoza, a writer for San Diego CityBeat, had links to her article removed from Facebook because, according to the company, it was an "attack." The article, entitled "Dear dudes, you're all trash," critiqued men for their surprise and obliviousness in light of multiple high-profile sexual harassment scandals.
Presumably, the post ran afoul of Facebook’s policy against “hate speech,” which includes attacks against a group on the basis of gender. But as ProPublica noted this summer, those standards aren’t applied evenly: “White men” are a protected group, for example, but “black children” aren’t.
If Facebook is going to continue to encourage publishers to post their stories on the platform first, it needs to consider the effect its rules have on journalistic content. Facebook has made efforts in the past to modify its standards for historically significant content. For example, after much controversy, it decided to allow users to share the iconic Vietnam War photo of the "Napalm Girl," recognizing "the history and global importance of this image in documenting a particular moment in time." It should consider doing the same for contemporary newsworthy content—especially content that expresses valuable critique and dissent from minority voices—that would otherwise run afoul of its rules.
The Kingdom of Saudi Arabia is one of the world’s most prolific censors. American companies—including Facebook and Google—have at times in the past voluntarily complied with content restriction demands from Saudi Arabia, though we know little about their context.
In June, Medium complied with requests from the Saudi government to restrict access to content from two publications: Qatar-backed Al Araby Al Jadeed ("The New Arab") and The New Khaliji News. In the interest of transparency, the company sent both requests to Lumen, a database that has collected and analyzed millions of takedown requests since 2001.
In September, Snap disappointed free expression advocates by joining the list of companies willing to team up with Saudi Arabia against Qatar and its media outlets. The social media giant pulled the Al Jazeera Discover Publisher Channel from Saudi Arabia. A company spokesperson told Reuters: “We make an effort to comply with local laws in the countries where we operate.”
This article is part of our Year In Review series. Read other articles about the fight for digital rights in 2017.