
Social media users face unexplained censorship of content related to Israeli massacres in Gaza on Instagram and Facebook


Agencies and A News | MIDDLE EAST
Published October 31, 2023

As Israel imposed an internet blackout in Gaza on Friday, social media users posting about the grim conditions have contended with erratic and often unexplained censorship of content related to Palestine on Instagram and Facebook.

Since Israel launched retaliatory airstrikes in Gaza after the October 7 attack, Facebook and Instagram users have reported widespread deletions of their content, translations inserting the word "terrorist" into Palestinian Instagram profiles, and suppressed hashtags.

Instagram comments containing the Palestinian flag emoji have also been hidden, according to 7amleh, a Palestinian digital rights group that formally collaborates with Meta, which owns Instagram and Facebook, on regional speech issues.

Numerous users have reported to 7amleh that their comments were moved to the bottom of the comments section and require a click to display. Many of the remarks have something in common: "It often seemed to coincide with having a Palestinian flag in the comment," 7amleh's U.S. national organizer Eric Sype told The Intercept.

Users report that Instagram had flagged and hidden comments containing the emoji as "potentially offensive," TechCrunch first reported last week. Meta has routinely attributed similar instances of alleged censorship to technical glitches. Meta spokesperson Andy Stone confirmed to The Intercept that the company has been hiding comments that contain the Palestinian flag emoji in certain "offensive" contexts that violate the company's rules. He added that Meta has not created any new policies specific to flag emojis.

"The notion of finding a flag offensive is deeply distressing for Palestinians," Mona Shtaya, a nonresident fellow at the Tahrir Institute for Middle East Policy who follows Meta's policymaking on speech, told The Intercept.

Asked about the contexts in which Meta hides the flag, Stone pointed to the Dangerous Organizations and Individuals policy, and cited a section of the community standards rulebook that prohibits any content "praising, celebrating or mocking anyone's death." He said Meta does not have different standards for enforcing its rules for the Palestinian flag emoji.

It remains unclear, however, precisely how Meta determines whether the use of the flag emoji is offensive enough to suppress. The Intercept reviewed several hidden comments containing the Palestinian flag emoji that had no reference to any banned group. The Palestinian flag itself has no formal association with any militant group.

Some of the hidden comments reviewed by The Intercept contained only emojis and no other text. In one, a user commented on an Instagram video of a pro-Palestinian demonstration in Jordan with green, white, and black heart emojis corresponding to the colors of the Palestinian flag, along with emojis of the Moroccan and Palestinian flags.

In another, a user posted just three Palestinian flag emojis. Another screenshot seen by The Intercept showed two hidden comments consisting only of the hashtags #Gaza, #gazaunderattack, #freepalestine, and #ceasefirenow.

"Throughout our long history, we've endured moments where our right to display the Palestinian flag has been denied by Israeli authorities. Decades ago, Palestinian artists Nabil Anani and Suleiman Mansour ingeniously used a watermelon as a symbol of our flag," Shtaya said. "When Meta engages in such practices, it echoes the oppressive measures imposed on Palestinians."

Instagram and Facebook users have taken to other social media platforms to report other instances of censorship. On X, formerly known as Twitter, one user posted that Facebook blocked a screenshot of a popular Palestinian Instagram account he tried to share with a friend via private message. The message was flagged as containing nonconsensual sexual images, and his account was suspended.

On Bluesky, Facebook and Instagram users reported that attempts to share national security reporter Spencer Ackerman's recent article criticizing President Joe Biden's support of Israel were blocked and flagged as cybersecurity risks.

On Friday, the news site Mondoweiss tweeted a screenshot of an Instagram video about Israeli arrests of Palestinians in the West Bank that was removed because it violated the dangerous organizations policy.

Meta's increasing reliance on automated, software-based content moderation may spare human moderators from sorting through extremely disturbing and potentially traumatizing images. The technology, however, relies on opaque, unaccountable algorithms that can misfire, censoring content without explanation. That problem appears to extend to posts related to the Israel–Palestine conflict.

An independent audit commissioned by Meta last year determined that the company's moderation practices amounted to a violation of Palestinian users' human rights. The audit also concluded that the Dangerous Organizations and Individuals policy — which speech advocates have criticized for its opacity and overrepresentation of Middle Easterners, Muslims, and South Asians — was "more likely to impact Palestinian and Arabic-speaking users, both based upon Meta's interpretation of legal obligations, and in error."

Last week, the Wall Street Journal reported that Meta recently lowered the confidence threshold its automated systems must reach before suppressing "hostile speech" to 25 percent for the Palestinian market, a significant drop from the standard threshold of 80 percent.
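The Journal did not publish Meta's code, but the mechanism it describes, a confidence threshold that gates automated suppression, can be illustrated with a minimal sketch. Everything below is hypothetical except the two thresholds the Journal cited; the `StubClassifier`, the `should_suppress` function, and the market names are illustrative stand-ins, not Meta's actual implementation. The point is simply that lowering the threshold from 0.80 to 0.25 means posts the model is far less certain about get suppressed.

```python
# Minimal sketch of threshold-gated suppression. The classifier stub, names,
# and scores below are hypothetical; only the 0.80 default and the 0.25
# Palestinian-market threshold come from the Wall Street Journal's report.

DEFAULT_THRESHOLD = 0.80            # standard confidence required to suppress
REGIONAL_THRESHOLDS = {
    "palestinian_market": 0.25,     # lowered threshold reported by the WSJ
}

class StubClassifier:
    """Toy stand-in for a real hostile-speech model."""
    def score(self, text: str) -> float:
        # A trained model would return a real confidence; this stub
        # returns a fixed, mildly suspicious score for demonstration.
        return 0.30

def should_suppress(post_text: str, market: str, classifier) -> bool:
    """Suppress a post when the model's confidence that it is 'hostile
    speech' meets the threshold configured for the post's market."""
    threshold = REGIONAL_THRESHOLDS.get(market, DEFAULT_THRESHOLD)
    return classifier.score(post_text) >= threshold

clf = StubClassifier()
# The same borderline post (confidence 0.30) is suppressed in one market
# but left alone under the standard threshold.
print(should_suppress("example post", "palestinian_market", clf))  # True
print(should_suppress("example post", "elsewhere", clf))           # False
```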

The audit also faulted Meta for deploying a software scanning tool to detect violent or racist incitement in Arabic-language posts, but not in Hebrew-language ones. "Arabic classifiers are likely less accurate for Palestinian Arabic than other dialects … due to lack of linguistic and cultural competence," the report found.

Despite Meta's claim that the company developed a speech classifier for Hebrew in response to the audit, hostile speech and violent incitement in Hebrew are rampant on Instagram and Facebook, according to 7amleh.

"Based on our monitoring and documentation, it seems to be very ineffective," 7amleh executive director and co-founder Nadim Nashif said of the Hebrew classifier. "Since the beginning of this crisis, we have received hundreds of submissions documenting incitement to violence in Hebrew, that clearly violate Meta's policies, but are still on the platforms."

An Instagram search for a Hebrew-language hashtag roughly meaning "erase Gaza" produced dozens of results at the time of publication. Meta could not be immediately reached for comment on the accuracy of its Hebrew speech classifier.

The Wall Street Journal shed light on why hostile speech in Hebrew still appears on Instagram. "Earlier this month," the paper reported, "the company internally acknowledged that it hadn't been using its Hebrew hostile speech classifier on Instagram comments because it didn't have enough data for the system to function adequately."