A coalition of Palestinian and global digital rights groups, representing the #StopSilencingPalestine campaign, received a formal response from Meta to inquiries raised in a March 14, 2024 letter to Nick Clegg, Meta’s President of Global Affairs. The letter was sent following a February 22, 2024 meeting with the company’s executives and senior staff. It reiterated our concerns about the systemic censorship of Palestinians and pro-Palestinian advocates on Meta’s platforms, the proliferation of speech inciting violence against Palestinians, and the urgent need for policy shifts to better protect digital rights during times of crisis.

In its response, Meta laid out in generic terms how it has dealt with harmful content amid the ongoing events in Israel and Palestine, described the temporary measures it adopted “to keep people safe”, and explained its approaches to AI and automation, government takedown requests, and evidence retention. However, Meta failed to outline, in concrete terms, the steps it has taken to identify the negative impacts of its policies and actions on human rights, and how it is mitigating them, particularly in light of the plausibility of genocide and the perpetration of other atrocities in Gaza. While we welcome Meta’s continued engagement with us, we find the response insufficient to provide the level of transparency needed for adequate scrutiny of Meta’s actions.

In particular, the company’s response highlighted several issues that warrant further examination:

On temporary measures

The implementation of temporary measures is concerning, in particular the lowering of the confidence threshold at which Meta’s machine learning classifiers identify and automatically demote content for which the company has “high confidence that it violates [their] policies”. Lowering this threshold risks over-moderation of content about Palestine and Israel at a time when documentation of what is happening on the ground is crucial. Increased transparency and detail regarding these adjustments are therefore needed.

The use of machine learning classifiers to identify potentially harmful content, particularly with the admission that confidence thresholds may vary by language, raises questions about algorithmic bias and over-moderation. This is especially concerning for Arabic-language content compared with English or Hebrew content, given previous reports of content filter adjustments leading to disproportionate censorship of Palestinian content. The Wall Street Journal reported that, following the October 7 attack, Meta adjusted its content filters to apply stricter standards to content originating in the Middle East, and specifically Palestine: it lowered the threshold at which its algorithms detect and hide comments violating its Community Guidelines from 80% to 40% for content from the Middle East, and to just 25% for content from Palestine. Furthermore, Meta’s change to default settings restricting who can comment on public posts, intended to address “unwanted or problematic comments” even when they do not violate its rules, fails to protect users’ freedom of expression.
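To illustrate why these threshold changes matter, the following is a minimal, purely hypothetical sketch of how a confidence-threshold gate operates. It is not Meta’s actual code, and the example comments and scores are invented for illustration; the point is that the same classifier, given a lower bar, hides content it is far less certain about:

```python
# Illustrative sketch only: a hypothetical moderation gate, not Meta's system.
# A classifier assigns each comment a "violation confidence" score in [0, 1];
# any comment scoring at or above the threshold is automatically hidden.

def hidden_comments(scored_comments, threshold):
    """Return the comments whose classifier confidence meets the threshold."""
    return [text for text, score in scored_comments if score >= threshold]

# Hypothetical classifier outputs: (comment, confidence that it violates policy).
comments = [
    ("documenting an airstrike on our street", 0.30),  # likely a false positive
    ("sharing a news report from Gaza", 0.45),         # borderline score
    ("explicit call to violence", 0.92),               # clear violation
]

# Default threshold vs. the lowered regional thresholds reported by the WSJ.
for threshold in (0.80, 0.40, 0.25):
    flagged = hidden_comments(comments, threshold)
    print(f"threshold {threshold:.2f}: {len(flagged)} of {len(comments)} comments hidden")

# Output:
# threshold 0.80: 1 of 3 comments hidden
# threshold 0.40: 2 of 3 comments hidden
# threshold 0.25: 3 of 3 comments hidden
```

In this toy example, dropping the threshold from 0.80 to 0.25 triples the amount of hidden content, sweeping in posts the model scored as probably innocuous. This is the mechanism by which lowered thresholds translate into over-moderation.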

Despite the company’s claim that “most” temporary measures have been lifted, there is a lack of clear evidence supporting this assertion, prompting further inquiry into the status of those “temporary measures”. It is equally concerning if Meta has indeed lifted most of its temporary safety measures, which were last updated on its website on December 8, 2023, despite Israel’s ongoing war on Gaza and the rise in settler violence across the West Bank. These circumstances demand continuous, heightened human rights due diligence. Meta’s failure to publicly update its response to the ongoing war, together with the reversal of its safety measures, suggests that the company selectively prioritized the safety of some users rather than all users in the aftermath of Hamas’ attack on Israel on October 7, 2023. With the war entering its seventh month and hate speech and genocidal rhetoric continuing to proliferate, we cannot help but see this as another indicator of Meta’s discriminatory approach to people’s safety and human rights.

On human rights due diligence

In its response, Meta says that it has conducted “ongoing, integrated human rights due diligence throughout”. However, it does not indicate how it is mitigating the negative impact of its actions, or which changes or updates it has made to address the over-moderation of Palestine-related content and the widespread censorship issues documented by several civil society organizations and previously outlined in Meta’s 2022 human rights due diligence report.

Furthermore, the company did not mention any specific actions it is taking to prevent the spread of incitement to genocide on its platforms, particularly by Israeli government officials and politicians. This omission is especially troubling in light of the recent provisional measures order issued by the International Court of Justice and repeated statements by UN experts and other bodies on the rise of genocidal incitement and dehumanizing rhetoric online.

Meta has also failed to answer our question on whether it has conducted any heightened human rights due diligence to assess how its current content moderation and curation practices may have affected conflict dynamics or contributed to violations of international human rights law, international humanitarian law, and international criminal law by parties to the conflict. The UN Guiding Principles on Business and Human Rights make it explicitly clear that companies operating in situations of armed conflict must respect international humanitarian law and “treat all cases of risk of involvement in gross human rights abuses such as genocide as a matter of legal compliance, irrespective of the status of the law where the business activity is taking place”. We therefore remain gravely concerned about Meta’s lack of serious engagement on this issue.

On government takedown requests

We remain concerned about the lack of clarity and transparency surrounding voluntary government takedown requests, including those from the Israeli Cyber Unit and possibly other internet referral units. While Meta states in its response that it reviews all government requests “following a consistent global process”, the process it links to covers only requests concerning content that may violate local law; voluntary requests fall outside it. Users are therefore not notified when their content is removed as a result of government reporting, raising questions about accountability and the potential for censorship without user awareness.

On evidence retention

We welcome Meta’s efforts to allow accountability mechanisms to make requests for extended content retention, and to train such mechanisms on this process. However, Meta’s response shows a lack of recognition of the crucial role civil society organizations play in identifying potentially relevant content to support human rights investigations. Given the importance and time-sensitive nature of such requests, unless Meta formally allows civil society organizations to also submit requests for extended retention, potential evidence may not be flagged in time and could be deleted before it can be used for accountability purposes. Additionally, transparency around initial and extended retention timeframes is crucial to ensuring that potential evidence is not lost over the long duration of investigations and accountability processes.

As the coalition continues to advocate for non-discrimination, transparency, and accountability, Meta must demonstrate a genuine commitment to upholding human rights and protecting all voices on its platforms in these precarious times.