
Content Moderation Legislation Highlights Indifference to the “Bad Speech, Good Evidence” Dilemma

Updated: Oct 17, 2022

Marah Ajilat (J.D. Candidate, Class of 2025) is a contributor to Travaux. Her interests include human rights, business, and tech. Marah holds a B.A. in Politics from Oberlin College with a double minor in French and International Relations. Before law school, Marah worked with social justice-oriented nonprofits and progressive political candidates, leveraging digital advertising tools and big data to influence elections and expand fundraising efforts. Currently, she serves as a Thelton E. Henderson Scholar and a Startup Law Initiative Fellow. She is fluent in Arabic and French, and has working proficiency in Russian.


2022 is a consequential year for social media content moderation, the screening of online posts to determine whether they comply with government or platform-specific policies on hate speech, misinformation, and violence. Several governments have challenged social media companies’ role in content moderation. Some legislatures, like the French National Assembly and the German Bundestag, believe that social media companies are not doing enough to rein in hate speech and falsehoods. In contrast, other governments, such as the state governments of Florida, Michigan, and Texas, think that social media companies go too far in restraining users’ speech. Entwined in these debates is investigators’ critical use of social media content as evidence of human rights abuses and war crimes, a use that could be unfairly constrained if courts, companies, and legislatures cannot appropriately balance content moderation with open access to information.


Increasing Restrictions on Content Moderation


To begin with, some jurisdictions have taken an aggressive stance against content moderation. Social media companies like Google, Meta, and Twitter may now be sued for moderating content in Texas. On September 16, 2022, the United States Court of Appeals for the Fifth Circuit held that Texas House Bill 20, which prohibits large social media platforms from censoring content created, shared, or received by users in Texas based on “viewpoint,” is constitutional. This bill is not the only one of its kind: lawmakers in Michigan and Florida, concerned about social media platforms’ alleged censorship of conservative and right-wing media, have introduced similar legislative proposals. The September ruling was the Fifth Circuit’s second pass at the law; the court had earlier stayed a federal district court’s preliminary injunction against the bill, a stay the U.S. Supreme Court later vacated. It remains unclear whether the law will survive constitutional review in the U.S. Supreme Court.


The plaintiffs, trade organizations representing these social media companies, argued that they had a First Amendment right to choose what to publish or refrain from publishing. Corporations’ right to free speech is a longstanding principle of American constitutional law. The Fifth Circuit, however, rejected this argument, reasoning that these platforms act like “common carriers,” such as railroads and telephone companies, rather than newspapers, and that, as common carriers, they lack the right to moderate content. The Texas bill defines “censor” as “to block, ban, remove, deplatform, demonetize, de-boost, restrict, deny equal access or visibility to, or otherwise discriminate against expression.” In effect, social media companies must now carry content they would otherwise block from being created, shared, or received by users in Texas. Otherwise, they risk being sued for engaging in “viewpoint” discrimination.


The only four exceptions to the Texas law are content that:

  • can be censored by social media companies under federal law,

  • “is the subject of a referral or request from an organization with the purpose of preventing the sexual exploitation of children and protecting survivors of sexual abuse from ongoing harassment,”

  • “directly incites criminal activity or consists of specific threats of violence” against people or groups based on their “race, color, disability, religion, national origin or ancestry, age, sex, or status as a peace officer or judge,” or

  • is “unlawful” under the U.S. Constitution, the Texas Constitution, and federal and state law.


This decision poses countless problems for users and social media companies alike, from practical concerns about which content Texas has jurisdiction over to substantive concerns about which “viewpoints” warrant protection. Some have warned that social media platforms may no longer be able to remove “content touting Nazism,” videos promoting terrorism by groups like the Islamic State, misinformation, or hate speech. Others have cautioned that social media companies may end up applying the Texas law to all users to stop anyone from slipping through the cracks. According to Matt Schruers, president of the Computer and Communications Industry Association (one of the trade organization plaintiffs), the risks posed to Americans by “[f]orcing private companies to give equal treatment to all viewpoints on their platforms” are enormous, regardless of how offensive or dangerous those viewpoints are.


Risks of Content Moderation for Human Rights Investigations


Many debates surrounding the recent global push for content moderation reform have focused on content moderation’s effects on users’ and social media companies’ free speech. But others have a stake in the matter: human rights investigators.


As new technologies have made it possible to use social media content as evidence in the courtroom, human rights practitioners have increasingly turned to Facebook, YouTube, and other platforms to gather evidence. This is especially true for international criminal investigators, who often have very limited access to witnesses on the ground and lack the coercive power to compel state cooperation. Witnesses and survivors of human rights violations have likewise turned to social media, posting photos and videos in the hope of reaching international investigative bodies or finding other pathways to accountability.


The censoring of objectionable content, as critical as it is to keeping social media free of harm, complicates investigators’ jobs. First, the content most at risk of censorship is also the content most likely to be of value to investigators and prosecutors. Second, once a platform takes down such content, investigators and lawyers have no recourse. The removal of content that violates Meta’s “Community Standards,” for instance, is unlikely to be reversed so that the content can be used as evidence in court. Courts and social media companies may conclude that a person’s right not to be exposed to harmful content outweighs that content’s evidentiary value, a tension known as the “bad speech, good evidence” dilemma. This leads investigators to “race against [platforms],” collecting and preserving content before it is reported or detected by content moderation algorithms. Unsurprisingly, the algorithms often outpace the investigators. When criminal justice interests clash with fundamental rights in this way, it becomes incredibly difficult to reconcile, for example, shielding youth from white supremacist propaganda with preserving investigators’ access to videos of Russian war crimes in Ukraine.


Renewed Commitments to Digital Evidence Lockers


Some companies have demonstrated that the pursuit of justice for human rights victims and freedom from online harm can be reconciled. In 2021, Meta voluntarily shared “millions of items that could support allegations of war crimes and genocide” with the Independent Investigative Mechanism for Myanmar. However, neither Meta’s human rights policy nor its most recent annual report says how it seeks to build on its experience working with Burmese human rights investigators. Still, Meta’s actions have at least started similar conversations among other platforms like TikTok, Twitter, and YouTube over how to balance preventing “bad speech” with protecting “good evidence.”


This content moderation momentum presents a much-needed opportunity for lawmakers and social media companies to invite human rights practitioners to the table. In May 2022, four high-ranking Democratic members of the U.S. House of Representatives called on the chief executives of Meta, TikTok, Twitter, and YouTube “to preserve content, and the metadata associated with this content, potentially providing evidence of war crimes and human rights violations in Ukraine.” This was novel: past hearings on bills restricting content moderation in Florida, Michigan, and Texas did not include testimony from human rights practitioners about the impact of content moderation on their work. More legislatures should follow the House’s example, bring human rights investigators into these discussions, and consider the implications of censoring violent content in the absence of a systematic mechanism to preserve and share probative data with relevant authorities. Moreover, human rights practitioners have already proposed several models for what should happen to valuable content once it is removed. UC Berkeley’s Human Rights Center, for instance, has mapped various models for building a “digital locker” of evidence that investigators may access.


Because companies’ sharing of digital evidence remains voluntary in most jurisdictions, it is now incumbent upon social media companies to ground their data policies in the interests of human rights defenders, and upon judicial forums like the U.S. Supreme Court to interpret laws in a way that appropriately balances open access to information with the need to curb hate speech and misinformation.

