Background
Members of the Rohingya community — a largely Muslim ethnic minority in Myanmar — filed a class action against Meta alleging that anti-Rohingya hate speech spread on Facebook incited genocidal violence against their community. The plaintiffs pointed to Facebook’s algorithmic content-delivery system, introduced in 2009, which they said promoted “toxic posts” because users interacted with them more, driving advertising revenue. In Myanmar, where Facebook came pre-loaded on mobile devices starting in 2011, the platform became the dominant information source for tens of millions of people with limited digital literacy.
The plaintiffs argued that Meta’s design choices — algorithmic amplification of inflammatory content, “social rewards” like likes and shares that encouraged posting hateful material, and inadequate Burmese-language content moderation — made Facebook a catalyst for real-world violence. They sought at least $150 billion in damages under California tort law.
The Northern District of California (Judge Yvonne Gonzalez Rogers) dismissed the case on timeliness grounds without reaching the Section 230 question. Plaintiffs appealed to the Ninth Circuit.
The Court’s Holding
The Ninth Circuit affirmed the dismissal, but on different grounds: Section 230 of the Communications Decency Act barred all of the plaintiffs’ claims. Writing for the panel, Judge Ryan Nelson applied the three-part test from Barnes v. Yahoo!: Meta is a provider of an interactive computer service; the plaintiffs’ claims sought to treat Meta as a publisher of third-party content; and Meta did not make a “material contribution” to the anti-Rohingya posts themselves. Because Facebook’s algorithm promoted content based on engagement metrics rather than the specific viewpoints expressed, Meta was not an “information content provider” and retained its Section 230 shield.
On the choice-of-law question, the court found that even assuming California’s governmental interest test could theoretically require applying Myanmar law instead of Section 230, the plaintiffs failed to carry their burden. Myanmar lacked any statutory or case law imposing tort liability on social media companies, so its interest in the dispute was “hypothetical” rather than real.
Two notable concurrences signaled deep unease with the result. Judge Berzon, joined by Judge Fletcher, wrote that Ninth Circuit precedent has “unduly expanded” Section 230 immunity and stretched the term “publisher” past the point of recognition. She urged the full court to reconsider en banc whether Section 230 should extend to algorithmic recommendation of content. Judge Nelson himself wrote separately to acknowledge that “this Court has over-read Section 230,” creating an “all-purpose liability shield” that strays from the statute’s original text.
Key Takeaways
- Algorithmic amplification remains protected. Under current Ninth Circuit precedent, boosting third-party content through engagement-based algorithms does not strip a platform of Section 230 immunity, even when the boosted content contributes to real-world harm.
- Two of three judges want to narrow the law. Both the author of the majority opinion and a concurring judge explicitly called for rethinking how broadly Section 230 applies — a rare signal that the court may be open to narrowing platform immunity in a future en banc case.
- Foreign law is unlikely to displace Section 230. The court’s choice-of-law analysis suggests that plaintiffs will struggle to invoke foreign countries’ rules to escape Section 230, particularly when the foreign jurisdiction lacks developed case law on social media liability.
- The “material contribution” test remains the dividing line. Platforms lose Section 230 protection only when they materially contribute to the creation or development of unlawful content — generic algorithmic sorting does not cross that threshold.
Why It Matters
This case sits at the intersection of the most significant debates in internet law: whether platforms should be liable for harms caused by their algorithms, and whether a 1996 statute designed for a fledgling internet still makes sense in the age of AI-driven content feeds. The Rohingya genocide is among the most horrific events linked to social media, and the court’s ruling that Section 230 still applies — even as the judges writing the opinion express discomfort with the result — underscores the gap between current law and the scale of harm that algorithmic amplification can cause.
For tech companies, the decision preserves the status quo: engagement-based algorithms remain shielded from tort liability. But the concurrences are a clear warning. If the Ninth Circuit takes this issue en banc, or if the Supreme Court revisits the Section 230 question it left unresolved in Gonzalez v. Google (2023), the legal landscape for algorithmic recommendation could shift dramatically.