Background
The Rohingya are a largely Muslim ethnic minority in Myanmar who have faced decades of persecution, culminating in what plaintiffs describe as genocide carried out by military and civilian forces. When Facebook launched in Myanmar in 2011, Meta partnered with telecom companies to pre-load the app onto mobile devices, making Facebook the primary gateway to the internet for tens of millions of people in the country.
Plaintiffs — two displaced Rohingya women proceeding under pseudonyms — filed a putative class action alleging that Facebook’s design amplified anti-Rohingya hatred and helped incite real-world violence. Their core theory: Facebook’s algorithmic content recommendation system, introduced in 2009, promoted posts that generated high user engagement. Because inflammatory, hate-filled content tends to attract more interaction than benign posts, the algorithm systematically boosted anti-Rohingya content across Myanmar. This created a feedback loop — users who posted hateful content received “social rewards” (likes, comments, shares), encouraging them to post more. Plaintiffs alleged that Meta knew about this dynamic, lacked adequate Burmese-language content moderation, and failed to act despite internal warnings.
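To make the alleged dynamic concrete, here is a minimal sketch of engagement-weighted feed ranking. This is a hypothetical illustration of the mechanism plaintiffs describe, not Meta's actual code; the post examples, weight values, and the `engagement_score` function are invented for illustration.

```python
# Hypothetical illustration of engagement-weighted ranking -- NOT Meta's
# actual algorithm. Posts, weights, and scores are invented to show the
# feedback-loop dynamic plaintiffs allege.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    comments: int
    shares: int

def engagement_score(post: Post) -> float:
    # Rank purely by interaction counts; the content itself is never
    # evaluated, so posts that provoke the most reactions rise to the
    # top of the feed regardless of what they say.
    return 1.0 * post.likes + 2.0 * post.comments + 3.0 * post.shares

def rank_feed(posts: list[Post]) -> list[Post]:
    return sorted(posts, key=engagement_score, reverse=True)

feed = [
    Post("benign community update", likes=40, comments=3, shares=1),
    Post("inflammatory rumor", likes=55, comments=60, shares=80),
]

for post in rank_feed(feed):
    # The inflammatory post takes the top slot, reaching more users,
    # attracting more engagement, and ranking even higher next cycle.
    print(f"{engagement_score(post):7.1f}  {post.text}")
```

On plaintiffs' theory, the loop closes through the "social rewards" step: higher placement yields more likes, comments, and shares, which feed back into the score on the next ranking pass.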
Meta removed the case to the Northern District of California and moved to dismiss. The district court dismissed the amended complaint as untimely. On appeal, the Ninth Circuit affirmed on a different ground — Section 230 immunity — without reaching the timeliness question.
The Court’s Holding
Writing for the panel, Judge Ryan Nelson applied the Ninth Circuit’s three-step Barnes test for Section 230 immunity. The court held that all three elements were satisfied and that Meta was entitled to dismissal.
Choice of law. The court first addressed whether California’s choice-of-law rules could require application of Myanmar law in place of Section 230. Applying California’s governmental interest test, the court concluded that plaintiffs failed to carry their burden. While Myanmar has an obvious interest in protecting its citizens from violence, the court found that Myanmar’s interest in regulating American internet companies’ liability was “insufficiently incorporated into the positive law of the country” to override Section 230’s application.
Section 230 analysis. At Barnes step one, the parties agreed that Meta operates an interactive computer service. At step two, the court held that each of plaintiffs’ claims — including negligence and strict product liability — sought to hold Meta responsible as a publisher of third-party content. Although plaintiffs framed their claims as product design defects, the court distinguished Lemmon v. Snap, where a product liability claim survived Section 230 because the harm flowed from a platform feature (a speed filter) rather than from third-party content. Here, by contrast, the alleged harm was inextricably tied to the content of third-party posts. At step three, the court held that Facebook did not “materially contribute” to the third-party content. The platform provided “neutral tools” — essentially blank text boxes — and its engagement-driven algorithm did not specifically target or generate anti-Rohingya content. Under Dyroff, algorithmic content recommendation is publishing conduct, not a material contribution to the content itself.
The concurrences. Both concurring opinions agreed the result was compelled by circuit precedent but sharply criticized the court’s broad reading of Section 230. Judge Berzon, joined by Judge Fletcher, argued that the court’s precedent has “stretched the term ‘publisher’ past the point of recognition” and urged en banc reconsideration. She endorsed the Third Circuit’s approach in Anderson v. TikTok (2024), which — building on the Supreme Court’s Moody v. NetChoice decision — held that algorithmic recommendations are a platform’s own first-party speech, not third-party content immunized by Section 230. Judge Nelson concurred separately to argue that the court has strayed from the original public meaning of “publisher” in Section 230 and that state choice-of-law rules can never displace federal law under the Supremacy Clause.
Key Takeaways
- Algorithmic amplification remains protected. Under current Ninth Circuit precedent, a platform’s use of engagement-maximizing algorithms to recommend third-party content is publisher conduct shielded by Section 230 — even when the algorithm foreseeably amplifies harmful content at scale.
- The product-design workaround has limits. Framing platform harms as product liability claims does not circumvent Section 230 when the alleged defect is inseparable from how the platform curates and displays third-party speech. The Lemmon exception applies only where the harm stems from a platform feature independent of user content.
- A circuit split is deepening. The Third Circuit’s Anderson v. TikTok holds that algorithmic recommendations are a platform’s own expressive product — not immunized by Section 230. Two of three Ninth Circuit judges in this case endorsed that view, and Judge Berzon “even more emphatically” urged en banc review, citing both Anderson and the Supreme Court’s Moody v. NetChoice.
- Foreign law cannot displace Section 230. The court held that even under California’s governmental interest test, plaintiffs could not invoke Myanmar law to avoid Section 230. Judge Nelson went further, arguing the Supremacy Clause independently prevents any state choice-of-law rule from directing application of foreign law that conflicts with federal statutes.
Why It Matters
This decision highlights a growing tension in Section 230 law. The majority opinion faithfully applies existing Ninth Circuit precedent to shield Meta from liability for one of the most devastating alleged harms ever attributed to social media — the role of algorithmic amplification in inciting genocide. But the two concurrences signal that a majority of this panel believes the precedent is wrong, creating strong momentum for en banc review.
The deepening circuit split between the Ninth Circuit’s broad reading and the Third Circuit’s Anderson framework makes Supreme Court intervention increasingly likely. For platforms, the case reaffirms that — for now — Section 230 provides robust protection for algorithmic content recommendation in the Ninth Circuit. For plaintiffs and policymakers, the concurrences offer a detailed roadmap for how courts could narrow that protection, either through en banc review or by following the Third Circuit’s lead in treating algorithmic recommendations as a platform’s own first-party speech.
Download the full opinion (PDF)