Background
The Commonwealth of Massachusetts brought suit against Meta Platforms, Inc. under state consumer protection and unfair business practices laws, alleging that Meta deliberately designed Instagram’s features — including its algorithmic feed, notifications, and infinite scroll — to maximize engagement among minors, causing addiction, anxiety, depression, and other harms. Meta moved to dismiss, arguing that Section 230 of the federal Communications Decency Act immunizes it from liability because the claims are fundamentally about content displayed on the platform.
The case is part of a nationwide wave of litigation by state attorneys general and individual plaintiffs against social media companies over their impact on children’s mental health. The central legal question — whether Section 230 protects platform design decisions as opposed to specific content moderation choices — has divided courts across the country.
The Court’s Holding
Justice Wendlandt, writing for the Supreme Judicial Court, drew a sharp distinction between two types of claims. Claims that seek to hold Meta liable as the publisher of third-party content — for example, for failing to remove a specific harmful post — remain protected by Section 230. But claims targeting Meta’s own design choices — the algorithms, features, and interface decisions that make the platform addictive — are not about publishing third-party content at all. They are about Meta’s own conduct in designing its product.
The court held that Section 230 was never intended to immunize platform companies from liability for how they engineer their products. The statute protects platforms from being treated as publishers of user-generated content, but it does not create blanket immunity for every business decision a platform makes simply because that decision affects how content is displayed. Massachusetts’s claims target the architecture of addiction — the deliberate design of features that exploit psychological vulnerabilities in children — not the content those features deliver.
Key Takeaways
- The ruling creates a clear framework for distinguishing content-based claims (shielded by Section 230) from design-based claims (not shielded), a framework other state courts may adopt.
- The decision allows Massachusetts’s consumer protection lawsuit against Meta to proceed to discovery, where internal documents about Instagram’s design choices could become public.
- The ruling aligns with a growing judicial trend limiting Section 230 to its original purpose — protecting platforms from being treated as publishers — rather than treating it as a universal shield against all technology-related litigation.
- Social media companies now face potential liability in Massachusetts for product design choices that target or disproportionately harm minors.
Why It Matters
This decision could reshape the legal landscape for social media regulation. For years, Section 230 has been the technology industry’s most powerful legal defense, shielding platforms from liability for virtually anything connected to user content. The Massachusetts SJC’s holding that design-based claims fall outside Section 230’s protection opens a new front in the battle over platform accountability — one that focuses not on what users post, but on how platforms are engineered to manipulate behavior.
For parents, educators, and child safety advocates, the ruling validates the theory that social media harm is not just about bad content — it is about addictive design. For the technology industry, it signals that the era of near-total Section 230 immunity may be ending, at least for claims rooted in product design rather than content moderation. With dozens of similar cases pending across the country, other state supreme courts will be watching Massachusetts’s lead closely.
Download the full opinion (PDF)