Federal and state legislators are considering (and some states have adopted) bills that would provide for government oversight of the content policies and moderation decisions of social-media platforms. The fate of such laws will depend largely on whether courts decide that online speech should be governed by the First Amendment’s traditional protections for editorial discretion or some other standard tailored to the online medium.
Courts have confronted this question with the development of each new communications medium throughout the twentieth century. As First Amendment law evolved – from film to broadcasting, cable and satellite communications, and traditional common carriers – the trend over time was toward increasing levels of protection. And in 1997, when the Supreme Court first addressed what standard should govern internet communications, it found “no basis for qualifying the level of First Amendment scrutiny that should be applied to this medium.” Reno v. ACLU, 521 U.S. 844, 870 (1997). As the medium has matured and become a prominent fixture of everyday life, some say the issue should be reconsidered.
The question was framed squarely in a shadow-docket decision at the end of May in which Justice Samuel Alito wrote “[i]t is not at all obvious how our existing precedents, which predate the age of the internet, should apply to large social media companies[.]” NetChoice, LLC v. Paxton, 142 S. Ct. 1715, 1717 (2022) (Alito, J., dissenting). Justice Alito was dissenting from an interim ruling that reinstated a preliminary injunction blocking a Texas social-media law.
Although Justice Alito did not address the merits of social-media regulation, and, as a dissenter, did not speak for a majority, he was joined by Justices Clarence Thomas and Neil Gorsuch. Justice Elena Kagan also dissented, but without signing on to Justice Alito’s opinion. While this does not dictate how the Court will ultimately decide the merits should this case or another like it come before the Court, it indicates that a sizable minority of the Court is undecided about the constitutional status of internet speech.
This is the latest in a series of dramatic developments that squarely raise the question of how much latitude the government has under the First Amendment to regulate social-media platforms. Just a week before the Supreme Court’s Paxton decision, the Eleventh Circuit applied the First Amendment to uphold a preliminary injunction of a similar Florida social-media law. NetChoice, LLC v. Att’y Gen., Fla., 34 F.4th 1196 (11th Cir. 2022). The court found that social-media platforms’ content-moderation decisions are protected by the First Amendment because they are “closely analogous to the editorial judgments” made in more traditional media, which the Supreme Court has held are protected. Id. at 1213. The Eleventh Circuit’s reasoning echoed that of District Judge Robert Pitman, who preliminarily enjoined the Texas social-media law. Judge Pitman wrote that “[s]ocial media platforms have a First Amendment right to moderate content disseminated on their platforms,” and that the state could not justify regulation by analogizing them to public forums or common carriers. NetChoice, LLC v. Paxton, --- F. Supp. 3d ---, 2021 WL 5755120, at *7 (W.D. Tex. Dec. 1, 2021). The State of Texas appealed Judge Pitman’s ruling, and three days after oral argument, a Fifth Circuit panel stayed the injunction without opinion, noting that the panel was “not unanimous.” NetChoice, LLC v. Paxton, 2022 WL 1537249, at *1 (5th Cir. May 11, 2022).
The contrast between the Eleventh Circuit decision (upholding the injunction of the Florida law) and the Fifth Circuit’s stay order (which would have permitted the Texas law to go into effect) strongly suggests it will not be long before the issue reaches the Supreme Court.
Efforts in Florida to Enforce Political “Balance”
The laws adopted in Florida and Texas arose from efforts to prevent “Big Tech” from discriminating against conservative viewpoints. Florida led the way with passage of Senate Bill 7072, which the district court described as “an effort to rein in social-media providers deemed too large and too liberal.” NetChoice, LLC v. Moody, 546 F. Supp. 3d 1082, 1096 (N.D. Fla. 2021), aff’d, NetChoice LLC v. Att’y Gen., Fla., 34 F.4th 1196 (11th Cir. 2022).
The Florida legislation incorporated three statutes: Section 106.072, which prohibits large social-media platforms from barring from their sites any candidate for public office; Section 501.2041, which prohibits platforms from suppressing the circulation and exposure of unpaid content posted by candidates for public office or journalistic enterprises, and requires platforms to publish their moderation standards and apply them in a “consistent manner”; and Section 287.137, which allows the state to disqualify from public contracting any social-media company accused of violating antitrust laws. Id. at 1086-89.
The Florida law did not take effect, however, because the U.S. District Court for the Northern District of Florida preliminarily enjoined it, holding the law likely preempted by Section 230 of the Communications Decency Act, 47 U.S.C. § 230, and a violation of the First Amendment. The court observed that “[w]here social media fit in traditional First Amendment jurisprudence is not settled,” but “three things are clear”: (1) social-media companies are not state actors and therefore cannot violate the free-speech rights of users; (2) the First Amendment “applies to speech over the internet, just as it applies to more traditional forms of communication”; and (3) under existing precedent, the state’s authority to regulate speech is not bolstered by the size and power of social-media platforms. Id. at 1090-91.
The court rejected the state’s argument that social-media platforms should be treated as common carriers even as it acknowledged the differences between traditional media and online platforms. Id. at 1091-93. It held that the laws at issue “are about as content-based as it gets” and therefore subject to strict scrutiny. Id. at 1093-94. It found the plaintiffs were likely to succeed on their First Amendment claim, and that it would reach the same conclusion even under intermediate scrutiny. Id. at 1094-95. The court observed that provisions of the Florida law were “riddled with imprecision,” but declined to decide whether statutory vagueness provided an independent ground for its decision. Id. at 1095.
The Eleventh Circuit upheld the district court’s order enjoining the law’s restrictions on content moderation but vacated and remanded the decision to the extent it applied to certain disclosure and data-access provisions. Att’y Gen., Fla., 34 F.4th at 1230-31. With respect to the constitutional status of online providers, the court agreed that “[s]ocial-media platforms like Facebook, Twitter, YouTube, and TikTok are private companies with First Amendment rights.” Id. at 1210. Applying precedent that protects editorial judgments by traditional media (including Miami Herald Pub. Co. v. Tornillo, 418 U.S. 241 (1974)), it held that moderation decisions, including “whether, to what extent, and in what manner to disseminate third-party created content to the public are editorial judgments protected by the First Amendment.” Att’y Gen., Fla., 34 F.4th at 1212.
But even if content-moderation decisions are viewed as conduct, the court concluded, they are inherently expressive. Thus, “[w]hen platforms choose to remove users or posts, deprioritize content in viewers’ feeds or search results, or sanction breaches of their community standards, they engage in First-Amendment protected activity.” Id. at 1213. Accordingly, it held the result is the same “[w]hether we assess social-media platforms’ content-moderation activities against the Miami Herald line of cases or against our own decisions explaining what constitutes expressive conduct.” Id.; see Coral Ridge Ministries Media, Inc. v. Amazon.com, Inc., 6 F.4th 1247, 1254-55 (11th Cir. 2021) (a company’s choice of which organizations qualify for its online charity-donation program is inherently expressive). Because the Eleventh Circuit focused its decision on First Amendment grounds, it did not address preemption under Section 230. Id. at 1209.
The court rejected the state’s efforts to “evade – or at least minimize – First Amendment scrutiny,” finding that social-media platforms have not historically functioned as common carriers and cannot be constitutionally shoehorned into that regulatory category. Att’y Gen., Fla., 34 F.4th at 1219-21. It observed that “[n]either law nor logic recognizes government authority to strip an entity of its First Amendment rights merely by labeling it a common carrier.” Id. at 1221.
The Eleventh Circuit held that “[a]ll but one of S.B. 7072’s operative provisions implicate platforms’ First Amendment rights and are therefore subject to First Amendment scrutiny.” Id. at 1223. The court upheld the preliminary injunction with respect to provisions that ban de-platforming candidates (§ 106.072(2)), de-prioritizing or “shadow-banning” content about candidates (§ 501.2041(2)(h)), and de-platforming or “shadow-banning” journalistic enterprises (§ 501.2041(2)(j)); require consistency for moderation decisions (§ 501.2041(2)(b)); require platforms to explain individual moderation decisions (§ 501.2041(2)(d)); prohibit changes to editorial policies more than once every 30 days (§ 501.2041(2)(c)); and require giving users the ability to “opt out” of moderation (§ 501.2041(2)(f), (g)). (“Shadow-banning” is a practice whereby a platform depresses dissemination of a particular user’s content, constructively banning that user by preventing their content from reaching an audience.)
While the Eleventh Circuit upheld those aspects of the injunction barring intrusions on editorial activity by social-media platforms, it found that certain provisions of the Florida law did not have such an effect and were not properly enjoined. It reversed the district court and vacated the injunction with respect to a requirement that social-media platforms publish their standards and definitions for enforcing policies that include censoring content, de-platforming, or shadow-banning (§ 501.2041(2)(a)), and a requirement giving users access to their data for at least 60 days after their accounts have been terminated (§ 501.2041(2)(i)). The court reasoned that the rule requiring platforms to publish their standards was subject to some level of First Amendment scrutiny, but that this was likely satisfied under the test set forth in Zauderer v. Office of Disciplinary Counsel of the Supreme Court of Ohio, 471 U.S. 626 (1985). See Att’y Gen., Fla., 34 F.4th at 1230.
Efforts in Texas to Prohibit “Censorship” of Conservative Political Views
The Texas social-media law was adopted for a purpose similar to the Florida law’s and, although its operative provisions differed, was enjoined under the same First Amendment reasoning. The law in question, HB 20, prohibits large social-media platforms from “censoring” their users’ expression or their “ability to receive the expression of another person” based on “(1) the viewpoint of the user or another person; (2) the viewpoint represented in the user’s expression; or (3) a user’s geographic location in this state or any part of this state.” Paxton, 2021 WL 5755120, at *1. The district court preliminarily enjoined the Texas law, finding it violated the platforms’ First Amendment rights and rejecting the state’s claims that large social-media platforms should be considered public forums or regulated as common carriers. Id. at *6-14.
The court started with the premise that social-media platforms are private entities that have a First Amendment right to make moderation or editorial decisions regarding content posted on their sites. Id. at *7. Drawing on Supreme Court precedent involving newspapers, Tornillo, 418 U.S. 241, parade organizers, Hurley v. Irish-Am. Gay, Lesbian & Bisexual Grp. of Boston, 515 U.S. 557 (1995), and corporate newsletters, Pac. Gas & Elec. Co. v. Pub. Utils. Comm’n, 475 U.S. 1 (1986), and citing previous cases extending First Amendment protection to online speech, Reno, 521 U.S. at 870, it concluded that private entities that exercise editorial judgment “cannot be compelled by the government to publish other content.” Paxton, 2021 WL 5755120, at *6-7.
Judge Pitman observed that social-media platforms “routinely manage … content, allowing most, banning some, arranging content in ways intended to make it more useful or desirable for users, sometimes adding their own content,” id. at *8 (quoting Moody, 546 F. Supp. 3d at 1090), as distinguished from common carriers that historically are “engaged in indiscriminate, neutral transmission of any and all users’ speech.” Id. (quoting United States Telecom Ass’n v. FCC, 825 F.3d 674, 742 (D.C. Cir. 2016)). Texas’s fiat proclamation that social-media platforms are common carriers “does not impact this Court’s legal analysis.” Id. at *8 n.3.
Nor was the analysis affected by differences in the technical methods used to make content decisions. Judge Pitman wrote that focusing on the fact that platforms use algorithms and artificial intelligence to make most moderation decisions rather than “our 20th Century vision of a newspaper editor hand-selecting an article to publish” is a “distraction.” Id. at *8. Accordingly, he concluded, “HB 20’s prohibitions on ‘censorship’ and constraints on how social media platforms disseminate content violate the First Amendment.” Id. at *9. The court also enjoined the law’s disclosure requirements for how moderation decisions are made, as well as requirements that platforms publish acceptable-use policies and transparency reports. It also blocked requirements that platforms provide a complaint system and appeal process for users to dispute moderation decisions. Id. at *10-11. Finally, Judge Pitman found the provisions of HB 20 unconstitutionally vague, and concluded that his decision would be the same whether he applied strict or intermediate scrutiny. Id. at *12-13.
A divided panel of the Fifth Circuit stayed Judge Pitman’s preliminary injunction without opinion. As previously noted, however, the Supreme Court set aside the stay by a 5-4 vote. If the Fifth Circuit upholds the Texas law in a decision on the merits, that ruling would create a clear circuit split and pave the way for the Supreme Court to address the First Amendment standard governing regulation of moderation decisions by large social-media platforms.
California Proposes to Regulate Social-Media “Addiction”
The Florida and Texas bills are not the only state legislative efforts that implicate the First Amendment status of social-media platforms.
In California, AB 2408 would hold social-media platforms liable for employing “addictive” content-delivery algorithms if proven to cause reasonably preventable harms to users under the age of 18. Private plaintiffs have already filed more than a dozen federal lawsuits seeking to hold social-media platforms responsible for teenage suicides, eating disorders, and other psychological conditions through product-liability and negligence causes of action focused on these platforms’ allegedly addictive content-delivery features. AB 2408 would codify the duty of care underlying these claims into a statutory obligation enforceable through punitive civil penalties – up to $252,500 “per violation” plus attorneys’ fees – by local prosecutors and the California Attorney General. To avoid liability, platforms would almost certainly need to change the algorithms that direct and promote users of all ages to certain types of content, while hiding or demoting other posts (the bill gives the companies until April 2023 to “cease development, design, implementation or maintenance” of features found to be addictive).
If enacted, AB 2408 could face legal challenges under the First Amendment for the same reasons as HB 20 and SB 7072. Although the bill’s proponents state it only addresses a “design feature” like the non-editorial “speed filter” application at issue in Lemmon v. Snap, Inc., 995 F.3d 1085, 1091-92 (9th Cir. 2021), the bill substantively creates liability for editorial decisions that platforms make through their algorithms about what kinds of content to promote. Those “decisions about what content to include, exclude, moderate, filter, label, restrict, or promote” are “protected by the First Amendment” not unlike decisions by “a newspaper or news network.” O’Handley v. Padilla, --- F. Supp. 3d ---, 2022 WL 93625, at *14 (N.D. Cal. Jan. 10, 2022) (citing cases); see also Att’y Gen., Fla., 34 F.4th at 1210-12. And courts have held that these decisions retain their protection even if “algorithms do some of the work that a newspaper publisher previously did.” Paxton, 2021 WL 5755120, at *8. “[T]he core question,” these courts have explained, “is still whether a private company exercises editorial discretion over the dissemination of content,” irrespective of “the exact process used” to exercise it. Id.
Nor would it likely matter that AB 2408 intends to protect underage users. The Supreme Court has interpreted the First Amendment to prohibit the imposition of civil liability that “restrict[s] the ideas to which children may be exposed,” and held that the general exercise of editorial discretion cannot be “suppressed” or penalized “solely to protect the young from ideas or images that a legislative body thinks unsuitable for them.” Brown v. Ent. Merchants Ass’n, 564 U.S. 786, 794-95 (2011) (quotation omitted). A facial challenge to AB 2408 would carry a reasonably strong chance of success.
A second California bill seeks to expose how social-media platforms make the editorial decisions that AB 2408 subjects to liability. AB 587 would require platforms to (i) submit quarterly reports to the Attorney General disclosing the platform’s content-moderation policies and practices, including the rules, guidelines, and definitions that “automated” content-moderation systems use to enforce the company’s policies, and (ii) submit further quarterly reports on the “total number” of times the company “flagged,” “actioned,” “removed,” “demonetized,” or “deprioritized” content; the number of times such content was viewed or shared by users; and the number of times those content-moderation decisions were appealed or reversed, “disaggregated” by “each type of action,” the “category of content,” the “type of content,” the “type of media,” and “how the content” was flagged or actioned. Any “material” omission from any of these mandated disclosures would expose a platform to a $15,000 civil penalty per violation.
These disclosure requirements raise serious First Amendment concerns. At the outset, AB 587’s content-moderation-disclosure requirements – akin to requiring a newspaper to track, record, categorize, and disclose the number of letters to the editor it rejected by subject, article, author, date, and so on – could be found to impose an impermissible “intrusion into the function of editors.” Tornillo, 418 U.S. at 258. The Supreme Court has observed that these kinds of disclosure requirements may “subject the editorial process to … official examination merely to satisfy curiosity or to serve some general end such as the public interest,” which “would not survive constitutional scrutiny” under the First Amendment. Herbert v. Lando, 441 U.S. 153, 174 (1979) (allowing examination of the editorial process only to permit discovery into “a specific claim of injury arising from a publication that is alleged to have been knowingly or recklessly false”).
The compliance burden imposed by AB 587 raises its own First Amendment problem. States may only mandate “factual, noncontroversial” commercial disclosures when they are not “unjustified or unduly burdensome.” Zauderer, 471 U.S. at 651. And they may not employ “disclosure” provisions to chill protected speech by “burdening its utterance” in the first place. Sorrell v. IMS Health, Inc., 564 U.S. 552, 566 (2011). Because “[t]he targeted platforms remove millions of posts per day,” Att’y Gen., Fla., 34 F.4th at 1230, AB 587’s “disaggregated” quarterly content-moderation disclosures would require platforms to track, sort, and disclose millions of moderation actions to the California Attorney General without error. The risk of even inadvertent non-compliance could be substantial. A 99% compliance rate for just 10 million reportable quarterly moderation actions could, for example, expose a platform to $1.5 billion in civil penalties per quarter and $6 billion in penalties per year. That risk could lead platforms to shut down or forego making content-moderation decisions altogether. In fact, in the SB 7072 case, the Eleventh Circuit held that Florida’s similar mandate (requiring platforms to provide a rationale for each of their millions of content-moderation decisions) was likely unconstitutional because the law “imposes potentially significant implementation costs” and “also exposes platforms to massive liability.” Id. at 1230. If AB 587 is determined to have these chilling effects, the platforms could advance compelling arguments that the law is unconstitutional as applied.
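The penalty arithmetic in the hypothetical above can be checked with a quick back-of-the-envelope calculation. All figures are the illustrative ones from the text (10 million reportable quarterly actions, a 99% compliance rate, and AB 587’s $15,000 per-violation penalty); none are actual compliance data:

```python
# Illustrative check of the AB 587 penalty-exposure hypothetical in the text.
quarterly_actions = 10_000_000     # reportable moderation actions per quarter (hypothetical)
compliance_rate = 0.99             # 99% of disclosures made without error (hypothetical)
penalty_per_violation = 15_000     # AB 587's civil penalty per "material" omission ($)

violations = quarterly_actions * (1 - compliance_rate)      # 1% error rate
quarterly_exposure = violations * penalty_per_violation
annual_exposure = quarterly_exposure * 4                    # four quarterly reports per year

print(f"Violations per quarter:  {violations:,.0f}")        # 100,000
print(f"Quarterly exposure:     ${quarterly_exposure:,.0f}")  # $1,500,000,000
print(f"Annual exposure:        ${annual_exposure:,.0f}")     # $6,000,000,000
```

Even a 1% error rate, in other words, scales to ten-figure exposure, which is the chilling effect the argument in the text turns on.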
Both of these bills have proceeded swiftly through the California state legislature, and could be enacted as early as this month.
What’s at Stake
The constitutional questions raised by these cases and proposed legislation mirror the types of issues that have affected the development of First Amendment jurisprudence from the beginning. Although the First Amendment promised freedom from excessive government regulation for the only mass medium that existed in the Framers’ time, cases arose about the constitutional status of each new medium to come along, including cinema, radio, cable television, and, eventually, the internet. Differences in regulatory treatment – and levels of constitutional protection – were institutionalized through different regulatory classifications. Over time, however, courts increasingly recognized that, regardless of the particular technology at issue, “the basic principles of freedom of speech and the press, like the First Amendment’s command, do not vary. Those principles, as they have frequently been enunciated by this Court, make freedom of expression the rule.” Joseph Burstyn, Inc. v. Wilson, 343 U.S. 495, 503 (1952) (extending First Amendment protection to cinema).
The Supreme Court first addressed this question for the internet a quarter-century ago in Reno, when Congress tried to impose broadcast-style indecency regulations on the new medium. At that time the Court struck down the law, expressly rejecting the government’s argument to limit First Amendment scrutiny by superimposing standards based on prior regulatory classifications. 521 U.S. at 870. But will the Court reach the same conclusion when asked about the regulation of social-media platforms? According to Justice Alito, the answer is “not at all obvious,” and at least two other justices appear to agree. The answer will determine the extent to which online speech will continue to receive the full protection of the First Amendment, and will control how much latitude the government has to adopt new regulations.