Define Social Media, Part I: Findings
Age verification laws for social media have gone 0-5 in legal challenges. What needs to change so that these laws survive legal challenges?
0-5. If your football team went 0-5 (be it a professional or college team), it would be time to shake things up. State legislatures that have passed age verification laws for social media have gone 0-5 in legal challenges.1 Fortunately, this engineer—who has also served as a tech fellow in Congress—has already been figuring out how to fix it.
I. 0-5 vs. 2-0
Additionally, state legislatures have gone 0-5 when it comes to writing a content-neutral definition of social media; the courts have ruled that every definition thus far is content-based.2 If you’re not a nerd, why does that detail matter? What are the stakes?
A. Content-Based vs. Content-Neutral
In a First Amendment challenge, courts will look at both what the law does and who the law applies to.3 The definition of social media matters because it determines who the law applies to. As a commonsense example, the courts would be extremely skeptical of a neutral-sounding law that curiously applied only to X/Twitter but not to other social media platforms; such a law would be a thinly veiled attempt to target Elon Musk.
In the usual First Amendment case, the court will determine whether the law is content-based or content-neutral. As the Supreme Court has said, “As a general rule, laws that by their terms distinguish favored speech from disfavored speech on the basis of the ideas or views expressed are content based.” In practice, the courts can be very picky—in part because they’re very protective of free speech.
That leads to the second question: what are the stakes? The difference is one of scrutiny: a content-neutral law is subject to intermediate scrutiny, while a content-based law is subject to strict scrutiny. Strict scrutiny, however, tends to be “strict in theory, fatal in fact”; 0-5 is definitely fatal in fact.
B. Nothing New Under the Sun
These lawsuits have all been filed by NetChoice, a trade association that lobbies for Big Tech. Every time NetChoice wins, it tries to manufacture a narrative that history will repeat itself if anyone else tries to regulate social media. While “those that fail to learn from history are doomed to repeat it,” those who learn from history are not consigned to the same fate as their predecessors.
History did not begin with social media, either. At one point in time, cable was the new technology and the new forum for expression. Back then, cable companies started dropping broadcast channels, such as the local NBC station—despite the fact that these channels were popular with consumers. Congress stepped in and passed must-carry, which forced cable companies to carry local broadcast stations.4
In response—and tell me if you’ve heard this one before—the cable industry sued, claiming that must-carry violated their free speech rights. That battle, Turner Broadcasting System v. FCC, reached the Supreme Court twice, but the government went 2-0; must-carry survived intermediate scrutiny, as it was a content-neutral law.5
And even though social media is a very different medium from cable, the same First Amendment principles apply to both. There is nothing new under the sun.
C. The Medium is the Problem
Could we protect kids on social media if tech companies would just do a better job of content moderation? According to social psychologist Jonathan Haidt (author of the #1 New York Times bestseller The Anxious Generation), the answer is a firm no: “Social media is just not appropriate for children.” Haidt also frames the issue another way: “The medium is the problem.”
That’s not just good policy advice; it’s also good legal advice. In Turner I, the Supreme Court said that medium-based distinctions are often content-neutral: “It would be error to conclude, however, that the First Amendment mandates strict scrutiny for any speech regulation that applies to one medium (or a subset thereof) but not others.”6 And as an oft-quoted line from Southeastern Promotions v. Conrad (1975) goes, “Each medium of expression . . . must be assessed for First Amendment purposes by standards suited to it, for each may present its own problems.”
The Internet is not a monolithic medium. It contains many distinct mediums, such as social media, search, and e-commerce. Two oft-cited precedents, Reno v. ACLU (1997) and Ashcroft v. ACLU (2004), dealt with laws from the 1990s that tried to regulate the entire Internet: the Communications Decency Act of 1996 (CDA) and the Child Online Protection Act of 1998 (COPA). Today, however, nobody is proposing that we age-gate access to the entire Internet. Age verification is only being proposed for mediums that pose heightened risk to children, such as social media and pornographic sites.
The difference between cable and social media, however, is that social media is much harder to define than cable. Defining the medium is the problem.7
II. Change the Text, Change Your Fate: Findings
Legal analysis of a law begins with the text of the law—especially for a judge with a textualist philosophy. To change the fate of a law, one only needs to change its text. The challenge lies in figuring out how to change it.
A. Why Findings Matter
While knowing and citing the Turner cases is a good start, an even better approach is to do the same things that Congress did when it passed must-carry.
In particular, the “unusually detailed statutory findings” played a role in persuading the courts to apply intermediate scrutiny in Turner I. Findings have the power to persuade, but they don’t have the power to control. Courts will not believe that something is true just because the findings say it’s true, but well-written findings can have a very strong persuasive effect.
When writing these findings, you have to remember who the audience is: the courts. When a judge conducts a First Amendment analysis of a law, these findings are designed to help answer the questions they will ask. The findings need to be written with that specific audience and that specific purpose in mind.
If, per Turner I, regulations can be “justified by the special characteristics” of the medium, then someone has to explain the special characteristics of social media.
B. Intermediate Scrutiny
Likewise, you need to know how intermediate scrutiny operates if you want your legislation to survive it.
Intermediate scrutiny has two parts. First, the legislation needs to further an important or substantial government interest. (Under strict scrutiny, it has to be a compelling government interest.)
Second, the legislation needs to be narrowly tailored. Unlike under strict scrutiny, the government does not have to use the least restrictive means, but to quote Ward v. Rock Against Racism (1989), the government still cannot “burden substantially more speech than is necessary to further the government's legitimate interests.” Nonetheless, Turner II did establish that under intermediate scrutiny, the government gets to decide the degree to which it will promote its interests:
It is for Congress to decide how much local broadcast television should be preserved for noncable households, and the validity of its determination “ ‘does not turn on a judge’s agreement with the responsible decisionmaker concerning’ . . . the degree to which [the Government’s] interests should be promoted.” Ward, 491 U. S., at 800 (quoting United States v. Albertini, 472 U. S. 675, 689 (1985)); accord, Clark v. Community for Creative Non-Violence, 468 U. S. 288, 299 (1984) (“We do not believe . . . [that] United States v. O’Brien . . . endow[s] the judiciary with the competence to judge how much protection of park lands is wise”).
C. Tell Your Story Without Experts
A “war of experts” against Big Tech is a dicey proposition. With their deep pockets, these companies will easily have the resources to find (or pay) experts who can manufacture their preferred narrative. And if a judge with limited expertise has a hard time telling which experts are right, that small army of experts that Big Tech can summon may appear more persuasive—regardless of what actually is true.
Expertise is important, but you must first tell your story without experts. Often, good findings promote an intuitive narrative that is reasonably persuasive to non-experts.
Most importantly, that narrative sets the anchor before the experts are consulted. Of course, the anchor will look unreasonable if the expert evidence turns out to be one-sided against you. But if a judge with limited expertise has a hard time telling which experts are right, they may default to the anchor.
This strategy also aligns with the question that judges ask for intermediate scrutiny. Per Turner II, “The question is not whether Congress, as an objective matter, was correct . . . Rather, the question is whether the legislative conclusion was reasonable and supported by substantial evidence in the record before Congress.” First, you set a reasonably persuasive anchor. Then, you provide evidence to hold that anchor.
As an added bonus, under intermediate scrutiny, judges give more deference to the legislature’s judgment when they resolve conflicting evidence: “The Constitution gives to Congress the role of weighing conflicting evidence in the legislative process.”
Of course, experts also have a role in finding the story to tell. A great narrative needs to be backed by great facts; you can’t pick a narrative based on personal whims and then ask experts to manufacture the facts to back that narrative. And in many cases, a good finding will use an expository tone and make a straightforward statement of fact.
III. Model Findings with Commentary
The Legislature finds the following:
(1) The State has a compelling interest in protecting the physical and psychological well-being of minors.
(2) The Internet is not a monolithic medium but instead contains many distinct mediums, such as social media, search, and e-commerce.
(3) Existing measures to protect minors on social media have been insufficient for reasons including—
(A) the difficulty of content moderation at the scale of a platform with millions of user-generated content providers;
(B) the difficulty of making subjective judgments via algorithms, such as identifying content that harms the physical or psychological well-being of minors; and
(C) limited interoperability between social media platforms and third-party child safety tools, in part due to privacy concerns about sharing user data with third parties.
(4) Social media companies have failed to control the negative impacts of their algorithms to distribute content for reasons including—
(A) the scale of a platform with millions of users, combined with the personalized nature of content distribution;
(B) the natural incentive of such companies to maximize engagement and time spent on their platforms; and
(C) the limited degree of control that users have over the content they receive.
(5) Limited accountability exists on social media platforms for bad actors, especially given the anonymous or hard-to-track nature of many such actors.
(6) Users frequently encounter sexually explicit material accidentally on social media.
(7) Social media platforms are accessible—
(A) from a wide variety of devices, ranging from an individual’s smartphone to a laptop at a friend’s house to a desktop in a public library; and
(B) via a variety of methods on a single device, including apps and websites.
Finding 1
(1) The State has a compelling interest in protecting the physical and psychological well-being of minors.
Don’t reinvent the wheel.
Sable Communications v. FCC (1989): “We have recognized that there is a compelling interest in protecting the physical and psychological wellbeing of minors.”
Additionally, “psychological well-being” is a much better framing than “harmful content.” Under the framing of harmful content, the obvious counterargument is that while some harmful content exists on social media, most content is not harmful; the legislation is extremely overbroad because it targets social media as a whole.
The framing of psychological well-being plays out much differently. Consider the story of 16-year-old Chase Nasca, who committed suicide after TikTok showed him over 1,000 unsolicited videos of violence and suicide. Does it really matter whether those 1,000 videos were 5% or 50% of the content that Chase saw? What really matters is that those videos—regardless of the percentage—led Chase to commit suicide.
Finding 2
(2) The Internet is not a monolithic medium but instead contains many distinct mediums, such as social media, search, and e-commerce.
Medium-based distinctions are content-neutral.
This finding is fairly intuitive and straightforward; you don’t have to be an expert to know that the Internet is not a monolithic entity. It also distinguishes a social media law from the CDA in Reno and COPA in Ashcroft II. Both the CDA and COPA tried to regulate the entire Internet, not a specific medium like social media.
Finding 3
(3) Existing measures to protect minors on social media have been insufficient for reasons including—
(A) the difficulty of content moderation at the scale of a platform with millions of user-generated content providers;
(B) the difficulty of making subjective judgments via algorithms, such as identifying content that harms the physical or psychological well-being of minors; and
(C) limited interoperability between social media platforms and third-party child safety tools, in part due to privacy concerns about sharing user data with third parties.
Content moderation at scale is hard.
You may be able to detect an engineer’s influence in crafting this finding, but you don’t need to be an engineer to know that content moderation is hard when millions of users are producing content every day.
At that scale, you will inevitably have to rely more and more on algorithms; humans alone can’t handle that volume of content. But how effective are these algorithms when they have to make very subjective judgments, such as whether content would harm the psychological well-being of a child?
Even the advent of AI is not a panacea. To be clear, there are many objective tasks that AI handles well, such as image recognition for handwritten digits or for traffic signs. But if our self-driving cars hallucinated as often as ChatGPT does, they would swiftly be taken off the roads.8
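To make that scale problem concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it (the daily post volume, the share of posts that are genuinely harmful, and the classifier’s accuracy) is a hypothetical assumption chosen for illustration, not data from any platform.

```python
# Illustrative arithmetic only: all figures below are hypothetical assumptions,
# not measurements from any real platform or moderation system.
posts_per_day = 500_000_000   # assumed daily volume for a large platform
harmful_rate = 0.001          # assume 0.1% of posts are genuinely harmful
accuracy = 0.95               # assume the classifier is right 95% of the time

harmful_posts = posts_per_day * harmful_rate
benign_posts = posts_per_day - harmful_posts

missed_harmful = harmful_posts * (1 - accuracy)  # harmful posts that slip through
false_flags = benign_posts * (1 - accuracy)      # benign posts wrongly flagged

print(f"Harmful posts missed per day:     {missed_harmful:,.0f}")   # ~25,000
print(f"Benign posts wrongly flagged/day: {false_flags:,.0f}")      # ~25,000,000
```

Under these assumptions, roughly 25,000 harmful posts slip through every single day, while roughly 25 million benign posts are flagged for review or removal. And that is before the algorithm has to make any of the subjective judgment calls described above.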
To further complicate matters, social media sites often operate as closed ecosystems. There’s a two-word explanation for why Facebook is understandably wary about sharing user data with third parties: Cambridge Analytica. (It’s also worth noting that Cambridge Analytica obtained personal data via an external researcher who claimed to be collecting it for academic purposes.)
At the end of the day, you can certainly understand why Haidt arrived at the conclusion he did: “Even if social media companies could reduce sextortion, CSAM, deepfake porn, bullying, self-harm content, drug deals, and social-media induced suicide by 80%, I think the main take away from those Senate hearings is: Social media is just not appropriate for children.” The medium is the problem.
Finding 4
(4) Social media companies have failed to control the negative impacts of their algorithms to distribute content for reasons including—
(A) the scale of a platform with millions of users, combined with the personalized nature of content distribution;
(B) the natural incentive of such companies to maximize engagement and time spent on their platforms; and
(C) the limited degree of control that users have over the content they receive.
The distribution model matters.
In defining social media, we will have to consider many “negative examples” of sites that aren’t social media: the comments section of the New York Times, Netflix, Wikipedia, Substack, etc. A social media law should not apply to these sites—especially since overbreadth can be fatal in a First Amendment challenge.
One underexamined but vitally important (and content-neutral) difference is the distribution model. Simply put, you’re not going to see over 1,000 unsolicited videos of violence and suicide if you subscribe to some newsletters on Substack.
When you have millions of users, you have limited bandwidth to address problems affecting only a single user. The highly personalized nature of content distribution on social media, however, means that many problems are also personalized in nature.
In particular, social media platforms—which have natural incentives to maximize engagement (especially when more engagement leads to more ad revenue)—have a wealth of personal engagement data. Their algorithms can use the content that you have viewed, liked, reposted, replied to, etc., to decide what content to serve you.
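As a purely illustrative sketch (not any platform’s actual ranking system), here is what an engagement-driven feed looks like in miniature. The topics, weights, and scoring function are all hypothetical, chosen only to show the mechanism.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    topic: str

# Hypothetical per-user engagement history: how often this user has engaged
# with each topic in the past. Real systems use far richer signals (views,
# likes, reposts, replies, watch time); this is deliberately simplified.
user_topic_engagement = {
    "sports": 0.8,
    "cooking": 0.3,
    "self_harm": 0.9,  # the troubling case: past engagement boosts future exposure
}

def engagement_score(post: Post) -> float:
    """Predict engagement for a post based on the user's topic history."""
    return user_topic_engagement.get(post.topic, 0.1)

def rank_feed(candidates: list[Post]) -> list[Post]:
    """Serve the posts predicted to maximize engagement, highest first."""
    return sorted(candidates, key=engagement_score, reverse=True)

candidates = [Post("p1", "cooking"), Post("p2", "self_harm"), Post("p3", "sports")]
print([p.post_id for p in rank_feed(candidates)])  # ['p2', 'p3', 'p1']
```

The point of the sketch is the feedback loop: whatever a user has engaged with in the past, including harmful content, is exactly what an engagement-maximizing ranker is built to serve more of.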
Subparagraph (C) of this finding also alludes to another important aspect of the distribution model: lack of control. If you don’t like a Substack newsletter, you can easily unsubscribe from it. If TikTok’s algorithms start feeding you suicide content or eating disorder content, however, your options to make it go away are more limited.
This subparagraph is also written as a callback to Section 230 (the law that says that online platforms are generally not liable for the third-party content they host). Section 230 included this finding: “(2) These [interactive computer services] offer users a great degree of control over the information that they receive, as well as the potential for even greater control in the future as technology develops.” Social media has not lived up to that potential. The world has changed since Section 230 was enacted in 1996.
Finding 5
(5) Limited accountability exists on social media platforms for bad actors, especially given the anonymous or hard-to-track nature of many such actors.
Use a frame of reference that courts are familiar with.
While social media can be somewhat new and unfamiliar to the courts, they do have more extensive experience with older mediums such as broadcast and cable.
In the world of broadcast, if CBS broadcast sexually explicit material during the day (or the breast of Janet Jackson during the Super Bowl), it could expect a fine from the FCC. And while those FCC regulations do not apply to cable, the incentives of that medium make it highly unlikely, for example, that ESPN’s Pardon the Interruption would interrupt your sports viewing experience with hardcore pornography.
The same set of incentives simply does not exist for content producers on social media. At worst, your account could be banned, but you can often just create a new account. Even if Instagram investigates a “sextortion” case on its platform, what can you do when—as in the case of Walker Montgomery—the account’s IP address traces back to Nigeria?
When the Supreme Court compared broadcast regulations to dial-a-porn regulations in Sable Communications v. FCC (1989), they noted that while an “unexpected outburst on a radio broadcast” tends to be “invasive or surprising,” dial-a-porn is different: “In contrast to public displays, unsolicited mailings, and other means of expression which the recipient has no meaningful opportunity to avoid, the dial-it medium requires the listener to take affirmative steps to receive the communication.”
As for social media, a sextortion attempt is often unsolicited and invasive in nature. More broadly, problems on social media are often caused by unsolicited or invasive content—especially when users have a limited degree of control over the content they receive.
Finding 6
(6) Users frequently encounter sexually explicit material accidentally on social media.
This is self-evident to anyone with an X/Twitter account.
This finding is a direct callback to Reno: “Though [sexually explicit] material is widely available, users seldom encounter such content accidentally.” That may have been true for the Internet of 1997, but it’s definitely not true for the Internet of 2024.
Finding 7
(7) Social media platforms are accessible—
(A) from a wide variety of devices, ranging from an individual’s smartphone to a laptop at a friend’s house to a desktop in a public library; and
(B) via a variety of methods on a single device, including apps and websites.
Do you try to cut kids off at every possible path, or cut them off at the destination?
This is another example of a finding that makes straightforward statements of fact in an expository tone, but which also sets up the narrative.
In the early days of the Internet, many households would have had a single desktop in a common area of the house, and any online content would be accessed via a web browser. Today, most kids have a smartphone that travels everywhere with them.
Some claim parental controls are the answer, but even if you set up perfect parental controls on a single device—a task easier said than done—what if the kid uses a different device? An old smartphone (or a cheap smartphone the kid bought) would not have talk, text, or data, but it would have Internet access wherever there’s WiFi.9 The kid could also use a laptop at a friend’s house or a desktop at a public library.
And even on a single device with parental controls, the task is not straightforward. Perhaps you blocked the Facebook app, but did you block Facebook’s website? And what if the kid downloads a proxy app and uses that to browse Facebook?
Age verification, by contrast, is applied at the destination. It doesn’t matter which device the kid uses to access Instagram, or whether they accessed Instagram via a browser or via an app; they still need to verify their age to create an account.
Cutting kids off at the destination offers a greater degree of protection, compared to trying to cut them off at each possible path they could take.10 Parental controls are not a narrowly tailored alternative to age verification. To quote Turner II, “In the final analysis this alternative represents nothing more than appellants’ ‘ “[dis]agreement with the responsible decisionmaker concerning” . . . the degree to which [the Government’s] interests should be promoted.’ ”
Now that we have model findings, the next step is to write a model definition of social media. Ideally, the definition should naturally flow from the findings, and it should codify the special characteristics of social media that we identified in the findings. In the next part, we’ll dive into the brass tacks of writing that definition.
1. See NetChoice v. Griffin (W.D. Ark. Aug. 31, 2023), NetChoice v. Yost (S.D. Ohio Feb. 12, 2024), NetChoice v. Fitch (S.D. Miss. July 1, 2024), CCIA & NetChoice v. Paxton (W.D. Tex. Aug. 30, 2024), and NetChoice v. Reyes (D. Utah Sept. 10, 2024). I raised the alarm when we were 0-2.
2. That’s certainly not the only problem, though. For example, in multiple cases, the court also ruled that the definition of social media was unconstitutionally vague; in Arkansas, the state’s own witnesses could not agree on whether the law applied to Snapchat or not.
3. As a corollary, one problem with the “we’re only regulating conduct, not speech” argument is that it only tells you what the law does. It does not tell you who the law applies to.
4. This problem was the result of vertical integration between cable operators and cable programmers. Cable channels often competed with local broadcast channels for advertising revenue. When cable companies started owning their own channels, that created a perverse incentive for cable companies to not carry local broadcast channels, so that advertising revenues would flow to their own channels instead.
5. The government also argued that “must-carry provisions are nothing more than industry-specific antitrust legislation,” and that rational-basis review should apply as a result. The court rejected this argument, as the industry in question, cable, was a forum for expression. Again, courts look at both what the law does and who the law applies to. Attempts to frame age verification for social media as “industry-specific child safety legislation” (or “industry-specific contract legislation”) would likely face a similar fate.
6. Conversely, discriminating within a medium will often make a law content-based: “Regulations that discriminate among media, or among different speakers within a single medium, often present serious First Amendment concerns.”
7. By contrast, pornographic sites have been much easier to define. Obscenity is unprotected speech, and Ginsberg v. New York (1968) established that “[t]he State has power to adjust the definition of obscenity as applied to minors.” At a high level, pornographic sites have been defined as sites where at least one-third of the content is obscene for minors.
8. Even when the task is image recognition for traffic signs, there are ways that street signs can be “hacked” in real life so that self-driving cars won’t recognize them.
9. Even if parents took the more drastic step of confiscating phones at night and shutting off the WiFi router, a kid could hand over their phone but keep the SIM card—and then put the SIM card in a different device. That device would then have mobile data access.
10. As for the argument that kids will find ways to bypass age verification, the exact same argument could be made about parental controls. And that’s assuming that parental controls work in the first place. For example, the Wall Street Journal published a story about how it took Apple three years to fix an X-rated loophole in Screen Time.