The Social Dilemma Over Parental Consent
Should we ban kids from social media, or let them join with parental consent?
You are a legislator drafting an age verification bill. You have two choices:
1. Ban social media for kids under 16.
2. Only allow kids under 16 on social media if they have parental consent.
Which choice should you make?
Initially, your gut instinct says the answer is the first option: a straight ban. When you recall the stories about kids and social media that you’ve heard, not just from your constituents but also from your own local community, it’s not hard to understand why.
But shouldn’t you give parents a choice here? On top of that, some groups and some experts will inevitably proclaim that “parental responsibility” is the answer, not the government. By those standards, a straight ban is even more out of the question.
“The answer is parental responsibility, not the government.” The more you think about that, the more you realize it’s an inch-deep argument. It’s not hard to imagine a social media influencer tweeting something like that—and it’s probably not wise to raise kids based on the wisdom of a social media influencer.
In the real world, “parental responsibility” effectively means that Big Tech can make as big of a mess as they want, and it’s the “responsibility” of parents to clean it up.
And tech policy experts aren’t supposed to be the ones with inch-deep arguments. They’re supposed to be the ones with deep knowledge, the ones who can clearly articulate the consequences of either choice.
To understand the consequences of this choice, let’s look at it through the eyes of an engineering director at Meta. This director is blissfully ignorant of politics and unaware of the policy debates over age verification—until one day his higher-ups tell him he’s in charge of implementing age verification to comply with a new law.
At first glance, this seems like a challenging yet feasible problem. If nothing else, it’s nowhere near as bad as GDPR compliance. But then one of the director’s best engineers, the kind you can hand any task and trust to get it done, unexpectedly raises serious alarms about the complexity of the project.
Why the alarm? It’s not about verifying age. It’s about verifying parental consent—specifically, verifying the parental relationship. How do you navigate the custody laws of even one state (much less 50 states)—especially when you consider kids with divorced parents, kids with foster parents, and other complex custody arrangements?
If you just need to verify age, then perhaps you could use facial age estimation—which has minimal privacy concerns—to verify most of your users.1 However, the law says that kids under 16 can join with parental consent. To implement parental consent, you also need to verify the child’s identity, verify the identity of an adult, and then verify that this adult is actually the parent of the child.
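To make the engineer’s alarm concrete, here is a minimal sketch, in Python, of the two compliance paths. It is not Meta’s actual system, and none of the names (Person, estimate_age_from_face, verify_identity, is_legal_parent_or_guardian) are real APIs; they are hypothetical stand-ins for entire subsystems, the last of which is really a legal question rather than an engineering one.

```python
# Hypothetical sketch only: these names are stand-ins for entire subsystems,
# not real APIs, and the logic is illustrative rather than a spec.

from dataclasses import dataclass


@dataclass
class Person:
    name: str


def estimate_age_from_face(person: Person) -> int:
    """Stand-in for facial age estimation (see footnote 1): a short video is
    analyzed, an age estimate comes back, and the clip is deleted."""
    raise NotImplementedError("placeholder for an age-estimation vendor")


def verify_identity(person: Person) -> str:
    """Stand-in for document-based identity verification; unlike age
    estimation, it requires collecting and checking real identity data."""
    raise NotImplementedError("placeholder for an identity-verification vendor")


def is_legal_parent_or_guardian(adult_id: str, child_id: str, state: str) -> bool:
    """Stand-in for the hard part: establishing the parental relationship
    under the custody law of the relevant state (divorce, foster care, ...)."""
    raise NotImplementedError("no off-the-shelf registry answers this question")


def straight_ban_check(user: Person) -> bool:
    # One question: is this user at least 16?
    return estimate_age_from_face(user) >= 16


def parental_consent_check(child: Person, adult: Person, state: str) -> bool:
    # Three questions, each harder and more privacy-invasive than an age estimate:
    child_id = verify_identity(child)    # 1. Who, exactly, is the child?
    adult_id = verify_identity(adult)    # 2. Who is the consenting adult?
    return is_legal_parent_or_guardian(adult_id, child_id, state)  # 3. Is this adult the parent?
```

The shape of the two functions is the whole point: the ban path asks one question that an anonymous age estimate can answer, while the consent path asks three, and the third has no clean technical answer.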
Of course, for that scenario to play out in the first place, the legislature would have to pass a law—which has been the easy part in many states—and the courts would have to uphold that law. It’s the latter part where everything often goes off the rails.
Once you pass a law, expect a lawsuit. When Florida passed its law, House Speaker Paul Renner quipped, “[NetChoice] and Big Tech cronies will launch a lawsuit within seconds of HB3 becoming law.”2 (NetChoice is a trade association for Big Tech.)
What are the consequences of your choice in that legal battle? In short, do you want to fight a one-front war or a two-front war against NetChoice? With a straight ban, it’s a one-front war over age verification. With parental consent, you must defend a second front over verifying parental consent—a front that is much harder to defend.3
NetChoice has scored several victories on that second front. In Arkansas, the state got undercut by its own witness, who testified that “the biggest challenge . . . with parental consent is actually establishing the relationship, the parental relationship.”4
NetChoice scored a similar win on the parental consent front in Mississippi, where the judge also cited Brown v. Entertainment Merchants Association (2011). In that case, the Supreme Court struck down a California law banning the sale of violent video games to kids without parental consent.5
In Brown, the issue of parental consent—and how you would verify it in practice—also became a stumbling block for that law. And that was in the offline world, where the problem is easier to solve. That does not bode well for the online world.
And the lessons to learn from Brown don’t stop at parental consent. The Supreme Court also raised a more fundamental challenge to California’s law:
The Act is also seriously underinclusive in another respect—and a respect that renders irrelevant the contentions . . . that video games are qualitatively different from other portrayals of violence. The California Legislature is perfectly willing to leave this dangerous, mind-altering material in the hands of children so long as one parent (or even an aunt or uncle) says it’s OK.
Although violent video games are not good for kids, they are not a “dangerous, mind-altering material” either. The same cannot be said of social media; there truly is something “qualitatively different” about it. Why is it that Jonathan Haidt let his 9-year-old ride the subway alone in New York City, yet firmly believes that “[s]ocial media is just not appropriate for children” under 16?6
If you truly believe that kids simply don’t belong on social media, then act on those convictions. Propose a straight ban for kids under 16. Wavering here—even if done with noble intentions—can cost you everything in the courts.
In facial age estimation—which should not be confused with facial recognition—you upload a short video clip of yourself, an algorithm estimates your age in seconds or even a split-second, and then the video clip is deleted. No data is retained.
While it didn’t come within seconds, NetChoice did sue Florida; that lawsuit is still pending.
In three states, NetChoice’s briefs have argued that the state’s age verification law does not account for “the difficulty in verifying a parent-child relationship”; one brief even prefaced the point with a “Most fundamentally” to drive it home.
If you choose a straight ban (a one-front war), NetChoice could try to argue that your law is not narrowly tailored: why not allow kids on social media if their parents are OK with it? The counter-argument can be taken straight from NetChoice’s briefs: you didn’t offer that option because you contemplated “the difficulty in verifying a parent-child relationship.”
Arkansas’s problem here was not an incompetent witness, but an honest witness. Sometimes, your worst enemy is an expert witness who’s honest to a fault.
Since this law targeted violent video games, it was clearly a content-based law subject to strict scrutiny. In this case, however, if the definition of social media is drafted correctly (a challenging but feasible problem), an age verification law for social media can be a content-neutral law, which is subject only to intermediate scrutiny.