Digging Up DiRT
Advocates hope that the App Store Accountability Act is constitutional. Hope is not a strategy.
What’s the difference between tech and tech policy? In tech policy, if the judge strikes down a law, you write an op-ed blaming the judge. In tech, if a site goes down, you write a postmortem: a structured process to learn from failure.
But while learning from failure is good, preventing failure is even better. And when it comes to preventing failure, site reliability engineers (SREs) have a famous saying: “Hope is not a strategy.” In tech policy, courts have blocked six age verification laws for social media. That raises the question: have we learned from these failures? Some have since pivoted to age verification for app stores, hoping this pivot will work. Hope is not a strategy.
“The SRE culture is forever vigilant and constantly questioning: What could go wrong?” SREs don’t build coalitions with other SREs and co-author papers with great theoretical arguments on why nothing will go wrong. They throw DiRT at the problem: Disaster Recovery Testing. They simulate disaster scenarios to see how systems will actually react when the rubber meets the DiRT road; no academic paper could serve as a substitute for DiRT.
But how can we apply DiRT to tech policy? Once an age verification bill becomes law, it is almost guaranteed that NetChoice will challenge it in court. Thus, here’s the scenario: I am NetChoice’s technologist, and I’ve been asked to write an expert declaration that they will include when they ask a court to block an age verification law for app stores. What could go wrong with this law?
(Since I’m testing out this idea, I’ll start with an excerpt instead of the entire mock declaration. If this idea works out, I can then write the entire thing. This excerpt will analyze the full scope of the law, which doesn’t just affect Big Tech. And since this is a mock exercise, I won’t worry about things like the finer formatting details.)
Since there are multiple age verification laws for app stores, this mock expert declaration will be based on Texas’s law, SB 2420.
1. I understand that in a facial challenge, courts must apply a two-step analysis under Moody v. NetChoice (2024).1 “The first step in the proper facial analysis is to assess the state laws’ scope. What activities, by what actors, do the laws prohibit or otherwise regulate?” This step analyzes not only a law’s “heartland applications,” but also its application to services such as Gmail, Etsy, Venmo, and Uber. This first step can benefit from the expertise of a technologist, who can more easily navigate an online world that is “variegated and complex.”
2. Back in the time of Reno v. ACLU (1997), 40 million people used the Internet. In 2022, Apple’s app store alone had over 650 million weekly visitors and nearly 1.8 million apps. And the variety of apps is so expansive that back in 2009, Apple ran an ad campaign with a memorable slogan: “There’s an app for that.” One commercial highlighted an app to check snow conditions on the mountain, an app to count the calories in your lunch, and an app to figure out where you parked your car: “There’s an app for just about anything.”
3. The Act would require age verification “[w]hen an individual in this state creates an account with an app store.” I understand that if an adult just wanted to download an app to figure out where they parked their car, they would have to verify their age with the app store first. For minors, the app store must “obtain consent for each individual download or purchase sought by the minor.” I understand that a minor would need parental consent to download a calorie-counting app—or any app.
4. I understand that in Reno, the Supreme Court rejected an argument to uphold the Communications Decency Act by applying Renton v. Playtime Theatres (1986): “According to the Government, the CDA is constitutional because it constitutes a sort of ‘cyberzoning’ on the Internet. But the CDA applies broadly to the entire universe of cyberspace.” I also understand that, similar to the CDA, the Act does not define any cyber-zone that is inappropriate for minors. Rather, it “applies broadly to the entire universe of” apps.2
5. Under the second step of Moody’s facial analysis, “The next order of business is to decide which of the laws’ applications violate the First Amendment, and to measure them against the rest.” Here, I certainly sympathize with the Court as to the task before it. With millions of apps, it would not be feasible to analyze apps individually, determining whether the Act violates the First Amendment for each individual app.
6. While the defendants may anecdotally identify apps that are inappropriate for minors, as the saying goes, the plural of anecdote is not data. Measuring an ecosystem containing millions of apps is a very different task. And even if the plural of anecdote were data, these anecdotes only measure one side of the equation—the constitutional applications—while Moody requires that we measure both sides, weighing the unconstitutional applications against the constitutional ones.
7. Per Moody, “The question is whether ‘a substantial number of [the law’s] applications are unconstitutional, judged in relation to the statute’s plainly legitimate sweep.’ ” That question simplifies the Court’s task. It may not need to analyze the entire ecosystem; if it finds a “substantial number” of unconstitutional applications, it can stop its analysis there. And while a technologist cannot determine whether the Act is unconstitutional for any class of applications, they can still identify potential candidate classes for the Court to analyze.
8. To further simplify the task, we can assume arguendo that the cyber-zone of constitutional applications is defined expansively and favorably for the defendants (e.g., including all social media apps). If we can find a substantial number of unconstitutional applications in relation to this expansive zone, we would also have a substantial number in relation to the smaller actual zone. This approach also avoids any constitutional questions involved in drawing the actual zone’s precise boundaries.
9. I understand that HB 1181 in Texas, which was upheld by the Supreme Court in Free Speech Coalition v. Paxton (2025), required age verification for sites where one-third of the content was obscene for minors. The Act, however, would apply even to apps where 0% of the content is obscene for minors.
10. I understand that in Brown v. Entertainment Merchants Association (2011), the Supreme Court struck down a law that prohibited the sale of violent video games to minors, absent parental consent. The Act would apply not only to apps containing violent video games, but also to apps containing non-violent video games.
11. The Act would also apply to Bible apps, as well as apps containing seminal texts for other religions.
12. I understand that the First Amendment protects artistic expression, not just political speech. For those who remember the Paint app in Windows, which dates back to the 1980s, the Act would apply to any Paint-like app today. It would also apply to Photoshop and other apps that serve an artistic purpose.
13. Apps have a wide variety of content models. Some apps primarily rely on user-generated content, such as social media. Some apps primarily rely on third-party content, but play a very active role in curating that content, such as Netflix. Some apps primarily rely on first-party content, such as The New York Times app. Even if we assume arguendo that the cyber-zone of constitutional applications includes all apps that primarily rely on user-generated content, we would still have to analyze apps with different content models.
14. Even within the class of apps that primarily rely on user-generated content, further distinctions can be drawn. Such apps are not limited to social media apps. They would also include user-generated encyclopedias, such as Wikipedia, and apps that rely on user-generated reviews, such as Yelp.
15. Apps have a wide variety of content distribution models. Some apps rely on recommendation algorithms, some of which may be designed to maximize the time spent on the app. Some apps use features like infinite scrolling for a content feed or auto-playing the next video. The Act, however, also applies to apps that don’t use recommendation algorithms or features like infinite scroll or auto-play. Its scope would include, for example, email apps and messaging apps such as Signal.
16. Finally, a variety of people create apps. Not every company with an app is a Big Tech company with a small army of developers at its beck and call. Many apps are built by small businesses with no technical talent on staff; many such businesses hire someone to build an app for them. Thus, if legislation requires any change to an app, that may force such businesses to pay someone to update their app for them. The same idea would also extend to nonprofits with no technical talent on staff.
This is not part of the declaration, but I wanted to briefly address the argument that this part of Moody is “nonbinding dicta,” an argument that cites Justice Alito’s separate opinion in Moody. Even the Fifth Circuit would not buy this claim; it vacated a preliminary injunction in NetChoice v. Fitch because the district court didn’t apply the two-step facial analysis required by Moody.
Technically, the scope of the Act excludes apps that are not downloaded via an app store. However, this distinction would not have a significant effect on the facial analysis. There would still be millions of apps, and the variety of apps would still cover just about anything.