<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Technical Assistance]]></title><description><![CDATA[Tech policy. It's time to build legislation.]]></description><link>https://www.technicalassistance.io</link><image><url>https://substackcdn.com/image/fetch/$s_!q3Pb!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3e054d1-137a-47bd-a4af-c9e462fb84f4_256x256.png</url><title>Technical Assistance</title><link>https://www.technicalassistance.io</link></image><generator>Substack</generator><lastBuildDate>Thu, 07 May 2026 11:55:01 GMT</lastBuildDate><atom:link href="https://www.technicalassistance.io/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Mike Wacker]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[technicalassistance@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[technicalassistance@substack.com]]></itunes:email><itunes:name><![CDATA[Mike Wacker]]></itunes:name></itunes:owner><itunes:author><![CDATA[Mike Wacker]]></itunes:author><googleplay:owner><![CDATA[technicalassistance@substack.com]]></googleplay:owner><googleplay:email><![CDATA[technicalassistance@substack.com]]></googleplay:email><googleplay:author><![CDATA[Mike Wacker]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Age Verification for What?]]></title><description><![CDATA[Surface-level similarities between age verification laws can conceal deeper differences.]]></description><link>https://www.technicalassistance.io/p/age-verification-for-what</link><guid isPermaLink="false">https://www.technicalassistance.io/p/age-verification-for-what</guid><dc:creator><![CDATA[Mike Wacker]]></dc:creator><pubDate>Wed, 10 Sep 2025 13:03:21 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!q3Pb!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3e054d1-137a-47bd-a4af-c9e462fb84f4_256x256.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I&#8217;m <a href="https://www.greentape.pub/p/one-year-in-dc">not the first</a> to say it, but &#8220;the think tank ecosystem can be an echo chamber.&#8221; Depending on which echo chamber you prefer, you can easily find think-tank experts who reflexively support&#8212;or reflexively oppose&#8212;age verification. I don&#8217;t fit into either group, but I&#8217;m an engineer, not a think-tank expert. I supported age verification for porn sites, but I also spotted a <a href="https://www.city-journal.org/article/what-is-social-media">critical flaw</a> in age verification laws for social media.</p><p>When the Free Speech Coalition sued to block Texas&#8217;s age verification law for porn sites, the Supreme Court upheld that law in <em><a href="https://www.supremecourt.gov/opinions/24pdf/23-1122_3e04.pdf">Free Speech Coalition v. Paxton</a></em><a href="https://www.supremecourt.gov/opinions/24pdf/23-1122_3e04.pdf"> (2025)</a>. 
When NetChoice sued to block Mississippi&#8217;s age verification law for social media, though, Justice Kavanaugh&#8212;who was part of the majority for <em>FSC</em>&#8212;wrote a <a href="https://www.supremecourt.gov/opinions/24pdf/25a97_5h25.pdf">brief opinion</a> suggesting that this law was unconstitutional.</p><p>Now, states have passed age verification laws for app stores: the App Store Accountability Act. And more court battles are inevitable. Who will win this time? Here, I&#8217;ve read quite a few polished policy pieces that feel more like a pep rally for one side; a mock battle is more my style. In a court battle, my role would be a technical expert, so I&#8217;ve written a mock expert declaration in opposition to this law.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><div><hr></div><p>1. When NetChoice sued to block Mississippi&#8217;s age verification law for social media, the Fifth Circuit <a href="https://www.ca5.uscourts.gov/opinions/pub/24/24-60341-CV0.pdf">remarked</a>, &#8220;This case continues our struggle with the interface of law and the rapidly changing universe of technology.&#8221;</p><p>2. In this case, comparisons to <em>Free Speech Coalition v. Paxton</em> (2025) are inevitable. Such comparisons, though, again bring this Court back to &#8220;the interface of law and the rapidly changing universe of technology.&#8221; On the technical side, what similarities&#8212;or differences&#8212;would be found in the factual records for both cases?</p><p>3. On the surface, the factual records seem similar. Age verification technology today would be similar to (if not better than) age verification technology at the time of <em>FSC</em>. The factual record is incomplete, however, unless it also considers the requirements of the age verification law.</p><p>4. Compare two software features: finding nearby restaurants and finding nearby friends. They seem similar on the surface, but restaurants rarely change their location, while friends frequently change their location. Unlike nearby restaurants, software to find nearby friends has this requirement: it processes over 100,000 location updates per second.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> Likewise, the App Store Accountability Act (&#8220;the Act&#8221;) imposes materially different requirements, compared to the law in <em>FSC</em> (&#8220;H.B. 1181&#8221;).</p><p><strong>Comparing the Basic Requirements of Both Laws</strong></p><blockquote><p><em>NOTE: Since there are multiple versions of the App Store Accountability Act, I&#8217;ve focused on the requirements that are common to most, if not all, versions of this act.</em></p></blockquote><p>5. Based on my understanding of H.B. 1181, it contains these requirements:</p><ul><li><p>Pornographic sites must block users who are under 18.</p></li><li><p>Pornographic sites must verify each user&#8217;s age category: under 18, or 18 and older.</p></li></ul><p>6. Based on my understanding of the Act, it contains these requirements:</p><ul><li><p>App stores may allow users who are under 18, but they must obtain parental consent for each app that an underage user wants to download.</p></li><li><p>App stores must verify each user&#8217;s age category: under 13, 13 to 15, 16 to 17, or 18 and older.</p></li><li><p>App stores must verify parental consent for minors.</p></li></ul><p><em>Under 18 vs. 
Granular Age Categories</em></p><p>7. Facial age estimation is a modern method of age verification. Here is how it works: a webcam captures a short video (or an image) of one&#8217;s face. An algorithm uses this video to estimate the person&#8217;s age. Once that estimate is produced, the video can then be deleted, alleviating privacy concerns. Because it relies on the user&#8217;s face, this is known as a biometric method of age verification.</p><p>8. Suppose that an engineer were designing an age verification system for pornographic sites; the system only needs to verify if a user&#8217;s age is under 18, or 18 and older.</p><p>9. An engineer may look at using facial age estimation. This solution would produce the correct age category for most users&#8212;though a secondary method would be needed for 18- to 20-year-olds. In its <a href="https://www.yoti.com/wp-content/uploads/2025/08/Yoti-Age-Estimation-White-Paper-July-2025-PUBLIC-v1.pdf">whitepaper</a> on facial age estimation, Yoti reports that it can reliably determine that 13- to 17-year-olds are under 21; the accuracy is 99.3%.</p><p>10. According to the Age Verification Providers Association&#8217;s <a href="https://www.supremecourt.gov/DocketPDF/23/23-1122/332649/20241122182350193_No.%2023-1122%20AVPA%20amicus.pdf">amicus brief</a> in <em>FSC</em>, &#8220;while biometric age verification cannot perfectly identify a user&#8217;s age, it effectively waves in the vast majority of users who are well over 18, leaving potential doubts only as to those between 18 and 21.&#8221; </p><p>11. But suppose that the requirements are changed: the system must now verify if a user&#8217;s age is under 13, 13 to 15, 16 to 17, or 18 and older.</p><p>12. Facial age estimation is not perfect. If the margin of error is &#177;1 year, a user estimated to be 15 could actually be 14 or 16; 16 is a different age category. Per Yoti&#8217;s whitepaper, their facial age estimation has a mean absolute error of 1.1 years for 13- to 17-year-olds.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> For many minors, the margin of error would cross the boundaries of an age category. </p><p>13. In the offline world, people often verify their age with a driver&#8217;s license, so an engineer could try that option next. One age category, though, is 13 to 15. Many users in that category do not have a driver&#8217;s license (or even a learner&#8217;s permit).</p><p><em>Blocking Underage Users vs. Allowing Underage Users with Parental Consent</em></p><p>14. Suppose that a new requirement is added: the system must verify parental consent for users who are under 18. The key challenge here is verifying the parental relationship between an adult and a minor. (Once that relationship is established, verifying parental consent is straightforward.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a>)</p><p>15. Facial age estimation can only verify an age, not a parental relationship. Likewise, a government ID can only verify an identity and an age, not a parental relationship.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a></p><p>16. A birth certificate, by contrast, does list a parent, but it can only verify who has custody at the time of birth. 
Custody can change, for example, if the child is adopted&#8212;or for minors with divorced parents, minors in foster care, and emancipated minors.</p><p>17. In his testimony for Arkansas&#8217;s age verification law, Tony Allen of the Age Verification Providers Association <a href="https://storage.courtlistener.com/recap/gov.uscourts.arwd.68680/gov.uscourts.arwd.68680.44.0.pdf">observed</a> that the &#8220;biggest challenge&#8221; is establishing the parental relationship: &#8220;It&#8217;s easy to say that this person who is giving the consent is, let&#8217;s say, in their 40s, versus the person that&#8217;s asking for the consent being under 18. But actually establishing that that is a parent or a legal guardian, that&#8217;s the challenge with those processes.&#8221;</p><p>18. Age is a self-contained fact. I understand that a parental relationship, though, is a government-dependent fact; for example, the government decides who gets custody in a divorce, or whether an adult can adopt a minor. When an engineer designs a system that verifies a parental relationship, navigating custody laws goes beyond what engineering alone can do.</p><p><strong>The Scope and Scale of the Act</strong></p><p>19. Verifying age or parental consent is a means to an end. And I understand that the requirements of both laws differ as to their ends (not just their means). The scope of the Act is not limited to pornographic apps.</p><p><em>Assessing the Scope and Scale</em></p><p>20. I understand that this Court is bound by <em><a href="https://www.supremecourt.gov/opinions/23pdf/22-277_d18f.pdf">Moody v. NetChoice</a></em><a href="https://www.supremecourt.gov/opinions/23pdf/22-277_d18f.pdf"> (2024)</a>, which requires a two-step facial analysis: &#8220;The first step in the proper facial analysis is to assess the state laws&#8217; scope.&#8221; Based on my understanding of the Act, the scope includes (but may not be limited to) every app that is downloaded from Apple and Google&#8217;s app stores.</p><p>21. The scale here is defined by both volume and variety. In terms of volume, Apple <a href="https://www.apple.com/newsroom/2023/05/developers-generated-one-point-one-trillion-in-the-app-store-ecosystem-in-2022/">reported</a> that, in 2022, its app store had nearly 1.8 million apps. Estimates <a href="https://42matters.com/google-play-statistics-and-trends">indicate</a> that Google&#8217;s app store also has millions of apps. In terms of variety, <em>Wired</em> <a href="https://www.wired.com/2010/10/app-for-that/">published</a> a story in 2010 about an Apple slogan &#8220;so catchy that it's endlessly parroted by the media&#8221;: &#8220;There&#8217;s an app for that.&#8221; One <a href="https://www.youtube.com/watch?v=szrsfeyLzyg">TV ad</a> featured an app to check snow conditions on the mountain, an app to count the calories in your lunch, and an app to find where you parked your car.</p><p>22. Per <em>Moody</em>, &#8220;The next order of business is to decide which of the laws&#8217; applications violate the First Amendment, and to measure them against the rest.&#8221; In terms of volume, analyzing millions of apps on an individual basis could go beyond this Court&#8217;s capabilities.</p><p>23. In terms of variety, while many pornographic sites exist, one can make generalizations that apply broadly to all pornographic sites. An app ecosystem where apps can do just about anything, however, is also an ecosystem that resists meaningful generalizations.</p><p>24. 
It may be possible to anecdotally identify apps that are inappropriate for minors. But even assuming arguendo that the Act is constitutional as applied to these apps, the plural of anecdote is not data. Measuring an ecosystem containing millions of different apps&#8212;including both constitutional and unconstitutional applications&#8212;is a very different task.</p><p><em>Managing Complexity and Scale</em></p><p>25. At 65 MPH, can a car drive 300 miles from St. Louis to Chicago in 4.5 hours? Consider real-world conditions: traffic, red lights, and roads with speed limits under 65 MPH. The answer is no. Even under ideal conditions&#8212;no traffic, no red lights, and speed limits of 65 MPH or greater&#8212;the drive would take over 4.5 hours (300 miles / 65 MPH &#8776; 4.62 hours).</p><p>26. Does the Act&#8217;s plainly legitimate sweep include social media apps? This Court may have to confront such constitutional questions not just for social media apps, but for many types of apps. Here, this Court could apply <em>Moody</em>&#8217;s framework under ideal conditions for the State&#8212;assuming arguendo that the Act&#8217;s legitimate sweep includes social media apps.</p><p>27. Suppose that&#8212;even under ideal conditions for the State&#8212;the number of unconstitutional applications grows beyond what <em>Moody</em> allows. In that case, this Court need not consider real-world conditions, such as deciding whether the Act is constitutional as applied to social media apps.</p><p>28. Still, analyzing this ecosystem can be difficult without concrete examples. Social media&#8212;which relies on user-generated content&#8212;is one example. This ecosystem also includes apps that rely on curated third-party content, such as the collection of movies on the Netflix app, and apps that rely on first-party content, such as The New York Times app.</p><p>29. Apps that rely on user-generated content do not just include social media apps. They also include user-generated encyclopedias, such as Wikipedia&#8217;s app, and user-generated reviews, such as Yelp&#8217;s app.</p><p>30. This ecosystem includes non-violent video games such as Solitaire, Candy Crush, and Angry Birds. It includes Bible apps and other religious apps. And it includes an app to check snow conditions on the mountain, an app to count the calories in your lunch, and an app to find where you parked your car.</p><p><strong>Sharing Age Data with Apps: Cybersecurity and Privacy Risks</strong></p><p>31. Based on my understanding of the Act, it contains these requirements:</p><ul><li><p>App stores must share the (verified) age category of a user with apps. For minors, app stores must also share the parental consent status.</p></li><li><p>Apps must verify the age category and parental consent for users, using the data provided by app stores.</p></li></ul><p><em>The &#8220;Zero Trust&#8221; Mindset</em></p><p>32. A core cybersecurity challenge is the <strong>defender&#8217;s dilemma</strong>: attackers can strike from anywhere using any method. If hackers are trying to penetrate a corporate network, they could exploit a security vulnerability, or they could <a href="https://www.cloudflare.com/learning/email-security/email-attachments/">embed malware</a> in a PDF and email it to any employee. For the attackers to succeed, the defenders only need to fail once.</p><p>33. If the corporate network is a castle, attackers often infiltrate this castle. 
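</p><p>A back-of-the-envelope calculation shows why &#8220;the defenders only need to fail once&#8221; is so dangerous at scale. (A minimal sketch with illustrative numbers, not figures from any study.)</p><pre><code># Illustrative numbers only: suppose each of 1,000 employees
# independently spots a phishing email 99% of the time.
p_spot = 0.99
employees = 1000

# Probability that at least one employee clicks the malicious attachment.
p_breach = 1 - p_spot ** employees
print(f"{p_breach:.5f}")  # prints 0.99996 -- a breach is nearly certain
</code></pre><p>Even with a 99% success rate per employee, the defenders&#8217; odds of a perfect record collapse as the attack surface grows. 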
John Reed Stark, former chief of the SEC&#8217;s Office of Internet Enforcement, once <a href="https://news.medill.northwestern.edu/chicago/facing-inevitable-data-breaches-and-new-privacy-laws-companies-shift-focus-to-response/">said</a>, &#8220;Cybersecurity is an oxymoron and a data breach is inevitable.&#8221; Thus, the industry developed a <strong><a href="https://www.cloudflare.com/learning/security/glossary/what-is-zero-trust/">&#8220;zero trust&#8221;</a></strong> paradigm: don&#8217;t assume it&#8217;s safe inside the castle.</p><p>34. If the app store is a castle, both Apple and Google have an app review process to ensure the land inside their castle is safe. This process is not perfect. In 2024, a fraudulent app <a href="https://blog.lastpass.com/posts/warning-fraudulent-app-impersonating-lastpass-currently-available-in-apple-app-store">impersonating</a> LastPass, a popular password manager, was discovered in Apple&#8217;s app store. In 2025, the Tea Dating Advice app suffered a <a href="https://www.404media.co/women-dating-safety-app-tea-breached-users-ids-posted-to-4chan/">data breach</a> that exposed users&#8217; selfies and driver&#8217;s licenses.</p><p>35. The defender&#8217;s dilemma also applies to app stores, but Apple and Google must defend a landscape with millions of different apps&#8212;a landscape much larger than a corporate network. And inevitably, bad apps <a href="https://www.washingtonpost.com/technology/2021/06/06/apple-app-store-scams-fraud/">often</a> <a href="https://about.fb.com/news/2022/10/protecting-people-from-malicious-account-compromise-apps/">find</a> ways to infiltrate the castle. Thus, the same &#8220;zero trust&#8221; principle applies: don&#8217;t trust an app just because it&#8217;s inside the castle walls of the app store.</p><p><em>Best Practices for Sensitive Data</em></p><p>36. For age data, a useful frame of reference is location data; both are sensitive data. The McDonald&#8217;s app wants your location to find nearby restaurants, but data brokers (companies that collect personal data and sell it) also want your location. And parents may object if a browser broadcasts their kid&#8217;s location to every site they visit. Thus, the tech industry converged on this principle: do not share location data without the user&#8217;s permission.</p><p>37. For websites, the World Wide Web Consortium (W3C), a standards body, published a <a href="https://www.w3.org/TR/geolocation/">geolocation standard</a> that &#8220;requires express permission from an end-user before any location data is shared.&#8221; For apps, iOS, Apple&#8217;s operating system for iPhones, <a href="https://support.apple.com/en-us/102515">shares</a> location with an app only with the user&#8217;s permission. Likewise, Android, Google&#8217;s operating system for smartphones, <a href="https://support.google.com/android/answer/6179507">shares</a> location with an app only with the user&#8217;s permission.</p><p>38. A similar use/abuse duality exists for age data. Apps can use it to enforce age restrictions. However, data brokers also want this data, and predators could use it to target children. 
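</p><p>Translated into code, the location-permission model from the previous paragraph could be applied to age data. (A hypothetical sketch; none of these names are real platform APIs.)</p><pre><code># Hypothetical sketch (not a real platform API): share a verified age
# category with an app only if a parent has opted in, mirroring the
# permission model that iOS, Android, and the W3C use for location.
class AgeGate:
    def __init__(self):
        self.verified_age = {}      # user_id to category, e.g. "13 to 15"
        self.parent_grants = set()  # (user_id, app_id) pairs a parent approved

    def grant(self, user_id, app_id):
        """A parent allows one app to see one child's age category."""
        self.parent_grants.add((user_id, app_id))

    def age_for_app(self, user_id, app_id):
        """Return the age category only with parental permission."""
        if (user_id, app_id) in self.parent_grants:
            return self.verified_age.get(user_id)
        return None  # default: the app learns nothing
</code></pre><p>Apple&#8217;s own proposal follows this opt-in model. 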
In February 2025, Apple <a href="https://developer.apple.com/support/downloads/Helping-Protect-Kids-Online-2025.pdf">announced</a> plans to share the age category of child accounts with apps.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> However, it would do so &#8220;if and only if parents decide to allow this information to be shared, and they can also disable sharing if they change their mind.&#8221;</p><p>39. Compared to Apple&#8217;s approach, the Act lacks a key privacy protection: it forces app stores to share age data with apps <em>without a parent&#8217;s permission</em>.</p><p>40. From a parent&#8217;s perspective, an app developer can reasonably be characterized as a stranger. And not every app developer may work for a reputable tech company. For a parent, the decision to let a child use an app is distinct from the decision to share a child&#8217;s personal data&#8212;including age data&#8212;with an app.</p><p>41. Age data can also be combined with other data that the app has already collected. For example, when the Act provides age data to apps that use location data, it introduces this risk: it tells strangers which users are kids, and where those kids are located.</p><p><em>Comparing Incentive Structures to Protect Privacy</em></p><p>42. Based on my understanding, the Act also forbids apps from sharing age data with unaffiliated third parties. In other words, the Act forces app stores to share age data with apps without asking a parent for permission, but then tells apps not to misuse this data.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a></p><p>43. If an app misuses this data, someone must catch them. Here, app stores and law enforcement face a vast, ever-changing landscape with all three Vs of Big Data: volume, variety, and velocity. For volume and variety, millions of different apps exist. For velocity, many app developers regularly update their apps; app stores must review a steady stream of updates.</p><p>44. Even if an app is caught, the app developer, for example, could live in China and have ties to the Chinese military.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a> In August 2025, <a href="https://www.comparitech.com/news/a-deeper-dive-into-the-china-and-russia-linked-vpns-on-ios-and-android/">security researchers</a> at Comparitech found 10 VPN (virtual private network) apps that communicated with Russian domains and 6 that communicated with Chinese domains. <a href="https://www.techtransparencyproject.org/articles/apple-offers-apps-with-ties-to-chinese-military">Earlier research</a> from the Tech Transparency Project traced multiple VPN apps back to QiHoo 360, a Chinese military company.</p><p>45. In <em>FSC</em>, I understand that the Court concluded that pornographic websites would &#8220;have every incentive to assure users of their privacy.&#8221; Here, though, it would not be prudent to assume that millions of different apps would all be incentivized to protect privacy&#8212;especially in the case of Chinese or Russian apps.</p><p><em>Data Minimization</em></p><p>46. Another privacy principle is to only collect or share data where there is a <a href="https://epic.org/issues/consumer-privacy/data-minimization/">specific, legitimate purpose</a> for using that data. 
(Nonetheless, collecting or sharing a large amount of data still presents privacy risks, even with a specific, legitimate purpose.)</p><p>47. The first such purpose is denying access to an app. But this purpose can be accomplished without sharing age data with apps. If the burden of verifying age and parental consent is shifted to the operating system (OS), the OS can block an app from starting if a parent blocks that app.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-10" href="#footnote-10" target="_self">10</a></p><p>48. Second, apps could build child safety features that depend on an age category. But not all apps may build such features. Even for those that do, Apple or Google may want to first verify that the app has a specific, legitimate purpose to use an age category. The Act, by contrast, forces app stores to unconditionally share an age category with every app.</p><p>49. There exists, however, a third purpose. I understand that some child safety regulations are triggered when a site or app has actual knowledge of a user&#8217;s age, such as <a href="https://www.law.cornell.edu/uscode/text/15/6502">certain provisions</a> in the Children&#8217;s Online Privacy Protection Act.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-11" href="#footnote-11" target="_self">11</a> Such laws can create an incentive structure where sites and apps can reduce regulatory risk by avoiding actual knowledge of their users&#8217; ages. Child safety advocates have <a href="https://www.commonsensemedia.org/sites/default/files/featured-content/files/2024-us-age-assurance-white-paper_final.pdf">often criticized</a> the actual knowledge standard; some even <a href="https://www.aei.org/op-eds/protecting-teens-from-big-tech-five-policy-ideas-for-states/">claim</a> it is &#8220;almost impossible to prove in a court of law.&#8221;</p><p>50. If an app verifies a user&#8217;s age, it then has actual knowledge of that age.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-12" href="#footnote-12" target="_self">12</a> This is merely an idea, though. Legislators must still chart out a path to implement this idea.</p><p>51. The Act charts out this path. First, it places the burden of age verification on app stores. Then, it forces app stores to share that verified age category with every app. On the surface, this solution appears to work. Nearly every app has actual knowledge of its users&#8217; ages.</p><p>52. Having considered how age data is used to create actual knowledge, the next step is to consider how this same data can be abused. Recall that apps should be untrusted by default; &#8220;zero trust&#8221; applies. In a world filled with hackers, data brokers, and predators, the Act forces app stores to share age data with millions of untrusted apps&#8212;without asking a parent for permission. This is a dangerous path.</p><p><strong>Concluding Note</strong></p><p>53. Justice Thomas&#8212;who authored the majority opinion in <em>FSC</em>&#8212;asked the first question of <a href="https://www.supremecourt.gov/oral_arguments/argument_transcripts/2024/23-1122_7m58.pdf">oral arguments</a>: &#8220;Can age verification systems ever be found constitutional?&#8221; I offer no opinions on the constitutionality of age verification as an idea. 
In the context of a case or controversy, though, I can offer my expertise concerning an implementation of that idea. A different implementation with different requirements would be evaluated differently.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Since this is a mock declaration, I&#8217;ll focus on the core content and omit formalities like a fancy bio, proper formatting, and boilerplate legal verbiage. You can assume that these could easily be added in if this were a real declaration.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>These examples were based on the first two chapters of <em>System Design Interview, Volume 2</em>. In the chapter on nearby friends, the estimate was that this system would process 334,000 location updates per second.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>The margin of error is different from the mean absolute error. But if, for example, the mean absolute error is 1 year, the margin of error for a 95% confidence interval is probably significantly larger than &#177;1 year.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>Likewise, parental controls like Apple&#8217;s Family Sharing and Google&#8217;s Family Link are relatively easy to build if you assume that the person who claims to be the child&#8217;s parent is in fact the child&#8217;s parent.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>Some have suggested matching last names or addresses on IDs. However, last names may not match if a mother kept her maiden name. Minors without a driver&#8217;s license could use a passport, but passports do not have an address. The address on a driver&#8217;s license may not match if a family moves within a state.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>To the best of my knowledge, this is a declared age; Apple does not verify this age.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>I understand that the Act also requires that app stores encrypt this data before it is shared with apps. Encryption, however, only prevents a third party from intercepting this data. 
If the app store does not trust the app&#8212;at least until a parent grants permission to share age data&#8212;encryption cannot solve that problem.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>The app developer would also have to be identified first. This process depends in part on what information an app store collects from developers, and what steps the app store takes to verify that information. Bad actors may provide false information.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p>On Android devices, Samsung and Epic Games would no longer need to verify age or parental consent for their app stores. Only Google would need to verify age and parental consent; it owns the Android OS.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-10" href="#footnote-anchor-10" class="footnote-number" contenteditable="false" target="_self">10</a><div class="footnote-content"><p>An operating system may also need to share this data with an app store, but managing data sharing with a few app stores is much simpler than managing data sharing with millions of different apps.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-11" href="#footnote-anchor-11" class="footnote-number" contenteditable="false" target="_self">11</a><div class="footnote-content"><p>I understand that these provisions in COPPA also apply to sites or services that are &#8220;directed to children,&#8221; where a child is &#8220;under the age of 13.&#8221; However, a site may state that users must be at least 13 years old, but then not verify age at sign-up.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-12" href="#footnote-anchor-12" class="footnote-number" contenteditable="false" target="_self">12</a><div class="footnote-content"><p>However, users could substitute websites for apps in some cases. For example, they could use facebook.com instead of the Facebook app.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Fact Check: Age Verification for App Stores and Privacy ]]></title><description><![CDATA[Would the App Store Accountability Act make you share more personal data with Apple and Google? The answer is yes, despite claims to the contrary.]]></description><link>https://www.technicalassistance.io/p/fact-check-age-verification-for-app</link><guid isPermaLink="false">https://www.technicalassistance.io/p/fact-check-age-verification-for-app</guid><dc:creator><![CDATA[Mike Wacker]]></dc:creator><pubDate>Thu, 08 May 2025 13:02:58 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!eKG2!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6fa835eb-ff7e-435c-8121-ce9813b995e9_2568x1256.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Under the App Store Accountability Act (an age verification bill), users will need to provide more personal data to Apple and Google before they can access their app stores. 
Claims to the contrary are false&#8212;and are based on a major technical error.</p><div><hr></div><p>Age verification bills often raise privacy concerns, but does age verification for app stores offer a novel approach that avoids these concerns? In <a href="https://joellthayer.medium.com/written-testimony-of-joel-thayer-president-the-digital-progress-institute-0e8307b9109f">written testimony</a> to Texas&#8217;s legislature, Joel Thayer, who &#8220;developed the legal and policy framework&#8221; for this approach, claimed that the answer is yes:</p><blockquote><p>The Act even avoids the obvious privacy objection that Big Tech organizations like to lodge against age verification measures at the website tier. App stores <em>already</em> have all of this age information. This means that the user would not need to proffer more data to these platforms &#8212; a distinct characteristic from website-level age verification requirements.</p></blockquote><p>After the federal version of this bill <a href="https://james.house.gov/news/documentsingle.aspx?DocumentID=247">was introduced</a>, many prominent supporters also made similar claims. It&#8217;s a rather remarkable claim about privacy&#8212;one that could easily persuade many people to support this act. It&#8217;s also a false claim; users would have to &#8220;proffer more data to these platforms.&#8221;</p><p>&#8220;App stores <em>already</em> have all of this age information.&#8221; Thayer&#8217;s mistake here is subtle but important: an inability to distinguish between an <em>attested</em> age and a <em>verified</em> age.</p><p>App stores do already have age information, but what they have is an attested age.</p><p>Let&#8217;s use Google as an example. Google&#8217;s app store knows who you are&#8212;or perhaps more accurately, who you claim to be&#8212;based on your Google Account. 
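</p><p>The difference is easy to state in data-model terms: an attested age is a claim, while a verified age carries evidence. (A hypothetical sketch; these fields are not Google&#8217;s or Apple&#8217;s actual account schema.)</p><pre><code>from dataclasses import dataclass
from typing import Optional

# Hypothetical fields for illustration -- not any platform's real schema.
@dataclass
class AccountAge:
    attested_age: int                          # whatever the user typed at sign-up
    verified_age: Optional[int] = None         # set only after an actual age check
    verification_method: Optional[str] = None  # e.g. "facial estimation", "ID"

account = AccountAge(attested_age=21)  # a 12-year-old can type 21
assert account.verified_age is None    # an attested age proves nothing
</code></pre><p>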
And you did <a href="https://support.google.com/accounts/answer/1733224?hl=en">tell Google your age</a> when you created that Google account.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!eKG2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6fa835eb-ff7e-435c-8121-ce9813b995e9_2568x1256.png" width="1456" height="712" alt="Screenshot of the workflow to create a new Google Account"><figcaption class="image-caption">Screenshot of the workflow to create a new Google Account</figcaption></figure></div><p>But did Google make any attempt to verify that age? No. You can lie about your age here, just like you can lie when a website asks you to enter your age. What Google has collected is merely an attested age, not a verified age. (It&#8217;s the same for Apple.)</p><p>The App Store Accountability Act, however, requires a verified age. And to verify that age, you will need to provide more personal data to Apple or Google.</p><p>To make matters worse, the App Store Accountability Act age-gates every app, not just apps that may be inappropriate for children. If you want to download a Bible app, you will need to verify your age with Apple or Google first.</p><div><hr></div><p>While I&#8217;m here, I want to briefly make another key point: when evaluating the privacy concerns of age verification, the specific requirements of an age verification bill matter just as much as, if not more than, the current state of age verification technology.</p><p>From a privacy perspective, these are two dramatically different bills:</p><ol><li><p>I just need to verify if a person is 18 or older.</p></li><li><p>I need to verify if a person is 12 and under, 13-15, 16-17, or 18+. For users under 18, I also need to verify parental consent&#8212;which requires verifying a parental relationship. Also, I will force app stores to share age data with millions of app developers.</p></li></ol><p>For example, while facial age estimation is a newer and more privacy-conscious method of age verification, it doesn&#8217;t work if you need granular age categories, such as 13-15 or 16-17. And that&#8217;s just scratching the surface&#8230;</p>]]></content:encoded></item><item><title><![CDATA[Holding the Innocent Accountable]]></title><description><![CDATA["Hold Big Tech accountable" is a slogan about punishing the guilty. 
It does not excuse punishing the innocent.]]></description><link>https://www.technicalassistance.io/p/holding-the-innocent-accountable</link><guid isPermaLink="false">https://www.technicalassistance.io/p/holding-the-innocent-accountable</guid><dc:creator><![CDATA[Mike Wacker]]></dc:creator><pubDate>Tue, 04 Mar 2025 14:03:43 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!q3Pb!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3e054d1-137a-47bd-a4af-c9e462fb84f4_256x256.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Why do you need to verify your age before you can download a Bible app? Sure, app stores could do more to protect kids, but does that really justify a bill&#8212;the App Store Accountability Act&#8212;that also &#8220;protects&#8221; you from a Bible app?</p><h2>Conduct, Not Content</h2><p>&#8220;Does this bill violate the First Amendment?&#8221;</p><p>When a legislator proposes the App Store Accountability Act&#8212;or really any bill to protect kids online&#8212;a small army of experts (or &#8220;experts&#8221;) will often show up at their door, armed with arguments that their bill violates the First Amendment.</p><p>The legislator justifiably expresses skepticism towards these hostile experts, but the experts then add that the courts have sided with them. NetChoice (a trade association for Big Tech) has sued to block age verification laws for social media in Arkansas, Ohio, Mississippi, and Utah. In all four states, the judge sided with them.</p><p>Thus, the legislator begins to question whether their law is unconstitutional. That question is too imprecise for my tastes, though. I would instead ask more precise questions&#8212;ones that are akin to what a practicing lawyer would ask.</p><p>First, what level of scrutiny applies: rational-basis review, intermediate scrutiny, or strict scrutiny? Second, does the law survive that level of scrutiny?<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>And since we&#8217;re predicting what courts would do in the real world, let&#8217;s examine a real-world case: <em><a href="https://www.supremecourt.gov/opinions/24pdf/24-656_ca7d.pdf">TikTok v. Garland</a></em><a href="https://www.supremecourt.gov/opinions/24pdf/24-656_ca7d.pdf"> (2025)</a>. Here, the Supreme Court considered a law that forces China to divest TikTok&#8212;and bans TikTok if China does not divest.</p><p>For many experts, their time to shine had come. On one side, Jennifer Huddleston of the Cato Institute <a href="https://www.cato.org/commentary/us-wants-ban-tiktok-first-amendment-demands-stronger-case-national-security">confidently proclaimed</a> that the law was unconstitutional under strict scrutiny. On the other side, Joel Thayer of the Digital Progress Institute <a href="https://x.com/joellthayer/status/1775612141393018886">confidently proclaimed</a> that there are &#8220;no [First Amendment] concerns here,&#8221; as &#8220;[t]he bill regulates TikTok's conduct, not content.&#8221; (For what it&#8217;s worth, this engineer <a href="https://www.technicalassistance.io/p/why-the-tiktok-bill-doesnt-violate">predicted</a> that courts would uphold the law under intermediate scrutiny.)</p><p>Whose elaborate legal theories would survive contact with reality? 
When the Supreme Court handed down its unanimous decision, it upheld the law under intermediate scrutiny&#8212;a standard used when some First Amendment concerns exist.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p><p>What about the App Store Accountability Act? Friendly experts have declared that this bill regulates conduct, not content&#8212;<a href="https://utahnewsdispatch.com/2025/02/10/utah-app-store-law-passes-senate-kids-accounts/">describing</a> it as a bill that regulates contracts with minors, not as a content bill. I can spot the logical fallacy there.</p><p>That argument creates a false dichotomy. It suggests that we can cleanly separate laws into conduct laws&#8212;which are subject to rational-basis review&#8212;and content laws&#8212;which are subject to strict scrutiny. In the real world, however, many laws fall somewhere in between a conduct law and a content law.</p><p>The true choice courts must make is not between strict scrutiny and rational-basis review, but between strict and intermediate scrutiny. As the Supreme Court said in <em><a href="https://www.supremecourt.gov/opinions/23pdf/22-277_d18f.pdf">Moody v. NetChoice</a></em><a href="https://www.supremecourt.gov/opinions/23pdf/22-277_d18f.pdf"> (2024)</a>, &#8220;In the usual First Amendment case, we must decide whether to apply strict or intermediate scrutiny.&#8221;</p><p>If an age verification law touches social media, it probably qualifies as the &#8220;usual&#8221; case. The App Store Accountability Act touches social media apps (and other apps).</p><p>Of course, some experts&#8212;armed with motivated reasoning&#8212;will then argue that this law is the &#8220;unusual&#8221; case. (It&#8217;s not.) Ohio already tried to claim that age verification laws are contract laws, not content laws. Judge Marbley <a href="https://storage.courtlistener.com/recap/gov.uscourts.ohsd.287455/gov.uscourts.ohsd.287455.33.0.pdf">rejected</a> that claim: &#8220;this Court is unaware of a &#8216;contract exception&#8217; to the First Amendment.&#8221;</p><p>And if these experts still won&#8217;t back down, I would then ask this: what if this were the usual case? Can this act not even survive intermediate scrutiny?</p><p>If the App Store Accountability Act could survive intermediate scrutiny, then perhaps there&#8217;s little harm in shooting your shot for rational-basis review. But if the act can&#8217;t survive intermediate scrutiny, then your entire legal argument depends on the courts applying rational-basis review. Engineers would call that a single point of failure.</p><p>And if the courts decide that this is the usual case where rational-basis review does not apply, that single point of failure collapses; the act is poorly engineered.</p><h2>Holding Dreamwidth Accountable</h2><p>In many cases, age verification bills easily pass the legislature, but fail to survive contact with reality in the courts. 
Before we jump into the App Store Accountability Act, we should first study those failures in the realm of social media.</p><p>After Utah passed its age verification law for social media, NetChoice <a href="https://netchoice.org/netchoice-sues-utah-to-keep-kids-safe-online-and-protect-constitutional-rights/">sued</a> Utah in December 2023&#8212;and the Foundation for Individual Rights and Expression <a href="https://www.thefire.org/news/lawsuit-utahs-clumsy-attempt-childproof-social-media-unconstitutional-mess">followed suit</a> in January 2024. NetChoice had already <a href="https://storage.courtlistener.com/recap/gov.uscourts.arwd.68680/gov.uscourts.arwd.68680.44.0.pdf">convinced</a> Judge Brooks to block a similar law in Arkansas in August 2023. After they sued Utah, NetChoice also <a href="https://storage.courtlistener.com/recap/gov.uscourts.ohsd.287455/gov.uscourts.ohsd.287455.33.0.pdf">convinced</a> Judge Marbley to block a similar law in Ohio in February 2024.</p><p>Needless to say, Utah legislators could not ignore the legal threat from NetChoice as they contemplated their next steps. Friendly experts, led by the Institute for Family Studies (IFS), <a href="https://www.deseret.com/opinion/2024/2/22/24079641/opinion-utah-has-led-the-nation-in-standing-up-for-kids-with-big-tech-social-media/">published a coalition letter</a> in February 2024 with this message: &#8220;Don&#8217;t back down now.&#8221; &#8220;Utah is retreating at the very moment of Big Tech&#8217;s vulnerability.&#8221;</p><p>Regarding the legal concerns, they wrote, &#8220;attempts to forecast what . . . will survive judicial review are highly speculative.&#8221; They declared that Utah&#8217;s law is &#8220;not a restriction on speech (though Big Tech lobbyists have been arguing otherwise),&#8221; as it &#8220;regulates minors&#8217; right to contract for certain goods and services.&#8221;</p><p>(This letter was published after Judge Marbley in Ohio had written that &#8220;this Court is unaware of a &#8216;contract exception&#8217; to the First Amendment.&#8221;)</p><p>Meanwhile, I sought to study the words of Judge Brooks and Judge Marbley. Even if you do not agree with them 100%, you still must respect their authority as judges. And I suspected that those laws may have been poorly engineered.</p><p>Eventually, I zeroed in on one key flaw: a bad definition of social media. The Internet contains <a href="https://siteefy.com/how-many-websites-are-there/">over one billion websites</a>; how do you accurately classify each one as &#8220;social media&#8221;/&#8220;not social media&#8221;? <a href="https://www.technicalassistance.io/p/define-social-media-part-ii-definitions">Speaking from experience</a>, it&#8217;s not as easy as it seems.</p><p>As I <a href="https://www.city-journal.org/article/what-is-social-media">warned</a> in City Journal: &#8220;Legislators in both parties may want to hold Big Tech accountable, but their reforms will go for naught unless they sweat the details of their definition.&#8221; Specifically, I focused on three common anti-patterns: content-based exceptions, overinclusive definitions, and vague definitions.</p><p>In some circles, the reaction to my fixation on the definition was not exactly positive. Was this a pointless &#8220;crusade&#8221;? In other cases, this fixation was just ignored.</p><p>Meanwhile, NetChoice was litigating social media laws in Mississippi and Texas. Both states followed a familiar pattern with their definition. 
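</p><p>In engineering terms, the pattern looks roughly like the sketch below: a deliberately broad rule, patched with carve-outs. (Hypothetical pseudocode, not the text of any statute; the exception categories are purely illustrative.)</p><pre><code># Hypothetical sketch of the drafting pattern -- not any statute's text.
def is_social_media(site):
    # Part 1: a broad rule that also catches blogs, forums, and review sites.
    covered = site["allows_accounts"] and site["hosts_user_content"]
    # Part 2: carve-outs meant to exclude what part 1 wrongly swept in.
    if site["primary_purpose"] in ("email", "news", "sports"):
        covered = False
    return covered

# A blogging service like Dreamwidth still gets swept in:
print(is_social_media({"allows_accounts": True,
                       "hosts_user_content": True,
                       "primary_purpose": "blogging"}))  # True
</code></pre><p>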
The first part of the definition was overinclusive; it included sites that weren&#8217;t social media. Thus, the second part of the definition contained exceptions that would exclude those sites. </p><p>NetChoice, however, argued that some exceptions were content-based. Judge Ozerden in Mississippi and Judge Pitman in Texas <a href="https://cases.justia.com/federal/district-courts/mississippi/mssdce/1:2024cv00170/125118/30/0.pdf">both</a> <a href="https://storage.courtlistener.com/recap/gov.uscourts.txwd.1172798016/gov.uscourts.txwd.1172798016.25.0_1.pdf">agreed</a>, blocking those laws, respectively, in July and August 2024. That same objection&#8212;content-based exceptions&#8212;had also previously persuaded Judge Brooks and Judge Marbley.</p><p>What goes wrong when a definition is &#8220;content-based&#8221;? Content-based laws are subject to strict scrutiny (while content-neutral laws are only subject to intermediate scrutiny). And strict scrutiny tends to be &#8220;strict in theory, fatal in fact.&#8221; In short, content-based exceptions tend to be fatal in fact&#8212;and were fatal in four states.</p><p>To return to Utah, in reaction to NetChoice&#8217;s legal threat, legislators decided to stay the course, but they also could not walk the exact same path. And although they acted before the court decisions in Mississippi and Texas came down, they nonetheless sensed that a definition with <a href="https://le.utah.gov/~2023/bills/static/SB0152.html">20 exceptions</a> would be a vulnerability, so they <a href="https://le.utah.gov/~2024/bills/static/SB0194.html">rewrote</a> it.</p><p>This time, NetChoice could not convince the judge that the definition had a content-based exception. They did, however, convince the judge that the definition was overinclusive, noting that it included Dreamwidth (a blogging service). NetChoice won again, and Judge Shelby blocked Utah&#8217;s law in September 2024.</p><p>The slogan was &#8220;hold Big Tech accountable,&#8221; not &#8220;hold Dreamwidth accountable.&#8221; As Judge Shelby noted, &#8220;Dreamwidth is distinguishable in form and purpose from the likes of traditional social media platforms&#8212;say, Facebook and X.&#8221; Dreamwidth was not exactly a guilty party here. Courts will not give you a pass for punishing the innocent just because your legislation also punishes the guilty.</p><h2>The Full Scope of a Law</h2><p>Perhaps the time had come to reflect back and carefully study these failures before charting a path forward. The Institute for Family Studies, however, charged forward&#8212;this time with new <a href="https://ifstudies.org/ifs-admin/resources/app-store-accountability-act.pdf">model legislation</a> for the App Store Accountability Act.</p><p>As we said earlier, we will assume that this is the usual case where &#8220;we must decide whether to apply strict or intermediate scrutiny.&#8221; And for the sake of argument, let&#8217;s assume that intermediate scrutiny applies.</p><p>Does the law survive intermediate scrutiny? Experts on both sides will bring out elaborate legal theories&#8212;and both would be willing to litigate them all the way to the Supreme Court. But before both sides take their legal battle that far, I propose that we first examine what the Supreme Court said the last time a battle like that took place.</p><p>Let us conduct the analysis that the Supreme Court laid out in <em><a href="https://www.supremecourt.gov/opinions/23pdf/22-277_d18f.pdf">Moody v. 
NetChoice </a></em><a href="https://www.supremecourt.gov/opinions/23pdf/22-277_d18f.pdf">(2024)</a>: &#8220;The first step in the proper facial analysis is to assess the state laws&#8217; scope.&#8221; </p><p>In an op-ed, the IFS and others <a href="https://thehill.com/opinion/4904617-app-store-accountability-act/">claimed</a> that their legislation merely &#8220;requires brick-and-mortar stores to check ID for purchases of age-restricted products like cigarettes and alcohol.&#8221; This fatally flawed analogy grossly misstates the scope of their act.</p><p>The correct analogy would be checking ID before you can even enter a brick-and-mortar store, such as a Walmart or a 7/11. To use <a href="https://le.utah.gov/~2025/bills/static/SB0142.html">Utah&#8217;s legislation</a> as an example, age verification does not kick in when a user downloads certain apps that are harmful to minors, but when the user &#8220;creates an account with the app store provider.&#8221;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a></p><p>Do you want to download a Bible app? You must verify your age&#8212;and get parental consent if you&#8217;re a minor. The Microsoft Word app? You must verify your age&#8212;and get parental consent if you&#8217;re a minor. The Dreamwidth app? You must verify your age&#8212;and get parental consent if you&#8217;re a minor.</p><p>If defining social media was such a vexing problem, then perhaps some thought that we could avoid this problem by pivoting to app stores. Not so. The purpose of this definition was to control the scope of the law; we could tune the law&#8217;s scope by tuning the definition. The pivot to app stores, however, actually broadened the scope.</p><p>Returning to <em>Moody</em>, &#8220;[t]he next order of business is to decide which of the laws&#8217; applications violate the First Amendment, and to measure them against the rest.&#8221; From there, &#8220;[t]he question is whether &#8216;a substantial number of [the law&#8217;s] applications are unconstitutional, judged in relation to the statute&#8217;s plainly legitimate sweep.&#8217; &#8221;</p><p>Here, the IFS focused on the &#8220;heartland applications&#8221; of this act, such as social media. But as we look at the full scope of the law, we can find a steady stream of examples&#8212;such as a Bible app or the Microsoft Word app&#8212;where the age verification and parental consent mandates do not even survive intermediate scrutiny.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a></p><p>Even if the act is constitutional for those &#8220;heartland applications,&#8221; we can also identify a substantial illegitimate sweep&#8212;rendering the act unconstitutional.</p><p>If requiring age verification for Dreamwidth rendered a law overinclusive, then imagine how overinclusive a law must be if it requires age verification for a Bible app.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>But, if a law can survive strict scrutiny, it is not necessary to decide which level of scrutiny applies. 
And if a law would not survive intermediate scrutiny (and we&#8217;ve ruled out rational-basis review), it is not necessary to decide whether intermediate or strict scrutiny applies.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>While the Supreme Court assumed without deciding that heightened scrutiny applies (i.e., either intermediate or strict scrutiny applies), the <a href="https://media.cadc.uscourts.gov/opinions/docs/2024/12/24-1113-2088317.pdf">DC Circuit Court</a> unanimously decided that heightened scrutiny applies.</p><p>In arguing that there are &#8220;no [First Amendment] concerns here,&#8221; Joel Thayer heavily relied on <em>Arcara v. Cloud Books</em> (1986). That argument fell flat in the DC Circuit Court: &#8220;At the outset, we reject the Government&#8217;s ambitious argument that this case is akin to <em>Arcara v. Cloud Books, Inc.</em>, 478 U.S. 697 (1986), and does not implicate the First Amendment at all.&#8221;</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>However, Utah deserves credit for narrowing the scope to mobile devices and mobile apps; the original model legislation applied to any &#8220;general purpose computing device.&#8221; Had Utah not narrowed the scope, &#8220;<code>sudo apt install git</code>&#8221; would have triggered age verification; <code>apt</code> would qualify as an app store, and <code>git</code> would qualify as an app.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>To be more precise, it fails the first part of intermediate scrutiny; I&#8217;m not aware of any &#8220;important government interest&#8221; that justifies age verification&#8212;and parental consent for minors&#8212;before you download a Bible app.</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[The Social Dilemma Over Parental Consent]]></title><description><![CDATA[Should we ban kids from social media, or let them join with parental consent?]]></description><link>https://www.technicalassistance.io/p/the-social-dilemma-over-parental</link><guid isPermaLink="false">https://www.technicalassistance.io/p/the-social-dilemma-over-parental</guid><dc:creator><![CDATA[Mike Wacker]]></dc:creator><pubDate>Fri, 14 Feb 2025 14:03:42 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!q3Pb!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3e054d1-137a-47bd-a4af-c9e462fb84f4_256x256.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>You are a legislator drafting an age verification bill. You have two choices:</p><ol><li><p>Ban social media for kids under 16.</p></li><li><p>Only allow kids under 16 on social media if they have parental consent.</p></li></ol><p>Which choice should you make?</p><div><hr></div><p>Initially, your gut instinct says the answer is the first option: a straight ban. 
When you recall the stories about kids and social media that you&#8217;ve heard not just from your constituents&#8212;but also from your local community&#8212;it&#8217;s not hard to understand why.</p><p>But shouldn&#8217;t you give parents a choice here? On top of that, some groups and some experts will inevitably proclaim that &#8220;parental responsibility&#8221; is the answer, not the government. By those standards, a straight ban is even more out of the question.</p><p>&#8220;The answer is parental responsibility, not the government.&#8221; The more you think about that, the more you realize it&#8217;s an inch-deep argument. It&#8217;s not hard to imagine a social media influencer tweeting something like that&#8212;and it&#8217;s probably not wise to raise kids based on the wisdom of a social media influencer.</p><p>In the real world, &#8220;parental responsibility&#8221; effectively means that Big Tech can make as big of a mess as they want, and it&#8217;s the &#8220;responsibility&#8221; of parents to clean it up.</p><p>And tech policy experts aren&#8217;t supposed to be the ones with inch-deep arguments. They&#8217;re supposed to be the ones with deep knowledge, the ones who can clearly articulate the consequences of either choice.</p><p>To understand the consequences of this choice, let&#8217;s look at it through the eyes of an engineering director at Meta. This director is blissfully ignorant of politics and unaware of the policy debates over age verification&#8212;until one day his higher-ups tell him he&#8217;s in charge of implementing age verification to comply with a new law.</p><p>At first glance, this seems like a challenging yet feasible problem. If nothing else, it&#8217;s nowhere near as bad as GDPR compliance. But then one of the director&#8217;s best engineers&#8212;the type where you can just tell him to do something, and he finds a way to get it done&#8212;unexpectedly raises serious alarms about the complexity of the project.</p><p>Why the alarm? It&#8217;s not about verifying age. It&#8217;s about verifying parental consent&#8212;specifically, verifying the parental relationship. How do you navigate the custody laws of even one state (much less 50 states)&#8212;especially when you consider kids with divorced parents, kids with foster parents, and other complex custody arrangements?</p><p>If you just need to verify age, then perhaps you could use facial age estimation&#8212;which has minimal privacy concerns&#8212;to verify most of your users.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> However, the law says that kids under 16 can join with parental consent. To implement parental consent, you also need to verify the child&#8217;s identity, verify the identity of an adult, and then verify that this adult is actually the parent of the child.</p><p>Of course, for that scenario to play out in the first place, the legislature would have to pass a law&#8212;which has been the easy part in many states&#8212;and the courts would have to uphold that law. It&#8217;s the latter part where everything often goes off the rails.</p><p>Once you pass a law, expect a lawsuit. 
When Florida passed its law, House Speaker Paul Renner <a href="https://x.com/Paul_Renner/status/1763717172482760763">quipped</a>, &#8220;[NetChoice] and Big Tech cronies will launch a lawsuit within seconds of HB3 becoming law.&#8221;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> (NetChoice is a trade association for Big Tech.)</p><p>What are the consequences of your choice in that legal battle? In short, do you want to fight a one-front war or a two-front war against NetChoice? With a straight ban, it&#8217;s a one-front war over age verification. With parental consent, you must defend a second front over verifying parental consent&#8212;a front that is much harder to defend.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a></p><p>NetChoice has scored several victories on that second front. In Arkansas, the state got undercut by its own witness, who <a href="https://storage.courtlistener.com/recap/gov.uscourts.arwd.68680/gov.uscourts.arwd.68680.44.0.pdf">testified</a> that &#8220;the biggest challenge . . . with parental consent is actually establishing the relationship, the parental relationship.&#8221;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a></p><p>In Mississippi as well, NetChoice <a href="https://cases.justia.com/federal/district-courts/mississippi/mssdce/1:2024cv00170/125118/30/0.pdf">scored a win</a> on the parental consent front. The judge there also cited <em><a href="https://tile.loc.gov/storage-services/service/ll/usrep/usrep564/usrep564786/usrep564786.pdf">Brown v. Entertainment Merchants Association</a></em><a href="https://tile.loc.gov/storage-services/service/ll/usrep/usrep564/usrep564786/usrep564786.pdf"> (2011)</a>. In that case, the Supreme Court struck down a California law banning the sale of violent video games to kids without parental consent.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a></p><p>In <em>Brown</em>, the issue of parental consent&#8212;and how you would verify it in practice&#8212;also became a stumbling block for that law. And that was in the offline world, where the problem is easier to solve. That does not bode well for the online world.</p><p>And the lessons to learn from <em>Brown</em> don&#8217;t stop at parental consent.<em> </em>The Supreme Court also raised a more fundamental challenge to California&#8217;s law:</p><div class="pullquote"><p>The Act is also seriously underinclusive in another respect&#8212;and a respect that renders irrelevant the contentions . . . that video games are qualitatively different from other portrayals of violence. The California Legislature is perfectly willing to leave this dangerous, mind-altering material in the hands of children so long as one parent (or even an aunt or uncle) says it&#8217;s OK.</p></div><p>Although violent video games are not good for kids, they are not a &#8220;dangerous, mind-altering material&#8221; either. The same cannot be said for social media; there truly is something &#8220;qualitatively different&#8221; about it. 
Why is it that Jonathan Haidt let his 9-year-old <a href="https://www.afterbabel.com/p/good-news-for-anxious-kids-and-parents">ride the subway alone</a> in New York City, but he <a href="https://x.com/JonHaidt/status/1754484727061344345">firmly believes</a> that &#8220;[s]ocial media is just not appropriate for children&#8221; under 16?<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a></p><p>If you truly believe that kids simply don&#8217;t belong on social media, then act on those convictions. Propose a straight ban for kids under 16. Wavering here&#8212;even if done with noble intentions&#8212;can cost you everything in the courts.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>In facial age estimation&#8212;which should not be confused with facial recognition&#8212;you upload a short video clip of yourself, an algorithm estimates your age in seconds or even a split-second, and then the video clip is deleted. No data is retained.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>While it wasn&#8217;t in seconds, NetChoice did sue Florida; that lawsuit is still pending.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p><a href="https://netchoice.org/wp-content/uploads/2024/01/2024.01.05-NetChoice-v-Yost-Complaint-for-Declaratory-and-Injunctive-Relief-FILED.pdf">In</a> <a href="https://netchoice.org/wp-content/uploads/2024/06/NetChoice-v.-Fitch_-AS-FILED_Complaint_june-7.pdf">three</a> <a href="https://netchoice.org/wp-content/uploads/2024/08/2024.05.03-ECF-52-MOTION-for-Prelimary-Injunction.pdf">states</a>, NetChoice&#8217;s briefs have said the state&#8217;s age verification law does not account for &#8220;the difficulty in verifying a parent-child relationship&#8221;; they even added a &#8220;Most fundamentally&#8221; to one brief to drive the point home.</p><p>If you choose a straight ban&#8212;a one-front war&#8212;NetChoice could try to argue that your law is not narrowly tailored; why not allow kids on social media if their parents are OK with it? The counter-argument can be taken straight from NetChoice&#8217;s briefs; you didn&#8217;t offer that option because you contemplated &#8220;the difficulty in verifying a parent-child relationship.&#8221;</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>Arkansas&#8217;s problem here was not an incompetent witness, but an honest witness. Sometimes, your worst enemy is an expert witness who&#8217;s honest to a fault.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>Since this law targeted violent video games, it was clearly a content-based law that was subject to strict scrutiny. 
In this case, however, if the definition of social media is drafted correctly&#8212;a <a href="https://www.technicalassistance.io/p/a-defining-battle-for-kosma">challenging but feasible problem</a>&#8212;an age verification law for social media can be a content-neutral law, which is only subject to intermediate scrutiny.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>Besides Haidt&#8217;s recommendation, there is another reason to set the age threshold at 16: courts have <a href="https://cases.justia.com/federal/district-courts/mississippi/mssdce/1:2024cv00170/125118/30/0.pdf">objected</a> to &#8220;a one-size-fits-all approach to all children from birth to 17 years and 364-days old.&#8221; Setting the age threshold above 16 would create legal complications.</p></div></div>]]></content:encoded></item><item><title><![CDATA[A Defining Battle for KOSMA]]></title><description><![CDATA[The definition of social media will be a key battleground for legal fights over KOSMA.]]></description><link>https://www.technicalassistance.io/p/a-defining-battle-for-kosma</link><guid isPermaLink="false">https://www.technicalassistance.io/p/a-defining-battle-for-kosma</guid><dc:creator><![CDATA[Mike Wacker]]></dc:creator><pubDate>Tue, 11 Feb 2025 14:02:07 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!q3Pb!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3e054d1-137a-47bd-a4af-c9e462fb84f4_256x256.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In the courtroom, Big Tech can &#8220;<a href="https://www.youtube.com/watch?v=dvWVEddn0LM">win so much</a>, you may even get tired of winning.&#8221; How do we ensure that the courts don&#8217;t block the <a href="https://www.congress.gov/bill/119th-congress/senate-bill/278/text">Kids Off Social Media Act</a> (KOSMA)? A key battleground, one where Big Tech has scored five wins, is the definition of social media.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>If we do not learn from those losses, history will likely repeat itself with KOSMA.</p><h2>1. The Core Technical Challenge</h2><p>In the early days of the Internet, Congress made multiple attempts to protect kids online&#8212;namely the Communications Decency Act of 1996 and the Child Online Protection Act of 1998&#8212;only to see the courts overturn those laws. How do we ensure that history does not repeat itself with attempts to protect kids from social media?</p><h3>A. &#8220;Ideas are Easy. Execution is Everything.&#8221;</h3><p>John Doerr, a well-known venture capitalist and an early investor in Google, once said, &#8220;Ideas are easy. Execution is everything.&#8221;</p><p>Legislation often starts with a belief that social media is harmful for kids. This belief then leads to an idea. Perhaps the idea is that we should require age verification. Perhaps the idea is that&#8212;in the case of KOSMA&#8212;social media sites should at least ban users when they know that a user is under 13.</p><p>And too often, conflicts in the policy world narrowly focus on those grand debates of ideas. Is age verification constitutional? Does banning kids from social media violate the First Amendment?
Many policy analysts will jump straight into those grand debates&#8212;which rarely touch on finer details like the definition of social media.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p><p>But what if you are more focused on execution? In that case, you would instead jump straight into the court decisions that have blocked these laws to protect kids online.</p><p>Why does the definition of social media matter? When we get down to brass tacks, courts look at not just &#8220;what&#8221; the law does, but also &#8220;who&#8221; the law applies to. The definition of social media matters because it determines &#8220;who&#8221; the law applies to.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a></p><p>And in five cases, before we even get into &#8220;what&#8221; the law does, the &#8220;who&#8221;&#8212;the definition of social media&#8212;is enough to render the law unconstitutional.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a></p><h3>B. I Know It When I See It, but I Can&#8217;t Intelligibly Define It</h3><p>Justice Potter Stewart <a href="https://supreme.justia.com/cases/federal/us/378/184/">famously said</a> this about hardcore pornography: &#8220;I know it when I see it.&#8221; But does that mean he could easily define hardcore pornography?</p><div class="pullquote"><p>I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description, and perhaps I could never succeed in intelligibly doing so.</p></div><p>We often know what a social media site is when we see it, but that doesn&#8217;t mean it&#8217;s easy to define social media. How do we create a definition that &#8220;intelligibly&#8221; defines the kinds of sites &#8220;to be embraced within that shorthand description&#8221;? Five states&#8212;Arkansas, Ohio, Mississippi, Texas, and Utah&#8212;have tried and failed.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a></p><p>At its core, defining social media is a vexing technical challenge.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> There are <a href="https://siteefy.com/how-many-websites-are-there/">over one billion websites</a> on the Internet (and Google and Apple each have <a href="https://42matters.com/stats">about two million apps</a> in their app stores). A definition has to accurately classify each one of these billion websites as &#8220;social media&#8221; or &#8220;not social media.&#8221;</p><p>The part that often goes underlooked here is that the definition must exclude sites that are &#8220;not social media.&#8221; And the volume and variety of such sites can be mind-boggling: Netflix, the comments section of the New York Times, Amazon, (arguably) LinkedIn, Yelp, Wikipedia, Google&#8217;s search engine, Substack, and so on.</p><p>It&#8217;s already hard enough as it is, but on top of that, the First Amendment will impose some limitations on how we can write that definition.</p><h2>2. Where Did Things Go Wrong?</h2><p>When crafting a definition of social media, there are many choices we could make, many paths we could take. But many paths lead to dead ends&#8212;and to losses in the courts. 
And while those five losses can provide insights as to which paths we should not take, they don&#8217;t always provide insights as to which paths we can take.</p><h3>A. Taking Exception to Exceptions</h3><p>In analyzing KOSMA&#8217;s definition of social media [Sec. 102(6)], we can split it into two parts: the base definition [Sec. 102(6)(A)] and the 12 exceptions [Sec. 102(6)(B)].</p><p>If you&#8217;re crafting a definition, perhaps your first draft, like KOSMA&#8217;s base definition, will focus on sites that are &#8220;a community forum for user-generated content.&#8221;</p><p>That first draft, however, will tend to include many sites that are &#8220;not social media.&#8221; For example, Yelp reviews do count as user-generated content.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a></p><p>If your definition includes Yelp, how do you fix it? Your first instinct may be to add an exception. Arkansas walked down that path, but one exception led to another and eventually to a definition with 13 exceptions. It was an <a href="https://storage.courtlistener.com/recap/gov.uscourts.arwd.68680/gov.uscourts.arwd.68680.44.0.pdf">easy win</a> for NetChoice&#8212;a trade association for Big Tech that filed all five of these lawsuits.</p><p>That does not bode well for KOSMA, whose definition has 12 exceptions. Laws like that tend to be content-based, and content-based laws almost always lose in court.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a></p><p>What makes a law content-based? A law that favors liberal speech and disfavors conservative speech would be content-based, but more subtle distinctions can also be content-based. In <em>Arkansas Writers' Project v. Ragland</em> (1987), a law that &#8220;taxes general interest magazines, but exempts newspapers and religious, professional, trade, and sports journals&#8221; was content-based. In <em>Reed v. Town of Gilbert</em> (2015), a law that treated &#8220;temporary directional signs&#8221; and &#8220;political signs&#8221; differently was content-based.</p><p>When you are familiar with those court decisions from the offline world, these court decisions in the online world will hardly surprise you. In the four court cases after Arkansas&#8217;s, the judge cited <em>Reed</em> when analyzing the law&#8217;s definition of social media.</p><p>It&#8217;s not just the number of exceptions, either. Some exceptions will send your definition straight off the cliff. To exclude sites like Yelp, Ohio&#8217;s law had an exception for product review websites, which led to another easy win for NetChoice. 
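</p><p>In engineering terms, each exception is a special-case branch in the classification function, and every one of those branches keys on what a site is about. Here is a minimal sketch of the pattern in Python; the field names are hypothetical, not any statute&#8217;s, but the shape should look familiar:</p><pre><code># A sketch of a definition that carves out sites via exceptions.
# Every field name here is hypothetical; the shape is what matters.
def is_social_media(site) -> bool:
    # Base definition: a community forum for user-generated content.
    if not site.has_user_generated_content:
        return False
    # Exception for product review websites (like Yelp)...
    if "product reviews" in site.topics:
        return False
    # ...and for news, and for sports, and so on: one new branch per
    # negative example, each keyed to a site's subject matter.
    if "news" in site.topics or "sports" in site.topics:
        return False
    return True</code></pre><p>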
The judge said this exception was <a href="https://storage.courtlistener.com/recap/gov.uscourts.ohsd.287455/gov.uscourts.ohsd.287455.33.0.pdf">&#8220;easy to categorize&#8221;</a> as content-based: &#8220;For example, a product review website is excepted, but a book or film review website, is presumably not.&#8221;</p><p>That also does not bode well for KOSMA, which has this exception:</p><blockquote><p>(vii) Business, product, or travel information including user reviews or rankings of such businesses, products, or other travel information.</p></blockquote><p>This exception could also get KOSMA into trouble:</p><blockquote><p>(vi) Content that consists primarily of news, sports, sports coverage, entertainment, or other information or content that is not user-generated but is preselected by the platform and for which any chat, comment, or interactive functionality is incidental, directly related to, or dependent on the provision of the content provided by the platform.</p></blockquote><p>This exception is almost identical to exceptions in Mississippi&#8217;s and Texas&#8217;s laws. Again, NetChoice won in both states, convincing a judge that the exception was content-based. The judge in Texas, for example, <a href="https://storage.courtlistener.com/recap/gov.uscourts.txwd.1172798016/gov.uscourts.txwd.1172798016.25.0_1.pdf">highlighted</a> how this exception &#8220;singles out specific subject matter for differential treatment.&#8221;</p><p>And while no court decision directly addresses an exception like this, by now, we can safely infer that this exception in KOSMA would probably be content-based:</p><blockquote><p>(iii) Crowd-sourced reference guides such as encyclopedias and dictionaries.</p></blockquote><h3>B. &#8220;Not Social Media&#8221;</h3><p>How do we get back on the right path? All these content-based exceptions have something in common: they are all for sites that have user-generated content&#8212;be it comments, product reviews, or wikis&#8212;but that are not social media sites.</p><p>The real problem is not content-based exceptions. The real problem goes back to our core technical challenge: classifying over one billion sites as &#8220;social media&#8221; or &#8220;not social media.&#8221; The base definition inaccurately classifies many sites with user-generated content as &#8220;social media&#8221; when they are not social media sites.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a></p><p>That&#8217;s why we can&#8217;t just solve the problem by removing those exceptions. To return to our example of Yelp, if we remove all 12 exceptions from KOSMA&#8217;s definition of social media, then the definition says that Yelp is a social media site.</p><p>If Yelp is a social media site, that just creates a different First Amendment problem: overinclusivity. Consider the path Utah took. They saw what was happening in other states with their exceptions, and they clamped down on exceptions in their law. 
But NetChoice still won on different grounds: overinclusivity.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-10" href="#footnote-10" target="_self">10</a> The judge, in particular, <a href="https://storage.courtlistener.com/recap/gov.uscourts.utd.145120/gov.uscourts.utd.145120.86.0_1.pdf">criticized the definition</a> because it included Dreamwidth (a blogging service).</p><p>The existence of user-generated content is necessary to define social media, but it is not sufficient. The real problem is that KOSMA&#8217;s base definition is incomplete. There&#8217;s a missing piece (or pieces), and we need to figure out what it is.</p><h3>C. Void for Vagueness</h3><p>What happens if a company reads the definition, and it has no idea whether the definition applies to its site or not? In that case, we can run into another constitutional problem: the law is void for vagueness.</p><p>What sorts of wrong turns can we make here? Phrases like &#8220;primary purpose&#8221; or &#8220;primarily functions&#8221; can give the courts fits; &#8220;primary purpose&#8221; was too vague in Arkansas, and &#8220;primarily functions&#8221; was too vague in Mississippi.</p><p>While this question may invoke images of lawyers debating arcane and complex language details, a more accurate image would invoke Socrates. Perhaps a think-tank scholar approaches a modern-day Socrates, confident in his ability to discern the &#8220;primary purpose&#8221; of a site. But as Socrates asks one probing question after another, the man eventually walks away, defeated and uncertain of his abilities.</p><p>In Arkansas, it only took one probing question: what is the &#8220;primary purpose&#8221; of Snapchat? The state&#8217;s own witnesses could not agree on the answer&#8212;and on whether their law applied to Snapchat. Needless to say, that law was void for vagueness.</p><p>The base definition of KOSMA, for its part, does use the phrase &#8220;primary function&#8221;:<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-11" href="#footnote-11" target="_self">11</a></p><blockquote><p>(iv) as its <em>primary function</em> provides a community forum for user-generated content&#8230;</p></blockquote><p>As for the phrase &#8220;user-generated content,&#8221; here is another probing question: what is the difference between third-party content and user-generated content?</p><p>Take Netflix, for example. Netflix did not make most of the movies on its platform; those movies are almost always third-party content. But would these movies count as user-generated content? Colloquially, the answer would probably be no.</p><p>Legally, though, how do we draw a line between third-party content that is user-generated and third-party content that is not user-generated? If a site relies on third-party content, how do they know which side of the line they&#8217;re on?</p><p>While vagueness is a Fifth Amendment concern, the bar is raised when the First Amendment gets involved. In <em>FCC v. Fox Television Stations</em> (2012), the Supreme Court said this requirement is more rigorous when speech is involved.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-12" href="#footnote-12" target="_self">12</a> This is another reason why you should not be overconfident in your ability to clear that bar.</p><h2>3. 
&#8220;Special Characteristics&#8221; of the Missing Pieces</h2><p>In sports, refs make mistakes, but when you&#8217;re 0-5, it&#8217;s probably not the refs&#8217; fault. Have we created a definition that is not overinclusive, content-based, and/or vague? We&#8217;re 0-5. Instead of complaining about the refs(/judges)&#8212;which we can&#8217;t control&#8212;let&#8217;s alter the definition&#8212;which we do control. Let&#8217;s build a definition so accurate, so obviously content-neutral, so crystal-clear that it&#8217;s an easy call for any judge.</p><p>We&#8217;ve talked a lot about court losses, but let&#8217;s talk about a major court win: the divest-or-ban law for TikTok. TikTok argued that this law was content-based, but the Supreme Court <a href="https://www.supremecourt.gov/opinions/24pdf/24-656_ca7d.pdf">rejected that claim</a>, reaffirming a key principle of <em><a href="https://supreme.justia.com/cases/federal/us/512/622/case.pdf">Turner Broadcasting System v. FCC </a></em><a href="https://supreme.justia.com/cases/federal/us/512/622/case.pdf">(1994)</a>: a law is content-neutral &#8220;when the differential treatment is &#8216;justified by some special characteristic of&#8217; the particular medium being regulated.&#8221;</p><p>We know that social media is harmful for kids, but to craft a definition, we need to know <em>why</em> social media is harmful for kids. What are the &#8220;special characteristics&#8221; of social media, and why do these characteristics make it harmful for kids? Specifics are needed; we can&#8217;t make a hand-wavy claim that it&#8217;s the social nature of social media.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-13" href="#footnote-13" target="_self">13</a></p><p>This could be one such characteristic: content moderation at scale is hard. Whenever conservatives complained about Big Tech censorship, a certain class of pundits would retort that &#8220;content moderation at scale is hard.&#8221; As an engineer, I would concede there is some truth to that, but the principle works both ways.</p><p>If content moderation at scale is hard, then is social media really safe for kids? Should we be skeptical of the claim that better content moderation will magically solve our problems? It&#8217;s not just the engineer saying that; Jonathan Haidt raised a <a href="https://x.com/JonHaidt/status/1754484727061344345">similar point</a>:</p><div class="pullquote"><p>Even if social media companies could reduce sextortion, CSAM, deepfake porn, bullying, self-harm content, drug deals, and social-media induced suicide by 80%, I think the main take away from those Senate hearings is: Social media is just not appropriate for children.</p></div><p>How do we execute this idea that content moderation at scale is hard? Add a threshold for daily active users to our definition: one million daily active users who create content.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-14" href="#footnote-14" target="_self">14</a> (The exact number is negotiable; I had to pick some number here.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-15" href="#footnote-15" target="_self">15</a>)</p><p>We can also resolve both of our vagueness concerns. There&#8217;s no need to discern the &#8220;primary function&#8221; of a site with that many active users; just delete that part. 
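</p><p>To make this concrete, here is a minimal sketch of the revised definition as a classification function, again in Python. The field names and numbers are hypothetical, and the criteria are simplified; the point is what&#8217;s absent. Unlike the exception-based sketch earlier, nothing here tests a &#8220;primary function&#8221; or branches on subject matter:</p><pre><code># A sketch of the revised definition (all field names hypothetical).
# The criteria are structural: user-generated content plus scale.
from dataclasses import dataclass

@dataclass
class Site:
    has_user_generated_content: bool  # necessary, but not sufficient
    daily_active_creators: int        # daily active users who create content

def is_social_media(site: Site) -> bool:
    return (site.has_user_generated_content
            and site.daily_active_creators >= 1_000_000)

# Yelp has user-generated reviews, but (on any plausible estimate)
# nowhere near a million daily active users who create content.
print(is_social_media(Site(True, 50_000)))  # False</code></pre><p>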
And even if movies count as user-generated content, Netflix comes nowhere close to having one million (or even ten thousand) daily active users who create content.</p><p>Our odds of winning against NetChoice just got a whole lot better.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-16" href="#footnote-16" target="_self">16</a></p><p>Nonetheless, to return to our core technical challenge&#8212;classifying over one billion sites (and millions of apps) as &#8220;social media&#8221; or &#8220;not social media&#8221;&#8212;the definition could still be overinclusive, even with this new piece.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-17" href="#footnote-17" target="_self">17</a> There could still be other ways in which it classifies a site as &#8220;social media&#8221; when it is not a social media site.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-18" href="#footnote-18" target="_self">18</a></p><p>But each time that happens, we can apply the same pattern to find another piece. We find a special characteristic of social media, explain why that characteristic makes social media harmful for kids, and incorporate that characteristic into the definition.</p><p>With each additional piece, the odds of winning against NetChoice increase.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>See <em><a href="https://storage.courtlistener.com/recap/gov.uscourts.arwd.68680/gov.uscourts.arwd.68680.44.0.pdf">NetChoice v. Griffin</a></em><a href="https://storage.courtlistener.com/recap/gov.uscourts.arwd.68680/gov.uscourts.arwd.68680.44.0.pdf"> (W.D. Ark. Aug. 31, 2023)</a>, <em><a href="https://storage.courtlistener.com/recap/gov.uscourts.ohsd.287455/gov.uscourts.ohsd.287455.33.0.pdf">NetChoice v. Yost</a></em><a href="https://storage.courtlistener.com/recap/gov.uscourts.ohsd.287455/gov.uscourts.ohsd.287455.33.0.pdf"> (S.D. Ohio Feb. 12, 2024)</a>, <em><a href="https://cases.justia.com/federal/district-courts/mississippi/mssdce/1:2024cv00170/125118/30/0.pdf?ts=1720216157">NetChoice v. Fitch</a></em><a href="https://cases.justia.com/federal/district-courts/mississippi/mssdce/1:2024cv00170/125118/30/0.pdf?ts=1720216157"> (S.D. Miss. July 1, 2024)</a>, <em><a href="https://storage.courtlistener.com/recap/gov.uscourts.txwd.1172798016/gov.uscourts.txwd.1172798016.25.0_1.pdf">CCIA &amp; NetChoice v. Paxton</a></em><a href="https://storage.courtlistener.com/recap/gov.uscourts.txwd.1172798016/gov.uscourts.txwd.1172798016.25.0_1.pdf"> (W.D. Tex. Aug. 30, 2024)</a>, and <em><a href="https://storage.courtlistener.com/recap/gov.uscourts.utd.145120/gov.uscourts.utd.145120.86.0_1.pdf">NetChoice v. Reyes</a></em><a href="https://storage.courtlistener.com/recap/gov.uscourts.utd.145120/gov.uscourts.utd.145120.86.0_1.pdf"> (D. Utah Sept. 10, 2024)</a>.
I <a href="https://www.city-journal.org/article/what-is-social-media">raised the alarm</a> in <em>City Journal</em> after the first two court losses.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>One noteworthy exception was <a href="https://rtp.fedsoc.org/podcast/tech-roundup-episode-23-privacy-and-safety-key-arguments-of-the-age-verification-debate/">episode 23</a> of the Federalist Society&#8217;s Tech Roundup podcast. At about 21:03, Bailey Sanchez discusses how age verification laws for social media were enjoined not just for age verification, but also for other issues like the definition.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>In terms of &#8220;who&#8221; the law applies to, industry-specific laws will receive heightened scrutiny&#8212;either intermediate or strict scrutiny&#8212;when the industry is a forum for expression, such as social media or cable. See <em>Turner Broadcasting System v. FCC</em>, 512 U.S. 622, 640-641 (1994).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>There&#8217;s a little nuance here. For example, a KOSMA-style law would have better odds of surviving strict scrutiny, compared to an age verification law. That being said, I don&#8217;t like the odds for any law that is subject to the &#8220;death knell&#8221; of strict scrutiny, and a content-based definition of social media would subject either law to strict scrutiny.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>Litigation is currently ongoing in two more states: Florida and Tennessee; it&#8217;s not yet known how their definitions will fare in the courts.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>Ironically, defining hardcore pornography has been much easier. It is often defined as content that is obscene for minors, where obscene is defined using the <em>Miller</em> test.
We can debate exactly how intelligible the <em>Miller</em> test is when you try to apply it in practice, but since it&#8217;s a court-invented test, we know it&#8217;s constitutional if nothing else.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>To briefly cover the other parts of KOSMA&#8217;s base definition, Yelp is directed to consumers, it collects personal data since it collects an email address when users create an account, and it <a href="https://businessmodelanalyst.com/yelp-business-model/?srsltid=AfmBOopBAE7hb6KXj7nY5KPFPT83tQAUNGDaSYbPdsuxY_Z0gqiInlKJ#How_Yelp_makes_money">primarily relies</a> on advertising revenue.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>Content-based laws are subject to strict scrutiny, which in practice tends to be &#8220;strict in theory, fatal in fact.&#8221; Content-neutral laws are only subject to intermediate scrutiny.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p>As we saw with Yelp, the other pieces of the definition often do not help, either. Collecting an email address when a user registers for an account meets part (ii) of the definition. And many sites&#8212;not just social media&#8212;rely on ad revenue and meet part (iii) of the definition.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-10" href="#footnote-anchor-10" class="footnote-number" contenteditable="false" target="_self">10</a><div class="footnote-content"><p>The judge also ruled that their definition was content-based, as it &#8220;distinguishes between &#8216;social&#8217; speech and other forms of speech.&#8221; That being said, &#8220;interacts socially&#8221; was a key phrase in Utah&#8217;s definition; KOSMA does not have such a phrase in its definition.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-11" href="#footnote-anchor-11" class="footnote-number" contenteditable="false" target="_self">11</a><div class="footnote-content"><p>As a counterexample, though, a judge in Texas did rule that &#8220;primarily functions&#8221; was not too vague, so using such phrases does not guarantee a loss. Nonetheless, it certainly is a risky path to take, and overconfidence can be a vice.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-12" href="#footnote-anchor-12" class="footnote-number" contenteditable="false" target="_self">12</a><div class="footnote-content"><p>To get an idea of just how rigorous this requirement can be, there is a First Amendment case where the word &#8220;promote&#8221; was too vague: <em>Baggett v. Bullitt</em> (1964).
That judge in Texas cited this case when analyzing the verb &#8220;promote&#8221; in Texas&#8217;s law (though this was for a separate part of the law, not for its definition of social media).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-13" href="#footnote-anchor-13" class="footnote-number" contenteditable="false" target="_self">13</a><div class="footnote-content"><p>In Mississippi, Texas, and Utah, the judge ruled that a law is content-based if it treats &#8220;social&#8221; speech differently than other forms of speech. Utah had the right idea to argue that it&#8217;s contemplating the &#8220;structure, not subject matter&#8221; of social media, but its execution was lacking in terms of translating that idea to legislative text.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-14" href="#footnote-anchor-14" class="footnote-number" contenteditable="false" target="_self">14</a><div class="footnote-content"><p>While not required, a legislative finding that identifies the &#8220;special characteristic&#8221; of social media&#8212;in this case, that content moderation at scale is hard&#8212;would also be useful.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-15" href="#footnote-anchor-15" class="footnote-number" contenteditable="false" target="_self">15</a><div class="footnote-content"><p>Lest anyone allege that the threshold was set at 1,000,000 for some nefarious content-based reason, here&#8217;s the actual methodology: I iterated through the powers of 10 (1, 10, 100, 1,000, etc.) until I hit a number that seemed large enough: 1,000,000.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-16" href="#footnote-anchor-16" class="footnote-number" contenteditable="false" target="_self">16</a><div class="footnote-content"><p>Although Florida&#8217;s anti-censorship law was deemed unconstitutional for other reasons, the Eleventh Circuit <a href="https://media.ca11.uscourts.gov/opinions/pub/files/202112355.pdf">rejected</a> NetChoice&#8217;s claim it was content-based because it only applied to the largest social media platforms, as the reason why &#8220;might be based on some[] &#8216;special characteristic&#8217; of large platforms&#8212;for instance, their market power.&#8221;</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-17" href="#footnote-anchor-17" class="footnote-number" contenteditable="false" target="_self">17</a><div class="footnote-content"><p>The threshold for daily active users was borrowed from my <a href="https://www.technicalassistance.io/i/145782955/iii-the-full-definition">model definition</a> for social media, which has additional pieces to it as well.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-18" href="#footnote-anchor-18" class="footnote-number" contenteditable="false" target="_self">18</a><div class="footnote-content"><p>Underinclusivity&#8212;classifying a site as &#8220;not social media&#8221; when it is a social media site&#8212;is less of a legal concern. In <em>TikTok v. 
Garland</em> (2025), the Supreme Court reaffirmed two key principles from its earlier precedents: that &#8220;the First Amendment imposes no freestanding underinclusiveness limitation&#8221; and that Congress &#8220;need not address all aspects of a problem in one fell swoop.&#8221;</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[The App Store Accountability Act: A Cybersecurity Disaster]]></title><description><![CDATA[Forcing app stores to share age data with apps is a very bad idea.]]></description><link>https://www.technicalassistance.io/p/the-app-store-accountability-act</link><guid isPermaLink="false">https://www.technicalassistance.io/p/the-app-store-accountability-act</guid><dc:creator><![CDATA[Mike Wacker]]></dc:creator><pubDate>Thu, 23 Jan 2025 18:40:07 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!q3Pb!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3e054d1-137a-47bd-a4af-c9e462fb84f4_256x256.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The App Store Accountability Act will force app stores to share your kid&#8217;s age with EVERY app. What could possibly go wrong? A lot. This legislation is a cybersecurity disaster. It is as if we&#8217;re doing surgery without a doctor in the room.</p><p>In the annals of bad tech bills, few are as infamous as the Stop Online Piracy Act of 2011. Then-Rep. Jason Chaffetz <a href="https://www.youtube.com/watch?v=xrrj9Wc2L84">famously said</a>, &#8220;We&#8217;re going to do surgery on the Internet, and we haven&#8217;t had a doctor in the room tell us how we&#8217;re going to change the organs. We&#8217;re basically going to reconfigure the Internet and how it&#8217;s going to work without bringing in the nerds.&#8221;</p><p>Apps are as ubiquitous today as the Internet was back in 2011, and the App Store Accountability Act&#8212;an age verification bill for app stores&#8212;would perform surgery on not just app stores, but the entire app ecosystem. But who are the surgeons here, and do they know what they&#8217;re doing?</p><p>As <a href="https://ifstudies.org/family-first-technology-initiative/stop-digital-harm">part</a> of its Family First Technology Initiative, the Institute for Family Studies (IFS) has proposed <a href="https://ifstudies.org/ifs-admin/resources/app-store-accountability-act.pdf">model legislation</a> for the App Store Accountability Act. At the end of last year, bills based on this model legislation were introduced in both the <a href="https://james.house.gov/uploadedfiles/james_app_store_accountability_act_final.pdf">House of Representatives</a> and the <a href="https://www.congress.gov/bill/118th-congress/senate-bill/5364">Senate</a>. Recently, <a href="https://www.scstatehouse.gov/sess126_2025-2026/bills/3405.htm">South Carolina</a> and <a href="https://le.utah.gov/Session/2025/bills/introduced/SB0142.pdf">Utah</a> introduced bills based on this model.</p><p>But will the IFS&#8217;s proposed surgery work? Frankly, this engineer&#8212;one who has supported age verification for both social media and adult sites&#8212;would describe it as medical malpractice.</p><h3>The Adversarial Mindset</h3><p>Anyone can create an app&#8212;both good actors and bad actors alike.
The App Store Accountability Act would force app stores to share data about your child&#8217;s age with all apps&#8212;including the bad actors.</p><p>When the nerds build products, they have to apply the <strong>adversarial mindset</strong>. They have to consider not just how regular users will use their product, but also how hackers will try to abuse this product. Security is a de facto requirement for software&#8212;unless you want to get hacked.</p><p>At its core, cybersecurity is about that never-ending war between attackers and defenders. And it would be fair to say that the Internet can be a very dark place&#8212;especially in the corners of the dark web where many hackers reside.</p><p>If you&#8217;re going to do surgery on the entire app ecosystem, you cannot ignore this war. You, too, must apply the adversarial mindset and consider how attackers might exploit the changes that your legislation will mandate. Security is a de facto requirement for your legislation. (And even if you add privacy protections to your law, it suffices to say that bad actors often don&#8217;t obey the law.)</p><p>Imagine that bad actors created an app that impersonated Pok&#233;mon Go. (Last year, an app impersonating LastPass, a popular password management tool, found its way <a href="https://blog.lastpass.com/posts/warning-fraudulent-app-impersonating-lastpass-currently-available-in-apple-app-store">into Apple&#8217;s app store</a>.) As a parent, you give your kids permission to use this app&#8212;as you think it is the real thing, not an impostor&#8212;giving this app access to real-time location. And thanks to the App Store Accountability Act, app stores are forced to share the age category (e.g., 13-15) of every user with this app.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>In essence, we&#8217;ve given predators a database containing the real-time location of kids. And that&#8217;s only one of many examples of what could possibly go wrong. A more conventional concern would be data brokers&#8212;who don&#8217;t always act above-board&#8212;acquiring this age data and combining it with all the other information they&#8217;ve collected about your kid.</p><p>What is stopping the attackers here? The only barrier is the app review process that occurs before an app is added to the app store. And if you believe that Apple and Google cannot be trusted to protect our kids, it does seem odd that you would trust their app review process&#8212;and would assume that this process will stop the bad actors who are trying to get their apps into the app store.</p><p>But regardless of how you view Apple and Google, this mandate to force app stores to share age data with apps is extremely unwise from a cybersecurity perspective.</p><div><hr></div><h3>Zero Trust</h3><p>What level of trust should we grant to apps in the app store? From a cybersecurity perspective, experience has shown that you should default to <strong>zero trust</strong>. In the workplace context, the zero trust mindset assumes that attackers will find a way to breach your corporate network&#8212;no matter how good the defenders on your IT team are at securing it&#8212;and designs cybersecurity with that in mind. (The zero-trust mindset is not meant as an indictment of your IT team.)</p><p>When it comes to app stores, that same zero-trust default should apply.
It would be reasonable to assume that&#8212;no matter how good Apple and Google&#8217;s app review process is&#8212;bad actors will find their way inside their app stores. (This is not meant as an indictment of Apple or Google.) And in a zero-trust environment, does it make sense to share age data with untrusted apps? Absolutely not.</p><p>The key architects of this model legislation, however, seem to think that it is secure because it uses encryption. In an <a href="https://ifstudies.org/blog/the-kids-online-safety-act-was-a-good-start-but-app-stores-need-accountability-too">op-ed</a>, they wrote, &#8220;The app store could then transmit the age of the minor user to apps upon download via an anonymous, encrypted signal that indicates whether the user is age-eligible for their product, or not.&#8221; Here, it would help to explain what encryption can and cannot do.</p><p>If two parties that trust each other are communicating with each other, encryption can ensure that attackers cannot eavesdrop and intercept their communications. However, if legislation is forcing one party to send information to an untrusted party&#8212;such as forcing app stores to share age data with an untrusted app&#8212;encryption won&#8217;t solve that problem.</p><h3>The Defender&#8217;s Dilemma</h3><p>Another dilemma that we face here is the <strong>defender&#8217;s dilemma</strong>. Defenders tend to have normal(ish) working hours, a spouse and kids, a social life outside of work, etc. Attackers tend to have way too much free time on their hands (and some may live in their mother&#8217;s basement), but they are damn good at hacking. Defenders often have to defend a large and complex landscape, and they have to be right 100% of the time. Attackers can attack from anywhere on that landscape, and they only need to be right once. Attackers often outnumber defenders.</p><p>App stores in particular have to defend a very large and complex landscape&#8212;one with all three Vs of Big Data: volume, velocity, and variety. Google and Apple each have <a href="https://42matters.com/stats">about 2 million apps</a> in their app store (volume). They have to review not just the original app, but a constant stream of app updates (velocity). And the apps that they review can be for just about anything (variety).</p><p>With all that in mind, how does the app review process manage to keep bad actors out of the app store? The short story is that it doesn&#8217;t always work, and it&#8217;s not hard to find a <a href="https://about.fb.com/news/2022/10/protecting-people-from-malicious-account-compromise-apps/">treasure</a> <a href="https://www.washingtonpost.com/technology/2021/06/06/apple-app-store-scams-fraud/">trove</a> of stories where it doesn&#8217;t work. I say this not to dunk on Apple and Google&#8212;despite my misgivings about them&#8212;as this would be a very challenging problem for any tech company to solve.</p><p>You can certainly understand why the core assumption of &#8220;zero trust&#8221; makes sense: that some attackers will find their way inside app stores&#8212;no matter how good the app review process is.</p><h3>Minimizing Surface Area</h3><p>One consequence of the defender&#8217;s dilemma is that surface area matters. How large of a surface area are you asking defenders to protect? It&#8217;s clear that the IFS did not think enough about that.</p><p>The earlier example I gave&#8212;impersonating Pok&#233;mon Go&#8212;is only one of many possible paths.
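</p><p>Before going further, it helps to see the core problem in code. What follows is a minimal sketch: every name is hypothetical, and the decryption step is reduced to a stand-in, because the point is structural. Once the mandated age signal reaches an untrusted app, encryption in transit has already done everything it can do.</p><pre><code># Why an "anonymous, encrypted" age signal does not protect against an
# untrusted recipient. All names are hypothetical; the decryption step is
# a stand-in so the sketch runs on its own.
import json

def decrypt(blob: bytes) -> str:
    # Stand-in for the app's decryption step. The app store encrypted the
    # signal TO this app, so the app can always recover the plaintext.
    return blob.decode()

def on_age_signal(encrypted_signal: bytes) -> dict:
    # A malicious app is the intended recipient, so it simply decrypts...
    age_category = json.loads(decrypt(encrypted_signal))["age_category"]
    # ...and pairs the age category with whatever else it already collects,
    # such as real-time location.
    return {"age_category": age_category, "lat": 40.7128, "lon": -74.0060}

print(on_age_signal(b'{"age_category": "13-15"}'))
# {'age_category': '13-15', 'lat': 40.7128, 'lon': -74.006}</code></pre><p>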
To combine real-time location data with age data, you don&#8217;t need to impersonate Pok&#233;mon Go; any app that relies on real-time location data is a potential target. Or, instead of impersonating an app, you could also build a Trojan Horse app that looks and behaves like a normal app on the outside, but whose real purpose is to help predators.</p><p>And that&#8217;s not the only way this age data could be misused. As we mentioned earlier, a more conventional concern would be data brokers acquiring this data. A recommendation algorithm run by bad actors could use that age data to help pair kids with predators.</p><p>That&#8217;s already a very large surface area, and these are mostly examples I came up with off the top of my head. Imagine what bad actors&#8212;who have way more time than me&#8212;could come up with. Trying to poke a few holes in some of these examples is a fool&#8217;s errand, as the hackers&#8212;who are smarter than both you and me&#8212;could certainly figure out ways to patch those holes, and could also come up with additional examples.</p><p>Simply put, forcing app stores to share age data with apps is indefensible&#8212;in more ways than one.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Creating an API where an app could do a parental consent check with the app store would also reveal the age of a user. If the parental consent check fails, you would then know that the user is a minor. (And of course, bad actors may ignore a failed parental consent check.)</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[Define Social Media, Part II: Definitions]]></title><description><![CDATA[Age verification laws for social media have gone 0-5 in legal challenges. What needs to change so that these laws survive legal challenges?]]></description><link>https://www.technicalassistance.io/p/define-social-media-part-ii-definitions</link><guid isPermaLink="false">https://www.technicalassistance.io/p/define-social-media-part-ii-definitions</guid><dc:creator><![CDATA[Mike Wacker]]></dc:creator><pubDate>Fri, 25 Oct 2024 19:40:57 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!q3Pb!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3e054d1-137a-47bd-a4af-c9e462fb84f4_256x256.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>(If you just want to see the definition of social media, use <a href="https://www.technicalassistance.io/i/145782955/iii-the-full-definition">this link</a> to skip to the end.)</em></p><p>0-5. If your football team went 0-5 (be it a professional or college team), it would be time to shake things up. State legislatures who have passed age verification laws for social media have gone 0-5 in legal challenges. Fortunately, this engineer&#8212;who has also served as a tech fellow in Congress&#8212;has already been figuring out how to fix it.</p><p>In <a href="https://www.technicalassistance.io/p/define-social-media-part-i-findings">part I</a>, we developed model findings that explained the special characteristics of social media. In part II, we&#8217;ll dive into the brass tacks of defining social media.</p><h2>I.
<h2>I. The Problem: One Billion Websites</h2><p>There are <a href="https://siteefy.com/how-many-websites-are-there/">over a billion</a> websites on the Internet, and that does not even include all the apps that have online functionality. For social media (or for any online medium), how do we accurately determine which sites are social media sites and which are not?</p><p>If you draft a definition with constitutional defects, a lawyer can certainly spot them. Once those defects are spotted, you then need to fix them, which leads back to the core technical challenge: how do you properly classify one billion websites?</p><h3>A. Technical Challenges</h3><p>On the surface, the obvious challenge is volume; one billion is a very big number. Beneath the surface, an even bigger challenge exists: variety. Many different types of websites exist, ranging from Facebook to Wikipedia to Netflix.</p><p>In the past, attempts to regulate technology were simpler, because the mediums of old were tightly coupled with hardware. Broadcast was defined by an over-the-air signal that an antenna would receive. Cable TV was defined by, well, cable; a cable channel would use &#8220;a portion of the electromagnetic frequency spectrum&#8221; (see <a href="https://www.law.cornell.edu/uscode/text/47/522">47 U.S.C. 522(4)</a>).</p><p>Online mediums, however, are tied to software&#8212;which is much more malleable. A social media site, an app store, and an e-commerce site are all examples of software, but each piece of software yields a radically different medium. The variety of sites available is due in large part to the softness of software.</p><p>When writing a definition, including sites that are social media sites is important, but excluding sites that are not social media sites is just as important, if not more so. If we are writing an age verification law, we do not want to make people verify their age if they want to leave a review on Yelp.</p><p>Yelp is a &#8220;negative example,&#8221; an example of a site that is not a social media site. Negative examples will play a critical role in developing the definition. As we develop that definition, this phrase will frequently be used: &#8220;necessary but not sufficient.&#8221;</p><p>We will start with a base definition that includes all one billion websites, and we will then incrementally add pieces to that definition. When we add a new piece, this piece will often be motivated by a negative example that the definition needs to exclude. But even with this piece, we can still find other negative examples that are included in the definition. Thus, while this piece is necessary, it&#8217;s not sufficient by itself.</p><h3>B. Legal Challenges</h3><p>Laws regulating social media will face heightened scrutiny, but the level of scrutiny depends on whether the law is content-based or content-neutral. To make that determination, courts will look at both <strong>what</strong> the law does, and <strong>who</strong> the law applies to.</p><p>The definition of social media matters because it determines <strong>who</strong> the law applies to.</p><p>Thus far, state legislatures have gone 0-5 when it comes to writing a content-neutral definition of social media. The most common pitfall here is the exceptions.</p>
<p>If you write a definition with <a href="https://www.arkleg.state.ar.us/Home/FTPDocument?path=%2FACTS%2F2023R%2FPublic%2FACT689.pdf">thirteen exceptions</a> like Arkansas did, NetChoice will have an easy time <a href="https://storage.courtlistener.com/recap/gov.uscourts.arwd.68680/gov.uscourts.arwd.68680.44.0.pdf">convincing the judge</a> that your definition is content-based. Additionally, if a law has thirteen exceptions, it gives lawyers thirteen chances to shoot the law down by proving that one of those exceptions is content-based.</p><p>But in some cases, one well-placed shot is enough to bring a law down. In Mississippi, the judge <a href="https://fingfx.thomsonreuters.com/gfx/legaldocs/zdpxxxbqapx/07012024mississippi.pdf">ruled</a> that an exception for &#8220;news, sports, commerce, [or] online video games&#8221; was content-based. In Ohio, the judge similarly <a href="https://storage.courtlistener.com/recap/gov.uscourts.ohsd.287455/gov.uscourts.ohsd.287455.33.0.pdf">ruled</a>, &#8220;The exceptions to the Act for product review websites and &#8216;widely recognized&#8217; media outlets, however, are easy to categorize as content based.&#8221;</p><p>Laws are not content-based, however, if they target one medium but not another. Regulations can be justified by the special characteristics of a medium (see <em>Turner Broadcasting System v. FCC</em> (1994)). The Internet is not a monolithic medium; it contains many distinct mediums, such as social media, search, and e-commerce. </p><p>But while a definition cannot use a content-based exception to exclude Yelp, if the definition includes Yelp, that creates a different constitutional problem: the definition is not narrowly tailored.</p><p>In Utah, the judge criticized the state&#8217;s definition of social media because it <a href="https://storage.courtlistener.com/recap/gov.uscourts.utd.145120/gov.uscourts.utd.145120.86.0_1.pdf">included Dreamwidth</a>, which is &#8220;distinguishable in form and purpose from the likes of traditional social media platforms.&#8221; (Dreamwidth is a blogging service that is similar to WordPress, Tumblr, and Medium.)</p><p>Finally, what happens if Snapchat cannot determine whether it is a social media site under the definition? That creates another constitutional problem: <a href="https://constitution.congress.gov/browse/essay/amdt5-8-1/ALDE_00013739/">vagueness</a>.</p><p>Sometimes, this can get technical; for example, multiple judges (though not every judge) have said that phrases such as &#8220;primary purpose&#8221; are too vague. When Arkansas&#8217;s own witnesses <a href="https://storage.courtlistener.com/recap/gov.uscourts.arwd.68680/gov.uscourts.arwd.68680.44.0.pdf">could not agree</a> on what the &#8220;primary purpose&#8221; of Snapchat is, however, that effectively settled that case; their law was void for vagueness.</p><h3>C. Context Matters</h3><p>Before we dive into the brass tacks, there is one last important point: the definition may depend in part on the problem we&#8217;re trying to solve and the proposed solution.</p><p>As a practical example, why can&#8217;t we just use the definition in <a href="https://www.law.cornell.edu/uscode/text/42/1862w">42 U.S.C. 1862w(a)(2)</a>?</p>
<blockquote><p>(5) SOCIAL MEDIA PLATFORM.&#8212;The term &#8220;social media platform&#8221; means a website or internet medium that&#8212;</p><blockquote><p>(A) permits a person to become a registered user, establish an account, or create a profile for the purpose of allowing users to create, share, and view user-generated content through such an account or profile;</p><p>(B) enables 1 or more users to generate content that can be viewed by other users of the medium; and</p><p>(C) primarily serves as a medium for users to interact with content generated by other users of the medium.</p></blockquote></blockquote><p>Even at a quick glance, many negative examples would be classified as a social media platform under this definition&#8212;which means the definition is not narrowly tailored. (Additionally, the term &#8220;primarily&#8221; might create some issues in terms of vagueness.)</p><p>So why haven&#8217;t courts struck down this definition? Look at the rest of 42 U.S.C. 1862w. This definition is used in a law that studies social media&#8217;s impact on human trafficking. But what if we use this same definition in a law that regulates social media&#8212;as opposed to a law that only studies social media? It probably won&#8217;t end well.</p><p>A couple more examples will further illustrate this point. First, Florida passed an anti-censorship law for social media (<a href="https://www.flsenate.gov/Session/Bill/2021/7072/BillText/er/PDF">SB 7072, 2021</a>), and it also passed an age verification law for social media (<a href="https://www.flsenate.gov/Session/Bill/2024/3/BillText/er/PDF">HB 3, 2024</a>). If you compare these two laws, you will find some major differences in how each law defines &#8220;social media platform.&#8221; Since these two laws are solving two very different problems, though, it&#8217;s not surprising that their definitions of &#8220;social media platform&#8221; would be different as well.</p><p>Second, let&#8217;s look at a popular federal bill, the <a href="https://www.congress.gov/bill/118th-congress/senate-bill/2073/text/eas">Kids Online Safety Act</a> (KOSA). KOSA&#8217;s definition of &#8220;covered platform&#8221; is fairly broad and is not limited to social media. A broader definition makes sense, however, when you look at the solution KOSA is proposing. Since KOSA relies on more light-touch (yet effective) regulations, it does make sense to apply those regulations to a broader set of sites, not just social media.</p><p>So what&#8217;s the context for the definition that we&#8217;re about to create? This definition will be used for child safety legislation. Specifically, it will be used for age verification&#8212;though this definition can also be reused for other types of child safety legislation.</p><p>For an age verification bill, we will obviously need to write a pretty tight definition of social media. The impact of misclassifying Yelp as a social media site is much more severe for a law that requires age verification (as opposed to extra paperwork). However, if our definition is tight enough that it can be used in an age verification bill, this definition can probably be reused for other child safety bills as well.</p><h2>II. Classifying One Billion Websites</h2><h3>A. Starting Point</h3><p>The law is not a creative writing discipline. The goal is to clearly explain to people what the law demands of them&#8212;demands that often come with severe penalties if they are violated. 
In this domain, copying the work of others is good!<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>If an existing legal term has a well-understood meaning, or if another law or bill has a well-built definition, just reuse it. Don&#8217;t reinvent the wheel. The question is not about you&#8212;and whether your legislative work is original. The question is about the people who must obey this law&#8212;and whether they understand what the law demands of them.</p><p>Where do we start with a definition? Let&#8217;s start by reusing the definition of &#8220;interactive computer service&#8221; from a federal law known as Section 230&#8212;the law that says platforms are generally not liable for the third-party content they host.</p><blockquote><p>SOCIAL MEDIA PLATFORM.&#8212;The term &#8220;social media platform&#8221; means an interactive computer service that&#8230;</p><p>INTERACTIVE COMPUTER SERVICE.&#8212;The term &#8220;interactive computer service&#8221; has the meaning given the term in section 230(f)(2) of the Communications Act of 1934 (47 U.S.C. 230(f)(2)).</p></blockquote><p>For reference, here is Section 230&#8217;s definition of &#8220;interactive computer service&#8221;:</p><blockquote><p>INTERACTIVE COMPUTER SERVICE.&#8212;The term &#8220;interactive computer service&#8221; means any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet and such systems operated or services offered by libraries or educational institutions.</p></blockquote><p>Despite the age of Section 230, which was passed in 1996, this definition of &#8220;interactive computer service&#8221; is still widely used today. If it&#8217;s not broken, don&#8217;t fix it.</p><p>It also has a modern advantage: it covers both websites and apps. In this definition, it does not matter whether users access Facebook via facebook.com or via the Facebook app; Facebook is an &#8220;interactive computer service&#8221; either way. (For the sake of convenience, though, we&#8217;ll use website/site as a shorthand term for &#8220;interactive computer service&#8221;&#8212;which includes both websites and apps.)</p><p>Of course, all one billion websites would qualify as an interactive computer service, so we&#8217;ll need to narrow this definition.</p><h3>B. Content Moderation at Scale is Hard</h3><p>Section 230 makes a distinction between first-party content and third-party content. When defining social media, that distinction is necessary but not sufficient. Facebook heavily relies on third-party content, but so does Netflix.</p><h4>1. Big Data and the Three Vs</h4><p>Back in 2014, Facebook reported that it generates <a href="https://research.facebook.com/blog/2014/10/facebook-s-top-open-data-problems/">4 petabytes</a> of data per day (4 petabytes = 4,000,000 gigabytes). By comparison, a typical smartphone has 64 gigabytes of storage, and one of the <a href="https://www.techradar.com/best/large-hard-drives-and-ssds">largest hard drives</a> on the market offers 30 terabytes of storage (30 terabytes = 30,000 gigabytes).</p><p>It suffices to say that 4 petabytes of data won&#8217;t fit onto a single computer.</p><p>Welcome to the world of Big Data. Big Data is defined by the three Vs: volume, velocity, and variety. 
For example, 4 petabytes is certainly a very large volume of data, and at 4 petabytes per day (4,000,000 gigabytes divided by the 86,400 seconds in a day), the velocity is approximately 46 gigabytes per second.</p><p>The three Vs don&#8217;t just present technical challenges, either. They also present social challenges. When millions of users are generating content each day, that definitely makes content moderation hard, in terms of both volume and velocity. And in terms of variety, content can cover virtually any topic, and a global social media platform will have content in many different languages.</p><h4>2. Daily Active Content Providers</h4><p>So how do we distinguish between Netflix and Facebook? Netflix doesn&#8217;t have millions of users who produce content each day. Facebook does. Simply put, scale is the differentiator. After all, our narrative is that content moderation at scale is hard.</p><p>In the tech industry, many companies measure their daily active users and monthly active users. We can use this metric as a starting point, but we&#8217;ll need to refine it.</p><p>First, should we measure daily active users or monthly active users? The answer is daily active users. Velocity&#8212;one of our three Vs&#8212;is much easier to see at the daily level. Additionally, monthly active users does not distinguish between a user who spends hours on Twitter/X every day and a user who logs in to Twitter/X once a week or once a month; both count as one monthly active user.</p><p>Second, we need to tweak what we measure: daily active content providers, not daily active users. If only ten users are producing content but millions of users are viewing that content, content moderation is fairly simple. Both Netflix and Facebook have millions of users, but only Facebook has millions of content providers.</p><h4>3. Legislative Text</h4><p>So how do we translate that to legislative text? Fortunately, we have a couple of definitions we can reuse here: the definition of &#8220;information content provider&#8221; from Section 230, and the definition of &#8220;user&#8221; from the Kids Online Safety Act (KOSA).</p><blockquote><p>SOCIAL MEDIA PLATFORM.&#8212;The term &#8220;social media platform&#8221; means an interactive computer service that has averaged at least 1,000,000 daily active content providers over the previous 180 days.</p><p>DAILY ACTIVE CONTENT PROVIDERS.&#8212;The term &#8220;daily active content providers&#8221; means the number of users who serve as an information content provider during a single day.</p><p>USER.&#8212;The term &#8220;user&#8221; means, with respect to a social media platform, an individual who registers an account or creates a profile on the social media platform.</p><p>INFORMATION CONTENT PROVIDER.&#8212;The term &#8220;information content provider&#8221; has the meaning given the term in section 230(f)(3) of the Communications Act of 1934 (47 U.S.C. 230(f)(3)).</p></blockquote><p>For reference, here is Section 230&#8217;s definition of &#8220;information content provider&#8221;:</p><blockquote><p>INFORMATION CONTENT PROVIDER.&#8212;The term &#8220;information content provider&#8221; means any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service.</p></blockquote><p>Setting the threshold for daily active content providers is more of an art than an exact science; 100,000 daily active content providers would also be a reasonable threshold.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> </p>
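<p>Because I&#8217;m an engineer, here&#8217;s a rough sketch of how the threshold test would actually be computed. (The names are hypothetical; assume each day&#8217;s count of distinct content providers is produced elsewhere, e.g., by a distinct-count query over a content-creation log.)</p><pre><code>// A toy sketch of the "daily active content providers" threshold test.
// Method and variable names are hypothetical, not any platform's schema.
public class DacpThreshold {
    // Average the daily counts over the previous 180 days and compare
    // against the statutory threshold of 1,000,000.
    static boolean meetsThreshold(long[] dacpForLast180Days) {
        long total = 0;
        for (long dailyCount : dacpForLast180Days) {
            total += dailyCount;
        }
        double average = (double) total / dacpForLast180Days.length;
        return average >= 1_000_000;
    }

    public static void main(String[] args) {
        long[] largePlatform = new long[180];
        long[] nicheForum = new long[180];
        java.util.Arrays.fill(largePlatform, 2_500_000L); // illustrative numbers
        java.util.Arrays.fill(nicheForum, 40_000L);
        System.out.println(meetsThreshold(largePlatform)); // true
        System.out.println(meetsThreshold(nicheForum));    // false
    }
}</code></pre><p>Note how mechanical the test is: a count, an average, and a comparison. Nothing in it depends on what the content says.</p>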
<h3>C. Scale is Necessary But Not Sufficient</h3><p>Setting the threshold at 1,000,000 (or even 100,000) daily active content providers will filter out most of those one billion websites, but it&#8217;s not a silver bullet. We can still find many negative examples that are included in the definition. Going forward, we will pessimistically assume that scale is necessary but not sufficient.</p><h4>1. Primary vs. Secondary Content</h4><p>What about the comments section of the New York Times? The number of users who comment on New York Times articles is much larger than the number of writers who create those articles. And while the New York Times may not have 100,000 or 1,000,000 users commenting each day, perhaps a different site could hit that threshold&#8212;which is why we pessimistically assume that scale is necessary but not sufficient. This example will justify that pessimism: what about product reviews on Amazon?</p><p>A common mistake here is to create a narrow exception for product review sites. Courts will often rule that these narrow exceptions are content-based when, to <a href="https://storage.courtlistener.com/recap/gov.uscourts.ohsd.287455/gov.uscourts.ohsd.287455.33.0.pdf">quote an Ohio judge</a> as an example, &#8220;a product review website is excepted, but a book or film review website, is presumably not.&#8221;</p><p>Instead, we can distinguish between &#8220;primary&#8221; content and &#8220;secondary&#8221; content. The New York Times article or the product on Amazon would be the primary content, while a comment on the article or a product review would be the secondary content. Secondary content depends on primary content; you cannot add a product review to Amazon if you don&#8217;t first have a product to review.</p><p>With that in mind, let&#8217;s refine the definition of &#8220;information content provider&#8221;:</p><blockquote><p>INFORMATION CONTENT PROVIDER.&#8212;</p><blockquote><p>(A) IN GENERAL.&#8212;The term &#8220;information content provider&#8221; has the meaning given the term in section 230(f)(3) of the Communications Act of 1934 (47 U.S.C. 230(f)(3)).</p>
<p>(B) SECONDARY CONTENT EXCLUDED.&#8212;The term &#8220;information content provider,&#8221; with respect to a social media platform, does not apply to content that depends on other content on the social media platform, such as comments on an article, reviews for a product, or replies to a post.</p></blockquote></blockquote><p>This exception is written in a content-neutral fashion; product reviews are treated no differently than book reviews or film reviews.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> (A key point here is that the &#8220;such as&#8221; clause only provides illustrative examples, not an exhaustive list of examples.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a>)</p><p>Additionally, we can further justify this exception with one of the three Vs: variety. When user-generated content only consists of comments for a small set of articles or reviews for products, the variety of content found on a site is much smaller.</p>
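<p>In engineering terms, the secondary-content exclusion is a structural test, not a content test. A quick sketch (with hypothetical field names) makes that clear:</p><pre><code>// A sketch of the secondary-content exclusion as a structural test.
// Field names are hypothetical, for illustration only.
public class ContentClassifier {
    // parentId is null for standalone (primary) content.
    record Content(String id, String authorId, String parentId) {}

    // A comment, review, or reply "depends on other content": it has a
    // parent. The test never looks at what the content says, only at its
    // structural relationship to other content on the service.
    static boolean isSecondaryContent(Content content) {
        return content.parentId() != null;
    }

    // Only authors of primary content count toward the
    // "daily active content providers" metric.
    static boolean countsTowardDacp(Content content) {
        return !isSecondaryContent(content);
    }
}</code></pre><p>A product review and a film review look identical to this test; each hangs off a parent. That is what content neutrality looks like in code.</p>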
<h4>2. Commercial Content</h4><p>But even if we exclude Amazon reviews, how many people are creating or editing product listings on Amazon each day? How many users create auctions on eBay each day? Again, we pessimistically assume that scale is necessary but not sufficient.</p><p>While sites like Amazon and eBay do rely on third-party content, the design of these sites effectively ensures that this content is commercial in nature. We can capture that idea in another exception for &#8220;information content provider&#8221;:</p><blockquote><p>INFORMATION CONTENT PROVIDER.&#8212;</p><blockquote><p>(A) IN GENERAL.&#8212;The term &#8220;information content provider&#8221; has the meaning given the term in section 230(f)(3) of the Communications Act of 1934 (47 U.S.C. 230(f)(3)).</p><p>(B) SECONDARY CONTENT EXCLUDED.&#8212;The term &#8220;information content provider,&#8221; with respect to an interactive computer service, does not apply to content that depends on other content on the interactive computer service, such as comments on an article, reviews for a product, or replies to a post.</p><p>(C) COMMERCIAL CONTENT EXCLUDED.&#8212;The term &#8220;information content provider,&#8221; with respect to an interactive computer service, does not apply to content that is designed by the interactive computer service to facilitate commerce, such as product listings, available drivers for ridesharing, or booking information for accommodations.</p></blockquote></blockquote><p>This exception is content-neutral because it makes a medium-based distinction; social media and e-commerce are two fundamentally different mediums.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a></p><p>For example, if we have a high-volume third-party seller that&#8217;s engaging in fraud, verifying the age of the seller would do little to stop that fraud. The regulations created by the <a href="https://www.ftc.gov/business-guidance/resources/INFORMAct">INFORM Consumers Act</a> are the answer here. Conversely, it&#8217;s also nonsensical to apply INFORM to social media, making social media platforms collect the bank account information, contact information, and tax ID numbers of their users.</p><h4>3. Directed to a General Audience</h4><p>Another interesting negative example is LinkedIn. Despite the similarities between LinkedIn and social media, LinkedIn is still intuitively different in form and purpose from social media sites. We can reasonably assume that most parents would not be too concerned if they discovered that their kid secretly created a LinkedIn account.</p><p>As before, we need to resist the temptation to exclude LinkedIn by creating a narrow exception for, e.g., professional networking sites. LinkedIn may not be the only specialized site we need to worry about, either; we need an exception that treats all these specialized sites the same:</p><blockquote><p>SOCIAL MEDIA PLATFORM.&#8212;The term &#8220;social media platform&#8221; means an interactive computer service that&#8212;</p><blockquote><p>(A) is directed to a general audience, notwithstanding whether content is delivered via text, images, audio, video, or other types of media content; and</p><p>(B) has averaged at least 1,000,000 daily active content providers over the previous 180 days.</p></blockquote></blockquote><p>In terms of content neutrality, this exception is similar to our exception for secondary content. First, it treats all specialized sites the same; it doesn&#8217;t privilege certain types of specialized sites.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> Second, it can be justified by one of our three Vs: variety. It suffices to say that the variety of content found on Instagram or Snapchat is vastly bigger than the variety of content found on LinkedIn.</p><h3>D. The Distribution Model Matters</h3><p>What about a crowdsourced encyclopedia like Wikipedia? What about email or text messaging? Applying our pessimistic assumption that scale is necessary but not sufficient, the current iteration of our definition still includes these sites.</p><h4>1. A Variety of Distribution Models</h4><p>Previously, we focused on how content is created. What if we focus on how content is distributed? Social media&#8217;s content distribution model is very different from the distribution model of Wikipedia or text messaging.</p><p>Distinguishing between sites based on their distribution model tends to be content-neutral as well. It does not discriminate based on what type of content a site contains, and it enters the territory of a medium-based distinction. Different mediums will have different distribution models; each model presents its own unique set of problems.</p><p>Thus, the third plank of our definition will focus on the distribution model:</p><blockquote><p>SOCIAL MEDIA PLATFORM.&#8212;The term &#8220;social media platform&#8221; means an interactive computer service that&#8212;</p><blockquote><p>(A) is directed to a general audience, notwithstanding whether content is delivered via text, images, audio, video, or other types of media content;</p><p>(B) has averaged at least 1,000,000 daily active content providers over the previous 180 days; and</p><p>(C) socially distributes content.</p></blockquote><p>SOCIALLY DISTRIBUTES CONTENT.&#8212;The term &#8220;socially distributes content&#8221; means&#8230;</p></blockquote><p>The challenge here lies in defining &#8220;socially distributes content.&#8221;</p><h4>2. The Social Network</h4><p>What is the social component of socially distributing content? As a starting point, Facebook uses your social network to distribute content to you; Wikipedia does not.
Let&#8217;s incorporate the concept of a social network into our definition:</p><blockquote><p>SOCIALLY DISTRIBUTES CONTENT.&#8212;The term &#8220;socially distributes content&#8221; means making decisions about which content to distribute to a user, where such decisions use the user&#8217;s social relations with other users, such as other users that the user follows or is friends with.</p></blockquote><p>This definition focuses on decision-making: is this data used to make a decision? Using your social network for other purposes such as research is not covered by that definition; the social network must be used in decisions about content distribution. (Of course, social media sites may take other factors into account when making these decisions, so we only require that the social network be one of the factors used.)</p><p>This definition is also crystal-clear; it doesn&#8217;t require the courts to make judgments about what the &#8220;primary purpose&#8221; or &#8220;primary function&#8221; of a site is.</p><h4>3. Who Controls Content Distribution?</h4><p>Social media, however, is not the only social network. If one user subscribes to another user&#8217;s newsletter, that&#8217;s a social relation that&#8217;s used in content distribution; newsletters are distributed to subscribers. If many users are all part of a group chat, that&#8217;s a social relation that&#8217;s used in content distribution. (Recall that the &#8220;such as&#8221; clause only provides illustrative examples, not an exhaustive list of examples.)</p><p>The next aspect we can look at is who controls distribution. With many of the online mediums that predate social media, consumers and producers (mostly) controlled content distribution. If you send an email, you control who the recipients of that email are. If you don&#8217;t like a particular Substack newsletter, you can just unsubscribe.</p><p>Of course, this direct distribution model does have one problem: spam. But if spam was the worst of our problems on social media, states wouldn&#8217;t be trying to pass age verification laws, and Congress wouldn&#8217;t be trying to pass the Kids Online Safety Act.</p><p>Search engines typically put users in control, too. While you don&#8217;t control the results you receive, you do control the search query that you type into google.com. 
Moreover, search engines have an incentive to find results that are relevant to your search query.</p><p>With that in mind, we can add some exclusions to the definition:</p><blockquote><p>SOCIALLY DISTRIBUTES CONTENT.&#8212;</p><blockquote><p>(A) IN GENERAL.&#8212;The term &#8220;socially distributes content&#8221; means making decisions about which content to distribute to a user, where such decisions use the user&#8217;s social relations with other users, such as other users that the user follows or is friends with.</p><p>(B) DIRECT DISTRIBUTION EXCLUDED.&#8212;The term &#8220;socially distributes content&#8221; does not include, notwithstanding spam filtering, distributing the content of a user directly to recipients or subscribers, such as the recipients of an email, the members of a group chat, or the subscribers of a newsletter.</p><p>(C) SEARCH EXCLUDED.&#8212;The term &#8220;socially distributes content&#8221; does not include providing content to a user when the user deliberately and independently searches for, or specifically requests, content.</p></blockquote></blockquote><p>Here, the language for the search exception is based on language found in KOSA, which has a similar exception for its duty of care.</p><h4>4. Engagement Data</h4><p>At this point, the definition looks like it may finally be sufficient. It&#8217;s certainly hard to think of a negative example that is included in the definition.</p><p>Nonetheless, it may still be beneficial to harden this definition a bit more. And for all this talk of social media addiction and kids spending hours each day on social media, the definition does not include something that is closely tied to that problem.</p><p>In addition to looking at how social networks are used in content distribution, we can also look at how a user&#8217;s engagement with content is used in content distribution:</p><blockquote><p>SOCIALLY DISTRIBUTES CONTENT.&#8212;</p><blockquote><p>(A) IN GENERAL.&#8212;The term &#8220;socially distributes content&#8221; means making decisions about which content to distribute to a user, where such decisions use&#8212;</p><blockquote><p>(i) the user&#8217;s social relations with other users, such as other users that the user follows or is friends with; and</p><p>(ii) the user&#8217;s engagement or interest with content from other users, such as viewing, liking, reposting, or replying to content.</p></blockquote></blockquote></blockquote><p>Instead of targeting a specific feature or a specific type of algorithm that is used to drive engagement, we target the fuel that powers these features and algorithms: engagement data.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a> This approach offers a good blend of specificity and flexibility.</p><p>All three Vs are present. Volume comes from distributing content to millions of users. Variety comes from a distribution model that is hyper-personalized in nature. Velocity comes from the algorithms that constantly process new data about the user&#8217;s activities.</p><p>With millions of users, a tech company has limited bandwidth to address problems that affect a single user. With a hyper-personalized distribution model, though, many of the problems are also hyper-personalized in nature.</p>
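<p>To make the two prongs of &#8220;socially distributes content&#8221; concrete, here&#8217;s a toy scoring function in the style of a feed-ranking system. (All names are hypothetical; real ranking systems are vastly more complex.) The point is that the social graph and the engagement data both serve as inputs to the distribution decision:</p><pre><code>// A toy sketch of a feed-ranking decision. All names are hypothetical;
// real ranking systems are vastly more complex than this.
public class FeedScorer {
    // Prong (i): the user's social relations with other users.
    private final String[] followedAuthors;
    // Prong (ii): topics the user engaged with (viewed, liked, reposted).
    private final String[] engagedTopics;

    FeedScorer(String[] followedAuthors, String[] engagedTopics) {
        this.followedAuthors = followedAuthors;
        this.engagedTopics = engagedTopics;
    }

    // The distribution decision uses both the social graph and the
    // user's engagement data -- the "fuel" that the definition targets.
    double score(String authorId, String topic) {
        double total = 0;
        if (contains(followedAuthors, authorId)) total += 1.0;
        if (contains(engagedTopics, topic)) total += 2.0;
        return total;
    }

    private static boolean contains(String[] values, String target) {
        for (String value : values) {
            if (value.equals(target)) return true;
        }
        return false;
    }
}</code></pre><p>A site that uses neither signal to decide what you see, such as a newsletter service or a search engine, never enters this code path; that&#8217;s the definition working as intended.</p>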
<h3>E. Liability Applies at the Top of the Stack</h3><p>Even for a single site, many companies often play a role in running that site. Beneath the surface of a social media site, there&#8217;s a complex technical stack of infrastructure&#8212;with different companies handling different parts.</p><p>To store billions of posts, a social media site may rely on a cloud storage provider like Amazon Web Services. To ensure that hackers cannot take down that site with a DDoS (distributed denial of service) attack, the site may rely on DDoS protection from Cloudflare. And of course, an ISP like Comcast delivers that content to users.</p><p>In many cases, liability should only be applied at the &#8220;top of the stack.&#8221; Facebook should be held accountable for its actions, but liability should not apply to ISPs or the companies that provide technical infrastructure for social media sites. In addition to defining a &#8220;social media platform,&#8221; let&#8217;s also define a &#8220;social media company&#8221;:</p><blockquote><p>SOCIAL MEDIA COMPANY.&#8212;</p><blockquote><p>(A) IN GENERAL.&#8212;The term &#8220;social media company&#8221; means a person or entity that provides a social media platform.</p><p>(B) TECHNICAL INFRASTRUCTURE EXCLUDED.&#8212;The term &#8220;social media company&#8221; does not include a person or entity acting in its capacity as a provider of&#8212;</p><blockquote><p>(i) a common carrier service subject to the Communications Act of 1934 (47 U.S.C. 151 et seq.) and all Acts amendatory thereof and supplementary thereto;</p><p>(ii) a broadband internet access service (as such term is defined for purposes of section 8.1(b) of title 47, Code of Federal Regulations, or any successor regulation); or</p><p>(iii) an interactive computer service that is used by a social media platform for the management, control, or operation of that social media platform, including for services such as web hosting, domain registration, content delivery networks, caching, security, back-end data storage, and cloud management.</p></blockquote></blockquote></blockquote><p>The first two exceptions were copied from KOSA&#8217;s definition of &#8220;covered platform.&#8221; The third exception was largely copied from the <a href="https://www.congress.gov/bill/118th-congress/senate-bill/483">Internet PACT Act</a>; it&#8217;s not every day that we find a bill that mentions content delivery networks and caching.</p><p>And again, this exception is content-neutral. This is not a content-based question about <strong>what</strong> type of content the social media platform contains; it is a content-neutral question about <strong><a href="https://blog.cloudflare.com/why-we-terminated-daily-stormer">where</a></strong> regulation occurs.</p><h2>III. The Full Definition</h2><p>Now that we have all the pieces of our definition, let&#8217;s see the whole product, including the findings from part I; the definitions logically flow from the findings.</p>
<blockquote><p><strong>SEC. _. FINDINGS</strong></p><p>The Legislature finds the following:</p><p>(1) The State has a compelling interest in protecting the physical and psychological well-being of minors.</p><p>(2) The Internet is not a monolithic medium but instead contains many distinct mediums, such as social media, search, and e-commerce.</p><p>(3) Existing measures to protect minors on social media have been insufficient for reasons including&#8212;</p><blockquote><p>(A) the difficulty of content moderation at the scale of a platform with millions of user-generated content providers;</p><p>(B) the difficulty of making subjective judgments via algorithms, such as identifying content that harms the physical or psychological well-being of minors; and</p><p>(C) limited interoperability between social media platforms and third-party child safety tools, in part due to privacy concerns about sharing user data with third parties.</p></blockquote><p>(4) Social media companies have failed to control the negative impacts of their algorithms to distribute content for reasons including&#8212;</p><blockquote><p>(A) the scale of a platform with millions of users, combined with the personalized nature of content distribution;</p><p>(B) the natural incentive of such companies to maximize engagement and time spent on their platforms; and</p><p>(C) the limited degree of control that users have over the content they receive.</p></blockquote><p>(5) Limited accountability exists on social media platforms for bad actors, especially given the anonymous or hard-to-track nature of many such actors.</p><p>(6) Users frequently encounter sexually explicit material accidentally on social media.</p><p>(7) Social media platforms are accessible&#8212;</p><blockquote><p>(A) from a wide variety of devices, ranging from an individual&#8217;s smartphone to a laptop at a friend&#8217;s house to a desktop in a public library; and</p><p>(B) via a variety of methods on a single device, including apps and websites.</p></blockquote><p><strong>SEC. _. DEFINITIONS</strong></p><p>In this Act:</p><p>(1) DAILY ACTIVE CONTENT PROVIDERS.&#8212;The term &#8220;daily active content providers&#8221; means the number of users who serve as an information content provider during a single day.</p><p>(2) INFORMATION CONTENT PROVIDER.&#8212;</p><blockquote><p>(A) IN GENERAL.&#8212;The term &#8220;information content provider&#8221; has the meaning given the term in section 230(f)(3) of the Communications Act of 1934 (47 U.S.C. 230(f)(3)).</p><p>(B) SECONDARY CONTENT EXCLUDED.&#8212;The term &#8220;information content provider,&#8221; with respect to an interactive computer service, does not apply to content that depends on other content on the interactive computer service, such as comments on an article, reviews for a product, or replies to a post.</p><p>(C) COMMERCIAL CONTENT EXCLUDED.&#8212;The term &#8220;information content provider,&#8221; with respect to an interactive computer service, does not apply to content that is designed by the interactive computer service to facilitate commerce, such as product listings, available drivers for ridesharing, or booking information for accommodations.</p></blockquote><p>(3) INTERACTIVE COMPUTER SERVICE.&#8212;The term &#8220;interactive computer service&#8221; has the meaning given the term in section 230(f)(2) of the Communications Act of 1934 (47 U.S.C. 230(f)(2)).</p><p>(4) SOCIAL MEDIA COMPANY.&#8212;</p><blockquote><p>(A) IN GENERAL.&#8212;The term &#8220;social media company&#8221; means a person or entity that provides a social media platform.</p><p>(B) TECHNICAL INFRASTRUCTURE EXCLUDED.&#8212;The term &#8220;social media company&#8221; does not include a person or entity acting in its capacity as a provider of&#8212;</p><blockquote><p>(i) a common carrier service subject to the Communications Act of 1934 (47 U.S.C. 151 et seq.) and all Acts amendatory thereof and supplementary thereto;</p><p>(ii) a broadband internet access service (as such term is defined for purposes of section 8.1(b) of title 47, Code of Federal Regulations, or any successor regulation); or</p><p>(iii) an interactive computer service that is used by a social media platform for the management, control, or operation of that social media platform, including for services such as web hosting, domain registration, content delivery networks, caching, security, back-end data storage, and cloud management.</p></blockquote></blockquote><p>(5) SOCIAL MEDIA PLATFORM.&#8212;The term &#8220;social media platform&#8221; means an interactive computer service that&#8212;</p><blockquote><p>(A) is directed to a general audience, notwithstanding whether content is delivered via text, images, audio, video, or other types of media content;</p><p>(B) has averaged at least 1,000,000 daily active content providers over the previous 180 days; and</p><p>(C) socially distributes content.</p></blockquote><p>(6) SOCIALLY DISTRIBUTES CONTENT.&#8212;</p><blockquote><p>(A) IN GENERAL.&#8212;The term &#8220;socially distributes content&#8221; means making decisions about which content to distribute to a user, where such decisions use&#8212;</p><blockquote><p>(i) the user&#8217;s social relations with other users, such as other users that the user follows or is friends with; and</p><p>(ii) the user&#8217;s engagement or interest with content from other users, such as viewing, liking, reposting, or replying to content.</p></blockquote><p>(B) DIRECT DISTRIBUTION EXCLUDED.&#8212;The term &#8220;socially distributes content&#8221; does not include, notwithstanding spam filtering, distributing the content of a user directly to recipients or subscribers, such as the recipients of an email, the members of a group chat, or the subscribers of a newsletter.</p><p>(C) SEARCH EXCLUDED.&#8212;The term &#8220;socially distributes content&#8221; does not include providing content to a user when the user deliberately and independently searches for, or specifically requests, content.</p></blockquote><p>(7) USER.&#8212;The term &#8220;user&#8221; means, with respect to a social media platform, an individual who registers an account or creates a profile on the social media platform.</p></blockquote>
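<p>And because old habits die hard, here&#8217;s the full definition rendered as an engineer&#8217;s predicate. (The field names are hypothetical, and classification is ultimately a legal question, not a programming one.) Notice that every test is mechanical; there are no &#8220;primary purpose&#8221; judgment calls anywhere:</p><pre><code>// A sketch of the definition as a predicate. Field names are hypothetical;
// the comments map each test back to the definitions above.
public class SocialMediaPlatformCheck {
    boolean directedToGeneralAudience;          // (5)(A)
    long avgDailyActiveContentProviders;        // (5)(B): 180-day average,
                                                // secondary and commercial
                                                // content already excluded
    boolean usesSocialRelationsToDistribute;    // (6)(A)(i)
    boolean usesEngagementDataToDistribute;     // (6)(A)(ii)
    boolean onlyDirectDistribution;             // (6)(B): email, group chats,
                                                // newsletters
    boolean onlySearchDistribution;             // (6)(C): deliberate,
                                                // independent requests

    boolean sociallyDistributesContent() {
        if (onlyDirectDistribution) return false;
        if (onlySearchDistribution) return false;
        if (!usesSocialRelationsToDistribute) return false;
        return usesEngagementDataToDistribute;
    }

    boolean isSocialMediaPlatform() {
        if (!directedToGeneralAudience) return false;
        if (avgDailyActiveContentProviders >= 1_000_000) {
            return sociallyDistributesContent();
        }
        return false;
    }
}</code></pre><p>Run the usual suspects through it: Facebook passes all three planks; Wikipedia fails on social distribution; Netflix fails on daily active content providers; LinkedIn fails on the general audience plank. The definition does the work, so the exceptions don&#8217;t have to.</p>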
<p>Of course, some legal details may need to be ironed out, but Congress and most state legislatures have a service known as Legislative Counsel, which handles the legal details of drafting legislation.</p><p>In fact, Legislative Counsel can work with things that are far less structured than draft legislative text.
Part of their job is to take legislative proposals written in plain English and translate them to legislative text; this helps support a legislature whose representatives are teachers, doctors, and engineers&#8212;not just lawyers.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.technicalassistance.io/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.technicalassistance.io/subscribe?"><span>Subscribe now</span></a></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>This practice of reusing legislative text is similar to how software engineers often reuse built-in or third-party libraries, instead of reinventing the wheel. If asked to sort a list of numbers, a Java programmer will not write their own sorting algorithm. They&#8217;ll use the Java API to sort those numbers with a single line of code: <code>Collections.sort(numbers)</code>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Lest anyone allege that the threshold for daily active content providers was set at 1,000,000 for nefarious content-based reasons, here&#8217;s the actual methodology: I went through the powers of 10 (1, 10, 100, 1,000, etc.) until I hit a number that was large enough: 1,000,000.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>An interesting wrinkle is that we also exclude users who reply to other user-generated posts but do not create their own posts. The only &#8220;harm&#8221; here is that the definition of &#8220;daily active content providers&#8221; may slightly undercount the actual number of such providers.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>The canon of construction that would apply here is the <em>presumption of nonexclusive &#8220;include.&#8221;</em></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>Facebook is an interesting case, since it also has the Facebook Marketplace. 
Applying the definition here, users who only create content on the Facebook Marketplace do not count as an &#8220;information content provider&#8221;&#8212;and thus are not included in the count of &#8220;daily active content providers.&#8221; Users who create normal Facebook posts (or who create both types of content) do count as an &#8220;information content provider.&#8221;</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>The &#8220;notwithstanding whether content is delivered&#8221; clause clarifies that both Instagram and Twitter/X are directed to a general audience. Instagram cannot argue that it&#8217;s not directed to a general audience because it primarily relies on images, while Twitter has a broader audience since it uses text, images, and videos.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>However, if the user is reporting content, blocking someone, or clicking a button that tells the platform they don&#8217;t want to see this content, that would not count as &#8220;engagement or interest.&#8221; If a site personalizes content distribution based on those signals, it wouldn&#8217;t qualify as a social media site. (And of course, this has no effect on content moderation measures that are not personalized, such as taking a post down if it violates the rules.)</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[Define Social Media, Part I: Findings]]></title><description><![CDATA[Age verification laws for social media have gone 0-5 in legal challenges. What needs to change so that these laws survive legal challenges?]]></description><link>https://www.technicalassistance.io/p/define-social-media-part-i-findings</link><guid isPermaLink="false">https://www.technicalassistance.io/p/define-social-media-part-i-findings</guid><dc:creator><![CDATA[Mike Wacker]]></dc:creator><pubDate>Tue, 17 Sep 2024 18:09:26 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!q3Pb!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3e054d1-137a-47bd-a4af-c9e462fb84f4_256x256.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>(If you just want to see the definition of social media, use <a href="https://www.technicalassistance.io/i/145782955/iii-the-full-definition">this link</a> to skip to the end.)</em></p><p>0-5. If your football team went 0-5 (be it a professional or college team), it would be time to shake things up. State legislatures who have passed age verification laws for social media have gone 0-5 in legal challenges.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> Fortunately, this engineer&#8212;who has also served as a tech fellow in Congress&#8212;has already been figuring out how to fix it.</p><h2>I. 0-5 vs. 
2-0</h2><p>Additionally, state legislatures have gone 0-5 when it comes to writing a content-neutral definition of social media; the courts have ruled that every definition thus far is content-based.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> If you&#8217;re not a nerd, why does that detail matter? What are the stakes?</p><h3>A. Content-Based vs. Content-Neutral</h3><p>In a First Amendment challenge, courts will look at both <strong>what</strong> the law does, and <strong>who</strong> the law applies to.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> The definition of social media matters because it determines <strong>who</strong> the law applies to. As a commonsense example, the courts would be extremely skeptical of a neutral-sounding law that curiously only applied to X/Twitter, but not to other social media platforms; it&#8217;s a thinly veiled attempt to target Elon Musk.</p><p>In the usual First Amendment case, the court will determine if the law is content-based or content-neutral. As the Supreme Court said, &#8220;As a general rule, laws that by their terms distinguish favored speech from disfavored speech on the basis of the ideas or views expressed are content based.&#8221; In practice, the courts can be very picky&#8212;in part because they&#8217;re very protective of free speech.</p><p>That leads into the second question of stakes: what&#8217;s the difference between a content-neutral and a content-based law? A content-neutral law is subject to intermediate scrutiny, while a content-based law is subject to strict scrutiny. Strict scrutiny, however, tends to be &#8220;strict in theory, fatal in fact&#8221;; 0-5 is definitely fatal in fact.</p><h3>B. Nothing New Under the Sun</h3><p>These lawsuits have all been filed by NetChoice, a trade association that lobbies for Big Tech. Every time NetChoice wins, they try to manufacture a narrative that history will repeat itself if anyone else tries to regulate social media. While &#8220;those that fail to learn from history are doomed to repeat it,&#8221; those who learn from history are not consigned to the same fate as their predecessors.</p><p>History did not begin with social media, either. At one point in time, cable was the new technology and the new forum for expression. Back then, cable companies started dropping broadcast channels, such as the local NBC station&#8212;despite the fact that these channels were popular with consumers. Congress stepped in and passed must-carry, which forced cable companies to carry local broadcast stations.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a></p><p>In response&#8212;and tell me if you&#8217;ve heard this one before&#8212;the cable industry sued, claiming that must-carry violated their free speech rights. That battle, <em>Turner Broadcasting System v. FCC</em>, reached the Supreme Court twice, but the government went 2-0; must-carry survived intermediate scrutiny, as it was a content-neutral law.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a></p><p>And even though social media is a very different medium from cable, the same First Amendment principles apply to both. There is nothing new under the sun.</p><h3>C. 
The Medium is the Problem</h3><p>Could we protect kids on social media if tech companies would just do a better job of content moderation? According to social psychologist Jonathan Haidt (author of the #1 New York Times bestseller <em><a href="https://www.anxiousgeneration.com/">The Anxious Generation</a>)</em>, the answer is a firm <a href="https://x.com/JonHaidt/status/1754484727061344345">no</a>: &#8220;Social media is just not appropriate for children.&#8221; Haidt also frames the issue <a href="https://www.afterbabel.com/p/content-moderation-red-herring">another way</a>: &#8220;The medium is the problem.&#8221;</p><p>That&#8217;s not just good policy advice; it&#8217;s also good legal advice. In <em>Turner I</em>, the Supreme Court said that medium-based distinctions are often content-neutral: &#8220;It would be error to conclude, however, that the First Amendment mandates strict scrutiny for any speech regulation that applies to one medium (or a subset thereof) but not others.&#8221;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> And as an oft-quoted line from <em>Southeastern Promotions v. Conrad</em> (1975) goes, &#8220;Each medium of expression . . . must be assessed for First Amendment purposes by standards suited to it, for each may present its own problems.&#8221;</p><p>The Internet is not a monolithic medium. It contains many distinct mediums, such as social media, search, and e-commerce. Two oft-cited precedents, <em>Reno v. ACLU</em> (1997) and <em>Ashcroft v. ACLU</em> (2004), dealt with laws from the 1990s that tried to regulate the entire Internet: the Communications Decency Act of 1996 (CDA) and the Child Online Protection Act of 1998 (COPA). Today, however, nobody is proposing that we age-gate access to the entire Internet. Age verification is only being proposed for mediums that pose heightened risk to children, such as social media and pornographic sites. </p><p>The difference between cable and social media, however, is that social media is much harder to define than cable. Defining the medium is the problem.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a></p><h2>II. Change the Text, Change Your Fate: Findings</h2><p>Legal analysis of a law begins with the text of the law&#8212;especially for a judge with a textualist philosophy. To change the fate of a law, one only needs to change its text. The challenge lies in figuring out how to change it.</p><h3>A. Why Findings Matter</h3><p>While knowing and citing the <em>Turner</em> cases is a good start, an even better approach is to do the same things that Congress did when it passed must-carry.</p><p>In particular, the &#8220;unusually detailed statutory findings&#8221; played a role in persuading the courts to apply intermediate scrutiny in <em>Turner I</em>. Findings have the power to persuade, but they don&#8217;t have the power to control. Courts will not believe that something is true just because the findings say it&#8217;s true, but well-written findings can have a very strong persuasive effect.</p><p>When writing these findings, you have to remember who the audience is: the courts. When a judge conducts a First Amendment analysis of a law, these findings are designed to help answer the questions they will ask. 
The findings need to be written with that specific audience and that specific purpose in mind.</p><p>If, per <em>Turner I</em>, regulations can be &#8220;justified by the special characteristics&#8221; of the medium, then someone has to explain the special characteristics of social media.</p><h3>B. Intermediate Scrutiny</h3><p>Likewise, you need to know how intermediate scrutiny operates if you want your legislation to survive it.</p><p>Intermediate scrutiny has two parts. First, the legislation needs to further an important or substantial government interest. (Under strict scrutiny, it has to be a compelling government interest.)</p><p>Second, the legislation needs to be narrowly tailored. Unlike strict scrutiny, the government does not have to use the least restrictive means, but to quote <em>Ward v. Rock Against Racism</em> (1989), the government still cannot &#8220;burden substantially more speech than is necessary to further the government's legitimate interests.&#8221; Nonetheless, <em>Turner II</em> did establish that under intermediate scrutiny, the government does get to decide the degree to which it will promote its interests:</p><div class="pullquote"><p>It is for Congress to decide how much local broadcast television should be preserved for noncable households, and the validity of its determination &#8220; &#8216;does not turn on a judge&#8217;s agreement with the responsible decisionmaker concerning&#8217; . . . the degree to which [the Government&#8217;s] interests should be promoted.&#8221; Ward, 491 U. S., at 800 (quoting United States v. Albertini, 472 U. S. 675, 689 (1985)); accord, Clark v. Community for Creative Non-Violence, 468 U. S. 288, 299 (1984) (&#8220;We do not believe . . . [that] United States v. O&#8217;Brien . . . endow[s] the judiciary with the competence to judge how much protection of park lands is wise&#8221;).</p></div><h3>C. Tell Your Story Without Experts</h3><p>A &#8220;war of experts&#8221; against Big Tech is a dicey proposition. With their deep pockets, these companies will easily have the resources to find (<a href="https://www.wsj.com/us-news/law/google-lawyer-secret-weapon-joshua-wright-c98d5a31">or pay</a>) experts who can manufacture their preferred narrative. And if a judge with limited expertise has a hard time telling which experts are right, that small army of experts that Big Tech can summon may appear more persuasive&#8212;regardless of what is actually true.</p><p>Expertise is important, but you must first tell your story without experts. Often, good findings promote an intuitive narrative that is reasonably persuasive to non-experts.</p><p>Most importantly, that narrative sets the anchor before we consult the experts. Of course, that anchor will look unreasonable if the evidence is one-sided against you when we do consult the experts. But if a judge with limited expertise has a hard time telling which experts are right, they may default to the anchor.</p><p>This strategy also aligns with the question that judges ask for intermediate scrutiny. Per <em>Turner II</em>, &#8220;The question is not whether Congress, as an objective matter, was correct . . . Rather, the question is whether the legislative conclusion was reasonable and supported by substantial evidence in the record before Congress.&#8221; First, you set a reasonably persuasive anchor.
Then, you provide evidence to hold that anchor.</p><p>As an added bonus, under intermediate scrutiny, judges give more deference to the legislature&#8217;s judgment when they resolve conflicting evidence: &#8220;The Constitution gives to Congress the role of weighing conflicting evidence in the legislative process.&#8221;</p><p>Of course, experts also have a role in finding the story to tell. A great narrative needs to be backed by great facts; you can&#8217;t pick a narrative based on personal whims and then ask experts to manufacture the facts to back that narrative. And in many cases, a good finding will use an expository tone and make a straightforward statement of fact.</p><h2>III. Model Findings with Commentary</h2><blockquote><p>The Legislature finds the following:</p><p>(1) The State has a compelling interest in protecting the physical and psychological well-being of minors.</p><p>(2) The Internet is not a monolithic medium but instead contains many distinct mediums, such as social media, search, and e-commerce.</p><p>(3) Existing measures to protect minors on social media have been insufficient for reasons including&#8212;</p><blockquote><p>(A) the difficulty of content moderation at the scale of a platform with millions of user-generated content providers;</p><p>(B) the difficulty of making subjective judgments via algorithms, such as identifying content that harms the physical or psychological well-being of minors; and</p><p>(C) limited interoperability between social media platforms and third-party child safety tools, in part due to privacy concerns about sharing user data with third parties.</p></blockquote><p>(4) Social media companies have failed to control the negative impacts of their algorithms to distribute content for reasons including&#8212;</p><blockquote><p>(A) the scale of a platform with millions of users, combined with the personalized nature of content distribution;</p><p>(B) the natural incentive of such companies to maximize engagement and time spent on their platforms; and</p><p>(C) the limited degree of control that users have over the content they receive.</p></blockquote><p>(5) Limited accountability exists on social media platforms for bad actors, especially given the anonymous or hard-to-track nature of many such actors.</p><p>(6) Users frequently encounter sexually explicit material accidentally on social media.</p><p>(7) Social media platforms are accessible&#8212;</p><blockquote><p>(A) from a wide variety of devices, ranging from an individual&#8217;s smartphone to a laptop at a friend&#8217;s house to a desktop in a public library; and</p><p>(B) via a variety of methods on a single device, including apps and websites.</p></blockquote></blockquote><h3>Finding 1</h3><blockquote><p>(1) The State has a compelling interest in protecting the physical and psychological well-being of minors.</p></blockquote><p>Don&#8217;t reinvent the wheel.</p><p><em>Sable Communications v. FCC</em> (1989): &#8220;We have recognized that there is a compelling interest in protecting the physical and psychological wellbeing of minors.&#8221;</p><p>Additionally, &#8220;psychological well-being&#8221; is a much better framing than &#8220;harmful content.&#8221; Under the framing of harmful content, the obvious counterargument is that while some harmful content exists on social media, most content is not harmful; the legislation is extremely overbroad because it targets social media as a whole.</p><p>The framing of psychological well-being plays out much differently. 
Consider the story of 16-year-old Chase Nasca, who <a href="https://nypost.com/2023/03/23/parents-of-li-suicide-teen-break-down-during-tiktok-hearins-on-capitol-hill/">committed suicide</a> after TikTok showed him over 1,000 unsolicited videos of violence and suicide. Does it really matter whether those 1,000 videos were 5% or 50% of the content that Chase saw? What really matters is that those videos&#8212;regardless of the percentage&#8212;led Chase to commit suicide.</p><h3>Finding 2</h3><blockquote><p>(2) The Internet is not a monolithic medium but instead contains many distinct mediums, such as social media, search, and e-commerce.</p></blockquote><p>Medium-based distinctions are content-neutral.</p><p>This finding is fairly intuitive and straightforward; you don&#8217;t have to be an expert to know that the Internet is not a monolithic entity. It also distinguishes a social media law from the CDA in <em>Reno</em> and COPA in <em>Ashcroft II</em>. Both the CDA and COPA tried to regulate the entire Internet, not a specific medium like social media.</p><h3>Finding 3</h3><blockquote><p>(3) Existing measures to protect minors on social media have been insufficient for reasons including&#8212;</p><blockquote><p>(A) the difficulty of content moderation at the scale of a platform with millions of user-generated content providers;</p><p>(B) the difficulty of making subjective judgments via algorithms, such as identifying content that harms the physical or psychological well-being of minors; and</p><p>(C) limited interoperability between social media platforms and third-party child safety tools, in part due to privacy concerns about sharing user data with third parties.</p></blockquote></blockquote><p>Content moderation at scale is hard.</p><p>You may be able to detect an engineer&#8217;s influence in crafting this finding, but you don&#8217;t need to be an engineer to know that content moderation is hard when millions of users are producing content every day.</p><p>At that scale, you inevitably will have to rely on algorithms more and more; humans alone can&#8217;t handle that volume of content. But how effective are these algorithms when they have to make very subjective judgments, such as whether content would harm the psychological well-being of a child?</p><p>Even the advent of AI is not a magical panacea. To be clear, there are many objective tasks that AI handles well, such as image recognition for handwritten digits or for traffic signs. But if our self-driving cars hallucinated as often as ChatGPT did, they would swiftly be taken off the roads.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a></p><p>To further complicate matters, social media sites often operate as closed ecosystems. There&#8217;s a two-word explanation for why Facebook is understandably wary about sharing user data with third parties: Cambridge Analytica. 
(It&#8217;s also worth noting that Cambridge Analytica <a href="https://www.nytimes.com/2018/03/17/us/politics/cambridge-analytica-trump-campaign.html">obtained personal data</a> via an external researcher who claimed to be collecting it for academic purposes.)</p><p>At the end of the day, you can certainly understand why Haidt arrived at the <a href="https://x.com/JonHaidt/status/1754484727061344345">conclusion</a> he did: &#8220;Even if social media companies could reduce sextortion, CSAM, deepfake porn, bullying, self-harm content, drug deals, and social-media induced suicide by 80%, I think the main take away from those Senate hearings is: <em>Social media is just not appropriate for children.</em>&#8221; The medium is the problem.</p><h3>Finding 4</h3><blockquote><p>(4) Social media companies have failed to control the negative impacts of their algorithms to distribute content for reasons including&#8212;</p><blockquote><p>(A) the scale of a platform with millions of users, combined with the personalized nature of content distribution;</p><p>(B) the natural incentive of such companies to maximize engagement and time spent on their platforms; and</p><p>(C) the limited degree of control that users have over the content they receive.</p></blockquote></blockquote><p>The distribution model matters.</p><p>In defining social media, we will have to consider many &#8220;negative examples&#8221; of sites that aren&#8217;t social media: the comments section of the New York Times, Netflix, Wikipedia, Substack, etc. A social media law should not apply to these sites&#8212;especially since <a href="https://constitution.congress.gov/browse/essay/amdt1-7-2-1/ALDE_00013538/">overbreadth</a> can be fatal in a First Amendment challenge.</p><p>One underexamined but vitally important (and content-neutral) difference is the distribution model. Simply put, you&#8217;re not going to see over 1,000 unsolicited videos of violence and suicide if you subscribe to some newsletters on Substack.</p><p>When you have millions of users, you have limited bandwidth to address problems affecting only a single user. The highly personalized nature of content distribution on social media, however, means that many problems are also personalized in nature.</p><p>In particular, social media platforms&#8212;which have natural incentives to maximize engagement (especially when more engagement leads to more ad revenue)&#8212;have a wealth of personal engagement data. Their algorithms can use the content that you have viewed, liked, reposted, replied to, etc., to decide what content to serve you.</p><p>Subparagraph (C) of this finding also alludes to another important aspect of the distribution model: lack of control. If you don&#8217;t like a Substack newsletter, you can easily unsubscribe from it. If TikTok&#8217;s algorithms start feeding you suicide content or eating disorder content, however, your options to make it go away are more limited.</p><p>This subparagraph is also written as a callback to Section 230 (the law that says that online platforms are generally not liable for the third-party content they host). Section 230 included this finding: &#8220;(2) These [interactive computer services] offer users a great degree of control over the information that they receive, as well as the potential for even greater control in the future as technology develops.&#8221; Social media has not lived up to that potential. 
The world has changed since Section 230 was enacted in 1996.</p><h3>Finding 5</h3><blockquote><p>(5) Limited accountability exists on social media platforms for bad actors, especially given the anonymous or hard-to-track nature of many such actors.</p></blockquote><p>Use a frame of reference that courts are familiar with.</p><p>While social media can be somewhat new and unfamiliar to the courts, they do have more extensive experience with older mediums such as broadcast and cable.</p><p>In the world of broadcast, if CBS broadcast sexually explicit material during the day (or the breast of Janet Jackson during the Super Bowl), they could expect a fine from the FCC. And while the FCC regulations do not apply to cable, the incentives of that medium make it highly unlikely, for example, that ESPN&#8217;s Pardon the Interruption would interrupt your sports viewing experience with hardcore pornography.</p><p>The same set of incentives simply does not exist for content producers on social media. At worst, your account could be banned, but you can often create a new account. Even if Instagram investigates a &#8220;sextortion&#8221; case on their platform, what can you do when&#8212;in the case of <a href="https://www.clarionledger.com/story/news/2023/02/22/starkville-dad-talks-of-social-media-dangers-after-sons-suicide-sextortion/69926741007/">Walker Montgomery</a>&#8212;they trace the account&#8217;s IP address to Nigeria?</p><p>When the Supreme Court compared broadcast regulations to dial-a-porn regulations in <em>Sable Communications v. FCC</em> (1989), they noted that while an &#8220;unexpected outburst on a radio broadcast&#8221; tends to be &#8220;invasive or surprising,&#8221; dial-a-porn is different: &#8220;In contrast to public displays, unsolicited mailings, and other means of expression which the recipient has no meaningful opportunity to avoid, the dial-it medium requires the listener to take affirmative steps to receive the communication.&#8221;</p><p>As for social media, a sextortion attempt is often unsolicited and invasive in nature. Problems on social media are often caused by unsolicited or invasive content&#8212;especially when users have a limited degree of control over the content they receive.</p><h3>Finding 6</h3><blockquote><p>(6) Users frequently encounter sexually explicit material accidentally on social media.</p></blockquote><p>This is self-evident to anyone with an X/Twitter account.</p><p>This finding is a direct callback to <em>Reno</em>: &#8220;Though [sexually explicit] material is widely available, users seldom encounter such content accidentally.&#8221; That may have been true for the Internet of 1997, but it&#8217;s definitely not true for the Internet of 2024.</p><h3>Finding 7</h3><blockquote><p>(7) Social media platforms are accessible&#8212;</p><blockquote><p>(A) from a wide variety of devices, ranging from an individual&#8217;s smartphone to a laptop at a friend&#8217;s house to a desktop in a public library; and</p><p>(B) via a variety of methods on a single device, including apps and websites.</p></blockquote></blockquote><p>Do you try to cut kids off at every possible path, or cut them off at the destination?</p><p>This is another example of a finding that makes straightforward statements of fact in an expository tone, but which also sets up the narrative.</p><p>In the early days of the Internet, many households would have had a single desktop in a common area of the house, and any online content would be accessed via a web browser. 
Today, most kids have a smartphone that travels everywhere with them.</p><p>Some claim parental controls are the answer, but even if you set up perfect parental controls on a single device&#8212;a task easier said than done&#8212;what if the kid uses a different device? An old smartphone (or a cheap smartphone the kid bought) would not have talk, text, or data, but it would have Internet access wherever there&#8217;s WiFi.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a> The kid could also use a laptop at a friend&#8217;s house or a desktop at a public library.</p><p>And even on a single device with parental controls, the task is not straightforward. Perhaps you blocked the Facebook app, but did you block Facebook&#8217;s website? And what if the kid downloads a proxy app and uses that to browse Facebook?</p><p>Age verification, by contrast, is applied at the destination. It doesn&#8217;t matter which device the kid uses to access Instagram, or whether they access Instagram via a browser or via an app; they still need to verify their age to create an account.</p><p>Cutting kids off at the destination offers a greater degree of protection, compared to trying to cut them off at each possible path they could take.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-10" href="#footnote-10" target="_self">10</a> Parental controls are not a narrowly tailored alternative to age verification. To quote <em>Turner II</em>, &#8220;In the final analysis this alternative represents nothing more than appellants&#8217; &#8216; &#8220;[dis]agreement with the responsible decisionmaker concerning&#8221; . . . the degree to which [the Government&#8217;s] interests should be promoted.&#8217; &#8221;</p><div><hr></div><p>Now that we have model findings, the next step is to write a model definition of social media. Ideally, the definition should naturally flow from the findings, and it should codify the special characteristics of social media that we identified in the findings. In the <a href="https://www.technicalassistance.io/p/define-social-media-part-ii-definitions">next part</a>, we&#8217;ll dive into the brass tacks of writing that definition.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>See <em>NetChoice v. Griffin</em> (W.D. Ark. Aug. 31, 2023), <em>NetChoice v. Yost</em> (S.D. Ohio Feb. 12, 2024), <em>NetChoice v. Fitch</em> (S.D. Miss. July 1, 2024), <em>CCIA &amp; NetChoice v. Paxton</em> (W.D. Tex. Aug. 30, 2024), and <em>NetChoice v. Reyes</em> (D. Utah Sept. 10, 2024). I <a href="https://www.city-journal.org/article/what-is-social-media">raised the alarm</a> when we were 0-2.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>That&#8217;s certainly not the only problem, though. 
For example, in multiple cases, the court also ruled that the definition of social media was unconstitutionally vague; in Arkansas, the state&#8217;s own witnesses could not agree on whether the law applied to Snapchat or not.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>As a corollary, one problem with the &#8220;we&#8217;re only regulating conduct, not speech&#8221; argument is that it only tells you what the law does. It does not tell you who the law applies to.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>This problem was the result of vertical integration between cable operators and cable programmers. Cable channels often competed with local broadcast channels for advertising revenue. When cable companies started owning their own channels, that created a perverse incentive for cable companies to not carry local broadcast channels, so that advertising revenues would flow to their own channels instead.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>The government also argued that &#8220;must-carry provisions are nothing more than industry-specific antitrust legislation,&#8221; and that rational-basis review should apply as a result. The court rejected this argument, as the industry in question, cable, was a forum for expression. Again, courts look at both what the law does, and who the law applies to. Attempts to frame age verification for social media as &#8220;industry-specific child safety legislation&#8221; (or &#8220;industry-specific contract legislation&#8221;) would likely face a similar fate.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>Conversely, discriminating within a medium will often make a law content-based: &#8220;Regulations that discriminate among media, or among different speakers within a single medium, often present serious First Amendment concerns.&#8221;</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>By contrast, pornographic sites have been much easier to define. Obscenity is unprotected speech, and <em>Ginsberg v. 
New York</em> (1968) established that &#8220;[t]he State has power to adjust the definition of obscenity as applied to minors.&#8221; At a high level, pornographic sites have been defined as sites where at least one-third of the content is obscene for minors.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>Even when the task is image recognition for traffic signs, there are ways that street signs can be <a href="https://arstechnica.com/cars/2017/09/hacking-street-signs-with-stickers-could-confuse-self-driving-cars/">&#8220;hacked&#8221; in real life</a> so that self-driving cars won&#8217;t recognize them.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p>Even if parents confiscated phones at night and shut off the WiFi router (a more drastic solution), a kid could hand over their phone but keep the SIM card&#8212;and then put the SIM card in a different device. That device would then have access to data.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-10" href="#footnote-anchor-10" class="footnote-number" contenteditable="false" target="_self">10</a><div class="footnote-content"><p>As for the argument that kids will find ways to bypass age verification, the exact same argument could be made for parental controls. And that&#8217;s assuming that parental controls work in the first place. For example, the Wall Street Journal published a story about how it took Apple <a href="https://www.wsj.com/tech/personal-tech/a-bug-allowed-kids-to-visit-x-rated-sites-apple-took-three-years-to-fix-it-17e5f65d">three years to fix</a> an X-rated loophole in Screen Time.</p></div></div>]]></content:encoded></item><item><title><![CDATA[Why the TikTok Bill Doesn't Violate the First Amendment (Part II)]]></title><description><![CDATA[The courts are not the place to relitigate policy debates that you lost in Congress.]]></description><link>https://www.technicalassistance.io/p/why-the-tiktok-bill-doesnt-violate-cf2</link><guid isPermaLink="false">https://www.technicalassistance.io/p/why-the-tiktok-bill-doesnt-violate-cf2</guid><dc:creator><![CDATA[Mike Wacker]]></dc:creator><pubDate>Wed, 24 Apr 2024 13:00:42 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!q3Pb!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3e054d1-137a-47bd-a4af-c9e462fb84f4_256x256.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The TikTok bill, which would force the Chinese Communist Party (CCP) to divest TikTok&#8212;and would ban TikTok if the CCP refuses to divest&#8212;just <a href="https://www.senate.gov/legislative/LIS/roll_call_votes/vote1182/vote_118_2_00154.htm">passed the Senate</a> and is on the verge of becoming a law. But does this bill violate the First Amendment?</p><p>Social media may be a forum for expression, but that doesn&#8217;t mean that any bill that touches social media is unconstitutional. 
Cable is also a forum for expression, but despite that, the Supreme Court upheld the FCC&#8217;s must-carry rules&#8212;rules that force cable companies to carry local broadcast stations.</p><p>The dispute over must-carry, <em>Turner Broadcasting System v. FCC</em>, was settled in two separate cases. In <em><a href="https://supreme.justia.com/cases/federal/us/512/622/case.pdf">Turner I</a></em> (1994), the Supreme Court ruled that must-carry is content-neutral and only subject to intermediate scrutiny. In <em><a href="https://supreme.justia.com/cases/federal/us/520/180/case.pdf">Turner II</a></em> (1997), the Supreme Court ruled that must-carry survives intermediate scrutiny.</p><p>As for the TikTok bill, there is nothing new under the sun. In <a href="https://www.technicalassistance.io/p/why-the-tiktok-bill-doesnt-violate">part I</a>, we showed that the TikTok bill is content-neutral and only subject to intermediate scrutiny. In part II, we&#8217;ll show that this bill survives intermediate scrutiny. With the TikTok bill on the verge of becoming a law, some will use legal disputes to relitigate the policy disputes that they lost in Congress. That didn&#8217;t work in <em>Turner II</em>, and it won&#8217;t work here. The courts won&#8217;t replace Congress&#8217;s policy judgments with their own policy judgments.</p><h2>Intermediate Scrutiny: How It Works</h2><p>What is intermediate scrutiny? Here is the definition from <em>Turner II</em>:</p><div class="pullquote"><p>A content-neutral regulation will be sustained under the First Amendment if it advances important governmental interests unrelated to the suppression of free speech and does not burden substantially more speech than necessary to further those interests.</p></div><p><em>Turner II</em> did not just establish the criteria for intermediate scrutiny, though; it also established how courts evaluate those criteria. While some treat the First Amendment like a magical trump card that overturns laws, that&#8217;s not how intermediate scrutiny works (though it could be an accurate description of how strict scrutiny works).</p><p>Before we explain the details of how intermediate scrutiny works, let&#8217;s first explain why it creates a more favorable legal environment for the government.</p><p>First, content-neutral laws pose less of a threat to free speech: &#8220;Content-neutral regulations do not pose the same &#8216;inherent dangers to free expression&#8217; that content-based regulations do, and thus are subject to a less rigorous analysis, which affords the Government latitude in designing a regulatory solution.&#8221; Second, the courts are not a policy-making institution; Congress is the policy-making institution. This is separation of powers 101: &#8220;We are not at liberty to substitute our judgment for the reasonable conclusion of a legislative body.&#8221;</p><p>(The courts, however, will assert their judicial power on constitutional questions&#8212;including the question of whether a law is content-based or content-neutral. Nonetheless, if a court&#8212;after carefully scrutinizing a law&#8212;does determine that a law is content-neutral, the legal landscape will become more favorable to the government.)</p><p>Since the courts will defer to Congress on policy matters, under intermediate scrutiny, the courts will not make their own policy judgment as to which side had the better evidence. 
They will instead ask if Congress had substantial evidence: &#8220;Our sole obligation is &#8216;to assure that, in formulating its judgments, Congress has drawn reasonable inferences based on substantial evidence.&#8217; &#8221;</p><p>This core principle also applies when conflicting evidence exists: &#8220;The Constitution gives to Congress the role of weighing conflicting evidence in the legislative process.&#8221; So long as Congress&#8217;s conclusion was reasonable and supported by substantial evidence, the existence of other possible conclusions does not negate Congress&#8217;s judgment&#8212;even if those other conclusions were also reasonable.</p><p>Moreover, the courts have recognized that Congress&#8217;s authority to make policy judgments includes the authority to make predictive judgments: &#8220;A fundamental principle of legislation is that Congress is under no obligation to wait until the entire harm occurs but may act to prevent it.&#8221;</p><p>The First Amendment operates differently under intermediate scrutiny. The concluding sentences of <em>Turner II</em> concisely state what it does (and does not) require:</p><div class="pullquote"><p>We cannot displace Congress&#8217; judgment respecting content-neutral regulations with our own, so long as its policy is grounded on reasonable factual findings supported by evidence that is substantial for a legislative determination. Those requirements were met in this case, and in these circumstances the First Amendment requires nothing more.</p></div><h2>Important Governmental Interests</h2><p>Does TikTok pose a national security risk? When he <a href="https://www.cbsnews.com/news/tiktok-cybersecurity-china-60-minutes-2020-11-15/">appeared on 60 Minutes</a>, former CIA officer Klon Kitchen framed the issue perfectly:</p><blockquote><p>Imagine you woke up tomorrow morning and you saw a news report that China had distributed 100 million sensors around the United States, and that any time an American walked past one of these sensor, this sensor automatically collected off of your phone your name, your home address, your personal network, who you're friends with, your online viewing habits and a whole host of other pieces of information. Well, that's precisely what TikTok is. It has 100 million U.S. users, it collects all of that information.</p></blockquote><p>When Congress is allowed to make predictive judgments, and it only needs to show that it acted reasonably based on substantial evidence, it should have an easy time proving the first prong: that the bill advances an important government interest.</p><p>When it comes to the threat posed by TikTok, there certainly won&#8217;t be a shortage of evidence, either. On the right, FCC Commissioner Brendan Carr has compiled an excellent Twitter thread of <a href="https://x.com/BrendanCarrFCC/status/1765823031966904671">evidence</a>. On the left, Sen. Maria Cantwell (D-WA) and Sen. Mark Warner (D-VA) engaged in a <a href="https://twitter.com/michaelsobolik/status/1782850001107865984">colloquy</a> on the Senate floor that also laid out the substantial evidence for the TikTok bill.</p><p>Critics, on the other hand, frequently ignore this evidence when they try to argue the contrary. 
For example, even after TikTok was caught red-handed <a href="https://www.forbes.com/sites/emilybaker-white/2022/12/22/tiktok-tracks-forbes-journalists-bytedance/">spying on journalists</a>, Jennifer Huddleston and Paul Matzko of the Cato Institute <a href="https://www.cato.org/commentary/protect-free-speech-congress-should-consider-alternatives-banning-tiktok#">still claim</a> that the evidence for the TikTok bill is only &#8220;mere suspicion that TikTok might someday be used to monitor American citizens.&#8221; This is not a serious argument.</p><p>Writing for Lawfare, Adam Chan also notes that the US government has a <a href="https://www.lawfaremedia.org/article/why-tiktok-s-victory-in-montana-might-be-bad-news-for-the-platform">very strong case</a> on this first prong. In particular, he cites precedents relating to national security (<em>Holder v. Humanitarian Law Project</em> (2010)) and foreign influence in elections (<em>Bluman v. FEC</em> (2011)) where the Supreme Court even upheld content-based laws.</p><h2>Narrow Tailoring</h2><p>Realistically, the question of whether the TikTok bill can survive intermediate scrutiny is going to center on the second prong: narrow tailoring. As <em>Turner II</em> noted, narrow tailoring under intermediate scrutiny is very different from strict scrutiny: &#8220;we will not invalidate the preferred remedial scheme because some alternative solution is marginally less intrusive on a speaker&#8217;s First Amendment interests.&#8221;</p><p>In <em>Turner II</em>, the Supreme Court also evaluated the effectiveness of proposed alternatives to must-carry. For example, it rejected a leased-access regime as a narrowly tailored alternative to must-carry, as &#8220;it would not be as effective in achieving Congress&#8217; further goal of ensuring that significant programming remains available for the 40 percent of American households without cable.&#8221;</p><p>As a corollary, under intermediate scrutiny, the government decides to what degree it will promote its legitimate interests: &#8220;the validity of its determination &#8216; &#8220;does not turn on a judge&#8217;s agreement with the responsible decisionmaker concerning&#8221; . . . the degree to which [the Government&#8217;s] interests should be promoted.&#8217; &#8221;</p><p>So long as Congress &#8220;does not burden substantially more speech than necessary,&#8221; the courts will defer to the policy judgments of Congress.</p><p><strong>Divest First, Ban Second</strong></p><p>With the TikTok bill, divestment is the first option; a ban is the second option. This strategy has significant legal consequences. If a ban were the first option, perhaps one could argue that it burdens more speech than necessary&#8212;and that divestment is a narrowly tailored alternative. You can&#8217;t make that same argument if the Chinese Communist Party refuses to divest, though. At that point, a ban is necessary.</p><p><strong>The Real Problem: The Chinese Communist Party</strong></p><p>Are there narrowly tailored alternatives to divestment? Since the real problem with TikTok is the Chinese Communist Party, the answer is no.</p><p>Some have tried to narrowly portray the problem as a privacy issue, but the problems created by the CCP&#8217;s control of TikTok exist along multiple dimensions: privacy, child safety, espionage, and foreign interference in elections, among others.</p><p>With some dimensions, such as privacy, the problem is the CCP&#8217;s access to US user data. 
With other dimensions, such as child safety, the problem is the CCP&#8217;s control of the algorithms. For example, 16-year-old Chase Nasca committed suicide after TikTok&#8217;s algorithms fed him <a href="https://nypost.com/2023/03/23/parents-of-li-suicide-teen-break-down-during-tiktok-hearins-on-capitol-hill/">over 1,000 unsolicited videos</a> promoting violence and suicide. While every social media platform has child safety issues, TikTok is the worst of the worst. The Chinese Communist Party does not care about dead American kids.</p><p>Thus, a national privacy law is not a narrowly tailored alternative, as it only deals with one dimension: privacy. It won&#8217;t fix the algorithms that are killing our kids.</p><p>And even if the only problem with TikTok was privacy, a national privacy law would still not be a narrowly tailored alternative for another reason. While a national privacy law would be an effective deterrent for a truly private company like Facebook, it would not be an effective deterrent for the CCP. After all, we already have intellectual property (IP) laws on the books, but that has not deterred <a href="https://saisreview.sais.jhu.edu/how-chinas-political-system-discourages-innovation-and-encourages-ip-theft/">IP theft</a> from China.</p><p><strong>Project Texas</strong></p><p>Finally, some have suggested that a narrowly tailored alternative is Project Texas, which would put US user data in a <a href="https://www.youtube.com/watch?v=zDgRRVpemLo">lockbox</a> located in the USA. Speaking as an engineer, this option is not viable.</p><p>The short answer here is that you can only trust Project Texas if you trust the CCP; it suffices to say that you cannot trust the CCP. Nonetheless, here is the long answer.</p><p>Under Chinese law, if the lockbox is located in China, or if it is owned by a Chinese company, the CCP can make you open the lockbox. TikTok&#8217;s parent company, ByteDance, is a Chinese company that must obey Chinese law. Putting the lockbox in America does not fully solve the problem. If ByteDance still has a key to the lockbox, it does not matter whether the lockbox is located in China or America.</p><p>Moreover, as an engineer, if you asked me to do a threat model for a lockbox, I would ask you, &#8220;Who designed the lockbox?&#8221; If the answer is a Chinese company, that&#8217;s your critical vulnerability right there. Since ByteDance controls TikTok, Project Texas is a lockbox that is designed by China. Lockbox rejected.</p><p>One consultant working on Project Texas <a href="https://www.buzzfeednews.com/article/emilybakerwhite/tiktok-tapes-us-user-data-china-bytedance-access">said</a>, &#8220;I feel like with these tools, there&#8217;s some backdoor to access user data in almost all of them, which is exhausting.&#8221; Reporting from the Wall Street Journal likewise confirmed that Project Texas was a &#8220;porous&#8221; system that <a href="https://www.wsj.com/tech/tiktok-pledged-to-protect-u-s-data-1-5-billion-later-its-still-struggling-cbccf203">does not live up</a> to its promises: &#8220;Employees say ByteDance managers continue to request U.S. data.&#8221;</p><p>(It's not just ByteDance, either. Backdoors have been found on <a href="https://www.theverge.com/2013/7/30/4570780/lenovo-reportedly-banned-by-mi6-cia-over-chinese-hacking-fears">devices from Lenovo</a>, a Chinese company, as far back as 2013. 
<a href="https://www.wsj.com/articles/u-s-officials-say-huawei-can-covertly-access-telecom-networks-11581452256">Telecom equipment by Huawei</a>, a Chinese company, had backdoors that let them covertly access US telecom networks.)</p><p>Even if ByteDance did not have a key to the lockbox, what if an American employee removes data from the lockbox and shares it with ByteDance? One TikTok employee had a manager in Seattle on paper, but <a href="https://fortune.com/2024/04/15/tiktok-china-data-sharing-bytedance-project-texas/">actually reported</a> to a ByteDance executive in Beijing: &#8220;Nearly every 14 days . . . he emailed spreadsheets filled with data for hundreds of thousands of U.S. users to ByteDance workers in Beijing.&#8221;</p><p>In cybersecurity, you often assume vulnerabilities will be exploited; even if you can&#8217;t find the exploit, someone else will. Project Texas's vulnerabilities are downstream of the real vulnerability: it&#8217;s designed by China.  It&#8217;s foolish to assume that China will not exploit that. Once again, the real problem is the Chinese Communist Party.</p><p>If the Chinese Communist Party refuses to divest TikTok, TikTok delenda est. </p>]]></content:encoded></item><item><title><![CDATA[Why the TikTok Bill Doesn't Violate the First Amendment (Part I)]]></title><description><![CDATA[Content-neutral laws receive less scrutiny than content-based laws.]]></description><link>https://www.technicalassistance.io/p/why-the-tiktok-bill-doesnt-violate</link><guid isPermaLink="false">https://www.technicalassistance.io/p/why-the-tiktok-bill-doesnt-violate</guid><dc:creator><![CDATA[Mike Wacker]]></dc:creator><pubDate>Tue, 16 Apr 2024 13:03:08 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!q3Pb!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3e054d1-137a-47bd-a4af-c9e462fb84f4_256x256.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Whenever a bill that touches social media is proposed, certain critics will almost reflexively argue that this bill violates the First Amendment. While it is true that social media is a forum for expression&#8212;and that bills regulating social media receive heightened scrutiny under the First Amendment&#8212;that does not mean a bill violates the First Amendment just because it touches social media.</p><p>For example, the Supreme Court upheld the FCC&#8217;s must-carry rules&#8212;rules that force cable companies to carry local broadcast stations&#8212;even though cable is another forum for expression protected by the First Amendment. Likewise, the TikTok bill, which would force the Chinese Communist Party (CCP) to divest TikTok&#8212;and would ban TikTok if the CCP refuses to divest&#8212;does not violate the First Amendment.</p><p>In part I, we&#8217;ll cover two key distinctions: between using TikTok and owning TikTok, and between a content-based and content-neutral law. The TikTok bill would still let the Chinese Communist Party use TikTok, but it would restrict their ability to own and control TikTok. 
Moreover, since the TikTok bill distinguishes between social media platforms based on foreign ownership&#8212;not based on the speech on the platform&#8212;it is a content-neutral bill only subject to intermediate scrutiny.</p><p>We&#8217;ll also cover how this bill differs from two past attempts to regulate TikTok: then-President Trump&#8217;s executive order to ban TikTok in 2020, and a Montana law that banned TikTok in 2023. Trump&#8217;s efforts failed because he did not have the legal authority to ban TikTok&#8212;not because he violated the First Amendment. Montana&#8217;s efforts failed because foreign policy is the federal government's exclusive domain. The obvious solution to both problems is a federal bill that would force TikTok to divest&#8212;and then ban TikTok if they refuse to divest.</p><h2>Using TikTok vs. Owning TikTok</h2><p>While the CCP has the right to use TikTok under the First Amendment, it does not have the &#8220;right&#8221; to own TikTok under the First Amendment.</p><p>During the Cold War, the Supreme Court ruled in <em><a href="https://supreme.justia.com/cases/federal/us/381/301/">Lamont v. Postmaster General</a></em><a href="https://supreme.justia.com/cases/federal/us/381/301/"> (1965)</a> that the Soviet Union had a First Amendment right to distribute &#8220;communist political propaganda&#8221; via the United States Postal Service (USPS). Likewise, the CCP today likely has a right to create a TikTok account and use it to spread its propaganda.</p><p>Critics such as Jameel Jaffer of the Knight Institute, however, have cited <em>Lamont</em> to make a <a href="https://www.nytimes.com/2023/03/24/opinion/tiktok-ban-first-amendment.html">very different argument</a>: that the CCP has the right to own and control TikTok. <em>Lamont</em> said that a Soviet company could use the USPS; it never said that a Soviet company could own the USPS.</p><p>There&#8217;s no private sector in communist China. TikTok&#8217;s parent company, ByteDance, is a Chinese company. Under Chinese law&#8212;such as the <a href="https://cs.brown.edu/courses/csci1800/sources/2017_PRC_NationalIntelligenceLaw.pdf">2017 National Intelligence Law</a>&#8212;ByteDance must obey the dictates of the CCP. The lines that divide the public and private sectors in America do not exist in China. People who even criticize the CCP, such as Chinese tech icon Jack Ma, have a habit of mysteriously <a href="https://www.wired.com/story/jack-ma-isnt-back/">&#8220;disappearing.&#8221;</a></p><p>And in America, one TikTok employee, Evan Turner, was &#8220;assigned&#8221; to a manager in Seattle&#8212;a manager he never met&#8212;but actually reported to a ByteDance executive in Beijing whom he met with weekly. As <a href="https://fortune.com/2024/04/15/tiktok-china-data-sharing-bytedance-project-texas/">Fortune reported</a>, &#8220;Nearly every 14 days, as part of Turner&#8217;s job throughout 2022, he emailed spreadsheets filled with data for hundreds of thousands of U.S. users to ByteDance workers in Beijing.&#8221;</p><p>As Alec Stapp of the Institute for Progress (and many others) have <a href="https://twitter.com/AlecStapp/status/1493368858299691015">pointed out</a>, it would be unthinkable to let the Soviet Union own ABC, NBC, or CBS during the Cold War. 
So why would we let the Chinese Communist Party control TikTok today?</p><p>And to that point, the US does have laws <a href="https://www.law.cornell.edu/uscode/text/47/310">restricting foreign ownership</a> of mass communications media such as radio and broadcast&#8212;laws that have not been struck down as unconstitutional. A similar law restricting foreign ownership of social media&#8212;another mass communications media&#8212;would also be constitutional.</p><h2>Content-Based vs. Content-Neutral</h2><p>When critics claim a bill violates the First Amendment, they often use the following template. First, they argue that since the bill regulates speech, it is subject to strict scrutiny. Then, they argue that the bill does not survive strict scrutiny&#8212;an easy argument since strict scrutiny is often &#8220;strict in theory, fatal in fact.&#8221;</p><p>When the Supreme Court upheld the FCC&#8217;s must-carry rules, however, they applied intermediate scrutiny to those rules. While content-based laws are subject to strict scrutiny, a content-neutral law is only subject to intermediate scrutiny.</p><p>What&#8217;s the difference between a content-based law and a content-neutral law? That line is not always easy to draw, but &#8220;[a]s a general rule, laws that by their terms distinguish favored speech from disfavored speech on the basis of the ideas or views expressed are content based,&#8221; to quote <em><a href="https://supreme.justia.com/cases/federal/us/512/622/case.pdf">Turner Broadcasting System v FCC</a></em><a href="https://supreme.justia.com/cases/federal/us/512/622/case.pdf"> (1994)</a>.</p><p>(There are actually two <em>Turner</em> cases. In <em>Turner I</em> (1994), the Supreme Court ruled that the FCC&#8217;s must-carry rules are subject to intermediate scrutiny. In <em>Turner II</em> (1997), the Supreme Court ruled that these must-carry rules also survive intermediate scrutiny.)</p><p>As the <a href="https://supreme.justia.com/cases/federal/us/520/180/case.pdf">Supreme Court said</a> in <em>Turner II</em>, &#8220;A content-neutral regulation will be sustained under the First Amendment if it advances important governmental interests unrelated to the suppression of free speech and does not burden substantially more speech than necessary to further those interests.&#8221; Under strict scrutiny, by contrast, the law must advance a compelling government interest, and it must use the least restrictive means to advance that interest.</p><p><strong>The TikTok Bill is Content-Neutral</strong></p><p>The TikTok bill is content-neutral for a simple reason: it distinguishes between social media platforms based on foreign ownership, not based on the speech on the platform.</p><p>Many critics, however, will implicitly or explicitly assume that strict scrutiny applies, even though that assumption is not warranted. The <a href="https://www.cato.org/blog/could-latest-tiktok-ban-pass-constitutional-muster">most egregious example</a> comes from Jennifer Huddleston of the Cato Institute:</p><blockquote><p>Under First Amendment precedents, the government will need to prove that forced divestment or otherwise banning of the app is both based on a&nbsp;compelling government interest and represents the least restrictive means of advancing that interest. 
In December, a&nbsp;federal district court <a href="https://www.cato.org/commentary/courts-reversal-montanas-tiktok-ban-should-be-warning">enjoined a&nbsp;TikTok ban</a> in Montana on First Amendment grounds as it was &#8220;unlikely to pass even intermediate scrutiny.&#8221;</p></blockquote><p>Here, Huddleston claims strict scrutiny applies without providing evidence that the bill is content-based. The court case that Huddleston cites disagrees. In that case, TikTok argued Montana&#8217;s law was content-based, while Montana argued it was content-neutral. While Judge Donald Molloy did not issue a definitive decision, he <a href="https://s3.documentcloud.org/documents/24180112/tiktok_injunction.pdf">did say</a> that Montana &#8220;is closer to the legal mark.&#8221;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>Moreover, Huddleston omits a key reason why Montana&#8217;s law did not survive intermediate scrutiny: &#8220;Montana does not have constitutional authority in the field of foreign affairs.&#8221; Foreign policy is the exclusive domain of the federal government. As a result, &#8220;the law&#8217;s foreign policy purpose is not an important Montana state interest.&#8221; </p><p>In the context of Huddleston&#8217;s argument on whether the federal government has &#8220;a&nbsp;compelling national security interest at stake,&#8221; it makes no sense to cite a case that says nothing about that question&#8212;other than to say that national security is only a federal interest and not a state interest. Writing for Lawfare, Adam Chan also arrived at a <a href="https://www.lawfaremedia.org/article/why-tiktok-s-victory-in-montana-might-be-bad-news-for-the-platform">similar conclusion</a>: &#8220;When it comes to First Amendment analysis, virtually none of Molloy&#8217;s intermediate scrutiny analysis would likely hinder a federal ban.&#8221;</p><p>If any conclusion can be drawn from the Montana case, it is that this issue can only be solved on the federal level&#8212;where the federal government is in a good position to win.</p><p><strong>Targeting Social Media is Content-Neutral</strong></p><p>The TikTok bill applies not just to TikTok, but to any social media app controlled by China, Russia, Iran, or North Korea. But what about other apps that are foreign-controlled? If the bill only targets social media apps but not other apps, is that content-based? In short, no.</p><p>First, cable operators once made a similar argument; they argued that must-carry rules were content-based because they targeted cable but not other mediums of communication. The Supreme Court rejected that argument in <em>Turner I</em>: &#8220;It would be error to conclude, however, that the First Amendment mandates strict scrutiny for any speech regulation that applies to one medium (or a subset thereof) but not others.&#8221;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p><p>Quoting <em><a href="https://supreme.justia.com/cases/federal/us/420/546/">Southeastern Promotions v. Conrad</a></em><a href="https://supreme.justia.com/cases/federal/us/420/546/"> (1975)</a>, the court also said, &#8220;Each medium of expression . . . 
must be assessed for First Amendment purposes by standards suited to it, for each may present its own problems.&#8221; Few would doubt today that social media is a unique medium with its own problems.</p><p>Second, there is a logical reason why the bill only applies to social media. When then-President Trump tried to ban TikTok in 2020, the courts blocked that ban&#8212;not on the basis that Trump violated the First Amendment, but on the basis that he exceeded his legal authority under the International Emergency Economic Powers Act (IEEPA).<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a></p><p>While the <a href="https://www.law.cornell.edu/uscode/text/50/1702">IEEPA</a> does give the President the power to deal with foreign-controlled apps in general terms, it also has a &#8220;personal communications&#8221; exception and an &#8220;information materials&#8221; exception. (The &#8220;information materials&#8221; exception is also known as the Berman Amendment.) The court ruled that both exceptions <a href="https://s3.documentcloud.org/documents/20421001/2020-12-07-memorandum-dckt-60_0.pdf">applied to TikTok</a>&#8212;and to social media more broadly.</p><p>Since the IEEPA does not apply to social media apps, it would logically make sense to create a separate law for social media apps. The IEEPA can still be used, however, for other apps that are not subject to the &#8220;personal communications&#8221; or &#8220;information materials&#8221; exceptions; there&#8217;s no need to create a new law for those apps.</p><h2>Intermediate Scrutiny (Part II)</h2><p>Now that we have established that the TikTok bill is a content-neutral bill only subject to intermediate scrutiny, we must ask this question next: can the bill survive intermediate scrutiny? In short, it can, but we&#8217;ll cover that in part II&#8230;</p><p><em>You can read part II <a href="https://www.technicalassistance.io/p/why-the-tiktok-bill-doesnt-violate-cf2">here</a>.</em></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Judge Molloy also rejected an argument that Montana&#8217;s law is a prior restraint, which is why I don&#8217;t cover that argument in this piece.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Conversely, some have proposed that the bill should be narrowed to cover only TikTok and not social media more broadly, but that change would raise First Amendment concerns: &#8220;Regulations that discriminate among media, or among different speakers within a single medium, often present serious First Amendment concerns.&#8221;</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>The court also ruled that the Trump administration had violated the Administrative Procedure Act (APA). 
</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[Cryptography in Politics: A Crash Course]]></title><description><![CDATA[A crash course for non-engineers, written by an engineer.]]></description><link>https://www.technicalassistance.io/p/cryptography-in-politics-a-crash</link><guid isPermaLink="false">https://www.technicalassistance.io/p/cryptography-in-politics-a-crash</guid><dc:creator><![CDATA[Mike Wacker]]></dc:creator><pubDate>Thu, 04 Apr 2024 13:02:27 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!q3Pb!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3e054d1-137a-47bd-a4af-c9e462fb84f4_256x256.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In the debate over online age verification, some tech policy experts <a href="https://www.rstreet.org/commentary/the-technology-to-verify-your-age-without-violating-your-privacy-does-not-exist/">have said</a>, &#8220;The technology to verify your age without violating your privacy does not exist.&#8221; That claim is certainly news to me; I&#8217;m an engineer who <a href="https://github.com/mikewacker/age-verification">built a proof-of-concept</a> for privacy-conscious age verification.</p><p>In tech policy, you will at times inevitably wade into the realm of cryptography&#8212;an area where you often need technical expertise. Tech policy experts, however, often have a humanities degree and a policy job where they specialize in tech policy. They don&#8217;t have the right type of expertise.</p><p>Nonetheless, if you have a job that touches tech policy (such as a congressional staffer), you often have to make policy judgments&#8212;whether you have the needed expertise or not. Thus, I&#8217;ve written this crash course to help you make better judgments&#8212;and feel less lost along the way.</p><p>The goal of this crash course is not to tell you what to think; it&#8217;s to teach you how to think. (You&#8217;ll also learn why passwords should include at least eight characters, an uppercase letter, a lowercase letter, a number, a special character, etc.)</p><h2>Tools, Tasks, and Protocols</h2><p>Hashing, blockchain, and zero-knowledge proofs, oh my! You may have seen these terms thrown around, but what do they mean? Rather than answer that question, let&#8217;s take a step back and start with something simpler: tools, tasks, and protocols.</p><p>Let&#8217;s start with <strong>tasks</strong>. What useful thing are you trying to accomplish with technology? Here are some examples of tasks:</p><ul><li><p>Two parties send encrypted messages to each other.</p></li><li><p>One party verifies that a document was digitally signed by another party.</p></li><li><p>One party verifies their age for another party.</p></li></ul><p>Once we establish what the task is, we design a <strong>protocol</strong>: a precise set of instructions where two or more parties communicate to accomplish a task. (A protocol is essentially a communications algorithm.)</p><p>To build a protocol, we will have various <strong>tools</strong> at our disposal. Hashing is a tool. Blockchain is a tool. Zero-knowledge proofs are a category of tools.</p><p>Tools are used to build the product; tools are not the product. In cryptography, the product we are building is a protocol.</p><p><strong>An Example Protocol</strong></p><p>To make these concepts more concrete, let&#8217;s design a protocol that is accessible to beginners. 
For this protocol, the task will be sending encrypted messages.</p><p><em>The Tool: Caesar Cipher</em></p><p>A Caesar cipher takes every letter of a message and shifts it forward in the alphabet. If you go past the end of the alphabet, you start over at A.</p><p>For example, with a shift value of 3, here is how we shift the letters:</p><ul><li><p>A becomes D: A&#8594;B&#8594;C&#8594;D</p></li><li><p>B becomes E: B&#8594;C&#8594;D&#8594;E</p></li><li><p>Z becomes C: Z&#8594;A&#8594;B&#8594;C</p></li></ul><p>If our message is <code>HELLO WORLD</code>, and we use a Caesar cipher with a shift of 4, the new &#8220;message&#8221; will be <code>LIPPS ASVPH</code>. (For example, H becomes L: H&#8594;I&#8594;J&#8594;K&#8594;L.)</p><p><em>The Protocol</em></p><p>Protocols usually rely on a <strong>key</strong>, which is a secret value. For a Caesar cipher, the key is the shift value. Here is the protocol to encrypt messages:</p><div><hr></div><p>Beforehand&#8212;when nobody can eavesdrop on them&#8212;Alice and Bob agree on the key: the shift value for a Caesar cipher. Let&#8217;s say that they agree on a shift of 4.</p><p>Alice sends Bob a message (or vice-versa):</p><ol><li><p>Alice creates a message: <code>HELLO WORLD</code></p></li><li><p>Alice shifts every letter forward 4 spots. <code>HELLO WORLD</code> becomes <code>LIPPS ASVPH</code></p></li><li><p>Alice sends the encrypted message, <code>LIPPS ASVPH</code>, to Bob.</p></li><li><p>Bob shifts every letter backward 4 spots. <code>LIPPS ASVPH</code> becomes <code>HELLO WORLD</code></p></li></ol><div><hr></div><p>If a third person, Eve, eavesdrops on the conversation, the &#8220;message&#8221; that she will see is jumbled letters: <code>LIPPS ASVPH</code>.</p>
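<p>To make the protocol concrete, here is a minimal sketch of it in Python. (The code and its function names are my own illustration, not production cryptography; no real protocol should ever use a Caesar cipher.)</p><pre><code># A minimal sketch of the Caesar-cipher protocol (illustration only).

def shift_letter(letter, shift):
    """Shifts one letter forward in the alphabet, wrapping Z around to A."""
    if not letter.isalpha():
        return letter  # leave spaces and punctuation alone
    return chr(ord("A") + (ord(letter) - ord("A") + shift) % 26)

def encrypt(message, key):
    """Alice shifts every letter forward by the key."""
    return "".join(shift_letter(letter, key) for letter in message.upper())

def decrypt(message, key):
    """Bob shifts every letter backward by the key."""
    return encrypt(message, -key)

# Beforehand, Alice and Bob agree on the key: a shift of 4.
key = 4
ciphertext = encrypt("HELLO WORLD", key)  # "LIPPS ASVPH"
plaintext = decrypt(ciphertext, key)      # "HELLO WORLD"
</code></pre><p>Eve sees only <code>LIPPS ASVPH</code>; without the key, she has to guess. (As we&#8217;ll see shortly, guessing is not hard here.)</p>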
Both protocols use the same tool&#8212;hashing&#8212;but one protocol misuses that tool.</p><p>The more you think at the level of protocols (and not at the level of tools), the more persuasive your arguments will be.</p><p>This insight applies in both directions, too. If you want to argue that a certain task is not possible, then what is the key challenge in designing a protocol for this task? Why are existing tools not capable of solving that challenge? (A humanities degree often will not give you the expertise needed to answer those questions.)</p><p><strong>Evaluating the Example Protocol</strong></p><p>As a practical example, let&#8217;s evaluate our protocol for sending encrypted messages using a Caesar cipher. It has two issues:</p><ol><li><p>Beforehand&#8212;when nobody could eavesdrop on them&#8212;Alice and Bob agreed on the key: the shift value for the Caesar cipher. What happens when Alice and Bob cannot agree on a key beforehand?</p></li><li><p>There are only 25 possible keys. If Eve intercepts the encrypted message, <code>LIPPS ASVPH</code>, she can try to decrypt it with every possible shift value (1, 2, 3, &#8230;). Eventually, one of those shift values will work.</p></li></ol><p>In the real world, we would use a different protocol for this task. And if two parties cannot agree on a key beforehand, we can use additional tools for key agreement: a way for Alice and Bob to agree on a key without revealing that key to an eavesdropper.</p><h2>Negotiating the Requirements</h2><p>Let&#8217;s use age verification as a case study on defining and negotiating the requirements. Here, critics will frequently raise this point: kids will find a way to bypass age verification.</p><p>That point is technically correct but practically useless. What percentage of kids bypass age verification? Is it 0.5% of kids, or 50% of kids?</p><p>When you define the requirements for a task, many requirements will not be all-or-nothing. Instead of demanding perfection, you will determine what is good enough. There&#8217;s a saying that you don&#8217;t let the perfect be the enemy of the good.</p><p>Engineers in particular typically do not talk about 100%. Instead, they talk about the &#8220;number of 9s.&#8221; For example, 99.9% would be three 9s. Amazon S3, a cloud storage service, even promises <a href="https://aws.amazon.com/s3/storage-classes/">eleven 9s</a> of durability.</p><p>So how often should an age verification system stop kids? Do we need eleven 9s? Probably not. Is one 9 (90%) a reasonable request? Yes. (Even if age verification stopped kids only 75% of the time, that would still be a major policy victory.)</p><p>That leads to a key point: there often is room for reasonable negotiation on the requirements. In some cases&#8202;&#8212;&#8202;especially when cryptography gets involved&#8202;&#8212;&#8202;a seemingly intractable technical challenge can become easily solvable if you make a reasonable concession.</p><p>Tradeoffs are common in engineering. If you asked for age verification with eleven 9s of effectiveness, it would be extremely challenging to build that in a privacy-conscious way. If you only asked for one 9, the privacy challenges become much easier to solve.</p><h2>A Good Protocol: Password Authentication</h2><p>As a real-world example, let&#8217;s look at one task&#8202;&#8212;&#8202;password authentication&#8202;&#8212;&#8202;and the protocol we use to accomplish this task.</p><p><em>The Tool: Hashing</em></p><p>This protocol will use one key tool: hashing. 
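</p><p>As a preview of that tool in action, here is a short sketch using Python&#8217;s standard hashlib module, matching the conventions in the examples that follow: UTF-8 input, SHA-256, hex-encoded output.</p><pre><code>import hashlib

# Hash a password; the output matches the example below.
digest = hashlib.sha256("MyPassword".encode("utf-8")).hexdigest()
print(digest)  # dc1e7c03e162397b355b6f1c895dfdf3790d98c10b920c55e91272b8eecada2a
</code></pre><p>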
Hashing can create a &#8220;digital fingerprint&#8221; for any piece of data&#8212;such as a password, a Word document, or a video file.</p><ul><li><p>The input of a hash function is arbitrary data of any size.</p></li><li><p>The output is a short piece of data, which is our digital fingerprint (also known as a hash or a hash value).</p></li></ul><p>Here is an example where the input is a password:<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><ul><li><p>Input: <code>MyPassword</code></p></li><li><p>Output/hash value: <code>dc1e7c03e162397b355b6f1c895dfdf3790d98c10b920c55e91272b8eecada2a</code></p></li></ul><p>If the input was a different password, or if the input was a large Word document, the output would be different, but it would have the same length: 64 characters.</p><p>Just like each person has a unique fingerprint, each input produces a unique hash value. Two different passwords (or two different Word documents) will never have the same hash value.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p><p>Hashing is also a one-way operation. If all you know is the hash, it is impossible to figure out which input produced that hash&#8202;&#8212;&#8202;unless you get lucky and guess the input. For example, if I know the hash of your password (<code>dc1e7c03e162397b355b6f1c895dfdf3790d98c10b920c55e91272b8eecada2a</code>), I cannot reverse-engineer it to obtain your password (<code>MyPassword</code>).</p><p><em>The Protocol</em></p><p>So how does password authentication work? Here&#8217;s the basic protocol:<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a></p><div><hr></div><p>A user sets/resets their password:</p><ol><li><p>The user sends their password to a site (e.g., <code>MyPassword</code>).</p></li><li><p>The site computes the hash of that password (e.g., <code>dc1e7c03e162397b355b6f1c895dfdf3790d98c10b920c55e91272b8eecada2a</code>).</p></li><li><p>The site stores that hash in a database.</p></li></ol><p>A user logs in:</p><ol><li><p>The user sends their username and password to a site.</p></li><li><p>The site computes the hash of the password it just received.</p></li><li><p>The site retrieves the hash on file for that user.</p></li><li><p>If the two hashes match, the password is accepted.</p></li></ol><div><hr></div><p>If a data breach occurs, a hacker can steal the hash of your password, since that&#8217;s stored in the site&#8217;s database. But since hashing is a one-way operation, the hacker cannot reverse-engineer that hash to obtain your password.</p><p>(This protocol is missing one small yet important detail; we&#8217;ll return to that later.)</p><p><strong>A Meta-Point on Data Breaches</strong></p><p>Using this protocol as an example, we can also make a meta-point on data breaches: in some cases, a well-designed protocol can make certain guarantees even if a data breach occurs.</p><p>In this protocol for password authentication, your password cannot be stolen even if a data breach occurs. In my protocol for age verification, users cannot be de-anonymized even if a data breach occurs.</p><h2>A Bad Protocol</h2><p>Could we apply the same idea to Social Security numbers (SSNs)? Could a site use a similar protocol to store the hash of an SSN?</p><p>No. Even though we only changed one detail, the protocol is now fatally flawed. 
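</p><p>To see just how small that one detail is, here is a minimal sketch of the site&#8217;s side of the password protocol, written in Python. (Per the earlier footnote, real sites would also add salting; this sketch omits it, and the usernames and passwords are hypothetical.)</p><pre><code>import hashlib

db = {}  # username -> hex-encoded SHA-256 hash of the password

def hash_password(password):
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

def set_password(username, password):
    # The site stores only the hash, never the password itself.
    db[username] = hash_password(password)

def check_password(username, password):
    # Hash the password just received; compare it to the hash on file.
    return db.get(username) == hash_password(password)

set_password("alice", "MyPassword")
print(check_password("alice", "MyPassword"))  # True
print(check_password("alice", "WrongGuess"))  # False
</code></pre><p>The fatally flawed protocol for SSNs would reuse this code nearly verbatim; only the input being hashed would change.</p><p>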
While we are using the same tool&#8212;hashing&#8212;we are now misusing that tool.</p><p>Earlier, we said, &#8220;If all you know is the hash, it is impossible to figure out which input produced that hash&#8202;&#8212;&#8202;unless you get lucky and guess the input.&#8221; That last part raises an intriguing possibility: instead of trying to make a lucky guess, what if you guessed every possible input? That is a brute-force attack.</p><p>Let&#8217;s say that a data breach occurred, and a hacker learns the hash of your SSN: <code>72de837c74b40716d430c711eebde10ff965fcc4a70c98e63a233ff36eebd6a1</code>.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a></p><p>An SSN is a 9-digit number, so there are 1 billion possible SSNs. The hacker could compute the hash of all 1 billion SSNs&#8202;&#8212;&#8202;until they find the SSN that matches the stolen hash. How long would that take? On my laptop, I can calculate those billion hashes in a little over a minute; the matching SSN is 123-45-6789.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a></p><p>With hashing, you need to pay attention to how many possible inputs there are. If the input is 256 random bits, there are over 10<sup>77</sup> possible inputs: 1 followed by 77 0s.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> By comparison, there are about 10<sup>80</sup> atoms in the universe; a brute-force attack is impossible. With only 1 billion (10<sup>9</sup>) possible inputs, though, brute force will work.</p><p><strong>The Missing Detail for Password Authentication</strong></p><p>Our earlier protocol for password authentication was missing one key detail: the requirements for a valid password. Usually, passwords should include at least eight characters, an uppercase letter, a lowercase letter, a number, a special character, etc.</p><p>Those requirements exist because they expand the number of possible passwords, which guards against a brute-force attack. Brute force has a high chance of success for passwords under eight characters. (In practice, password length is the most important requirement. A 15-character password with only lowercase letters is much more secure than an 8-character password with all the special gadgets.)</p><h2>In Summary</h2><p>First, we establish what the task is (e.g., sending encrypted messages). This task will usually come with some requirements, though there often is room for reasonable negotiation on the requirements.</p><p>Next, we design a protocol. A protocol is a precise set of instructions where two or more parties communicate to accomplish a task. To build a protocol, we will have various tools at our disposal (e.g., hashing).</p><p>Tools are used to build the product; tools are not the product. The thing you want to evaluate is the protocol, not the tools. For cryptography in particular, you can definitely shoot yourself in the foot by misusing your tools. 
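</p><p>The SSN attack makes a fitting closing illustration of that foot-shooting, and it takes only a few lines. This is a sketch that assumes the encoding from the footnotes (the SSN stored as a 4-byte, little-endian integer); note that a naive pure-Python loop will run far slower than the timing quoted earlier, but the point here is the idea, not the speed.</p><pre><code>import hashlib

# The hash of an SSN, stolen in a hypothetical data breach.
stolen_hash = bytes.fromhex(
    "72de837c74b40716d430c711eebde10ff965fcc4a70c98e63a233ff36eebd6a1")

# Brute force: hash all 1 billion possible SSNs until one matches.
def find_ssn(target):
    for ssn in range(1_000_000_000):
        if hashlib.sha256(ssn.to_bytes(4, "little")).digest() == target:
            return ssn
    return None

print(find_ssn(stolen_hash))  # 123456789, per the example above
</code></pre><p>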
The protocol lets us see how we are using our tools&#8212;and whether we are misusing those tools.</p><p>The more you think at the level of protocols (and not at the level of tools), the more persuasive your arguments will be.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>The input has a UTF-8 encoding, and the hash function is SHA-256. The output is 256 bits; we use a hex encoding to encode these bits as text. Each character has 16 possible values (0-9, a-f) and can encode 4 bits: 2<sup>4</sup> = 16. Thus, 64 characters would encode 256 bits.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>To use SHA-256 as an example, when there are infinitely many possible inputs and only 2<sup>256</sup> possible outputs, it is technically correct that some inputs will have the same output; the term for that is a collision. However, the odds of finding any collision&#8202;&#8212;&#8202;much less a collision of practical significance&#8202;&#8212;&#8202;are vanishingly small.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>In practice, many sites will also use an additional tool, salting, but that&#8217;s beyond the scope of this article.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>We assume that the SSN is stored as a 4-byte, little-endian integer. As before, the hash function is SHA-256.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>Or, you could precompute all 1 billion hashes and store them in a database that can look up the SSN for any hash. This technique is called a dictionary attack.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>You should use a cryptographically strong random number generator, not an ordinary random number generator.</p></div></div>]]></content:encoded></item></channel></rss>