Why the TikTok Bill Doesn't Violate the First Amendment (Part II)
The courts are not the place to relitigate policy debates that you lost in Congress.
The TikTok bill, which would force the Chinese Communist Party (CCP) to divest TikTok—and would ban TikTok if the CCP refuses to divest—just passed the Senate and is on the verge of becoming law. But does this bill violate the First Amendment?
Social media may be a forum for expression, but that doesn’t mean that any bill that touches social media is unconstitutional. Cable is also a forum for expression, but despite that, the Supreme Court upheld the FCC’s must-carry rules—rules that force cable companies to carry local broadcast stations.
The dispute over must-carry, Turner Broadcasting System v. FCC, was resolved in two separate decisions. In Turner I (1994), the Supreme Court ruled that must-carry is content-neutral and subject only to intermediate scrutiny. In Turner II (1997), the Supreme Court ruled that must-carry survives intermediate scrutiny.
As for the TikTok bill, there is nothing new under the sun. In Part I, we showed that the TikTok bill is content-neutral and subject only to intermediate scrutiny. In Part II, we’ll show that this bill survives intermediate scrutiny. With the TikTok bill on the verge of becoming law, some will use legal disputes to relitigate the policy disputes that they lost in Congress. That didn’t work in Turner II, and it won’t work here. The courts won’t replace Congress’s policy judgments with their own.
Intermediate Scrutiny: How It Works
What is intermediate scrutiny? Here is the definition from Turner II:
A content-neutral regulation will be sustained under the First Amendment if it advances important governmental interests unrelated to the suppression of free speech and does not burden substantially more speech than necessary to further those interests.
Turner II did not just establish the criteria for intermediate scrutiny, though; it also established how courts evaluate those criteria. While some treat the First Amendment like a magical trump card that overturns laws, that’s not how intermediate scrutiny works (though it could be an accurate description of how strict scrutiny works).
Before we explain the details of how intermediate scrutiny works, let’s first explain why it creates a more favorable legal environment for the government.
First, content-neutral laws pose less of a threat to free speech: “Content-neutral regulations do not pose the same ‘inherent dangers to free expression’ that content-based regulations do, and thus are subject to a less rigorous analysis, which affords the Government latitude in designing a regulatory solution.” Second, the courts are not a policy-making institution; Congress is the policy-making institution. This is separation of powers 101: “We are not at liberty to substitute our judgment for the reasonable conclusion of a legislative body.”
(The courts, however, will assert their judicial power on constitutional questions—including the question of whether a law is content-based or content-neutral. Nonetheless, if a court—after carefully scrutinizing a law—does determine that a law is content-neutral, the legal landscape will become more favorable to the government.)
Since the courts will defer to Congress on policy matters, under intermediate scrutiny, the courts will not make their own policy judgment as to which side had the better evidence. They will instead ask if Congress had substantial evidence: “Our sole obligation is ‘to assure that, in formulating its judgments, Congress has drawn reasonable inferences based on substantial evidence.’ ”
This core principle also applies when conflicting evidence exists: “The Constitution gives to Congress the role of weighing conflicting evidence in the legislative process.” So long as Congress’s conclusion was reasonable and supported by substantial evidence, the existence of other possible conclusions does not negate Congress’s judgment—even if those other conclusions were also reasonable.
Moreover, the courts have recognized that Congress’s authority to make policy judgments includes the authority to make predictive judgments: “A fundamental principle of legislation is that Congress is under no obligation to wait until the entire harm occurs but may act to prevent it.”
The First Amendment operates differently under intermediate scrutiny. The concluding sentences of Turner II concisely state what it does (and does not) require:
We cannot displace Congress’ judgment respecting content-neutral regulations with our own, so long as its policy is grounded on reasonable factual findings supported by evidence that is substantial for a legislative determination. Those requirements were met in this case, and in these circumstances the First Amendment requires nothing more.
Important Governmental Interests
Does TikTok pose a national security risk? When he appeared on 60 Minutes, former CIA officer Klon Kitchen framed the issue perfectly:
Imagine you woke up tomorrow morning and you saw a news report that China had distributed 100 million sensors around the United States, and that any time an American walked past one of these sensors, this sensor automatically collected off of your phone your name, your home address, your personal network, who you're friends with, your online viewing habits and a whole host of other pieces of information. Well, that's precisely what TikTok is. It has 100 million U.S. users, it collects all of that information.
Given that Congress is allowed to make predictive judgments, and that it only needs to show that it acted reasonably based on substantial evidence, it should have an easy time proving the first prong: that the bill advances an important governmental interest.
When it comes to the threat posed by TikTok, there certainly won’t be a shortage of evidence, either. On the right, FCC Commissioner Brendan Carr has compiled an excellent Twitter thread of evidence. On the left, Sen. Maria Cantwell (D-WA) and Sen. Mark Warner (D-VA) engaged in a colloquy on the Senate floor that also laid out the substantial evidence for the TikTok bill.
Critics, on the other hand, frequently ignore this evidence when they argue the contrary. For example, even after TikTok was caught red-handed spying on journalists, Jennifer Huddleston and Paul Matzke of the Cato Institute still claim that the evidence for the TikTok bill is “mere suspicion that TikTok might someday be used to monitor American citizens.” This is not a serious argument.
Writing for Lawfare, Adam Chan also notes that the US government has a very strong case on this first prong. In particular, he cites precedents relating to national security (Holder v. Humanitarian Law Project (2010)) and foreign influence in elections (Bluman v. FEC (2011)) in which the Supreme Court upheld even content-based laws.
Narrow Tailoring
Realistically, the question of whether the TikTok bill can survive intermediate scrutiny is going to center on the second prong: narrow tailoring. As Turner II noted, narrow tailoring under intermediate scrutiny is very different from strict scrutiny: “we will not invalidate the preferred remedial scheme because some alternative solution is marginally less intrusive on a speaker’s First Amendment interests.”
In Turner II, the Supreme Court also evaluated the effectiveness of proposed alternatives to must-carry. For example, it rejected a leased-access regime as a narrowly tailored alternative to must-carry, as “it would not be as effective in achieving Congress’ further goal of ensuring that significant programming remains available for the 40 percent of American households without cable.”
As a corollary, under intermediate scrutiny, the government decides to what degree it will promote its legitimate interests: “the validity of its determination ‘ “does not turn on a judge’s agreement with the responsible decisionmaker concerning” . . . the degree to which [the Government’s] interests should be promoted.’ ”
So long as Congress “does not burden substantially more speech than necessary,” the courts will defer to the policy judgments of Congress.
Divest First, Ban Second
With the TikTok bill, divestment is the first option; a ban is the second option. This strategy has significant legal consequences. If a ban were the first option, perhaps one could argue that it burdens more speech than necessary—and that divestment is a narrowly tailored alternative. But you can’t make that argument if the Chinese Communist Party refuses to divest. At that point, a ban is necessary.
The Real Problem: The Chinese Communist Party
Are there narrowly tailored alternatives to divestment? Since the real problem with TikTok is the Chinese Communist Party, the answer is no.
Some have tried to narrowly portray the problem as a privacy issue, but the problems created by the CCP’s control of TikTok exist along multiple dimensions: privacy, child safety, espionage, and foreign interference in elections, among others.
With some dimensions, such as privacy, the problem is the CCP’s access to US user data. With other dimensions, such as child safety, the problem is the CCP’s control of the algorithms. For example, 16-year-old Chase Nasca committed suicide after TikTok’s algorithms fed him over 1,000 unsolicited videos promoting violence and suicide. While every social media platform has child safety issues, TikTok is the worst of the worst. The Chinese Communist Party does not care about dead American kids.
Thus, a national privacy law is not a narrowly tailored alternative, as it only deals with one dimension: privacy. It won’t fix the algorithms that are killing our kids.
And even if the only problem with TikTok was privacy, a national privacy law would still not be a narrowly tailored alternative for another reason. While a national privacy law would be an effective deterrent for a truly private company like Facebook, it would not be an effective deterrent for the CCP. After all, we already have intellectual property (IP) laws on the books, but that has not deterred IP theft from China.
Project Texas
Finally, some have suggested that Project Texas, which would put US user data in a lockbox located in the USA, is a narrowly tailored alternative. Speaking as an engineer, I can tell you that this option is not viable.
The short answer is that you can trust Project Texas only if you trust the CCP; suffice it to say that you cannot trust the CCP. Nonetheless, here is the long answer.
Under Chinese law, if the lockbox is located in China, or if it is owned by a Chinese company, the CCP can make you open the lockbox. TikTok’s parent company, ByteDance, is a Chinese company that must obey Chinese law. Putting the lockbox in America does not fully solve the problem. If ByteDance still has a key to the lockbox, it does not matter whether the lockbox is located in China or America.
Moreover, as an engineer, if you asked me to threat-model a lockbox, my first question would be, “Who designed the lockbox?” If the answer is a Chinese company, that’s your critical vulnerability right there. Since ByteDance controls TikTok, Project Texas is a lockbox designed by China. Lockbox rejected.
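For the engineers in the audience, here is a toy sketch of that threat model in Python. It is purely illustrative (the class and party names are hypothetical, and this is not how Project Texas is actually built); the point is simply that access hinges on who designed the box and who holds a key, not on where the box sits.

    from dataclasses import dataclass

    @dataclass
    class Lockbox:
        """Toy model of a data 'lockbox' (hypothetical; for illustration only)."""
        location: str        # where the data physically lives
        designer: str        # who built and controls the system
        key_holders: frozenset = frozenset()  # parties holding access keys

        def can_access(self, party: str) -> bool:
            # Location never enters the check: a party gets in if it holds a key,
            # or if it designed the system (and could therefore backdoor it).
            return party in self.key_holders or party == self.designer

    # A Project Texas-style setup: data hosted in the USA, but the system is
    # designed by ByteDance, which remains subject to Chinese law.
    # ("US hosting partner" is a placeholder, not a real entity name.)
    project_texas = Lockbox(
        location="USA",
        designer="ByteDance",
        key_holders=frozenset({"US hosting partner"}),
    )

    print(project_texas.can_access("ByteDance"))  # True, despite location == "USA"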
One consultant working on Project Texas said, “I feel like with these tools, there’s some backdoor to access user data in almost all of them, which is exhausting.” Reporting from the Wall Street Journal likewise confirmed that Project Texas is a “porous” system that does not live up to its promises: “Employees say ByteDance managers continue to request U.S. data.”
(It's not just ByteDance, either. Backdoors have been found on devices from Lenovo, a Chinese company, as far back as 2013. Telecom equipment from Huawei, another Chinese company, had backdoors that let the company covertly access US telecom networks.)
Even if ByteDance did not have a key to the lockbox, what if an American employee removed data from the lockbox and shared it with ByteDance? One TikTok employee had a manager in Seattle on paper, but actually reported to a ByteDance executive in Beijing: “Nearly every 14 days . . . he emailed spreadsheets filled with data for hundreds of thousands of U.S. users to ByteDance workers in Beijing.”
In cybersecurity, you often assume vulnerabilities will be exploited; even if you can’t find the exploit, someone else will. Project Texas's vulnerabilities are downstream of the real vulnerability: it’s designed by China. It’s foolish to assume that China will not exploit that. Once again, the real problem is the Chinese Communist Party.
If the Chinese Communist Party refuses to divest TikTok, TikTok delenda est.