Can Deepseek Be Banned in the U.S. via the "Protecting Americans from Foreign-Adversary Controlled Applications Act" (the "TikTok Ban")?
The question of whether Deepseek’s AI models could be banned in the U.S. under the "TikTok Ban" law has surfaced in the wake of Nvidia's historic single-day market-value loss.
On January 27, 2025, Nvidia’s stock price plummeted roughly 17%, erasing nearly $600 billion in market value - the largest one-day valuation loss in history. The selloff was triggered by the announcement of Deepseek’s latest AI model, which promises cutting-edge performance using a fraction of the Nvidia hardware that American AI labs have relied on.
Even more disruptive than the model itself is that it was released as open-source (read: FREE) software. This set off an absolute earthquake across the stock markets as investors began to realize the massive implications for the American AI industry.
Naturally, investors and technology enthusiasts are now asking, “Can Chinese AI models be banned under the Protecting Americans from Foreign-Adversary Controlled Applications Act (a.k.a. PAFACA)?”
In short, probably not, but even if that were possible, it would still not close the Grand Canyon-sized national security gap that this particular AI model has already cracked open.
Disclaimer: I am not an attorney; however, I can read, and it is not hard to tell that this law was written with a bafflingly obvious blind eye toward AI technology.
What Threats Do Chinese AI Models Pose?
Chinese laws - most notably the 2017 National Intelligence Law - allow the Chinese Communist Party to require China-based companies like Deepseek to hand over their data on request, which is why Congress got so ruffled over TikTok.
Does the same risk exist with Deepseek? Let’s take a look…
Deepseek's Privacy Policy states:
“Where We Store Your Information
The personal information we collect from you may be stored on a server located outside of the country where you live. We store the information we collect in secure servers located in the People's Republic of China.”
While TikTok may have troves of data about what type of cat videos make us laugh the hardest, Deepseek is poised to collect, and perhaps turn over to the Chinese government, an infinitely larger amount of MUCH more sensitive data.
Does PAFACA Cover AI Models?
Can’t we just invoke the “Protecting Americans from Foreign-Adversary Controlled Applications Act”? Isn’t that what it’s for? Problem solved, right?
One would think so - but PAFACA focuses on applications. According to the law, an application is “a website, desktop application, mobile application, or augmented or immersive technology application.” An AI model does not meet this definition. Even if it did, the law says that the application must also meet ALL of the following criteria:
“Permits a user to create an account or profile to generate, share, and view text, images, videos, real-time communications, or similar content” and
“Has more than 1,000,000 monthly active users” and
“Enables 1 or more users to generate or distribute content that can be viewed by other users of the website, desktop application, mobile application, or augmented or immersive technology application” and
“Enables 1 or more users to view content generated by other users of the website, desktop application, mobile application, or augmented or immersive technology application.”
I don’t think we need an attorney to clarify that this is obviously a description of a social media app, not an AI model. Models are backend technologies that power applications but are not applications themselves and don’t inherently allow user interaction or content sharing.
Isn’t Deepseek Available as a Mobile App?
Deepseek makes its model available via both a website and a mobile application, the latter of which has now stolen ChatGPT’s position as the #1 app in the Apple App Store. These meet the law’s high-level definition of an application, but they don’t meet ALL of the law’s criteria.
It would be a very different story if Congress had written the list of criteria with the word “OR” instead of “AND” - because it can be argued that some of the criteria apply, but clearly not all of them. Deepseek does not permit “a user to create an account or profile to generate, share, and view text, images, videos, real-time communications, or similar content” that “can be viewed by other users.”
That said, let’s set the definition issue aside and run a thought experiment in which the courts decide that meeting only some of the criteria is good enough. Then we need to look at whether Deepseek as a company falls under the powers of this law.
Does PAFACA Apply to Deepseek as a Company?
The law states that the application in question must be “operated, directly or indirectly” by a “covered company," which is defined as:
ByteDance, Ltd.; TikTok; or their subsidiaries or successors which are controlled by a foreign adversary; or
A company that “is controlled by a foreign adversary” and "is determined by the President to present a significant threat to the national security of the United States”
The second bullet could indeed apply if the President decides to declare Deepseek a significant threat, which he could easily do. DeepSeek Artificial Intelligence Co., Ltd. is a Chinese company with servers located in China - a country that has already been declared a foreign adversary.
This checks the “covered company” box. In this case, I do think that a ban on Deepseek’s website and mobile app could become a reality.
The catch is that such a ban wouldn’t stop the model from being used - only Deepseek-owned websites and apps.
How Can Deepseek’s Model Be Used Without Deepseek Apps?
Who needs Deepseek-owned applications when these AI models can be used through APIs? Anybody can whip up an app that uses Deepseek’s AI models through an API (Application Programming Interface) - instantly bypassing the “covered company” hurdle.
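To make that concrete, here is a minimal sketch of what such a third-party app looks like, assuming Deepseek’s hosted API follows the OpenAI-compatible chat format its documentation describes. The base URL, model name, and response shape below are assumptions to verify, not gospel.

```python
# Minimal sketch of a third-party app calling Deepseek's hosted model over HTTP.
# The endpoint, model identifier, and response layout are assumptions based on
# Deepseek's published, OpenAI-compatible API docs -- verify before relying on them.
import os
import requests

API_BASE = "https://api.deepseek.com"      # assumed OpenAI-compatible endpoint
API_KEY = os.environ["DEEPSEEK_API_KEY"]   # key issued by Deepseek to the app developer

def ask_deepseek(prompt: str) -> str:
    """Send one chat prompt and return the model's reply text."""
    resp = requests.post(
        f"{API_BASE}/chat/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "deepseek-chat",  # assumed model identifier
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # The application shipping this code is the third-party developer's product;
    # only the backend inference call ever reaches Deepseek's servers.
    print(ask_deepseek("Summarize PAFACA in one sentence."))
```

The app that ships those twenty-odd lines belongs to whoever wrote them; Deepseek only operates the inference endpoint behind it.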
The API angle alone throws PAFACA’s power out the window, but wait - there’s more! Any hope of the U.S. being able to take control over this national security shitshow is instantly disintegrated by the fact that Deepseek decided to launch its latest model, Deepseek-R1, with an open-source license. And not just any open source license - the MIT License.
Why Is MIT Licensing a Big Deal?
Deepseek’s model, Deepseek-R1, is released under the MIT License, which allows developers to freely integrate it into their applications - including for commercial use, modification, and redistribution. This means that American companies, and companies from any nation not declared a foreign adversary, can essentially copy/paste Deepseek’s AI models into their own applications.
Once integrated, the combined software becomes the company’s proprietary product - and is immediately untouchable by the PAFACA law.
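For illustration, here is a minimal sketch of that scenario: running an R1 checkpoint entirely inside a company’s own infrastructure with the Hugging Face transformers library, so no Deepseek-operated website, app, or API is involved at all. The repository id (a small distilled R1 variant) and the hardware settings are assumptions - swap in whichever published checkpoint fits the deployment.

```python
# Minimal sketch of running a Deepseek-R1 checkpoint entirely in-house with
# Hugging Face transformers -- no Deepseek-operated website, app, or API involved.
# The repo id below (a small distilled R1 variant) is an assumption; substitute
# whichever published checkpoint and hardware configuration fit your deployment.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # reduce memory use; assumes bf16-capable hardware
    device_map="auto",           # spread weights across available GPUs/CPU (needs accelerate)
)

# Build a chat-formatted prompt and generate locally.
messages = [{"role": "user", "content": "Explain the MIT License in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Once the weights are downloaded, this is indistinguishable from any other in-house software - which is exactly the gap PAFACA cannot reach.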
What Can of Security Worms Does This Open?
Hidden Backdoors: If Deepseek’s open-source code includes vulnerabilities or hidden features that compromise security, it would be up to the application developer to effectively audit and test the code for such risks, which they may not do. Companies in regulated industries like finance, healthcare, or defense may be required to perform security testing under existing frameworks (e.g., HIPAA, FISMA), but outside of those industries, there is currently no obligation to do such testing.
Reliance: While Deepseek loses control under the MIT license once the model is integrated into another company’s application, those applications will still rely on Deepseek for updated versions of the model. Even if no backdoors or security vulnerabilities exist in the model right now, there’s no guarantee that they won’t appear in future versions.
Final Thoughts: Could Deepseek Be Banned?
The short-sightedness of the PAFACA law has left the United States extremely vulnerable to national security risks that are light years beyond those presented by TikTok.
While I’m sure there will be an attempt to invoke PAFACA against Deepseek, it is likely to fail. Even if it succeeded, it would only reach Deepseek’s own website and app, while the national security risk posed by a foreign-adversary-controlled AI model is incomparably larger than that posed by a social media app.
Congress's design of the PAFACA law around the latter was not only short-sighted - it was mind-bogglingly stupid. The U.S. is doomed to lose the global artificial intelligence wars spectacularly unless the country develops some modicum of congressional intelligence first.
Comment or reply to submit your AI question as an upcoming newsletter topic!