How Trump Should Approach AI Talks With China: Targeted Dialogue, Maximum Pressure


The Chinese government’s view that AI safety dialogues are a means to close this capability gap was on full display when the United States and China held the only such dialogue in 2024 under President Joe Biden. The United States government sent leading technical experts who outlined areas of greatest shared risk; the Chinese government sent diplomats who complained about U.S. export controls on AI chips. Chinese AI companies and government leaders have repeatedly stated that U.S. export controls are the single biggest constraint on China’s AI development.

The Chinese government’s perspective on AI safety cooperation, and its behavior at past U.S.-China AI dialogues, is also consistent with, and informed by, its longstanding refusal to agree to substantive arms control measures with the United States. China views arms control with extreme skepticism, and China’s track record of abiding by arms control commitments it does make is poor. Leading People’s Liberation Army (PLA) military strategists have described arms control as a “struggle” that great powers use to protect their advantages, and have asserted that Soviet concessions to the United States in arms control negotiations weakened the Soviet Union’s strategic position and contributed to its decline. Make no mistake, the Chinese government would view any agreement to limit China’s AI capabilities as a form of arms control.

China’s skepticism of arms control also stems in part from the fact that it was never a party to a Cuban Missile Crisis-like event, which instilled in U.S. and Soviet leaders and negotiators a visceral sense of responsibility to prevent global catastrophe. U.S.-Soviet nuclear negotiations produced zero substantive results until the Cuban Missile Crisis. But in 1963, just nine months after that event, the two countries signed the Hotline Agreement and the Limited Test Ban Treaty, the first agreements to establish crisis communications systems and limit certain dangerous activities. Chinese leaders have no similar experience to draw from.

While a U.S.-China AI safety dialogue could help establish relationships and lay the foundation for substantive negotiations in the future, it will not change the perspective of the Chinese government on these issues. So long as China believes it has a chance of catching up with the United States in AI and does not fear reprisal from the United States for potential noncompliance, an effective U.S.-China agreement on AI safety is unattainable. China is currently extremely unlikely to agree to measures that would impose meaningful constraints on its ability to close the gap with the United States. And even if it did, any agreement would be impossible to verify—and China is unlikely to abide by it.

To reach an effective agreement on AI safety with China, the United States therefore must change the structural conditions informing the Chinese government’s current unwillingness to negotiate in good faith. There are three ways it could do so:

  1. Washington could give in to Beijing’s requests to loosen AI-related export controls and permit China to catch up to the United States in AI. The U.S. government would then have to hope that China both complied with any agreement and refrained from using its newly powerful AI capabilities to undermine U.S. national security.
  2. The United States could impose a “maximum pressure” campaign that seeks to widen the gap between U.S. and Chinese AI capabilities and increase Washington’s leverage by tightening export controls. This would cut off Beijing’s access to the U.S. technology that is currently driving its AI development.
  3. The United States could keep the status quo and wait for an external event—a “Cuban Missile Crisis” related to AI—that compels the Chinese government to value global priorities on AI safety ahead of its own priorities on AI capability development.

Of these, the second is the only responsible path, and by far the most effective one. If the Chinese government believed there to be a wide and rapidly expanding AI gap between the United States and China—and viewed existing U.S. AI capabilities as posing a profound risk to its national security—it would likely view negotiations that impose even modest constraints on U.S. AI capabilities as in the country’s national interest. China would have little leverage in these negotiations, but it would be far more likely to comply with any agreement, because Beijing would fear detection and reprisal by a Washington armed with superior AI models.

If the United States significantly tightened export controls on China, it could expand the U.S. lead from eight months to eighteen or twenty-four—an eternity in AI development. Chinese firms remain extremely dependent on U.S. computing power, which is the most critical input into AI development. China will produce only about 2 percent of the AI computing power of U.S. firms this year, and the computing power needed to develop and serve a leading AI model is increasing exponentially. U.S. export controls have materially slowed China’s AI development, but they contain significant loopholes that allow China to purchase U.S. AI chips, remotely access them via the cloud, smuggle them through third countries, or use U.S. chipmaking technology to manufacture them. The presence of these loopholes is not an inevitability; it is a policy choice that can be changed.

Trump’s goal in Beijing should not be to reach an agreement with China on AI safety, but to create the conditions for such an agreement down the road. If the Trump administration does establish a dialogue with China on AI, it must set clear expectations with Beijing that the dialogue will be narrowly focused on AI safety issues and will not cover export controls. And simultaneously, any such dialogue must be coupled with a “maximum pressure” campaign that imposes robust export controls, closing all existing loopholes to maximize the U.S. lead over China. Just as the United States and the Soviet Union never assisted each other’s nuclear weapons development programs, the United States and China should not assist each other’s efforts to develop advanced AI models.

The only alternatives to this approach are to give China the tools to catch up to the United States in AI and hope it operates in good faith, or wait for a global catastrophe to shock the Chinese into good faith cooperation. The first gambles the United States’ security on China’s goodwill; the second gambles it on a disaster terrible enough to change Beijing’s calculus. Maximum pressure with dialogue not only preserves U.S. AI leadership—it’s also the best way to achieve long-term AI safety.

This work represents the views and opinions solely of the author. The Council on Foreign Relations is an independent, nonpartisan membership organization, think tank, and publisher, and takes no institutional positions on matters of policy.
