AI Coordination Without Illusion
The Right Way for the United States and China to Get Along in the AI Era
Across the table from Xi Jinping in Beijing next week, President Trump will hear a list of Chinese demands. Near the top, in some form or another, will be the same ask pitched to every American interlocutor since 2023: relax the chip controls. In return, Chinese negotiators will suggest China “stands ready” to participate in “constructive dialogue” to manage the risks posed by AI — perhaps launching a working group empowered to discuss the same.
This Trump-Xi summit is the right moment to reopen conversation with Beijing on AI. What was once dismissed as science fiction — cyber threats to critical infrastructure, brought about by superhuman AI systems — has begun to materialize on both sides of the Pacific, and some form of coordination will be necessary in the coming months and years to protect American interests.
But what is to be discussed, and what decided? The leaders’ meeting could not arrive at a less opportune time. Four weeks after the release of Claude Mythos, the U.S. government is still scrambling to formulate its own approach to AI. The Department of War is caught in a blood feud with one of the country’s most capable AI champions, while the Treasury gropes desperately toward some mechanism to certify new models are “safe” before they are released to the public.
Amid the frenzied food fight in Washington to define which AI risks are real and how seriously to weigh them, the U.S. interagency is breaking — quickly and visibly — toward treating AI as an abnormal technology.
Next week, it will ask China to do the same.
Where Both Governments Stand
Until the spring of 2026, neither Washington nor Beijing took the risks posed by frontier AI seriously. Both preferred to treat AI as a normal technology — a commodity to be contested in third markets and adopted for economic and military power.
The United States under Trump 2.0 has generally eschewed concerns about AI safety. Just three weeks into his tenure, at the 2025 Paris AI Action Summit, Vice President Vance decried overregulation that might “deter innovators from taking the risks necessary to advance the ball,” and declared that the United States would not accept any restriction on its sovereignty.
The Trump administration has correctly treated diffusion of the technology — both internationally and among small and medium-sized enterprises — as the primary locus of U.S.-China competition over AI. It is on this battlefield that initiatives like Pax Silica, the American AI Exports Program, and the newly formed Tech Prosperity Corps duel with China’s emergent Digital Silk Road.
Beijing, meanwhile, erected stringent controls on training data and model outputs — but these measures were principally designed to protect political stability rather than personal or national security. Even as it began requiring developers to censor politically inconvenient content under regulations like TC260, Beijing has generally refused to concede the serious safety deficits of Chinese models, preferring instead to look the other way as Western researchers succeeded in jailbreaking DeepSeek to produce bioweapons or help with offensive cyber operations.
To be sure, Beijing has cloaked itself in the language of AI “governance” — though most of its stated concerns ring hollow in practice. At the same Paris AI Action Summit where the United States rejected AI “safety,” China’s National AI Safety and Development Association began inveighing against AI’s “misuse, abuse, and malicious use.” In the fifteen months since, I have asked dozens of Chinese scholars, technical experts, and officials a simple question: How will Beijing reconcile this aspiration with the fact that its national AI champions are building open-weight models — whose entire commercial strategy depends on uncontrolled proliferation? None has been able to offer a coherent answer.
In spring 2026, both governments are changing their tune. At the same time they seek to sell AI services to more users at home and abroad, bureaucrats in both Washington and Beijing are increasingly fearful of AI’s disruptive potential. And though they may lack the vocabulary to fully articulate the breadth and speed of the technology’s change, it is clear that both governments recognize — instinctually — that they are witnessing a profound evolution in humankind’s relationship with AI.
Where the Technology is Going
Most commercially available AI today sits at what will become the low end of the global AI value chain: sparse-attention models and consumer apps that can be run locally at the edge, on a phone or laptop. China is competitive at this layer and will likely keep gaining global market share. The 15th Five Year Plan is built around boosting local deployment of “small AI,” using agents to spawn one-person companies that might absorb hordes of the country’s unemployed youth.
Until very recently, the kinds of AI systems available to users in the United States and China did not, on their own, threaten the state’s monopoly on the instruments of violence. In April, that began to change. The class of models coming online in late spring 2026 — including Claude Mythos, GPT-5.5, and others yet to be unveiled this summer — have crossed a frightening, existential threshold. The capacity to discover and exploit thousands of zero-day vulnerabilities in legacy software is not a feature either the Washington security establishment or the Chinese Communist Party will tolerate in the hands of a private champion accountable only to investors.
A natural compute moat will, for a little while, prevent Mythos-class capabilities from diffusing widely. The reality is that few organizations can afford to buy or rent the computational power needed to host thousands of copies of a ten-trillion-parameter language model in parallel. We should expect access to compute to sustain the de facto boundary between what states and citizens can accomplish with AI.
But this murky equilibrium is unlikely to placate the policy elite of either country. And as free marketeers are lamenting this week in Washington, the dam is breaking toward reflexive regulation — if not through Executive Order, then through voluntary commitments from frontier labs, wrangled through coercive contracting practices.
And so, at the same time it begins holding a magnifying glass to its own industry, Washington needs to know that Beijing will do the same.
The Ides of May
One week ahead of the first Presidential visit to China in nearly a decade, Washington finds itself playing an impossible game of Twister. American officials are contemptuous of Chinese “AI governance,” which they have correctly identified as a hollow political instrument — while simultaneously needing Beijing to take the technology seriously enough to prevent its unfettered proliferation.
This week, following reporting by the Wall Street Journal, the debate in Washington over “engagement” with the CCP on AI safety has split into two camps:
The first wants to recreate the U.S.-Soviet arms control architecture for AI. Its proponents theorycraft U.S.-China AI hotlines, working groups, declarations, and verification regimes. Some have spent careers in arms control and see this moment as the natural extension of that work, while others are technologists who have read enough Cold War history to find the analogy seductive. They are right that the technology’s trajectory demands some form of coordinated risk management. Their error is to underestimate three things: how far afield AI stands from nuclear weapons; how different the China of 2026 is from the Soviet Union of 1972; and indeed the extent to which arms control ever succeeded — relative to the delicate balance of terror that animated much of Soviet and American policy over the course of the first Cold War.
The second camp views any conversation with Beijing about AI as appeasement. They are correct that Chinese negotiators have attempted to use past dialogues to muddy the waters, relax American export controls, and harvest information about U.S. red lines. They are correct, too, that Beijing will likely lean on dialogue to mask the fact that its systems are easily jailbroken.
Where they err is in concluding that no useful conversation with the Chinese side is possible or necessary.
The Goal of Coordination
The reality of this moment is that the United States and China are not prepared to “cooperate on AI safety” in a meaningful sense. That would require a level of trust and shared understanding that the two sides are working toward the same goal — which does not exist today between Washington and Beijing.
Instead, what we are likely to see next week is coordination — parallel domestic commitments by two governments that distrust each other, applied to a narrow set of shared risks neither can manage alone. The right model is not SALT or START, but something much more modest.
To define what that is — or what it could be — it helps to begin with a list of negatives: What are not the objectives of U.S.-China coordination on AI safety?
First, the goal is not to constrain the development of any capability being pursued by the other side. China will pursue every AI capability it believes it can master, and so will the United States. Meaningfully constraining capability development — à la nuclear or biological weapons — rests on good-faith assumptions of intent. Such frameworks have a long history of collapsing under the political weight of their own optimism.
Second, the goal is also not bilateral verification. Verifying compliance with substantive AI commitments would require either intrusive inspections that neither government will accept, or a level of mutual technical transparency that does not yet exist — and for which the political will does not exist in 2026.
The honest goal of “dialogue” with the CCP on AI is to build pressure inside both governments to take AI’s risks seriously — and to, generally speaking, steer two independent ships of state in roughly the same direction.
Even a soft bilateral commitment to “discuss” AI “security” through a working group will trigger internal coordination and force decisions about who-owns-which-slice of a sprawling issue set in both governments.
Most optimistically, an attempt at U.S.-China “coordination” on AI policy will produce some minor competition between Washington and Beijing over which side can appear more responsible. This is an outcome worth encouraging, even if neither side particularly cares about what is produced in the bilateral channel.
Three Commitments Within Reach
Within this narrow corridor, three commitments are worth broaching at the May summit. None is particularly costly, and each is something both governments already have independent reason to want to implement in some fashion:
First, lead times for technical evaluation before public release of frontier models. The U.S. AI Safety Institute already has a working relationship with American labs to test major models before deployment; Anthropic’s Project Glasswing showcased the most public version of this arrangement, and other major AI companies are now inking their own MOUs. In China, institutions like the Shanghai AI Lab, the AI Safety Governance Framework, and various MIIT-affiliated bodies have begun to play an analogous role.
A parallel commitment by both governments to facilitate pre-release evaluation by a designated national authority would establish a shared “best practice” without requiring either side to share weights, architectures, or training data with the other. Such a principle would formalize what each side is already moving toward, and create domestic political pressure to actually do it. The challenge will lie in crafting an agreement strong enough that it propels policy in the right direction, yet vague enough to allow sufficient space for each system to facilitate pre-deployment information-sharing in its own fashion — avoiding any onerous, FDA-style pre-approval for the United States, while providing necessary buffer time for both governments to harden their critical infrastructure.
Second, DNA synthesis screening for AI-assisted biology workflows. Neither government has any interest in seeing a non-state actor synthesize a pandemic pathogen with the help of a frontier model. This is a high-stakes basket of risks, shared by both countries, where verification technology is surprisingly mature. The United States has already begun implementing screening requirements for synthesis providers; and China has its own biosecurity infrastructure — which, however different in motivation, is more than capable of supporting a similar regime.
Here, too, parallel commitments implemented by each side’s domestic regulatory apparatus — coordinated against unscreened gray-market providers in third countries — could foreclose a nightmare scenario at minimal cost.
Third and most basic, an ongoing channel of discussion for AI-related cyber incidents. Close observers of the U.S.-China relationship will be familiar with the Military Maritime Consultative Dialogue. This is one of few U.S.-China crisis communication mechanisms that has produced sustained value, precisely because it is unsentimental. It does not require trust, nor does it produce much in the way of photo ops. It can be — has been — canceled when the broader bilateral relationship sours.
Launching a formal dialogue mechanism for AI would be useful precisely because it could be trashed, cashed in, or scaled up as circumstances require. It need not require, and in fact should not be framed as, discussing “escalation” in a way that allows Chinese negotiators to probe American red lines.
These three commitments share three features: each is something the U.S. and Chinese governments have independent reason to want; each can be implemented domestically without giving Beijing any asymmetric advantage; and each creates a small amount of competitive pressure on the Chinese system to demonstrate, at least for foreign audiences, that it is taking AI risk as seriously as the United States.
What Could Go Wrong
Neither the United States nor China should entertain any pre-condition for a simple discussion on the shared risks both countries are bound to experience from AI’s development. Unfortunately, this is not how international politics is conducted.
Beijing is sure to demand some form of relaxation or suspension of American export controls on advanced chips. The trap will be set politely: Beijing will frame relief as a precondition for serious technical engagement — a goodwill gesture necessary to “lay the foundation” for real dialogue. Some in the U.S. AI industry, watching their China revenue evaporate, will amplify the case behind closed doors. Both will appeal to the same instinct: that hardline export controls are an obstacle, rather than the source of American leverage, in any meaningful bilateral conversation about AI risk.
The reality is that American export controls on advanced chips are a major source of leverage over the trajectory of China’s AI industry. They have forced China to absorb compounding costs during the most consequential window of the technology’s development. Chinese labs cannot yet purchase enough computational power to generate Mythos-class capabilities or serve them to large numbers of users — in China or elsewhere. This is a tremendous boon to both American national security and economic prosperity.
When American negotiators enter the room next week, they should understand that Washington is not bargaining from weakness; Beijing is. The chip controls are not a sunk cost the United States should “harvest” before they become “useless” — they are a rapidly appreciating asset. And trading them away in 2026 for the promise of dialogue would risk echoing the very same mistake with China that led the United States down a decade of broken promises stretching from the Rose Garden to the Delfines Hotel.
A Test of Mirrors
Diaoyutai next week is not a venue for a grand bargain. What is on the table is something narrower and, on its own merits, worth pursuing: a set of parallel commitments that cost the United States nothing, and which produce valuable pressure inside the Chinese system to finally take AI’s risks seriously.
Still, American negotiators must understand that, although dialogue serves the interest of both countries, Beijing will attempt to extract some sticker price for entry. Trading away control over computational power is not a deal the United States should accept. For Trump to “succeed” in discussing AI with China, his administration must accurately comprehend the state of its hand, and refuse to trade away a compounding source of American advantage.
The most important question at this point is whether Washington is prepared to hold a coherent line on the kind of AI policy it wants to see cultivated in any country. Four weeks after Mythos, the U.S. interagency is still picking sides in its own feud over whether AI is a normal technology.
The most useful thing about a Trump-Xi summit may be the deadline it imposes: every senior official in the executive branch must decide, on paper and in advance, what they are prepared to ask of Beijing — which is to say, what they are prepared to ask of themselves.