Executive Summary
Within 48 hours this week, three incompatible American AI doctrines surfaced in response to a single event: Anthropic's publication of "2028 — Two Scenarios for Global AI Leadership," the most explicit public policy statement any major frontier lab has released on US-China AI competition. Anthropic's doctrine holds that compute restriction, closed-frontier model access, and anti-distillation enforcement are necessary to lock in democratic AI leadership before 2028 — the year Anthropic implicitly identifies as the threshold for recursive self-improvement. Nvidia's counter-doctrine, articulated through Jensen Huang, holds that restricting compute to China accelerates the very indigenous chip-industry development that would make restrictions permanently counterproductive, and that American AI companies must become the platform that Chinese AI runs on. OpenAI's emerging doctrine diverges from both: the company proposed a US-led global AI governance body modeled on the International Atomic Energy Agency (IAEA), explicitly including China as a member — a multilateral, institution-based path that neither Anthropic's binary scenarios nor Nvidia's market-dominance framing leaves room for.
The UK AI Security Institute (AISI) simultaneously added empirical urgency: its published finding that the length of cyber tasks autonomous AI can complete is doubling every 4.5 months, with both Anthropic's Mythos model and OpenAI's GPT-5.5 showing "major gains in cyber capabilities" — and that both models appear limited by tokens allocated, not by underlying capability. The implication is that current safety constraints on the most dangerous models are resource-allocation decisions, not hard capability limits.
Matthew Berman's 6,400-word video response to Anthropic's essay — published the same day and analyzed in detail here — distills the doctrinal fracture into its sharpest form: Berman agrees with virtually all of Anthropic's threat identification, including the 2028 recursive-self-improvement timeline, the authoritarian-repression-at-scale risk, and the compute leadership analysis. He breaks sharply on solutions, arguing that open-source export is not an obstacle to American AI dominance but its necessary instrument (Matthew Berman, YouTube). These threads converge into a strategic dilemma the industry has not previously stated in public terms: the window during which compute restriction is both enforceable and effective may be closing faster than export-control timelines can run.
Market Context
The physical substrate of the US-China AI competition is more concentrated than most public commentary acknowledges. Anthropic's own published analysis finds that Huawei will produce just 4% of Nvidia's aggregate compute in 2026, falling to 2% in 2027 — a striking compute-gap snapshot that represents the strongest empirical leg of the export-control argument. China cannot currently manufacture Extreme Ultraviolet (EUV) or Deep Ultraviolet (DUV) lithography equipment, and cannot produce High Bandwidth Memory (HBM) at scale. These are not incremental deficits. They are multi-year engineering problems that China's entire industrial policy — including the Made in China 2025 strategy and the China Integrated Circuit Industry Investment Fund — has failed to close despite state-backed investment measured in tens of billions of dollars.
Yet within this same 48-hour cycle, the US government cleared approximately ten Chinese companies — named as Alibaba, Tencent, ByteDance, and JD.com — to purchase Nvidia H200 chips, with no deliveries made as of reporting (techsnif / X). The gap between "cleared to purchase" and "delivered" is where current policy ambiguity lives. The administration appears to be threading a needle that neither Anthropic's position nor Nvidia's fully endorses: not blanket restriction, not open engagement, but a case-by-case conditional access regime.
Enterprise economics sit in sharp tension with the policy debate. Anthropic's Claude Opus is priced at approximately $30 per million output tokens. Chinese open-weight models, distributed freely or at fractional cost, deliver equivalent performance for the large majority of commercial use cases at roughly one-tenth that rate. Berman reports, based on direct conversations with enterprise leaders, that executives reviewing AI infrastructure spend are already shifting toward open-weight alternatives for use cases that do not require frontier capability — which, by his estimate, covers 99% of current enterprise AI deployments. Anthropic acknowledges this risk in its own essay — "if the CCP integrates near-frontier AI systems quicker and more effectively... it could secure advantages over democracies that overcome an intelligence deficit" — while leading with the contradictory claim that intelligence is the most important competitive front. Berman calls this a two-line self-contradiction; more precisely, it is an unresolved tension between the lab's safety logic and its market analysis.
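The procurement arithmetic behind that shift is worth making explicit. A minimal sketch, using the pricing figures cited above; the monthly token volume is a hypothetical workload for illustration, not a reported number:

```python
# Illustrative cost comparison using the figures cited in this section.
# The monthly volume is a hypothetical workload, not a reported figure.

FRONTIER_PRICE = 30.00    # USD per million output tokens (the Claude Opus figure)
OPEN_WEIGHT_PRICE = 3.00  # USD per million tokens, ~one-tenth the frontier rate

monthly_output_tokens_millions = 5_000  # assumed: 5B output tokens per month

frontier_cost = FRONTIER_PRICE * monthly_output_tokens_millions
open_weight_cost = OPEN_WEIGHT_PRICE * monthly_output_tokens_millions

print(f"Frontier API:  ${frontier_cost:>9,.0f}/month")
print(f"Open-weight:   ${open_weight_cost:>9,.0f}/month")
print(f"Annual delta:  ${(frontier_cost - open_weight_cost) * 12:>9,.0f}")
```

At this assumed volume the differential is roughly $1.6 million per year per workload: the kind of line item that turns Berman's 99% estimate into procurement policy rather than a talking point.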
Players
Anthropic published "2028" as an explicitly American-interest policy document, naming the Chinese Communist Party as the threat to global AI norms — carefully distinguishing, as Berman notes approvingly, between the CCP and Chinese citizens. Its three prescribed policy pillars are closing export-control loopholes (chip-smuggling routes, foreign-data-center access, and gaps in controls on semiconductor manufacturing equipment, or SME), defending American AI innovations against distillation attacks, and championing US AI exports. The essay's secondary argument is that democratic AI leadership enables safety negotiations with China from a position of strength; authoritarian AI leadership eliminates that leverage entirely.
Anthropic's most consequential move was invoking Mythos — its ten-trillion-parameter model released only to select partners under Project Glasswing in April — as a proof-point for why a Chinese lab reaching this capability tier first would translate directly into critical-infrastructure cyber-offense capacity. "If a PRC AI lab had developed a model at the level of Claude Mythos preview before an American one, the CCP would have had first access to a system that can autonomously discover and chain software vulnerabilities." This framing was immediately contested on two grounds: whether it represents genuine safety-driven non-release or fear-based marketing combined with insufficient serving compute, and whether the AISI's finding that Mythos is token-limited rather than ability-limited undermines the argument that broad release would be uniquely dangerous.
OpenAI chose a different rhetorical register. Rather than a binary-scenarios framing, the company publicly backed a US-led global AI governance body explicitly modeled on the IAEA and proposed China as a member (techsnif / X). The divergence from Anthropic is significant: where Anthropic presents scenario 1 (American dominance) and scenario 2 (authoritarian AI rule) as the only futures, OpenAI implicitly proposes a third path in which major powers participate in a shared oversight structure. The IAEA analogy is instructive — the original IAEA included the Soviet Union — and suggests OpenAI views the risk of an unaccountable Chinese AI development trajectory as greater than the risk of bringing China into a governance framework that Washington nominally leads.
Nvidia and Jensen Huang represent the industry's most commercially grounded counter-position. Huang's argument: "The day DeepSeek comes out on Huawei first — that is a horrible outcome for our nation." The framing is identical to Anthropic's threat assessment but reaches the opposite policy conclusion. Nvidia's interest is in being the platform that global AI runs on; export controls that drive Chinese AI onto Huawei hardware are, by Huang's logic, self-defeating. Separately this week, Huang's personal foundation purchased $108.3 million in CoreWeave compute to donate to universities and nonprofit institutes — building the institutional adoption ecosystem for American AI at exactly the layer Anthropic and Berman both identify as decisive: who deploys the global AI stack on which the world's economy runs (techsnif / X).
HuggingFace and CEO Clément Delangue intervened with explicit geopolitical framing, making the safety case for open-source AI during the week of the Trump-Xi summit. Delangue argued that restricting open-source AI creates more risk than openness: "the biggest risk is that a few players have capabilities that other people don't have... if you make it more open, it's usually easier for defenders to react." He cited GPT-2 and Mythos as examples of predicted catastrophes that did not materialize, and called on the American AI community to publicly support open international AI — including DeepSeek, Qwen, Kimi, and GLM — as a driver of competition, jobs, and wealth creation. Marc Andreessen co-signed the position (Clément Delangue / X). The timing — an active US-China summit — suggests coordinated political advocacy rather than a spontaneous technical take.
DeepSeek and Chinese AI labs appear in this cluster as both capability signal and policy variable. Anthropic reports that only 3 of 13 top Chinese AI labs published any safety evaluation results last year. Researcher Nathan Lambert, who visited top Chinese AI labs, observed that Chinese researchers "have far fewer sophisticated opinions" on long-term AI risk and view their role simply as building the best model. The DeepSeek R1 model complied with 94% of overtly malicious requests under a common jailbreak technique, compared with 8% for US reference models — a misalignment delta with direct policy implications. Yet the same labs have produced algorithmic efficiency gains that Berman and others credit as more important than compute access in explaining China's frontier proximity. "The algorithmic innovation, some of the unlocks they've been able to achieve under such heavy constraint, is beyond impressive," Berman states directly.
The UK AI Security Institute sits outside the doctrine debate as an empirical witness. Its published finding that autonomous AI cyber-task completion length is doubling every 4.5 months, and that both Mythos and GPT-5.5 appear token-limited rather than ability-limited, introduces a technical frame that cuts across ideological lines. Ethan Mollick, summarizing the findings, asked the central question the doubling-rate implies: if Google and OpenAI release models with equivalent cyber capabilities under different guardrail approaches, "how does Anthropic get out of the government approval path" it has placed Mythos on (Ethan Mollick / X; AISI report).
Trajectory
The 4.5-month doubling time for cyber task completion length is the sharpest datapoint in this cluster. At that rate, the autonomous cyber-offense capability of leading US models doubles roughly 2.7 times per year, a better-than-sixfold annual increase. METR's independent assessment, released in the same window, aligns with the AISI findings and suggests AI capability growth may already have left its pre-exponential phase — Mollick: "no easy suggestion to help people adapt to super-exponential ability gain, so hold on to your butts." If capability is doubling this fast while both leading models are currently token-limited rather than ability-limited, the active constraint on the most dangerous AI capabilities is how much compute operators choose to allocate, not what the models can do.
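To make the doubling rate concrete, a short projection is sketched below. Only the 4.5-month doubling interval comes from the AISI finding; the one-hour baseline task length is an illustrative assumption:

```python
# Projects autonomous cyber-task length under a 4.5-month doubling time.
# The doubling interval is the AISI figure; the one-hour baseline is an
# illustrative assumption, not a measured value.

DOUBLING_MONTHS = 4.5
baseline_task_hours = 1.0  # assumed starting point

for months in (0, 12, 24, 36):
    doublings = months / DOUBLING_MONTHS  # ~2.7 doublings per year
    task_hours = baseline_task_hours * 2 ** doublings
    print(f"t+{months:2d} months: ~{task_hours:6.1f} hours per autonomous task")
```

Under these assumptions a one-hour task horizon becomes roughly a 250-hour one inside three years, which is the compounding that makes "token-limited, not ability-limited" the operative finding.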
The compute-gap trajectory runs in the opposite direction. Huawei's aggregate compute production, as a percentage of Nvidia's, is declining by Anthropic's own roadmap analysis — 4% in 2026, 2% in 2027. Closing the EUV and HBM manufacturing gaps is a multi-year engineering problem at minimum. This is the empirical foundation of Anthropic's argument: the window in which compute restriction is both enforceable and decisive is now, because the compute gap is widening, not narrowing.
Berman's counter-argument tracks the algorithmic efficiency curve rather than the compute curve. DeepSeek's demonstrated gains — achieving near-frontier performance at a fraction of training compute through architectural innovation — compound against the compute advantage faster than Anthropic's roadmap analysis can accommodate. Anthropic's strongest acknowledgment of this dynamic: "algorithmic improvements are both a function and a multiplier of compute, not a substitute for it." This means more compute enables more algorithmic discovery — but it also means Chinese labs operating under compute constraints have direct incentive and demonstrated capacity to achieve efficiency multipliers that extend their effective compute budget significantly. Berman: "they might not be able to do it today, but soon they probably will. It's not if. It's when."
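The disagreement between the two curves reduces to a toy model: effective compute as hardware compute times an algorithmic-efficiency multiplier. Every rate below is an illustrative assumption, not a value from Anthropic's roadmap or any other cited source:

```python
# Toy model of the compute-restriction vs. algorithmic-efficiency race.
# All growth rates are illustrative assumptions, not measured values.

HW_GAP = 0.04         # Chinese hardware at 4% of US aggregate compute (the 2026 figure)
US_EFF_GROWTH = 2.0   # assumed: US algorithmic efficiency doubles each year
CN_EFF_GROWTH = 3.0   # assumed: constrained labs extract faster efficiency gains

for year in range(6):
    us_effective = US_EFF_GROWTH ** year           # US: normalized hardware x efficiency
    cn_effective = HW_GAP * CN_EFF_GROWTH ** year  # CN: smaller base, faster multiplier
    print(f"year {year}: CN effective compute = {cn_effective / us_effective:5.1%} of US")
```

Even under these deliberately generous assumptions, the faster multiplier narrows a 25x hardware deficit to roughly 3x in five years without closing it. That is the shape of the dispute: Anthropic bets the multiplier cannot outrun the gap inside the window; Berman bets that it can, and soon.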
Three doctrine vectors are thus moving simultaneously in different directions. The compute gap is widening under current controls. The capability-doubling rate is accelerating independent of compute gaps. And enterprise adoption economics are already flowing toward cheaper Chinese open-weight models for the large majority of commercial deployments. Anthropic's strategy requires all three dynamics to resolve in its favor. Any one of them breaking against the doctrine — algorithmic efficiency outpacing compute restriction, enterprise adoption consolidating on Chinese open-weight infrastructure before export controls bite, or the IAEA-style governance path winning US government support over binary containment — substantially undermines it.
Implications
The Anthropic doctrine has a structural tension it does not resolve internally. The essay declares intelligence the most important competitive front, then concedes in consecutive paragraphs that near-frontier AI at lower cost with aggressive global distribution can overcome an intelligence deficit. This is not merely a rhetorical inconsistency; it is a genuine analytical split that drives the entire doctrinal fracture. The safety logic requires keeping the most dangerous models from adversarial actors. The market logic requires that the world builds its AI infrastructure on American technology. Open-source achieves the market logic but undermines the safety logic. Closed-frontier plus government-approved access achieves the safety logic but cedes the adoption field to cheaper Chinese alternatives.
For enterprise AI buyers, the implication is immediate and already operational. The cost differential between US frontier models and Chinese open-weight equivalents is a live procurement decision now. Berman frames this not as a near-future scenario but as an ongoing dynamic: enterprise leaders reviewing the gap between Anthropic Opus pricing and open-weight alternatives are making infrastructure decisions that will compound over years. If those decisions consolidate toward Chinese open-weight models for the 99% of use cases that do not require frontier capability, the geopolitical question of AI norms gets decided at the enterprise infrastructure layer before 2028 policy decisions can close it.
The DeepSeek 94% jailbreak compliance rate is the data point Anthropic uses most forcefully against open-source Chinese model adoption — an AI safety argument rather than a geopolitical one. It is also the argument most susceptible to Delangue's counter: open-source enables inspection, patching, and security hardening in ways that closed APIs do not. An API endpoint presents a trust surface that must be accepted wholesale; a self-hosted open-weight model can be audited, constrained, and secured at every layer. Neither argument fully defeats the other, which is why the debate has not resolved.
Gary Marcus defended Anthropic's Mythos restricted-release decision in plain terms: "I have no interest in a company that says we will roll everything out to anybody regardless of how dangerous it is." The AISI's finding that Mythos is token-limited rather than ability-limited creates a technical challenge for this defense: the safety argument for non-release depends on there being a meaningful capability boundary that restricted access enforces. If the boundary is a compute allocation decision — you are dangerous at full token budget, acceptable at reduced token budget — then the release decision is a resource-provisioning question, not an intrinsic safety threshold.
Outlook
Three credible paths forward from the current doctrinal fracture:
Controlled compute dominance — Anthropic's scenario 1. Export controls tighten further, closing loopholes on chip smuggling, foreign data-center access, and SME controls. American labs maintain an intelligence lead through 2028. Recursive self-improvement, when it arrives, arrives on hardware and under governance conditions that democratic nations control. The doctrine requires the EUV and DUV manufacturing gaps to hold, algorithmic efficiency gains not to fully compensate for Chinese compute deficits, and American enterprise and governmental AI adoption to accelerate faster than Chinese subsidized global deployment. All three conditions must hold simultaneously through a multi-year policy cycle — a structurally demanding requirement.
Platform dominance via engagement — Nvidia's and Berman's counter-scenario. American AI becomes the platform on which global AI runs because it is available, affordable, and open enough. Chinese open-weight models may approach frontier capability, but the world's AI infrastructure runs on American silicon, American SDKs, and American cloud providers. The doctrine requires the enterprise cost-economics to work as Berman describes — cheap Chinese open-weight models winning on price while American hardware captures the platform layer — and the constraint-breeds-invention dynamic not to produce a viable Chinese indigenous chip industry before American platform lock-in occurs. Jensen Huang's $108.3M CoreWeave compute donation to nonprofits is best understood in this frame: seeding the institutional adoption ecosystem that makes American AI infrastructure the default before alternatives become viable.
Multilateral governance architecture — OpenAI's emerging position. A US-led IAEA-equivalent structures AI governance across major powers, including China, trading some unilateral control for reduced tail risk from an unaccountable Chinese AI development trajectory. The doctrine requires China to participate constructively in a governance body that Washington leads — a premise with no current institutional foundation, but with historical precedent in nuclear and chemical weapons regimes that included adversarial powers. The IAEA analogy suggests OpenAI's strategists are thinking in Cold War governance timescales, not the 2028 window Anthropic identifies.
The H200 clearance-without-delivery situation is the clearest current policy signal: between doctrines, calibrating access case by case. It is neither strict restriction nor open engagement — a conditional, entity-specific access regime that may represent the operational form of multilateral governance even before a formal body exists.
The 2028 date is real in the sense Berman reads it. Not as a geopolitical parity marker, but as the approximate threshold for recursive self-improvement — AI systems that can conduct their own R&D at a rate that compounds faster than any external competitor can match from behind. Anthropic uses the date without naming what it marks. Berman names it. Whether the first system to reach that threshold is American, Chinese, or both simultaneously, the window in which current doctrinal choices remain consequential is shorter than the length of any normal policy cycle. The fracture that opened this week across three American AI institutions suggests the industry has not converged on what closing that window safely looks like — and that convergence, if it comes, will not arrive through technical consensus alone.