Dario Amodei — "We are near the end of the exponential"
Type: blog-post
Author: Dwarkesh Patel
Source: https://www.dwarkesh.com/p/dario-amodei-2
Date: Feb 13, 2026
Overview
Dario Amodei, CEO of Anthropic, discusses his belief that AI development is approaching a critical inflection point. He predicts a "country of geniuses in a data center" within 1-3 years, with near-certain achievement by 2035.
Key Arguments
Scaling Hypothesis Remains Valid
Amodei maintains his "Big Blob of Compute Hypothesis" from 2017. The theory posits that a handful of factors matter most: raw compute, data quantity and quality, training duration, scalable objective functions, and numerical stability. He argues RL scaling follows the same patterns as pre-training scaling.
Why the Exponential Isn't Obvious
Despite clear technological progress matching expectations, Amodei finds it "absolutely wild" that society hasn't recognized how close we are to transformative AI. People continue debating traditional political issues while potentially approaching systems with superhuman capabilities.
Software Engineering Progress
Models are advancing toward handling complete software engineering tasks—not just code generation, but design decisions, system architecture, and deployment. Current models write 90% of code at Anthropic, though full automation requires traversing a spectrum of capabilities.
Diffusion Isn't Cope
Amodei pushes back on dismissing "diffusion" as an excuse. Real constraints exist: enterprises need legal review, security compliance, change management, and internal coordination. However, AI adoption proceeds 3-5x faster than historical technology diffusion.
Anthropic achieved 10x annual revenue growth: from $100M (2023) to $1B (2024) to $9-10B (2025). This suggests diffusion operates faster than previous technologies, though not infinitely fast.
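A quick sanity check of the growth multiples implied by those revenue figures (the multiples below are derived from the numbers above, not quoted in the interview):

```python
# Anthropic revenue figures cited above (rough, in USD millions)
revenue = {2023: 100, 2024: 1_000, 2025: 9_500}  # $9-10B midpoint for 2025

# Year-over-year growth multiples implied by the figures
growth = {year: revenue[year] / revenue[year - 1] for year in (2024, 2025)}
print(growth)  # {2024: 10.0, 2025: 9.5} -- roughly 10x per year
```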
Continual Learning May Not Be Necessary
Models might achieve superhuman capabilities through three mechanisms:
- Pre-training generalization across broad internet-scale data
- RL generalization across diverse task environments
- In-context learning leveraging million-token context windows
Continual learning (learning on-the-job) could emerge within 1-2 years, but might prove unnecessary given these alternatives.
Compute Investment Paradox
Though Amodei predicts AGI-level capabilities within years, he argues Anthropic shouldn't buy unlimited compute. The mismatch stems from revenue prediction uncertainty. Buying $1 trillion annually risks bankruptcy if timelines slip by even one year.
Industry-wide compute growth reaches ~3x annually (10-15 gigawatts in 2025 → 300 gigawatts by 2029), generating multi-trillion-dollar capacity by 2028-2029. This matches capability timelines without requiring any single company to bankrupt itself through overcommitment.
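A quick check of the compounding these figures imply. The starting value and horizon come from the numbers above; the per-year multiple is solved for, not quoted:

```python
# Industry compute capacity cited above: ~10-15 GW in 2025 -> ~300 GW by 2029
start_gw, end_gw = 12.5, 300.0  # midpoint of 10-15 GW; 2029 target
years = 2029 - 2025             # four years of growth

# Per-year multiple required to get from start to end
annual_multiple = (end_gw / start_gw) ** (1 / years)
print(round(annual_multiple, 2))  # ~2.2x/year, in the ballpark of the ~3x cited
```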
Business Model Questions
Profitability Paradox
Amodei sketches an economics model where:
- Inference carries 50%+ gross margins
- Training consumes ~50% of compute capacity
- Profitability emerges when demand predictions prove accurate
This creates an unusual dynamic: companies lose money when scaling compute ahead of demand, profit when demand materializes. The 2028 profitability target reflects demand prediction uncertainty, not underinvestment in research.
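The lose-then-profit dynamic can be illustrated with a toy cohort model. All numbers here are illustrative assumptions, not figures from the interview:

```python
# Toy model of the dynamic described above: each model "cohort" pays a
# training cost up front, then earns inference revenue at a 50% gross
# margin over its lifetime. All numbers are illustrative assumptions.

def cohort_cashflow(training_cost, inference_revenue, margin=0.5):
    """Net cash from one model generation over its lifetime."""
    return inference_revenue * margin - training_cost

# Demand materializes as predicted: inference margin covers training
print(cohort_cashflow(training_cost=100, inference_revenue=300))  # 50.0

# Compute scaled ahead of demand: the same cohort loses money
print(cohort_cashflow(training_cost=100, inference_revenue=150))  # -25.0
```

The point of the sketch is that the same cost structure flips between loss and profit purely on how accurately demand was forecast, which matches the framing of the 2028 profitability target.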
Why AI Labs Make Money
Unlike perfect competition, oligopoly dynamics favor 3-4 dominant players. High capital requirements ($100B+) and specialized expertise create barriers preventing disruption. Cloud computing provides the historical analog—profitable but not monopolistic.
However, once AI models develop robust research and software engineering capabilities, commoditization accelerates. Anyone could theoretically build AI models, flattening the entire economy. Amodei sees this as post-"country of geniuses" territory.
Geographic Concerns
Amodei worries about uneven global AI diffusion. Silicon Valley and connected regions might achieve 50% annual growth while other areas remain largely unaffected. He calls this a potential "pretty messed up world" and one he considers worth preventing.
Timeline Predictions
| Capability | Timeline |
|---|---|
| Complete software engineering | 1-2 years |
| Nobel Prize-level intellectual work | 1-3 years |
| "Country of geniuses in data center" | 1-3 years (90% confidence by 2035) |
| Trillions in annual revenue | Before 2030 |
| Robotics revolution | 1-2 years after core AGI |
On Verification and Uncertainty
Amodei admits higher uncertainty on tasks lacking objective verification—novel writing, scientific discovery, mission planning. For verifiable domains (coding, mathematics), confidence extends to "almost certain" timelines.
He acknowledges gaps may persist: models might automate all verifiable work while struggling with subjective judgment. Yet existing generalization from verified to unverified domains already demonstrates partial capability transfer.