OpenAI Slashes Compute Spending to $600 Billion: The Dawn of Inference Economics
OpenAI cut its 2030 compute target by 57%, from $1.4 trillion to $600 billion, in one of the largest infrastructure forecast corrections in modern technological history. Revenue tripled to $13.1 billion, yet inference costs quadrupled and margins compressed from 40% to 33%. The AI industry has officially entered the inference era, where the economics of serving models at scale, not training them, defines market value.
OpenAI Key Financial Metrics: 2024–2030 Trajectory
- 2030 compute target: 57% reduction from $1.4T [1]
- 2025 revenue: surpassed the $10B target [1]
- Adjusted gross margin: down from 40% in 2024 [3]
- 2030 revenue mix: consumer and enterprise, split 50/50 [1]
The $800 Billion Revision
In February 2026, OpenAI informed investors that it had slashed its cumulative compute spending target through 2030 to approximately $600 billion, a staggering 57% reduction from the $1.4 trillion, 30-gigawatt infrastructure ambition CEO Sam Altman had circulated just months earlier. [1]
This $800 billion downward revision is arguably one of the most significant infrastructure forecast corrections in modern technological history. It does not represent a failure of the technology, but rather an industry-wide capitulation to the uncompromising physical and financial laws of “inference economics.” [2]
Training vs. Inference: The Economics Shift
In the lifecycle of artificial intelligence, model training requires immense, episodic capital expenditure. However, model inference—the actual computational process of running the AI to generate answers, analyze documents, and execute agentic workflows for end-users—introduces a continuous, compounding operating expense. [4]
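The capex-versus-opex contrast can be made concrete with a toy model. Every figure below is an illustrative assumption (the training cost and per-user serving cost are placeholders, not reported OpenAI numbers); only the user count comes from the article:

```python
# Toy cost model: one-time training spend vs. recurring inference spend.
# All cost figures are illustrative assumptions, not reported numbers.
training_cost_musd = 1_000          # one-time training run, $M (assumed)
inference_per_user_month = 0.50     # serving cost, $/user/month (assumed)
weekly_active_users = 900_000_000   # user scale cited in the article

# Monthly inference bill in $M.
monthly_inference_musd = weekly_active_users * inference_per_user_month / 1e6

# At this scale, the recurring inference bill overtakes the one-time
# training cost within a few months.
months_to_overtake = training_cost_musd / monthly_inference_musd
print(f"Monthly inference: ${monthly_inference_musd:,.0f}M")
print(f"Inference exceeds the training run after {months_to_overtake:.1f} months")
```

Whatever the exact inputs, the structure is the point: training is a fixed cost amortized once, while inference compounds linearly with the user base.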
As mass-market adoption scales, inference rapidly overtakes training as the dominant cost driver. OpenAI’s internal metrics illustrate this crisis of success: in 2025, the company generated $13.1 billion in revenue, significantly surpassing its $10 billion projection, while holding overall spending to $8 billion, under its $9 billion target. [1][3]
However, the underlying unit economics revealed a structural vulnerability: the expenses associated with running its AI models (inference costs) increased fourfold in 2025. Despite tripling revenue, the exploding costs of inference caused OpenAI’s adjusted gross margins to plummet from 40% in 2024 to just 33% in 2025. [3]
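The margin math can be reproduced directly from the reported figures. Note that the implied cost of revenue is a back-of-the-envelope inference from the stated margins, not a disclosed cost breakdown:

```python
# Back-of-the-envelope margin check using figures from the article.
rev_2025 = 13.1                  # $B, reported
rev_2024 = rev_2025 / 3          # revenue "tripled", so roughly $4.4B

margin_2024, margin_2025 = 0.40, 0.33
cogs_2024 = rev_2024 * (1 - margin_2024)   # implied cost of revenue
cogs_2025 = rev_2025 * (1 - margin_2025)

# Costs grew faster than revenue, which is exactly what compresses margin.
print(f"2024: revenue ${rev_2024:.1f}B, implied COGS ${cogs_2024:.2f}B")
print(f"2025: revenue ${rev_2025:.1f}B, implied COGS ${cogs_2025:.2f}B")
print(f"Revenue grew 3.0x; implied COGS grew {cogs_2025 / cogs_2024:.2f}x")
```

In other words, a 3x revenue year still loses margin when the implied cost of revenue grows roughly 3.35x over the same period.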
OpenAI Financial Overview: 2024 vs 2025 vs 2030 Target
| Financial Metric | 2024 Actual | 2025 Actual | 2030 Target |
|---|---|---|---|
| Total Annual Revenue | N/A | $13.1 Billion | >$280.0 Billion |
| Annual Operating Spend | N/A | $8.0 Billion | N/A |
| Adjusted Gross Margin | 40% | 33% | 60%+ (target) |
| Cumulative Compute Target | N/A | N/A | $600.0 Billion |
| Cumulative Cash Burn | N/A | N/A | $665.0 Billion |
The 900-Million-User Cost Problem
The sheer mathematics of serving hundreds of millions of weekly active users mandates a total shift in corporate focus. The ChatGPT user base alone rebounded to over 900 million, making continuous inference the dominant operational cost. [5]
Success is no longer defined strictly by headline model capabilities or parameter counts, but by rigorous unit economics: cost per inference, token generation speed, power efficiency, and total system utilization. [4]
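These unit-economics metrics compose into a single serving-cost figure. A minimal sketch follows; every input is a hypothetical placeholder, not a real vendor price or model throughput:

```python
# Serving cost per million tokens from assumed GPU price, throughput,
# and utilization. All three inputs are hypothetical placeholders.
gpu_cost_per_hour = 2.50       # $/GPU-hour (assumed)
tokens_per_second = 1_000      # aggregate generation throughput (assumed)
utilization = 0.60             # share of the hour spent serving traffic

tokens_per_hour = tokens_per_second * 3600 * utilization
cost_per_m_tokens = gpu_cost_per_hour / tokens_per_hour * 1e6
print(f"~${cost_per_m_tokens:.2f} per million tokens served")
```

Because unit cost scales inversely with throughput and utilization, efficiency gains translate directly into margin, which is why these metrics, rather than parameter counts, now define success.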
Under these updated forecasts, OpenAI expects cumulative cash burn to reach an astonishing $665 billion through 2030, with the company not anticipating positive cash flow until the end of the decade. [1]
Fundraising at Scale: The $100 Billion Round
To bridge this massive cash flow gap until profitability, OpenAI is finalizing a historic fundraising round seeking over $100 billion, which would value the firm at $830 billion. [1]
A critical component of this round is a near-finalized $30 billion investment from Nvidia, reflecting the symbiotic reliance between the primary model developer and the primary hardware supplier. [1]
Despite the margin compression, OpenAI remains bullish on its top-line potential, projecting total revenue to exceed $280 billion by 2030, split evenly between consumer subscriptions and enterprise software sales. The company is also laying the groundwork for a potential initial public offering (IPO) in the fourth quarter of 2026. [1]
Key Takeaways
- 57% compute target reduction: OpenAI slashed its 2030 compute spending from $1.4T to $600B—an $800 billion correction driven by inference cost realities.
- Margins compressed despite revenue growth: Revenue tripled to $13.1B, but inference costs quadrupled, driving adjusted gross margins from 40% to 33%.
- Inference is the dominant cost driver: With 900M+ ChatGPT users, running models at scale now far exceeds the cost of training them.
- $665B cumulative cash burn projected: OpenAI doesn’t expect positive cash flow until the end of the decade.
- $100B+ fundraising round at $830B valuation: Includes $30B from Nvidia, reflecting deep hardware-software symbiosis.
- IPO planned for Q4 2026: Consumer and enterprise revenue targeted at $280B+ by 2030, split 50/50.
- Industry paradigm shift: The AI sector has transitioned from the “training era” to the “inference era,” where operational efficiency trumps model scale.
References
- [1] “OpenAI Projects Over $280 Billion Revenue by 2030 Amid Adjusted Compute Spending,” MLQ.ai, February 2026. Available: https://mlq.ai/news/openai-projects-over-280-billion-revenue-by-2030-amid-adjusted-compute-spending/
- [2] “OpenAI Slashes Compute Spending Target to $600B, Down From $1.4T,” TechBuzz.ai, February 2026. Available: https://www.techbuzz.ai/articles/openai-slashes-compute-spending-target-to-600b-down-from-1-4t
- [3] “OpenAI expects compute spend of around $600 billion through 2030,” Indian Express, February 2026. Available: https://indianexpress.com/article/technology/artificial-intelligence/openai-expects-compute-spend-of-around-600-billion-through-2030-10545342/
- [4] “Investor says AI’s next phase will be shaped by power and inference economics,” Investing.com, February 2026. Available: https://www.investing.com/news/stock-market-news/investor-says-ais-next-phase-will-be-shaped-by-power-and-inference-economics-4476646
- [5] “As concerns spiral, Sam Altman reduces compute target by more than half,” Times of India, February 2026. Available: https://timesofindia.indiatimes.com/technology/tech-news/as-concerns-spiral-sam-altman-reduces-compute-target-by-more-than-half-tells-investors-it-is-/articleshow/128644845.cms
- [6] “OpenAI Targets $600 Billion Compute Spend by 2030,” Intellectia.ai, February 2026. Available: https://intellectia.ai/news/stock/openai-targets-600-billion-compute-spend-by-2030
- [7] “OpenAI revises projections upward with $112 billion extra cash burn by 2030,” MLQ.ai, February 2026. Available: https://mlq.ai/news/openai-revises-projections-upward-with-112-billion-extra-cash-burn-by-2030/