# Tesla AI5 Chip: Comprehensive Summary
## Overview
On April 15, 2026, Tesla completed the tape-out of its next-generation AI5 processor, locking the final design and sending it to U.S.-based foundries for fabrication. The chip marks a strategic pivot: rather than incremental automotive gains, AI5 is built for Optimus humanoid robots and supercomputer-scale training clusters, while Tesla maintains that current AI4 hardware is sufficient for Full Self-Driving (FSD) in vehicles.
## Key Quote
> “Congratulations to the Tesla AI chip design team on completing the tape-out of the AI5 chip! AI6, Dojo3, and other exciting chips are also in development.” — Elon Musk, via X
## Tape-Out Significance
Tape-out is the semiconductor industry’s “point of no return”—the moment a design is finalized, sent to a photomask house, and transferred to a foundry for physical fabrication. Reaching this milestone signals that Tesla’s in-house silicon program has moved from concept to manufacturable reality.
## U.S. Manufacturing & Supply Chain
| Detail | Specification |
|---|---|
| Dual-Sourced Production | TSMC (Arizona) + Samsung (Taylor, Texas) |
| Samsung Contract | Reported $16.5 billion eight-year agreement (signed July 2025) |
| Process Note | Same chip design, but physical implementation differs slightly per foundry |
| In-House Future Fab | Terafab (Austin, Texas) for higher future volumes |
| 2026 Capex | Roughly $20 billion allocated for Terafab and non-vehicle projects (Cybercab, Optimus) |
## AI5 Performance Specifications
All figures reflect Tesla’s claims and Musk’s disclosures; independent verification awaits engineering samples.
- Compute: ~8x AI4 in a single chip
- Memory Capacity: ~9x AI4
- Bandwidth: ~5x previous generation
- On-Device Memory: Up to 192GB LPDDR5X
- Single AI5: Roughly equivalent to an NVIDIA H100 (Hopper-class) for Tesla’s inference workloads
- Dual AI5: Roughly equivalent to NVIDIA Blackwell, but at a fraction of the cost and power
- Power Efficiency: ~3x more efficient than NVIDIA Blackwell
- Cost: Under 10% of NVIDIA Blackwell
- Relative to Dual AI4: One AI5 chip delivers ~5x the useful compute
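Taken at face value, the claimed multipliers can be tabulated against a normalized AI4 baseline. The sketch below only does that bookkeeping; all input numbers are Tesla's claims, not measurements, and the "scaling efficiency" figure is our inference from combining two of those claims, not a stated Tesla number.

```python
# Sketch: Tesla's claimed AI5 multipliers, normalized to AI4 = 1.
# All input figures are Tesla's public claims, not independent measurements.

AI5_VS_AI4 = {
    "compute": 8.0,    # ~8x a single AI4
    "memory": 9.0,     # ~9x capacity
    "bandwidth": 5.0,  # ~5x previous generation
}

# Claim: one AI5 delivers ~5x the *useful* compute of a dual-AI4 setup.
ai5_vs_dual_ai4_useful = 5.0

# Inference (ours, not Tesla's): if one AI5 is 8 AI4-units of raw compute,
# the dual-AI4 setup must deliver 8 / 5 = 1.6 AI4-units of useful compute,
# implying ~80% scaling efficiency across the two AI4 chips.
dual_ai4_useful = AI5_VS_AI4["compute"] / ai5_vs_dual_ai4_useful
scaling_efficiency = dual_ai4_useful / 2.0

print(f"Dual-AI4 useful compute: {dual_ai4_useful:.1f} AI4-units")
print(f"Implied dual-chip scaling efficiency: {scaling_efficiency:.0%}")
```

Note that the raw-compute claim (8x) and the useful-compute claim (5x vs. dual AI4) are only mutually consistent if two AI4 chips scale at well under 2x, which is plausible but unconfirmed.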
## Strategic Shift: Cars vs. Next-Gen Workloads
Tesla is deliberately not forcing a fleet-wide retrofit. Musk clarified:
> AI4 is sufficient to achieve Full Self-Driving safety levels “very far above human.”
This protects current vehicle resale value and avoids massive hardware swap costs. Instead, AI5 is optimized for two higher-margin domains:
1. Optimus Humanoid Robots
- Requires efficient, low-latency, real-time inference on a mobile, power-constrained platform
- Must handle dexterous manipulation, balance, environmental awareness, speech/gesture interpretation, and unstructured environments without cloud dependency
- Current FSD runs on a ~1 billion parameter model; FSD v15 expected at ~10 billion parameters, with Optimus models scaling similarly
2. Supercomputer Clusters
- AI5 chips will be packed onto server boards in configurations of 5 to 12 chips per board
- Forms the training backbone for FSD v15 and future Optimus models
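A quick back-of-envelope check ties the model sizes above to the stated 192 GB of on-device memory and the 5-to-12-chip board configurations. The precision choices (FP16, INT8) are our assumption for illustration; activation memory, KV caches, and runtime overhead are ignored.

```python
# Back-of-envelope: do the cited model sizes fit in AI5's stated 192 GB?
# Bytes-per-parameter figures (FP16 = 2, INT8 = 1) are our assumption;
# activations and runtime overhead are ignored.

GB = 1e9
AI5_MEMORY_GB = 192  # stated on-device LPDDR5X capacity

def weights_gb(params: float, bytes_per_param: int) -> float:
    """Raw weight storage for a model of `params` parameters, in GB."""
    return params * bytes_per_param / GB

for params, label in [(1e9, "current FSD (~1B)"), (10e9, "FSD v15 (~10B)")]:
    for bpp, prec in [(2, "FP16"), (1, "INT8")]:
        gb = weights_gb(params, bpp)
        fits = "fits" if gb < AI5_MEMORY_GB else "does NOT fit"
        print(f"{label} @ {prec}: {gb:.0f} GB of weights ({fits} in {AI5_MEMORY_GB} GB)")

# Server-board aggregate memory, per the 5-to-12-chip configurations above:
for chips in (5, 12):
    print(f"{chips}-chip board: {chips * AI5_MEMORY_GB / 1000:.2f} TB aggregate memory")
```

Even a 10B-parameter model at FP16 (~20 GB of weights) sits comfortably inside 192 GB, which is consistent with the claim that AI5 targets larger next-generation models rather than today's ~1B-parameter FSD network.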
## Future Roadmap: AI6 & Dojo3
Tesla is not pausing after AI5:
| Chip | Status | Timeline | Performance/Notes |
|---|---|---|---|
| AI5 | Taped out | Engineering samples late 2026; volume 2027 | — |
| AI6 | In development | Tape-out December 2026; mass production 2027 | ~2x compute of AI5; reportedly exclusively with Samsung |
| Dojo3 | In progress | TBD | Next-gen training infrastructure |
- R&D Cadence: Musk states Tesla has shortened its chip development cycle to approximately 9 months, outpacing NVIDIA and AMD’s roughly yearly cadence.
## Production Timeline
- Late 2026: Small-batch engineering samples (potential use in early Optimus testing or dev vehicles)
- Mid-to-Late 2027: Realistic window for volume production reaching customer-facing products at scale