AMD Unveils Venice Server CPU and MI400 Accelerator at CES 2026
At CES 2026, AMD showcased its Venice server CPU and MI400 accelerator, revealing new chiplet-based architectures that push silicon area and core counts to new heights. Venice employs a dual IO-die layout and advanced packaging reminiscent of Strix Halo, while the MI400 pairs 12 HBM4 stacks with extensive compute chiplets. These designs position AMD to power next-generation data-center workloads and AMD's Helios AI Rack.
AMD unveiled two cornerstone silicon products at the 2026 Consumer Electronics Show (CES): the Venice series of server CPUs and the MI400 family of data-center accelerators. The company first outlined technical specifications for both at its June 2025 "Advancing AI" event, but this was the first public showing of the actual silicon.
**Venice: A Chiplet-Based Server CPU**
Venice departs from the chiplet packaging AMD has used since EPYC Rome, moving to a more sophisticated approach that reduces reliance on the organic substrate previously used to interconnect the chiplets. The new packaging resembles that of Strix Halo (a related approach appeared earlier on the MI250X), with 165 mm² of silicon per core chiplet (CCD). Each CCD contains 32 Zen 6 cores with 4 MB of L3 cache per core, yielding 128 MB of L3 per CCD and a total of up to 256 cores across eight CCDs.
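To make the math behind those figures explicit, here is a minimal derived tally. Only the per-core cache, per-CCD core count, and maximum CCD count come from AMD's disclosures; the chip-level totals simply follow from them:

```python
# Venice cache/core totals, derived from the per-CCD figures above.
cores_per_ccd = 32      # Zen 6 cores per CCD (AMD figure)
l3_per_core_mb = 4      # MB of L3 per core (AMD figure)
max_ccds = 8            # CCDs on a fully populated package

l3_per_ccd_mb = cores_per_ccd * l3_per_core_mb  # 32 * 4 = 128 MB
total_cores = max_ccds * cores_per_ccd          # 8 * 32 = 256 cores
total_l3_mb = max_ccds * l3_per_ccd_mb          # 8 * 128 = 1024 MB (1 GB)

print(total_cores, l3_per_ccd_mb, total_l3_mb)  # 256 128 1024
```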
Unlike earlier EPYC chips, Venice incorporates two IO dies instead of one. Each IO die measures roughly 353 mm², for a combined area of just over 700 mm², up from the roughly 400 mm² dedicated to IO on previous EPYC processors. The IO dies sit under the CCDs, connected by a die-to-die interface that minimizes latency and improves power delivery. Surrounding the IO dies are eight ancillary dies, four on each side of the package, which likely serve as structural silicon or deep-trench capacitors to keep voltage stable under high-core-count loads.
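A quick back-of-the-envelope comparison of the two IO silicon budgets, using the estimated die areas quoted above:

```python
# Venice IO silicon vs. prior EPYC, using the die areas quoted above.
venice_io_die_mm2 = 353    # each of the two Venice IO dies (estimate)
prior_epyc_io_mm2 = 400    # single-IO-die budget on earlier EPYC parts

venice_io_total_mm2 = 2 * venice_io_die_mm2  # 706 mm² combined
growth_pct = 100 * (venice_io_total_mm2 - prior_epyc_io_mm2) / prior_epyc_io_mm2

print(venice_io_total_mm2, growth_pct)  # 706 76.5 -> ~76% more IO silicon
```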
**MI400 Accelerator: Expanding Compute and Memory**
The MI400 accelerator builds on AMD's earlier MI350 architecture but expands in several dimensions. It is a large package that integrates 12 HBM4 memory stacks alongside two base compute dies fabricated on 2 nm and 3 nm nodes. Two additional dies above and below the base dies handle off-package I/O, including PCIe and UALink connections.
A rough die-area calculation puts each base die at about 747 mm² and each off-package IO die at roughly 220 mm². The compute chiplets are divided across the two base dies; while AMD has not released exact figures, estimates place each compute die at 140–160 mm², with no single compute die expected to exceed 180 mm².
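Summing those numbers gives a rough floor for the MI400's logic silicon. This is a sketch built entirely on the estimates above rather than confirmed figures, and it excludes the HBM4 stacks and the compute chiplets stacked on the base dies:

```python
# Rough MI400 logic-silicon tally from the area estimates above
# (HBM4 stacks and stacked compute chiplets deliberately excluded).
base_die_mm2 = 747   # each of the two base dies (estimate)
io_die_mm2 = 220     # each of the two off-package IO dies (estimate)

logic_silicon_mm2 = 2 * base_die_mm2 + 2 * io_die_mm2  # 1494 + 440
print(logic_silicon_mm2)  # 1934 mm² of base + IO silicon
```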
**Broader MI400 Family and Venice-X**
In addition to the newly revealed MI400 models, AMD announced a third family member, the MI440X, a direct replacement for the MI300/350 series optimized for 8-way UBB chassis. The MI455X and Venice processors will feed the Helios AI Rack, a next-generation platform poised to deliver both high core counts and advanced AI inference.
Venice-X is also on the horizon, expected to be a V-Cache variant of Venice. If AMD keeps the roughly 3x per-CCD cache uplift of its current V-Cache parts, each 128 MB Venice CCD could grow to 384 MB of L3, translating to roughly 3 GB of L3 cache across a 256-core chip, as the sketch below works out.
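That projection works out as follows. The 3x multiplier is an assumption extrapolated from existing V-Cache products (for example, a 32 MB Zen 5 CCD grows to 96 MB with stacked cache), not anything AMD has confirmed for Venice-X:

```python
# Venice-X speculation: current V-Cache parts triple per-CCD L3
# (e.g. a 32 MB Zen 5 CCD grows to 96 MB with stacked cache).
base_l3_per_ccd_mb = 128   # Venice CCD L3 (from the figures above)
vcache_multiplier = 3      # assumption: the same 3x ratio carries over
max_ccds = 8

l3_per_ccd_mb = base_l3_per_ccd_mb * vcache_multiplier  # 384 MB per CCD
total_l3_gb = max_ccds * l3_per_ccd_mb / 1024           # 3.0 GB chip-wide
print(l3_per_ccd_mb, total_l3_gb)                       # 384 3.0
```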
**Looking Forward**
Both Venice and the MI400 series are slated to launch later this year, and the industry is eagerly awaiting deeper insight into their internal architectures. AMD's move toward more integrated, package-level solutions marks a significant step in delivering higher performance while maintaining power efficiency.
For more in-depth coverage and community discussion, readers are encouraged to support the publication on Patreon or PayPal, and to join the official Discord channel to engage with fellow enthusiasts.