AMD has been making waves in the server CPU market with its EPYC processors, and its recent Advancing AI keynote highlighted just how critical these chips are when paired with AI accelerators like the Instinct MI300X. The message was that the right server CPU can make or break the performance of AI workloads, and AMD used the event to demonstrate its leadership in this space.

Historically, the CPU has played an understated role in AI, often overshadowed by the GPU. AMD is now emphasizing that a balanced pairing of CPU and GPU is crucial for optimizing performance in AI tasks. In a direct comparison between AMD's latest EPYC 9575F and Intel's 5th Gen Xeon 8592+, both paired with the MI300X AI accelerator, AMD showcased substantial performance gains from its latest server CPUs. The comparison was fair on paper, with both CPUs offering 64 cores and 128 threads, and the results were telling.
The benchmarks AMD presented showed an average 6% performance uplift with the EPYC 9575F across a variety of tests, including the Llama 3.1 8B AI model. As model complexity grew, that gap widened to 17%, showing that CPU selection isn't a minor detail but a significant factor. The advantage was especially evident in inference workloads, where a powerful CPU has a noticeable effect on the overall speed and efficiency of AI tasks.
In an era where AI accelerators steal the spotlight, AMD is showcasing why a robust CPU platform is just as important. Its growing market share suggests that businesses are taking notice and investing in EPYC-based solutions to power the next generation of AI applications.
3 comments
Not gonna lie, the new AMD chips look better than Intel’s at this point, especially in AI workloads
I wonder if optimizations played a role in those benchmarks 🤔
Yeah, but will it be enough to push AMD ahead of Intel in the long run? Hard to say