News

This milestone marks the first-ever multi-node MLPerf inference result on AMD Instinct™ MI300X GPUs, achieving ... TPS in the server scenario and outperforming the previous best result of 82,749 TPS on NVIDIA H100 GPUs.
Advanced Micro Devices is well-positioned as open-source LLMs and tariffs reshape AI economics and infrastructure. See why ...
While it pales in comparison to the $5.5 billion in revenue Nvidia stands to lose as a result of the licensing requirements ...
AMD benefits from Nvidia chip scarcity and a solid AI foothold, even amid China export risks and a falling share price. See ...
A system with eight Nvidia H100 80 GB GPUs generates a comparable number of tokens per second to a machine with eight AMD Instinct MI300X 192 GB GPUs in the MLPerf 4.1 generative AI benchmark on ...
But while Nvidia remains an AI infrastructure titan, it's facing stiffer competition than ever from rival AMD ... The MI300X boasts 5.3 TB/s of memory bandwidth, versus 3.3 TB/s on the H100 ...
AMD has won several of Nvidia's top customers since launching its first AI GPU, the Instinct MI300 series ... the older CDNA 3-based chips like the MI300X. Simply put, they could perform as well ...