
Advancing AI
Discover our end-to-end AI infrastructure products, solutions, and ecosystem.
Advancing AI from Cloud to Edge to Endpoints
AI is defining the next era of computing, and this is just the beginning. We see the benefits of AI every day—enabling medical research, curbing credit card fraud, reducing congestion in cities, or simply making life easier.
We believe the full potential of AI will be realized when the technology is pervasive and spans from the cloud to the edge to endpoints. AMD is helping drive this with a focus on three key areas.
Delivering a broad portfolio of high-performance and adaptive hardware and software solutions that make AI possible
Enabling an open, proven, and ready software strategy and co-innovating with partners across the open ecosystem
Right-sizing AI solutions to fit the use case and capabilities of each device, and simplifying complex workloads into compelling user experiences
Generative AI is already transforming the way we work, live, and play. Training generative AI models, from large language models to generative adversarial networks and more, can involve billions of parameters and requires massive compute capability.
AMD powers some of the world’s fastest supercomputers, including Lawrence Livermore National Laboratory’s El Capitan, EuroHPC’s LUMI, and Oak Ridge National Laboratory’s Frontier—the first to break the exascale barrier.1 AMD exascale-class compute technologies are uniquely well suited to deliver the processing power needed by even the most complex generative AI models.
With the opportunity of AI comes the challenge of keeping the technology focused on positive outcomes: namely, helping solve some of the world's most vexing issues. AMD is committed to working with industry to innovate and deploy AI in a responsible manner.
Discover how AI solutions powered by AMD are helping drive advanced research.
AMD Instinct is the accelerator of choice for some of the world’s fastest and greenest supercomputers, including Lawrence Livermore National Laboratory’s El Capitan system.1,2 See how this two-exaflop supercomputer will use AI to run first-of-its-kind simulations that advance scientific research.
AMD products are built on scalable, power-efficient, and adaptable architectures designed for workloads ranging from large-scale AI model training to real-time inferencing.
Developers and partners can use AMD software tools to optimize AI applications on AMD hardware. Today, the stack includes AMD ROCm™ for AMD Instinct accelerators and AMD Radeon graphics cards, AMD Vitis™ AI for adaptive accelerators, SoCs, and FPGAs, and the open-source AMD ZenDNN libraries for AMD EPYC processors.
In parallel, AMD is building towards a Unified AI Software Stack that will empower developers across our entire portfolio, enabling them to stay within familiar AI frameworks and target any AMD device.
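For example, ROCm builds of popular frameworks such as PyTorch let developers keep the code they already know. The short Python sketch below is illustrative only: it assumes a ROCm-enabled PyTorch installation on a system with an AMD Instinct accelerator or a supported Radeon graphics card, where AMD GPUs are exposed through the familiar torch.cuda API.

# Minimal sketch, assuming a ROCm build of PyTorch on a system with an AMD GPU.
# On ROCm, AMD GPUs are reported through the standard torch.cuda interface,
# so existing framework code runs without device-specific changes.
import torch

def pick_device() -> torch.device:
    # torch.cuda.is_available() returns True on ROCm when an AMD GPU is visible.
    return torch.device("cuda" if torch.cuda.is_available() else "cpu")

device = pick_device()
name = torch.cuda.get_device_name(0) if device.type == "cuda" else "CPU"
print(f"Running on: {name}")

# Any standard PyTorch model can be moved to the device as usual.
model = torch.nn.Linear(128, 10).to(device)
x = torch.randn(4, 128, device=device)
with torch.no_grad():
    logits = model(x)
print(logits.shape)  # torch.Size([4, 10])

Because the framework-level API is unchanged, the same script runs on CPUs, AMD GPUs, or other accelerators supported by the installed PyTorch build.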