TPU Interviews AMD Vice President: Ryzen AI, X3D, Zen 4 Future Strategy and More

Source: Tech Power Up, added 17th Jul 2023


Introduction

AMD recently launched its Ryzen 7040HS line of mobile processors for thin-and-light notebooks, powered by Zen 4, RDNA 3 graphics, and Ryzen AI. The company also released the Ryzen PRO desktop and mobile Zen 4 processors for the commercial PC market. Over late 2022 and 2023, the company has been gradually expanding its desktop processor lineup, which now stands 15 SKUs strong, and it is far from done with Ryzen client processor announcements for 2023. We caught up with David McAfee, Corporate Vice President and General Manager, Client Channel Business at AMD, who has a bird's-eye view of where the company's client processor business sits and where it's headed. He is the executive responsible for AMD's Ryzen processors, both desktop and mobile.

In our interview with David, we dive into the nuts and bolts of what the Ryzen processor family looks like across market segments, the buzz around Ryzen AI, AMD's much-talked-about client-local AI hardware accelerator, and what drove AMD to some of the product-stack decisions with its Ryzen 7000 processor family. This is not a Zen 4 architecture deep-dive interview; we've already done that with Robert Hallock. This interview should shed light on some of the higher-level decisions AMD took with its Ryzen 7000 processor family across market segments.

Ryzen AI

Could you talk a bit about the vision for AI in Ryzen CPUs?


AI has been a major focus area at AMD for a couple of years now, and the timing of AMD and Xilinx coming together was very fortunate. When AMD and ATI came together in 2006, the first product that brought those two technologies together was released in 2012. It took a long time to make that happen. Integrating the Xilinx AI engine in a Ryzen APU happened just over a year after the AMD and Xilinx acquisition closed. This speaks to our plans for AI and to the importance placed on AI becoming a big part of our processors going forward. AMD's vision for AI spans from the Edge to the Cloud and everything in between; AI will enhance a lot of the things that we do. When we think about that on the client side, I truly believe that whether you're a gamer, a creator, or both, or if you primarily use your PC for productivity, AI will touch all of those different workloads in some way, shape or form in increasing ways over the next couple of years.

That's everything from generative AI in gaming, such as more intelligent NPCs, to the creator space, using AI to automate workflows and lower the barrier to entry for very complex development packages, be it modelling, Adobe suites or programming, even involving generative AI in those spaces. In the productivity space, it's what Microsoft talked about at Build the other week: the idea of copilot-type functions that automate much of the work that you do day-to-day, integrating AI in the way that you personally interact with your computer. To steal a phrase from Panos Panay [Chief Product Officer, Microsoft] when he talked about AI—Microsoft are looking at AI as something that will be more transformative to the PC experience than the mouse was. I think that's a pretty bold statement, but it also speaks to how over time we will see AI touch all these workflows. AI has amazing potential to positively influence so many things that we do with our PCs today, changing the way we interact with the digital space and creating a sort of individually tailored assistant and enhanced experience that today simply isn't possible.

Some recent slides released by AMD that talk about AI

AMD's AI hardware approach seems a bit fragmented. On Ryzen 7040 "Phoenix," you have Ryzen AI and a comprehensive hardware feature-set. On Radeon RDNA 3 GPUs, you have AI Accelerators (scalar matrix-math accelerators), and on Zen 4 there's support for AVX-512, bfloat16, and VNNI. Do you see a way to bring those together to provide a more unified interface for ISVs?


Yes, there is a way. What I can say is that different models prefer different types of precision in the way that they execute. If you look at GPU compute, for years FP16 and FP32 have been the most efficient operator types, the ones that GPUs love; that's where both our GPUs and NVIDIA's excel at these types of calculations. On the CPU side, yes, VNNI, bfloat16, all of those instructions have been added to the x86 instruction set. When I look at Ryzen AI, or XDNA, I think the important thing about that engine is that it's really tuned for low-precision integer operations, so INT8, INT16. Some people are talking about INT4 even. The Ryzen AI engine doesn't necessarily bring new instruction sets or new operator types into that model; it's just highly efficient at multiply-accumulate operations, in the same way that you think about the layers in a neural network, because it's an engine that's completely built for that. I think that all these different types of execution engines complement each other in some ways.
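As a minimal sketch of what that low-precision integer work means in practice (illustrative NumPy only, not AMD code or the XDNA programming model): FP32 weights and activations are quantized to INT8, the multiply-accumulate runs on integers with a wide accumulator, and the result is rescaled to floating point at the end of the layer.

```python
# Illustrative sketch: symmetric INT8 quantization plus an integer
# multiply-accumulate with an INT32 accumulator, the kind of operation
# a low-precision AI engine is built around.
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor quantization of FP32 values to INT8."""
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

# A toy "layer": FP32 weights and an FP32 activation vector
w = np.random.randn(64, 64).astype(np.float32)
a = np.random.randn(64).astype(np.float32)

qw, sw = quantize_int8(w)
qa, sa = quantize_int8(a)

# Integer multiply-accumulate, then rescale back to FP32 at the layer boundary
acc = qw.astype(np.int32) @ qa.astype(np.int32)
y = acc.astype(np.float32) * (sw * sa)

# Quantization error versus the full-precision result
print("max abs error:", np.max(np.abs(y - w @ a)))
```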

Ryzen AI, and the reason that it's in a notebook, is as much about throughput-per-watt and performance-per-watt as it is about capability. The truth is that a lot of these engines and models will just as easily run on the CPU or GPU when quantized in the right way for those instruction types. What you're seeing today are the early adopters of AI, for things like processing video feeds: eye-gaze correction, background segmentation and things like that. Those are interesting, but they are periodic uses of AI, and I think the ultimate vision of where AI goes is that it becomes something that's maybe not constantly running, but regularly running as a background task in your system. Having a highly energy-optimized engine in your SoC to be able to do that without killing your battery life or generating excessive heat is very important. I think the reality is that while a CPU can do all those things, a CPU is not optimized for that type of work. To me the analogy is: you could take a VP9 video stream and decode it on the CPU, and it'll absolutely destroy your battery life. If you do that on a video playback engine that's totally optimized for it, you get hours and hours of battery life playing back videos. I think that's a reasonably good analogy for thinking about what the Ryzen AI engine is and what it will do in terms of enabling more continuous operation of AI as part of the application and OS experience.
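As a rough illustration of "the same model can run on the CPU or the dedicated engine," here is a minimal sketch using ONNX Runtime execution providers. The model file is hypothetical, and the Vitis AI provider name is an assumption about how a given Ryzen AI software install exposes the engine; the point is the pattern of preferring the dedicated engine and falling back to the CPU.

```python
# Sketch: prefer a dedicated AI engine, fall back to the CPU.
# Provider name and model path are illustrative assumptions.
import onnxruntime as ort

model_path = "background_segmentation_int8.onnx"  # hypothetical quantized model

available = ort.get_available_providers()
preferred = [p for p in ("VitisAIExecutionProvider", "CPUExecutionProvider")
             if p in available]

session = ort.InferenceSession(model_path, providers=preferred)
print("Running on:", session.get_providers()[0])
```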

AMD’s XDNA AI Architecture

AI’s Value Proposition

So the main selling point is energy efficiency rather than the ability to run a certain workload, which could run on the CPU as well?


Right, and I think it's also true that today's AI engines are sized in a particular way; they are not necessarily sized so that you could take a billion-parameter large language model and run it on that engine in your notebook. You could on the CPU, but even that would be pretty taxing, and I think there will be sizes of models that just don't fit within the footprint of what that AI engine can do today. So as we think about the future, it really is about understanding the companies, OS providers and software providers who will have the biggest impact on AI experiences over the next several years: where their vision is, where they are going, what they want to implement, and what it takes in hardware to turn that vision into reality. Both in what we're talking about today and in our future roadmap, that's what you'll see from us: a progression that's very much tied to the way some of the industry leaders are planning on adopting, implementing and deploying AI.
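To put rough numbers on that footprint argument, here is a back-of-the-envelope calculation (weights only, ignoring activations, KV caches and runtime overhead) of how much memory a one-billion-parameter model needs at different precisions:

```python
# Weight memory for a 1B-parameter model at common precisions (weights only).
params = 1_000_000_000
for name, bytes_per_param in [("FP32", 4), ("FP16/BF16", 2), ("INT8", 1), ("INT4", 0.5)]:
    gib = params * bytes_per_param / 2**30
    print(f"{name:9s} ~{gib:.1f} GiB of weights")
# FP32 ~3.7 GiB, FP16/BF16 ~1.9 GiB, INT8 ~0.9 GiB, INT4 ~0.5 GiB
```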

AMD Xilinx Versal SoC

Wouldn’t it make sense to put an FPGA in a CPU then to be more future-proof?


Honestly, that's something we've looked at and considered. FPGAs have their own issues and limitations; I guess when it comes to capability per square millimeter of die area, FPGAs are less efficient than a structure like the Ryzen AI engine. I think the reality is that in a consumer device where every dollar matters, that's a very hard investment to make in a significant enough way that you could implement enough capability to satisfy what a flexible engine might need to do in a system.

Read the full article at Tech Power Up

