AMD’s Radeon RX 7900 XTX really shines with the DeepSeek R1 AI model, outpacing NVIDIA’s GeForce RTX 4090 in inference tests.
### AMD Accelerates Support for DeepSeek’s R1 LLM Models, Delivering Top-Notch Performance
DeepSeek’s latest AI model has set the tech world abuzz, raising questions about the kind of computing muscle needed to run such a powerful tool. Thankfully, your average consumer can achieve impressive results with AMD’s “RDNA 3” architecture found in the Radeon RX 7900 XTX GPU. In a show of strength, Team Red has released benchmarks for DeepSeek R1 inference, clearly demonstrating how its top-tier RX 7000 series GPU outperforms NVIDIA’s equivalent in several scenarios.
— David McAfee (@McAfeeDavid_AMD) January 29, 2025
Using consumer GPUs for AI tasks has proven effective for many, largely because these GPUs offer a pretty good performance-to-cost ratio compared to specialized AI accelerators. Plus, when you run models on your own system, you keep your data private—a key concern with DeepSeek’s AI models. Luckily, AMD has rolled out a detailed guide on how to run DeepSeek R1 models on their GPUs. Here’s a breakdown of the process:
1. Start by ensuring your system is running the AMD Adrenalin 25.1.1 Optional driver or a later version.
2. Go to lmstudio.ai/ryzenai and download LM Studio version 0.3.8 or newer.
3. Install LM Studio and skip the onboarding process.
4. Once in the application, navigate to the discover tab.
5. Select a DeepSeek R1 distill. For beginners, the smaller Qwen 1.5B model is recommended for its speed; larger distills offer stronger reasoning, and all of them are highly capable.
6. On the right, ensure “Q4 K M” (i.e., Q4_K_M) quantization is selected, then click “Download.”
7. After downloading, return to the chat tab, select the DeepSeek R1 distill from the menu, and enable “manually select parameters.”
8. In GPU offload layers, drag the slider to the maximum setting.
9. Click “Model Load.”
10. Now, enjoy interacting with a reasoning model running entirely on your AMD hardware!
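To see why the smaller distills are the recommended starting point, a rough back-of-the-envelope VRAM estimate helps: Q4_K_M quantization stores weights at roughly 4.5 bits each on average (an approximation — the format mixes 4-bit and 6-bit blocks), so a model’s memory footprint scales with its parameter count. A quick sketch in Python, under that assumption:

```python
# Rough VRAM estimate for Q4_K_M-quantized models.
# Assumption: ~4.5 bits per weight on average; real-world usage adds
# KV cache and runtime overhead on top of the weights themselves.
BITS_PER_WEIGHT = 4.5

def est_vram_gb(params_billions: float) -> float:
    """Approximate weight storage in gigabytes for a quantized model."""
    bytes_total = params_billions * 1e9 * BITS_PER_WEIGHT / 8
    return bytes_total / 1e9

for name, size in [("Qwen 1.5B", 1.5), ("Llama 8B", 8),
                   ("Qwen 14B", 14), ("Qwen 32B", 32)]:
    print(f"DeepSeek R1 Distill {name}: ~{est_vram_gb(size):.1f} GB of weights")
```

On this estimate, even the 32B distill (~18 GB of weights) fits within the RX 7900 XTX’s 24 GB of VRAM, while the 1.5B model needs under 1 GB — which is why it loads and responds fastest.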
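Beyond the chat tab, LM Studio can also serve a loaded model through a local OpenAI-compatible HTTP server (started from its Developer tab), so you can query it from your own scripts. A minimal sketch, assuming LM Studio’s default endpoint of `http://localhost:1234/v1` and a hypothetical model identifier — check the exact name shown in your LM Studio instance:

```python
import json
import urllib.request

# Assumed defaults: LM Studio's local server listens on port 1234 and
# exposes an OpenAI-compatible /v1/chat/completions route.
LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_request(prompt: str,
                       model: str = "deepseek-r1-distill-qwen-1.5b") -> dict:
    """Build an OpenAI-style chat-completion payload for the local server."""
    return {
        "model": model,  # identifier as listed by LM Studio (assumption)
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def ask(prompt: str) -> str:
    """Send a prompt to the local LM Studio server and return the reply."""
    data = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        LMSTUDIO_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Example (requires LM Studio's server running with a model loaded):
#   print(ask("Why is the sky blue? Answer briefly."))
```

Because the server speaks the OpenAI API shape, most existing OpenAI client code can be pointed at it by swapping the base URL.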
In case you hit a snag with these steps, AMD’s got you covered with a YouTube tutorial offering a more in-depth walkthrough. Running DeepSeek’s LLMs on your local AMD setup keeps your data on your own machine, and as newer GPUs with dedicated AI engines arrive from both AMD and NVIDIA, we expect even bigger leaps in local inference performance.