ZLUDA was originally designed to run creative professional CUDA-based applications on Intel and later AMD GPUs, but the upcoming iteration of ZLUDA shifts its focus to AI and machine learning workloads. The emphasis is also no longer limited to Intel or AMD: the project now aims for multi-vendor support, making ZLUDA applicable across different GPU architectures. For the time being, however, most development effort is concentrated on AMD GPUs, particularly RDNA1 and newer architectures. Support is being built on AMD's ROCm 6.1+ compute stack, laying the foundation for broader, multi-architecture compatibility in the future.

Andrzej Janik is currently working to make AI/ML frameworks such as PyTorch, TensorFlow, and Llama.cpp run seamlessly via CUDA on non-Nvidia GPUs through his translation layer, according to Phoronix, which spoke with the developer. Janik estimates it will take about a year to bring the new ZLUDA code to a stable state where it can reliably handle AI/ML workloads across multiple GPUs. Contributions from the open-source community will be welcomed as the project evolves, so ZLUDA looks set to remain open source, at least for now.



Anton Shilov is a contributing writer at Tom’s Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers and from modern process technologies and latest fab tools to high-tech industry trends.