According to Sharon Zhou, chief executive of LaminiAI, AMD has begun to ship its Instinct MI300X GPUs for artificial intelligence (AI) and high-performance computing (HPC) applications. As the ‘LaminiAI’ name implies, the company is set to use AMD’s Instinct MI300X accelerators to run large language models (LLMs) for enterprises.

While AMD has been shipping its Instinct MI300-series products to its supercomputer customers for a while now and expects the series to become its fastest product ever to reach $1 billion in sales, it looks like AMD has also initiated shipments of its Instinct MI300X GPUs. LaminiAI has partnered with AMD for some time, so it certainly has priority access to the company’s hardware. Nonetheless, this is an important milestone for AMD, as this is the first time we have learned about volume shipments of the MI300X. Indeed, the post indicated that LaminiAI had received multiple Instinct MI300X-based machines with eight accelerators apiece (8-way).


“The first AMD MI300X live in production,” Zhou wrote. “Like freshly baked bread, 8x MI300X is online. If you are building on open LLMs and you are blocked on compute, let me know. Everyone should have access to this wizard technology called LLMs. That is to say, the next batch of LaminiAI LLM pods are here.”

A screenshot published by Zhou shows an 8-way AMD Instinct MI300X machine in operation. Meanwhile, the power consumption listed in the system screenshot suggests the GPUs were probably idling when it was taken; they certainly were not running demanding compute workloads.


AMD’s Instinct MI300X is a sibling of the company’s Instinct MI300A, the industry’s first data center-grade accelerated processing unit, which combines general-purpose x86 CPU cores with CDNA 3-based highly parallel compute processors for AI and HPC workloads. Unlike the Instinct MI300A, the Instinct MI300X lacks x86 CPU cores but packs more CDNA 3 chiplets (for 304 compute units in total, significantly more than the MI300A’s 228 CUs) and therefore offers higher compute performance. Meanwhile, the Instinct MI300X carries 192 GB of HBM3 memory with a peak bandwidth of 5.3 TB/s.

Based on performance numbers demonstrated by AMD, the Instinct MI300X outperforms Nvidia’s H100 80GB, which is already available and massively deployed by hyperscalers like Google, Meta (Facebook), and Microsoft. The Instinct MI300X will probably also be a formidable competitor to Nvidia’s H200 141GB GPU, which has yet to hit the market.


According to previous reports, Meta and Microsoft are procuring AMD’s Instinct MI300-series products in large volumes. Even so, LaminiAI is the first company to confirm using Instinct MI300X accelerators in production.



Anton Shilov is a contributing writer at Tom’s Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.
