From LPUs and GPUs to CPUs and switches, everything you need to know about Nvidia's latest kit
GTC DEEP DIVE At Nvidia’s GTC conference this week, CEO Jensen Huang finally addressed a $20 billion question he’s dodged for months: Why spend so much to license AI chip startup Groq’s tech and hire away its engineers rather than build it themselves?
As we've said before, if Nvidia wanted to build an SRAM-heavy inference accelerator, it didn't need to buy Groq to do it. The company’s newly announced Groq 3 LPX racks, which pack 256 LP30 language processing units (LPUs) into a single system, show time-to-market was the reason Nvidia bought rather than built.
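For readers wondering why an SRAM-heavy design is attractive for inference in the first place: single-stream token generation is typically memory-bandwidth-bound, since every weight must be streamed once per decoded token, so keeping weights in fast on-chip SRAM rather than off-chip HBM raises the ceiling on tokens per second. The rough arithmetic below sketches that point; the model size and bandwidth figures are illustrative assumptions, not specs for the LP30 or any Nvidia part.

```python
# Back-of-envelope: why SRAM-heavy accelerators appeal for low-latency
# inference. Decoding one token for one stream is roughly
# memory-bandwidth-bound: all weights are read once per token, so
# max tokens/sec ~= bandwidth / model size. All numbers below are
# assumed for illustration only.

def tokens_per_second(model_bytes: float, bandwidth_bytes_per_s: float) -> float:
    """Upper bound on single-stream decode rate for a bandwidth-bound model."""
    return bandwidth_bytes_per_s / model_bytes

MODEL_BYTES = 8e9    # assumed: 8 GB of weights (e.g. an 8B-parameter model at 8-bit)
HBM_BW = 3.3e12      # assumed: ~3.3 TB/s of off-chip HBM bandwidth
SRAM_BW = 80e12      # assumed: ~80 TB/s of aggregate on-chip SRAM bandwidth

print(f"HBM-bound decode ceiling:  {tokens_per_second(MODEL_BYTES, HBM_BW):.0f} tok/s")
print(f"SRAM-bound decode ceiling: {tokens_per_second(MODEL_BYTES, SRAM_BW):.0f} tok/s")
```

The gap between the two ceilings, not raw compute, is the usual argument for SRAM-first inference parts; the trade-off is capacity, which is why such designs gang many chips together, as the 256-LPU racks here do.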
We're told the chip is based on Groq's second-gen LPU tech with a handful of last-minute tweaks made just before taping out at Samsung's fabs.
The chip doesn't use Nvidia's proprietary NVLink interconnect, it lacks NVFP4 hardware support, and it isn't CUDA-compatible at ...
Copyright of this story solely belongs to theregister.co.uk.

