Yes, local LLMs are ready to ease the compute strain
KETTLE We've been experimenting with LLMs for a while here at The Register, and if you ask our systems editor Tobias Mann and senior reporter Tom Claburn, locally installed coding assistants have become good enough to relieve some of the compute load that's pushing AI companies to raise their prices.

This week on The Kettle, host Brandon Vigliarolo is joined by Mann and Claburn to discuss their work with locally-hosted LLMs, why we're revisiting the topic at all, how to do local LLMs safely, and whether there's orbital relief coming for the compute crunch.

You can listen to The Kettle here, as well as on Spotify and Apple Music, or read the full transcript of this episode below. ®

---

Brandon (00:01)

Welcome back to another episode of The Register's Kettle podcast. I'm Reg reporter Brandon Vigliarolo and with me this week ...

