
I tried local AI on my M1 Mac, and the experience was brutal - here's why



ZDNET's key takeaways

  • Ollama makes it fairly easy to download open-source LLMs.
  • Even small models can run painfully slowly.
  • Don't try this without a new machine with 36GB of RAM.

As a reporter who has covered artificial intelligence for over a decade, I have always known that running AI brings all kinds of computer engineering challenges. For one thing, large language models keep getting bigger, and they demand more and more DRAM to hold their model "parameters," or "neural weights."

Also: How to install an LLM on MacOS (and why you should)

I have known all that, but I wanted to get a feel for it firsthand. I wanted to run a large language model on my home computer.
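
For a sense of what that looks like in practice, here is a minimal sketch using Ollama's Python client. The model name "llama3.2" is an illustrative placeholder, not one the article specifies, and the sketch assumes the Ollama app is installed and running locally:

    # Minimal sketch: chat with a locally hosted model through the
    # Ollama Python client (pip install ollama). Assumes the Ollama
    # server is running and the model was already downloaded, e.g.:
    #   ollama pull llama3.2   ("llama3.2" is an illustrative choice)
    import ollama

    response = ollama.chat(
        model="llama3.2",
        messages=[{"role": "user",
                   "content": "In one sentence, what is a neural weight?"}],
    )
    # The response carries the model's reply under message -> content.
    print(response["message"]["content"])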

Now, downloading and running an AI model can involve a ...

