Little LLM on the RAM: Google's Gemma 270M hits the scene


Google has unveiled a pint-sized new addition to its "open" large language model lineup: Gemma 3 270M.

Weighing in at 270 million parameters and requiring around 550MB of memory, it's designed to make waves in on-device deployment and rapid model iteration — despite the usual caveats around hallucinations, shaky output, and probable copyright entanglements baked into its training data.
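For readers who want to poke at the thing themselves, a minimal sketch of loading it with Hugging Face's transformers library might look like the following. The model ID "google/gemma-3-270m" is an assumption here, so check Google's model card for the exact identifier and licence terms before running anything.

```python
# Hedged sketch: loading a small Gemma checkpoint for CPU inference with transformers.
# "google/gemma-3-270m" is an assumed model ID; verify against the official model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-3-270m"  # assumption, not confirmed by the article

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # small enough to fit on a laptop CPU

inputs = tokenizer("The smallest Gemma model is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

A model this size is the sort of thing you can fine-tune and iterate on locally rather than renting accelerator time, which is much of Google's pitch for it.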

Google launched the original Gemma family in February 2024, and at the time offered two flavours: a two-billion-parameter version designed for on-CPU execution and a more capable seven-billion-parameter version targeting systems with GPU- or TPU-based accelerators.

While positioned as "open" models, in contrast to the company's proprietary Gemini family, they, like most competing "open" models, included neither source code nor training data, only pre-trained models and weights. That remains true for the latest entry in the family (or, as Google would have it, the "Gemmaverse").

The new, smaller ...


Copyright of this story belongs to theregister.co.uk.