PrismML debuts energy-sipping 1-bit LLM in bid to free AI from the cloud
PrismML, an AI venture out of Caltech, has released a 1-bit large language model that outperforms weightier models, with the expectation that it will improve AI efficiency and viability on mobile devices, among other applications.
The model, dubbed Bonsai 8B, manages to be small and fast, with modest power demands and benchmark results that rival those of much larger models.
"Our first proof point is 1-bit Bonsai 8B, a 1-bit model that fits into 1.15 GB of memory and delivers over 10x the intelligence density of its full-precision counterparts," the company said in a social media post. "It is 14x smaller, 8x faster, and 5x more energy efficient on edge hardware while remaining competitive with other models in its parameter-class."
AI models based on the Transformer architecture involve neural networks with millions or billions of weights, which control the strength of connections between neurons and influence how the model performs ...
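To see why a 1-bit representation yields such dramatic savings, a back-of-envelope comparison helps. The sketch below is illustrative only: it counts memory for the weights alone, whereas real model files such as Bonsai 8B also carry scaling factors, embeddings, and metadata, which is why the company's reported 1.15 GB and "14x smaller" figures differ slightly from the raw 16x ratio this arithmetic gives.

```python
# Back-of-envelope memory footprint for an 8-billion-parameter model.
# Illustrative only: real quantized formats add scales, embeddings,
# and metadata on top of the raw weight storage counted here.

def weight_memory_gb(n_params: int, bits_per_weight: float) -> float:
    """Memory needed for the weights alone, in gibibytes (2**30 bytes)."""
    return n_params * bits_per_weight / 8 / 2**30

N_PARAMS = 8_000_000_000

fp16_gb = weight_memory_gb(N_PARAMS, 16)     # full-precision baseline
one_bit_gb = weight_memory_gb(N_PARAMS, 1)   # 1-bit weights

print(f"fp16:  {fp16_gb:.2f} GB")
print(f"1-bit: {one_bit_gb:.2f} GB")
print(f"raw ratio: {fp16_gb / one_bit_gb:.0f}x")
```

Run as-is, this prints roughly 14.90 GB for fp16 against 0.93 GB for 1-bit weights, a 16x raw reduction, which is consistent with an 8B model fitting in just over a gigabyte of memory once format overhead is included.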
Copyright of this story solely belongs to theregister.co.uk.

