“Rewriting the blueprint, not removing bricks”: Multiverse Computing says it can shrink large AI models and cut memory use in half
techradar.com
Spanish AI company Multiverse Computing has released HyperNova 60B 2602, a compressed version of OpenAI’s gpt-oss-120B, and published it for free on Hugging Face.
The new version cuts the original model’s memory needs from 61GB to 32GB, and Multiverse says it retains near-parity tool-calling performance despite the roughly 50% reduction in size.
In theory, this means a model that once required heavy infrastructure can run on far less hardware. For developers with tighter budgets or energy constraints, that’s a potentially huge advantage.
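As a quick sanity check on the figures above (61GB down to 32GB), the reported reduction works out to just under half. A minimal calculation, using only the numbers from the article:

```python
# Figures from the article: gpt-oss-120B originally needs 61 GB of memory;
# the compressed HyperNova version needs 32 GB.
original_gb = 61
compressed_gb = 32

reduction = (original_gb - compressed_gb) / original_gb
print(f"Memory reduction: {reduction:.1%}")  # prints "Memory reduction: 47.5%"
```

So the "cut in half" framing rounds up slightly; the exact figure is about 47.5%.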


Copyright of this story solely belongs to techradar.com.

