The team behind continuous batching says your idle GPUs should be running inference, not sitting dark
Every GPU cluster has dead time. Training jobs finish, workloads shift, and hardware sits dark while power and cooling costs keep running. For neocloud operators, those empty cycles are lost margin.
The obvious workaround is spot GPU markets — renting spare capacity to whoever needs it. But with spot instances the operator is still just renting out raw compute, and the engineers buying that capacity still have to bring their own inference stack.
FriendliAI's answer is different: run inference directly on the unused hardware, optimize for token throughput, and split the revenue with the operator. FriendliAI was founded by Byung-Gon Chun, the researcher whose paper on continuous batching became foundational to vLLM, the open source inference engine used across most production deployments today.
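The continuous batching idea the article credits to Chun's research can be illustrated with a toy simulation (this is a hedged sketch, not vLLM's actual implementation; all function names and numbers are illustrative). With static batching, a batch occupies the GPU until its longest request finishes; with continuous (iteration-level) batching, finished requests leave after any decode step and queued requests take their slots immediately, which is what drives higher token throughput:

```python
from collections import deque

def continuous_batching(requests, max_batch=4):
    """Toy model of continuous batching: `requests` lists how many tokens
    each request must generate. Every step decodes one token per active
    request; finished requests exit at once and waiting ones join, so batch
    slots never idle behind the longest request. Returns total decode steps."""
    queue = deque(requests)
    active = []  # remaining token counts for requests currently in the batch
    steps = 0
    while queue or active:
        # Admit waiting requests into any free slots (the key difference
        # from static batching, which waits for the whole batch to drain).
        while queue and len(active) < max_batch:
            active.append(queue.popleft())
        active = [r - 1 for r in active]      # decode one token each
        active = [r for r in active if r > 0]  # retire finished requests
        steps += 1
    return steps

def static_batching(requests, max_batch=4):
    """Static batching: each batch runs until its longest request finishes."""
    return sum(max(requests[i:i + max_batch])
               for i in range(0, len(requests), max_batch))
```

On a skewed workload such as `[8, 1, 1, 1, 1, 1, 1, 1]`, the static scheduler spends 9 steps while the continuous one finishes in 8, because short requests stop blocking slots behind the 8-token request; the gap widens as length variance grows.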
Chun spent over a decade as a professor at Seoul National University studying efficient execution of machine learning models at scale. That research produced ...
Copyright of this story solely belongs to VentureBeat.

