
MIT’s new ‘recursive’ framework lets LLMs process 10 million tokens without context rot


Recursive language models (RLMs) are an inference technique developed by researchers at MIT CSAIL that treats a long prompt as an external environment for the model. Instead of forcing the entire prompt into the model's context window, the framework allows the LLM to programmatically examine, decompose, and recursively call itself over snippets of the text.

Rather than expanding context windows or summarizing old information, the MIT team reframes long-context reasoning as a systems problem. By letting models treat prompts as something they can inspect with code, recursive language models allow LLMs to reason over millions of tokens without retraining. This offers enterprises a practical path to long-horizon tasks like codebase analysis, legal review, and multi-step reasoning that routinely break today’s models.
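The core recursive idea can be sketched in a few lines. This is an illustration of the general decompose-and-recurse pattern, not MIT's actual implementation; `call_llm`, the token-counting heuristic, and the context limit are all assumptions for the sake of the example.

```python
# Illustrative sketch of recursive decomposition over a long prompt.
# `call_llm` is a hypothetical stand-in for any chat-completion API.

CONTEXT_LIMIT = 1000  # assumed number of tokens the base model handles at once


def call_llm(prompt: str) -> str:
    """Stand-in for a real model call; here it just reports prompt size."""
    return f"<answer over {len(prompt.split())} tokens>"


def rlm_answer(prompt: str, limit: int = CONTEXT_LIMIT) -> str:
    """Answer a prompt of any length by recursing over snippets of it."""
    tokens = prompt.split()  # crude whitespace tokenization for illustration
    if len(tokens) <= limit:
        return call_llm(prompt)  # base case: the snippet fits in the window
    mid = len(tokens) // 2
    left = rlm_answer(" ".join(tokens[:mid]), limit)   # recurse on each half
    right = rlm_answer(" ".join(tokens[mid:]), limit)
    # One more short model call merges the partial answers
    return call_llm(f"Combine these partial answers:\n{left}\n{right}")
```

A real system would examine the prompt with code (search, slicing, filtering) rather than blindly halving it, but the shape is the same: the full text never has to fit in one context window.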

Because the framework is designed as a wrapper around existing models, it can serve as a drop-in replacement for applications that make direct calls to LLMs.
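In practice, "drop-in" means the wrapper exposes the same call signature as a direct model client, so application code does not change. A minimal sketch of that idea, with hypothetical class and method names not taken from the MIT work:

```python
# Hypothetical illustration of the "drop-in wrapper" idea: same interface
# as a direct client, recursion hidden inside. Names are assumptions.

class DirectClient:
    """Stand-in for a direct LLM API client."""
    def complete(self, prompt: str) -> str:
        return f"answer:{len(prompt)}"


class RLMWrapper:
    """Exposes the same .complete() signature, so callers need not change."""
    def __init__(self, client: DirectClient, limit: int = 1000):
        self.client = client
        self.limit = limit  # assumed per-call token budget

    def complete(self, prompt: str) -> str:
        tokens = prompt.split()
        if len(tokens) <= self.limit:
            return self.client.complete(prompt)  # small prompt: pass through
        mid = len(tokens) // 2
        parts = [
            self.complete(" ".join(tokens[:mid])),  # recurse on each half
            self.complete(" ".join(tokens[mid:])),
        ]
        # Merge partial answers with one final direct call
        return self.client.complete("Merge:\n" + "\n".join(parts))
```

Swapping `DirectClient()` for `RLMWrapper(DirectClient())` leaves every call site untouched, which is what makes the approach attractive for existing applications.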

The LLM ...


Copyright of this story solely belongs to VentureBeat.