Beyond static AI: MIT’s new framework lets models teach themselves
Researchers at MIT have developed a framework called Self-Adapting Language Models (SEAL) that enables large language models (LLMs) to continuously learn and adapt by updating their own internal parameters. SEAL teaches an LLM to generate its own training data and update instructions, allowing it to permanently absorb new knowledge and learn new tasks.
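To make the idea concrete, here is a minimal sketch of a SEAL-style self-edit loop, assuming a HuggingFace-style causal LM. The prompt template, the `evaluate` hook, and the rollback check are illustrative stand-ins, not SEAL's published training procedure; the point is only the shape of the loop: generate a self-edit, write it into the weights, keep the update if it helps.

```python
# Illustrative sketch only: the helper names and the accept/rollback logic
# are assumptions, not the authors' actual implementation.
import copy
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # stand-in; SEAL targets larger open models
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

def make_self_edit_prompt(passage: str) -> str:
    # Assumption: ask the model to restate new information as training data.
    return f"Rewrite the following passage as a list of factual statements:\n{passage}\n"

def generate_self_edit(passage: str, max_new_tokens: int = 128) -> str:
    # The model produces its own training data ("self-edit") for the passage.
    inputs = tokenizer(make_self_edit_prompt(passage), return_tensors="pt")
    with torch.no_grad():
        out = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=True)
    return tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)

def apply_update(text: str, steps: int = 3, lr: float = 5e-5) -> None:
    # "Permanently absorb" the self-edit via a few supervised fine-tuning steps.
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    batch = tokenizer(text, return_tensors="pt")
    model.train()
    for _ in range(steps):
        loss = model(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
    model.eval()

def self_adapt(passage: str, evaluate) -> bool:
    # `evaluate` is a caller-supplied downstream score (e.g. QA accuracy);
    # keep the weight update only if the score does not degrade.
    before = evaluate(model)
    snapshot = copy.deepcopy(model.state_dict())
    apply_update(generate_self_edit(passage))
    if evaluate(model) >= before:
        return True
    model.load_state_dict(snapshot)  # roll back an unhelpful edit
    return False
```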
This framework could be useful for enterprise applications, particularly for AI agents that operate in dynamic environments, where they must constantly process new information and adapt their behavior.
The challenge of adapting LLMs
While large language models have shown remarkable abilities, adapting them to specific tasks, integrating new information, or mastering novel reasoning skills remains a significant hurdle.
Currently, when faced with a new task, LLMs typically learn from data “as-is” through methods like fine-tuning or in-context learning (a contrast sketched below). However, the provided data is not always in an optimal format for the model to learn efficiently. Existing approaches ...
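For contrast with the weight-updating loop above, here is a minimal sketch of in-context learning: the new fact lives only in the prompt, so nothing is permanently absorbed once the context window moves on. The model name and the fact/question strings are placeholders.

```python
# In-context learning sketch: no parameters change, the knowledge is
# supplied at inference time inside the prompt.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # stand-in model

context = "Fact: The SEAL framework was developed at MIT.\n"
question = "Question: Where was SEAL developed?\nAnswer:"
print(generator(context + question, max_new_tokens=10)[0]["generated_text"])
```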