
Gemma Scope 2: helping the AI safety community deepen understanding of complex language model behavior


Large Language Models (LLMs) are capable of incredible feats of reasoning, yet their internal decision-making processes remain largely opaque. Should a system not behave as expected, a lack of visibility into its internal workings can make it difficult to pinpoint the exact reason for its behaviour. Last year, we advanced the science of interpretability with Gemma Scope, a toolkit designed to help researchers understand the inner workings of Gemma 2, our lightweight collection of open models.

Today, we are releasing Gemma Scope 2: a comprehensive, open suite of interpretability tools for all Gemma 3 model sizes, from 270M to 27B parameters. These tools enable us to trace potential risks across the entire "brain" of the model.
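To give a flavour of what such tools involve: the original Gemma Scope was built around sparse autoencoders (SAEs), which decompose a model's internal activations into a wide set of sparsely firing, more human-inspectable features. The sketch below is a minimal, generic illustration of that idea only; the class name, dimensions, and usage are assumptions for illustration and are not the Gemma Scope 2 API.

```python
# Illustrative sketch, not the Gemma Scope 2 API: a minimal sparse
# autoencoder of the kind used to decompose model activations into
# sparse, interpretable features. All names and shapes are assumptions.
import torch
import torch.nn as nn


class SparseAutoencoder(nn.Module):
    """Maps a residual-stream activation to a wide, sparse feature
    vector and reconstructs the activation from those features."""

    def __init__(self, d_model: int = 2048, d_features: int = 16384):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, activations: torch.Tensor):
        # ReLU keeps only the positively firing features, so each
        # activation is explained by a small number of active features.
        features = torch.relu(self.encoder(activations))
        reconstruction = self.decoder(features)
        return features, reconstruction


# Toy usage with placeholder activations (batch, d_model).
sae = SparseAutoencoder()
acts = torch.randn(4, 2048)
features, recon = sae(acts)
print(features.shape, recon.shape)  # torch.Size([4, 16384]) torch.Size([4, 2048])
```

In practice, such autoencoders are trained on activations captured from the model itself, and the learned features are what researchers inspect when tracing a behaviour back through the network.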

To our knowledge, this is the largest open-source release of interpretability tools by an AI lab to date. Producing Gemma Scope 2 involved storing approximately 110 petabytes of data, as well as ...

