
Less is more: Meta study shows shorter reasoning improves AI accuracy by 34%


Researchers from Meta’s FAIR team and The Hebrew University of Jerusalem have discovered that forcing large language models to “think” less actually improves their performance on complex reasoning tasks.

The study, released today, found that shorter reasoning processes in AI systems lead to more accurate results while significantly reducing computational costs.

“In this work, we challenge the assumption that long thinking chains results in better reasoning capabilities,” write the authors in their paper titled “Don’t Overthink it. Preferring Shorter Thinking Chains for Improved LLM Reasoning.”

The research contradicts the prevailing trend in AI development, where companies have invested heavily in scaling up computing resources to allow models to perform extensive reasoning through lengthy “thinking chains” — detailed step-by-step trajectories that AI systems use to solve complex problems.
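To make the contrast concrete, the sketch below illustrates the general idea of preferring shorter reasoning chains: sample several thinking chains for the same question and keep the answer attached to the shortest one. This is only an illustration of the concept described in the article, not the paper's exact selection procedure, and the generate_chain function is a hypothetical stand-in for a sampled model response.

```python
import random


def generate_chain(prompt: str) -> tuple[str, str]:
    """Hypothetical stand-in for one sampled reasoning trace from an LLM.

    Returns (thinking_chain, final_answer). In practice this would call a
    model with sampling enabled; here the chain is mocked for illustration.
    """
    length = random.randint(50, 500)
    chain = "step " * length
    answer = "42"
    return chain, answer


def answer_from_shortest_chain(prompt: str, k: int = 5) -> str:
    """Sample k reasoning chains and keep the answer from the shortest one.

    Illustrates the 'prefer shorter thinking chains' idea reported in the
    article; the study's actual method may differ in how chains are chosen.
    """
    chains = [generate_chain(prompt) for _ in range(k)]
    _, answer = min(chains, key=lambda c: len(c[0]))
    return answer


if __name__ == "__main__":
    print(answer_from_shortest_chain("What is 6 * 7?"))
```

A practical appeal of this kind of selection, as the article notes, is that shorter chains also cost less to generate, so accuracy and compute savings can move in the same direction.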

AI accuracy jumps 34% when models use shorter reasoning chains

The researchers discovered that within the same reasoning task, “shorter reasoning chains ...


Copyright of this story belongs solely to VentureBeat.