
News Summary


A new study from MIT and Google DeepMind demonstrates a technique that allows large language models to solve complex problems more accurately by first stepping back to reason at a higher level of abstraction. The method, called 'Step-Back Prompting,' prompts models to first abstract the core principles or high-level concepts behind a specific question, then apply that general knowledge to find the solution. This approach significantly improved performance on challenging STEM reasoning, multi-hop question answering, and temporal reasoning tasks when tested on models such as PaLM-2L. The research suggests that enabling AI to perform this kind of abstracted 'step-back' thinking is key to advancing its reasoning capabilities beyond simple pattern recognition. For the full details, read the complete article at https://technologyreview.com/2023/10/25/1080819/ai-large-language-models-reasoning-step-back-prompting-mit-google-deepmind/
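The two-stage flow described above can be sketched in a few lines of Python. This is a minimal illustration of the prompt structure only: `ask_model` is a hypothetical placeholder for any LLM call, and the prompt wording is an assumption, not the exact phrasing used in the study.

```python
def build_step_back_prompt(question: str) -> str:
    """Stage 1: ask the model to abstract the underlying principle."""
    return (
        "What general concept or principle is this question an instance of?\n"
        f"Question: {question}"
    )


def build_reasoning_prompt(question: str, principle: str) -> str:
    """Stage 2: answer the original question using that abstraction."""
    return (
        f"Principle: {principle}\n"
        f"Using this principle, answer: {question}"
    )


def step_back_answer(question: str, ask_model) -> str:
    """Run the two stages in sequence with any LLM callable."""
    # First step back to the high-level concept...
    principle = ask_model(build_step_back_prompt(question))
    # ...then ground the final answer in that concept.
    return ask_model(build_reasoning_prompt(question, principle))
```

In use, `ask_model` would wrap whatever model client is available; the key idea is simply that the second prompt is conditioned on the abstraction produced by the first.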


Technology Review


