
News Summary

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a technique that lets large language models generate more accurate, verifiable content by consulting external knowledge databases during the reasoning process. The method, called ‘Search-Augmented Factuality Enhancer’ (SAFE), uses a multi-agent debate framework in which one LLM generates an initial answer, a second breaks it down into individual facts, and a third checks each fact against Google Search results to verify or refute it. In tests, SAFE significantly improved factuality over the raw outputs of models such as GPT-4, correctly verifying over 70% of claims and flagging false ones. The approach addresses the common problem of ‘hallucinations’ in AI-generated text by grounding responses in retrievable evidence. Full details are available in the published paper. Read the full article at https://technologyreview.com/2024/07/18/1094850/mit-ai-llm-fact-checking-search-safety/.
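The three-role pipeline described above can be sketched in a few lines. This is a minimal, hypothetical illustration of the control flow only: the function names, the stubbed answers, and the toy evidence store are all stand-ins (a real system would call an LLM for each role and a live search API rather than a lookup table).

```python
# Toy "retrievable evidence" standing in for Google Search results.
# In a real SAFE-style system this would be live retrieval, not a dict.
EVIDENCE = {
    "The Eiffel Tower is in Paris.": True,
    "The Eiffel Tower was completed in 1889.": True,
}

def generate_answer(prompt: str) -> str:
    """Role 1: produce an initial long-form answer (stubbed LLM call)."""
    return ("The Eiffel Tower is in Paris. "
            "The Eiffel Tower was completed in 1923.")

def split_into_facts(answer: str) -> list[str]:
    """Role 2: break the answer into individual atomic claims
    (stubbed: one claim per sentence)."""
    return [s.strip() + "." for s in answer.split(".") if s.strip()]

def check_fact(fact: str) -> bool:
    """Role 3: verify one claim against retrieved evidence
    (stubbed: unknown claims count as unsupported)."""
    return EVIDENCE.get(fact, False)

def safe_pipeline(prompt: str) -> dict:
    """Chain the three roles and report supported vs. unsupported claims."""
    facts = split_into_facts(generate_answer(prompt))
    verdicts = {fact: check_fact(fact) for fact in facts}
    supported = sum(verdicts.values())
    return {
        "verdicts": verdicts,
        "factuality": supported / len(facts) if facts else 0.0,
    }

report = safe_pipeline("Tell me about the Eiffel Tower.")
```

Here the deliberately wrong completion date is flagged as unsupported while the correct location is verified, which is the grounding behaviour the article attributes to SAFE.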

Technology Review
