
News Summary

A new study from MIT demonstrates a significant advancement in making large language models more efficient. Researchers have developed a method that allows these AI models to generate their own training data, reducing reliance on vast, human-created datasets. This technique, known as ‘self-rewarding,’ enables models to iteratively improve by creating and learning from synthetic examples that target their own weaknesses. Early results show models trained this way can match or exceed the performance of those trained on traditional datasets, while using far less computational power and data. The approach could lower the barrier to developing powerful AI systems and address concerns about data scarcity and copyright issues in current training practices. Read the full article at https://technologyreview.com/2024/07/15/1094756/ai-models-train-themselves-synthetic-data/
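The loop the summary describes — generate synthetic examples, find the ones the model gets wrong, and train on those weaknesses — can be illustrated with a toy simulation. This is a minimal sketch, not the researchers' actual method: the model is stood in for by a single "skill" number, example difficulty by a random value, and training by a small nudge toward the hardest failed example.

```python
import random

def self_rewarding_round(model_skill, n_candidates=20, seed=0):
    """One toy round of a 'self-rewarding' loop (illustrative only).

    The stand-in model 'generates' synthetic examples of random
    difficulty, 'fails' on those harder than its current skill, and
    'trains' on those weaknesses by nudging skill toward the hardest
    failure. Returns the updated skill and the number of failures.
    """
    rng = random.Random(seed)
    # Generate synthetic examples with difficulties in [0, 1).
    examples = [rng.random() for _ in range(n_candidates)]
    # Self-evaluation: the model fails on examples above its skill level.
    hard = [d for d in examples if d > model_skill]
    # Train on the weaknesses: move skill toward the hardest failure.
    if hard:
        model_skill += 0.1 * (max(hard) - model_skill)
    return model_skill, len(hard)

skill = 0.3
for step in range(5):
    skill, n_hard = self_rewarding_round(skill, seed=step)
    print(f"round {step}: skill={skill:.3f}, hard examples={n_hard}")
```

Each round the skill value rises but stays below 1, mirroring the iterative self-improvement the article describes: the model's own hardest synthetic examples set the direction of the next update.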

Technology Review
