A new study from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) demonstrates a novel method for training AI models that significantly reduces computational costs and energy consumption. The research, led by PhD student Rongxing Du, focuses on the often-overlooked ‘backward pass’ phase of training, where models adjust their internal parameters based on errors. By selectively updating only the most critical parameters during this phase—a technique the team calls ‘Sparse Backward Propagation’—the researchers achieved training speeds up to two times faster with minimal impact on model accuracy. This approach could make advanced AI development more accessible and sustainable, particularly for resource-constrained researchers and organizations. The findings were published in the proceedings of a major machine learning conference. For the full details, read the complete article at https://technologyreview.com/2024/05/15/mit-ai-training-efficiency.
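The article does not detail the team's algorithm, but the core idea it describes — updating only the parameters with the most significant gradients during the backward pass — can be illustrated with a simple sketch. The function below is a hypothetical, simplified version of such a selective update (a plain top-k magnitude mask in NumPy), not the researchers' actual method.

```python
import numpy as np

def sparse_backward_step(params, grads, lr=0.01, keep_frac=0.1):
    """Illustrative sketch: apply a gradient update only to the
    fraction of parameters with the largest gradient magnitudes,
    leaving the rest untouched. Not the paper's algorithm."""
    flat = np.abs(grads).ravel()
    k = max(1, int(keep_frac * flat.size))
    # Threshold at the k-th largest gradient magnitude
    threshold = np.partition(flat, -k)[-k]
    mask = np.abs(grads) >= threshold
    return params - lr * grads * mask

# Toy example: only the largest-magnitude gradients trigger updates
params = np.zeros(10)
grads = np.arange(10, dtype=float)  # magnitudes 0..9
new_params = sparse_backward_step(params, grads, lr=1.0, keep_frac=0.2)
# Only the top 20% of entries (indices 8 and 9) are updated
```

Skipping updates for low-magnitude gradients saves the arithmetic and memory traffic of writing those parameters each step, which is where a speedup of the kind the article reports would come from.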