A new study from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) demonstrates a novel approach to AI training that significantly reduces energy consumption. The method, termed ‘liquid neural networks,’ uses more efficient, brain-inspired algorithms that require less computational power during the training phase. Researchers report that this technique could cut the carbon footprint of developing large AI models by up to 75% compared to standard deep learning methods. The approach focuses on creating smaller, more adaptable networks that learn continuously from data streams, similar to organic neural processes. While promising for edge computing and robotics, experts note that widespread adoption in major data centers would require further scaling and industry buy-in. For the full details, read the complete article at https://technologyreview.com/2023/10/ai-energy-efficient-training.