Artificial intelligence is consuming more energy every day, and data centers are struggling to meet this growing demand. However, a newly developed training method could reduce AI's energy usage while maintaining the same level of accuracy.
AI technologies such as large language models (LLMs) have become an integral part of daily life, but the data centers that support them consume vast amounts of energy. In Germany alone, data centers used about 16 billion kilowatt-hours (kWh) of electricity in 2020, a figure expected to reach 22 billion kWh by 2025. As AI applications grow more complex, this demand will only increase.
100 TIMES FASTER, SAME ACCURACY
Training AI models, particularly neural networks, requires immense computational power. The newly developed method works 100 times faster than conventional iterative approaches while achieving comparable accuracy, which could sharply reduce the energy needed for AI training.
Neural networks are systems inspired by the human brain. They consist of artificial neurons that process information by assigning a weight to each input; when the weighted sum of a neuron's inputs exceeds a threshold, the neuron passes a signal on to the next layer.
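In code, the idea looks roughly like this (a minimal sketch of a single artificial neuron, using a smooth sigmoid in place of a hard threshold; all values are illustrative, not from the study):

```python
import numpy as np

def neuron(inputs, weights, bias):
    # Each input is multiplied by its weight and summed with a bias.
    z = np.dot(weights, inputs) + bias
    # Smooth threshold: the output approaches 1 when the weighted sum
    # is large enough, i.e. the neuron "fires" toward the next layer.
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.2, 0.9, 0.4])    # signals arriving from the previous layer
w = np.array([0.5, -1.2, 0.8])   # weights assigned to each input
print(neuron(x, w, bias=0.1))    # activation passed on to the next layer
```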
Training these networks requires substantial computation. The parameters within the network are initially set at random and then adjusted over many iterations to improve the model's accuracy, and it is this iterative process that drives the high energy consumption.
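To see why this is costly, here is a stripped-down sketch of conventional iterative training: gradient descent on a single linear model with made-up data. Every iteration adjusts every parameter a little, and real networks repeat this across millions of parameters and many passes over the data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))             # toy training inputs
y = X @ np.array([2.0, -1.0, 0.5])         # target outputs to learn

w = rng.normal(size=3)                     # parameters start out random
lr = 0.1                                   # step size of each adjustment
for step in range(500):                    # hundreds of passes over the data
    grad = 2 * X.T @ (X @ w - y) / len(X)  # gradient of the mean squared error
    w -= lr * grad                         # nudge every parameter slightly
print(w)                                   # converges toward [2.0, -1.0, 0.5]
```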
NEW PROBABILITY-BASED TRAINING METHOD
Felix Dietrich, Professor of Physics-Based Machine Learning, and his team have developed a method that could revolutionize AI training. Instead of setting parameters through many iterations, the new approach determines them using probabilities.
The method targets values at critical points in the training data: the places where large and rapid changes occur. The researchers aim to use this approach to learn energy-conserving dynamical systems from data. Such systems evolve over time according to specific rules and are found, for example, in climate models and financial markets.
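The team's published algorithm is more involved, but the general flavor of sampling-based training can be sketched as follows: hidden-layer weights are drawn from pairs of training points, placing each neuron where the data actually changes, and only the final linear layer is solved in closed form, with no gradient iterations at all. The names, constants, and activation function here are illustrative assumptions, not the researchers' exact method:

```python
import numpy as np

def sample_network(X, y, n_hidden=100, seed=0):
    """One-hidden-layer network fit WITHOUT gradient iterations (sketch).

    Hidden weights are sampled from pairs of training points, and only
    the linear output layer is solved, in closed form.
    """
    rng = np.random.default_rng(seed)
    n = len(X)

    # Draw a pair of distinct training points for each hidden neuron.
    i = rng.integers(0, n, size=n_hidden)
    j = rng.integers(0, n, size=n_hidden)
    j = np.where(j == i, (j + 1) % n, j)       # avoid identical pairs

    # Orient each neuron along the line between its two points, so its
    # activation changes fastest where the data itself changes.
    diff = X[j] - X[i]
    norm2 = np.maximum((diff ** 2).sum(axis=1), 1e-12)
    W = diff / norm2[:, None]                  # sampled hidden weights
    b = -(W * X[i]).sum(axis=1)                # biases anchored at X[i]

    H = np.tanh(X @ W.T + b)                   # hidden activations
    # The only "training" step: one linear least-squares solve.
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

# Toy usage: learn y = sin(3x) with no gradient descent at all.
rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(200, 1))
y = np.sin(3 * X[:, 0])
W, b, beta = sample_network(X, y)
pred = np.tanh(X @ W.T + b) @ beta
print("mean squared error:", np.mean((pred - y) ** 2))
```

In a sketch like this, the entire cost is one pass through the data plus a single least-squares solve, which hints at why sampling-based approaches can be orders of magnitude cheaper than iterating over every parameter.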
HIGH EFFICIENCY WITH LESS ENERGY
"Our method allows for determining the necessary parameters with minimal computational power. This makes training neural networks much faster and more energy-efficient," said Felix Dietrich, also emphasizing that the accuracy of this method is comparable to that of iteratively trained networks.
By reducing its environmental impact, this new approach could help make AI a more sustainable technology. Experts suggest the breakthrough could eventually be applied to a much wider range of AI applications.