The neural networks that form the basis of social media can consume an infinite amount of energy
Artificial neural networks are widely used by social media platforms such as Twitter and Facebook to recommend content based on user preferences. This process is energy intensive and generates significant carbon emissions. In fact, the entire global power supply could be spent training a single neural network without the training ever finishing. That’s why the researchers behind a new study recommend using this technology where it’s most beneficial to the public interest.
Artificial neural networks are brain-inspired computer systems that can be trained to solve complex tasks better than humans.
These networks are often used in social media, broadcasting, online gaming, and other areas where users receive messages, movies, fun games, or other content tailored to their individual preferences. Elsewhere, neural networks are used in healthcare to recognize tumors in scans, among other things.
While the technology is incredibly effective, the Danish researchers behind a new study say it shouldn’t be abused. The authors of the study demonstrated that all the energy in the world could be used to train a single neural network without it ever reaching perfection.
“The problem is that an infinite amount of energy can be used to, say, train these neural networks to target ads to us. The network never stops learning and improving. It’s like a black hole that absorbs all the energy you give it, which is by no means sustainable,” says Mikkel Abrahamsen, associate professor of computer science at the University of Copenhagen.
Therefore, this technology should be applied wisely and carefully reviewed before each use, as simpler and more energy-efficient solutions may suffice. Mr. Abrahamsen explains:
“It is important that we consider where to use neural networks to provide the greatest value to humans. Some believe neural networks are better suited to scanning medical images for tumors than to targeting ads and products on our social media and streaming platforms. In some cases, less resource-intensive methods such as regression analysis or random decision forests may be sufficient.”
Endless training
Neural networks are trained by feeding them data. For example, a network can be fed scanned images of tumors, from which it learns to detect cancer in a patient.
In principle, this training can last indefinitely. In their new study, the researchers demonstrate that this is a bottomless pit, as the process is similar to solving very sophisticated equations with many unknowns.
“Today’s best algorithms can only handle up to eight unknowns, while neural networks can be configured to consider billions of parameters. Therefore, it is possible that the optimal solution will never be found when training the network, even if all the energy on the planet is used,” says Mikkel Abrahamsen.
Neural networks also make increasingly poor use of the energy they are given.
“As we train neural networks, things get progressively slower. For example, a network can reach 80% accuracy in a day, but then take more than a full month to reach 85%. So you get less and less out of the energy used for training, without ever reaching perfection,” he explains.
Many people don’t realize that neural networks can be trained indefinitely, which is why Abrahamsen thinks we need to pay attention to their enormous energy appetite.
“Compared to our awareness of the impact of, say, intercontinental flights or clothing purchases when we log on to Facebook or Twitter, we underestimate the contribution we make to this enormous energy consumption. Therefore, we need to open our eyes to the extent to which this technology pollutes and affects our climate,” concludes Abrahamsen.
What is a neural network?
A neural network is a machine learning model inspired by the activity of neurons in the human brain. It can be trained to perform complex tasks with superhuman efficiency.
Neural networks have many parameters that must be adjusted before they produce meaningful results, a process known as training.
Neural networks are typically trained with an algorithm called backpropagation, which gradually adjusts the parameters in the right direction.
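To make the idea of training concrete, here is a minimal sketch of gradient-based training for a single artificial neuron. The data, parameters, and learning rate are illustrative assumptions, not from the study; real networks apply the same backpropagation principle across billions of parameters, which is where the energy cost comes from.

```python
import math

def sigmoid(z):
    """Squash a number into the range (0, 1), like a neuron's activation."""
    return 1.0 / (1.0 + math.exp(-z))

# Toy data: (input, target) pairs the neuron should learn to reproduce.
data = [(0.0, 0.0), (1.0, 1.0), (2.0, 1.0)]

w, b = 0.5, 0.0   # the neuron's two trainable parameters
lr = 0.5          # learning rate: how big a step each update takes

def loss():
    """Mean squared error between the neuron's outputs and the targets."""
    return sum((sigmoid(w * x + b) - y) ** 2 for x, y in data) / len(data)

initial_loss = loss()
for _ in range(1000):
    # Backpropagation for this tiny model: compute the gradient of the
    # loss with respect to w and b, then step both parameters downhill.
    gw = gb = 0.0
    for x, y in data:
        p = sigmoid(w * x + b)
        d = 2 * (p - y) * p * (1 - p) / len(data)
        gw += d * x
        gb += d
    w -= lr * gw
    b -= lr * gb

print(initial_loss, loss())
```

Each extra pass shrinks the loss by less than the one before, yet the loss never reaches exactly zero: a miniature version of the diminishing returns described above.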
Photo by Adem AY on Unsplash
[ Press release ]
Main link: science.ku.dk