Artificial Intelligence (AI) has revolutionized countless industries, from healthcare to finance, with its ability to learn and adapt. However, a new study by researchers at the University of Alberta has revealed a significant challenge in AI’s ability to continually learn.
The study, published in Nature, found that deep learning systems, which power most modern AI, gradually lose their ability to learn new information, a phenomenon known as “loss of plasticity.” This happens when AI systems are exposed to continuous streams of new data over time, eventually causing them to become rigid and less capable of learning effectively.
Lead researcher Shibhansh Dohare explains, “Our findings indicate that deep learning systems, which are the backbone of AI, can suffer from a significant loss of learning ability over time. This challenges the assumption that these systems can indefinitely improve with more data.”
To explore this issue, the researchers worked in a setting known as continual learning, in which AI systems are trained on a long sequence of tasks rather than a single fixed dataset. They built these task sequences from well-known datasets such as ImageNet and CIFAR-100 to simulate a variety of learning scenarios. The AI systems were initially able to learn effectively, but as they were exposed to more and more tasks, their performance degraded. This drop in performance was linked to the systems' inability to maintain "plasticity," the flexibility needed to adapt to new information.
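The protocol can be illustrated with a minimal sketch. The example below is hypothetical and greatly simplified (a linear model on synthetic regression tasks, where the paper used deep networks on sequences built from ImageNet and CIFAR-100 classes); it is meant only to show the shape of a continual-learning loop, where one model is carried across a stream of tasks and its error is recorded per task.

```python
import numpy as np

def make_task(rng, dim=8):
    """Generate one synthetic task: a hypothetical stand-in for one task
    in the stream (the study built its streams from real image datasets)."""
    w_true = rng.normal(size=dim)          # hidden target for this task
    X = rng.normal(size=(100, dim))        # inputs for this task
    y = X @ w_true                         # targets
    return X, y

def train_on_task(w, X, y, lr=0.01, steps=50):
    """Plain gradient descent on one task; returns the final mean-squared
    error. The same weight vector w is reused from task to task."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(X)
        w -= lr * grad
    return float(np.mean((X @ w - y) ** 2))

rng = np.random.default_rng(0)
w = np.zeros(8)                            # one model, carried across tasks
errors = [train_on_task(w, *make_task(rng)) for _ in range(20)]
```

In the study's experiments with deep networks, the per-task error in such a loop eventually creeps upward as plasticity is lost; this toy linear model will not reproduce that effect, but it shows where the measurement sits in the loop.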
The researchers then introduced a new method called “Continual Backpropagation.” This approach differs from standard backpropagation by periodically resetting a small fraction of the AI system’s units, or neurons, to their initial state. This reset helps maintain the diversity and flexibility of the system, allowing it to continue learning effectively even after being exposed to many tasks.
Another key researcher, Dr. Richard S. Sutton, adds, “By understanding and addressing the loss of plasticity, we can develop AI systems that are not only smarter but also more adaptable to changing environments and data.”
The implications of these findings are profound, particularly for industries that rely heavily on AI for continuous learning and adaptation, such as autonomous vehicles, finance, and personalized medicine. If left unaddressed, the loss of plasticity could lead to AI systems that are less effective over time, potentially stalling progress in critical areas.
However, the researchers’ proposed solution offers hope. By integrating the Continual Backpropagation method, AI systems can maintain their adaptability, ensuring they continue to improve and perform at high levels even as they encounter new and varied data.
The researchers recommend that AI practitioners incorporate Continual Backpropagation into their deep learning models. This can be done by periodically reinitializing underused neurons, which keeps the model flexible and capable of learning from new data. Additionally, they suggest using regularization techniques like L2 regularization, which helps prevent the weights of the neural network from growing too large, another factor that contributes to loss of plasticity.
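The L2 recommendation is straightforward to apply: the penalty adds a term to each gradient step that pulls every weight toward zero. A minimal sketch, assuming plain SGD (the names and constants below are illustrative, not from the paper):

```python
import numpy as np

def sgd_step_l2(w, grad, lr=0.1, weight_decay=0.01):
    """One SGD step with an L2 penalty (weight decay).

    The weight_decay * w term shrinks every weight slightly toward zero
    on each step, countering the unbounded weight growth that the
    researchers identify as one contributor to loss of plasticity.
    """
    return w - lr * (grad + weight_decay * w)

# With a zero task gradient, repeated steps shrink the weights
# geometrically by a factor of (1 - lr * weight_decay) per step.
w = np.ones(5)
for _ in range(100):
    w = sgd_step_l2(w, np.zeros(5))
```

In practice this is usually enabled through an optimizer's weight-decay setting rather than written by hand, but the update rule is the same.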
This study highlights a critical challenge in the field of AI: the loss of plasticity in deep learning models. However, with the solutions proposed by the researchers, there is a clear path forward. By adopting these strategies, we can develop AI systems that remain adaptable, efficient, and capable of learning throughout their operational life. This not only enhances the performance of AI but also ensures its long-term viability in a rapidly changing world.
Citation:
Dohare, S., Hernandez-Garcia, J.F., Lan, Q. et al. Loss of plasticity in deep continual learning. Nature 632, 768–774 (2024). https://doi.org/10.1038/s41586-024-07711-7