Why neural networks are catastrophically forgetful
About Prof. Dr. Alexander Gepperth
Prof. Dr. Alexander Gepperth is a Professor of Programming and Machine Learning in the Department of Applied Computer Science at Fulda University of Applied Sciences.
Gepperth obtained his Bachelor’s degree in physics from Ludwig-Maximilians-Universität in Munich, and later pursued a doctorate at the Institute for Neuroinformatics at Ruhr-Universität Bochum. During his time there, he conducted research on neural learning methods for visual object recognition in the Driver Assistance department.
After obtaining his doctorate in 2006, Gepperth joined Honda Research Institute Europe GmbH, where he worked as a senior scientist on basic research related to in-vehicle learning and on the development of prototypes.
In 2011, Gepperth returned to academia: until 2016 he was a professor at ENSTA ParisTech in France, where he was responsible, among other things, for the “Intelligent Vehicle” specialisation course.
In 2016, Gepperth accepted a professorship at Fulda University of Applied Sciences.
How artificial neural networks should learn more thoroughly
Gepperth researches neural networks for object and image recognition, a topic that has accompanied him since his doctorate. Over the past ten years, he has focused on continuous learning, also known in the field as continual learning. “This is what we humans do all the time. That’s why it’s difficult to explain to someone that it’s something special,” Gepperth says.
Even the best deep-learning methods today fail at continuous learning. An example: a classifier has learned to recognise cats. If the network is then to learn to recognise dogs as well, it either has to be completely retrained, or it forgets everything it has already learned. In research, this is called “catastrophic forgetting”, and continuous learning is meant to provide a remedy.
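The effect can be reproduced even in a tiny model. The following is a minimal, purely illustrative sketch (not Gepperth’s method, and the tasks and data are invented for the demonstration): a linear model is fitted to task A, then trained further on task B alone with ordinary gradient descent, after which its error on task A explodes.

```python
# Toy demonstration of catastrophic forgetting with a linear model y = w . x.
# Task A and task B demand conflicting weights, so naive sequential training
# on task B overwrites what was learned on task A.

def mse(w, data):
    """Mean squared error of the linear model on a dataset of (x, y) pairs."""
    return sum((sum(wi * xi for wi, xi in zip(w, x)) - y) ** 2
               for x, y in data) / len(data)

def train(w, data, lr=0.1, epochs=500):
    """Full-batch gradient descent on the MSE; returns the updated weights."""
    for _ in range(epochs):
        grad = [0.0, 0.0]
        for x, y in data:
            err = sum(wi * xi for wi, xi in zip(w, x)) - y
            for i in range(2):
                grad[i] += 2 * err * x[i] / len(data)
        w = [wi - lr * gi for wi, gi in zip(w, grad)]
    return w

# Task A: learn y = x0 + x1.  Task B: learn y = x0 - x1 (conflicts on x1).
task_a = [((1, 0), 1), ((0, 1), 1), ((1, 1), 2)]
task_b = [((1, 0), 1), ((0, 1), -1), ((1, 1), 0)]

w = train([0.0, 0.0], task_a)
loss_a_before = mse(w, task_a)   # near zero: task A has been learned

w = train(w, task_b)             # naive sequential training on task B only
loss_a_after = mse(w, task_a)    # large: task A has been forgotten
```

Training on task B converges to task B’s own optimum regardless of the starting weights, which is exactly why nothing of task A survives without a dedicated protection mechanism.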
Gepperth is researching this fundamental problem. With his team, he is developing new concepts for continuous learning. In March 2022, for example, he published a paper on the algorithmic foundations of learning with constant time complexity.
Neural Networks and Time Complexity
For Gepperth, the greatest challenge lies in time complexity: biological brains, he says, have mechanisms that prevent us from forgetting everything when we learn something new, no matter how much we already know.
In machine learning, some methods can prevent catastrophic forgetting, such as freezing certain parts of the neural network. But with these methods, says Gepperth, learning takes longer the more the network already knows.
He is therefore looking for methods that protect what has been learned at a cost that does not grow with the amount already learned. A scientific paper by his team on the topic was recently submitted to CVPR, one of the most important conferences for computer vision.
Robots that learn new faces and the big picture
hessian.AI supports Gepperth in his research. The centre offers exchange with first-class researchers and a scientific community in which young scientists can discuss, for example, questions about the review process or solutions to their problems.
The big picture also counts: “Is what I am doing useful at all?” is a question that researchers can answer better thanks to the community.
In the case of continuous learning, the answer is clearly yes. Advances in this technology would enable the widespread use of AI systems, says Gepperth. “Systems that can’t learn to do that will have to be completely retrained at some point.”
Continuous learning systems, on the other hand, require less maintenance and training. Robots, for example, could easily learn to recognise new faces.
The same methods also enable selective forgetting in neural networks – an important capability for data protection and privacy issues.