Continuous learning should make AI fit for global applications

About Subarnaduti Paul

Subarnaduti Paul began his scientific career in Germany about five years ago with a Master's degree at the Technical University of Darmstadt. After completing his studies, which initially focused on electronic systems, Paul moved increasingly toward artificial intelligence, inspired by an internship and a subsequent Master's thesis at Bosch.

The core of his research: continuous learning

Paul's previous work has focused on distributed systems, particularly federated systems, in which multiple clients train models on their local data and then send the resulting updates to a global server.

His current research interest is continuous learning. He explains that this concept aims to teach machines to retain information the way humans do: “We learn to read and write as children and remember these basic skills decades later.” He contrasts this with machines, which, when fed new data sequentially, often forget what they have previously learned – a phenomenon known as catastrophic forgetting.
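
The effect is easy to reproduce in isolation. The following is a minimal, purely illustrative sketch (the synthetic tasks, network size, and training schedule are assumptions, not Paul's actual experiments): a small classifier is trained on one task, then on a second, conflicting task, after which its accuracy on the first task typically collapses.

```python
# Toy demonstration of catastrophic forgetting (illustrative only).
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(x_offset, flip_labels):
    # Two Gaussian blobs; the second task sits elsewhere in input space
    # and reverses the labels, so it conflicts with the first task.
    x0 = torch.randn(200, 2) + torch.tensor([x_offset, 0.0])
    x1 = torch.randn(200, 2) + torch.tensor([x_offset, 4.0])
    y = torch.cat([torch.zeros(200), torch.ones(200)]).long()
    if flip_labels:
        y = 1 - y
    return torch.cat([x0, x1]), y

def train(model, x, y, steps=300):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.cross_entropy(model(x), y).backward()
        opt.step()

def accuracy(model, x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
xa, ya = make_task(x_offset=-4.0, flip_labels=False)  # task A
xb, yb = make_task(x_offset=+4.0, flip_labels=True)   # task B

train(model, xa, ya)
print("task A accuracy after training on A:", accuracy(model, xa, ya))

train(model, xb, yb)  # sequential training, with no replay of task A
# Accuracy on task A usually drops sharply, often toward chance level.
print("task A accuracy after training on B:", accuracy(model, xa, ya))
```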

Recently, he has been focusing on so-called foundation models, for example for image recognition, and on integrating continuous learning into settings with limited memory. “I’m working on how we can incrementally train the Vision Transformer in a more economical way,” explains Paul. “It’s about making models more efficient in sequential learning by focusing on the essential parts of the data.” This research could help reduce the memory requirements and computing power needed for machine learning.

Replay buffer as memory

One of the biggest challenges is the development of interpretability methods that make it possible to identify and use the most relevant parts of a data set. Paul illustrates this using the example of a photo: “If we have a picture of a dog against the background of a park, the model has to decide whether it is a dog or another animal. The challenge is to select and save only the relevant part of the image – in this case the dog.”
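
One way to make this concrete is a toy patch-selection sketch. The gradient-based saliency score, the patch size, and the keep ratio below are illustrative assumptions; the article does not specify Paul's method. The idea: score each image patch by how strongly it influences the model's prediction, then store only the top-scoring patches, ideally the ones covering the dog rather than the park background.

```python
# Illustrative patch selection via input-gradient saliency (assumed method).
import torch
import torch.nn as nn

torch.manual_seed(0)

PATCH = 16          # assumed patch size (as in a standard ViT)
KEEP_RATIO = 0.25   # assumed fraction of patches worth storing

# Stand-in classifier; any differentiable image model would do here.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                      nn.Linear(8, 10))

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder input
label = torch.tensor([3])                                # placeholder label

# Gradient of the loss w.r.t. the input as a crude saliency map.
loss = nn.functional.cross_entropy(model(image), label)
loss.backward()
saliency = image.grad.abs().sum(dim=1, keepdim=True)     # (1, 1, 224, 224)

# Aggregate saliency per 16x16 patch and keep the top-k patches.
patch_scores = nn.functional.avg_pool2d(saliency, PATCH).flatten()  # 196 scores
k = max(1, int(KEEP_RATIO * patch_scores.numel()))
top_idx = patch_scores.topk(k).indices

# Unfold the image into non-overlapping patches and select the salient ones.
patches = nn.functional.unfold(image.detach(), PATCH, stride=PATCH)  # (1, 768, 196)
kept = patches[:, :, top_idx]  # the compact representation to store
print(f"storing {kept.shape[-1]} of {patches.shape[-1]} patches")
```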

This task requires not only a deep understanding of how AI models work, but also innovative approaches to deal efficiently with limited storage space. Paul’s goal is to design replay buffers that store only a fraction of the data, but contain enough information to effectively support continuous learning. These buffers serve as a type of memory storage that allows AI models to learn from past experiences while minimizing the problem of catastrophic forgetting.
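
A minimal sketch of such a buffer might look as follows. Reservoir sampling is one standard strategy for deciding what to keep under a fixed memory budget; the article does not state which strategy Paul uses. The buffer holds a small, fixed number of past examples and mixes them into every new-task batch to counter catastrophic forgetting.

```python
# Minimal replay buffer with reservoir sampling (one common design choice).
import random
import torch

class ReplayBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []   # stored (x, y) pairs
        self.seen = 0    # total examples observed so far

    def add(self, x, y):
        # Reservoir sampling: every example seen so far remains in the
        # buffer with equal probability, using only O(capacity) memory.
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = (x, y)

    def sample(self, batch_size):
        batch = random.sample(self.data, min(batch_size, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)

# Usage sketch with placeholder data: stream past examples through the
# buffer, then replay a small batch alongside each new-task batch.
buffer = ReplayBuffer(capacity=200)
for x, y in zip(torch.randn(1000, 3, 32, 32), torch.randint(0, 10, (1000,))):
    buffer.add(x, y)

replay_x, replay_y = buffer.sample(batch_size=32)
# loss = loss_on_new_batch + loss_on(replay_x, replay_y)  # combined update
print(f"buffer holds {len(buffer.data)} of {buffer.seen} seen examples")
```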

Striking a balance between memory efficiency and retaining the relevant information is a delicate trade-off, and it is the focus of Paul's research.


Continuous learning could make AI fit for global applications

For Paul, hessian.AI offers an ideal platform for his research. “It’s great to have so many PhD students from different fields under one roof. The opportunity to collaborate and the access to resources such as large GPU clusters are crucial for my work,” Paul emphasizes.

Paul sees great potential for social progress in his field of research. As an example, he cites the gradual introduction of vaccines for different age groups, which follows the principle of continuous learning.

“Through continuous learning, we could reduce biases in data and models and achieve better generalization across different population groups,” he explains. This also applies to cases where, for example, a language model was initially trained on European languages and is now to be extended to other languages.

In the future, Paul therefore plans to extend his research to socially relevant applications and also to advocate the development of standardized benchmarks in the field of continuous learning. In this way, he hopes to significantly advance this young field of research.