Dr. Anirban Mukhopadhyay completed his Ph.D. in Computer Science at the University of Georgia. He then worked as a postdoc at the IMT Lucca Institute and subsequently at the Zuse Institute Berlin. In 2017, he joined TU Darmstadt, where he currently heads the Medical and Environmental Computing Lab (MEC-Lab).
He organizes leading international conferences, challenges, and workshops. Dr. Mukhopadhyay leads several national infrastructure projects on federated and continuous learning in healthcare, such as RACOON, EVA-KI, and FED-PATH. He also hosts the “AI-ready Healthcare” podcast.
In our conversation, Dr. Anirban Mukhopadhyay shares his vision: “To Reverse Engineer the Doctor’s Mind.”
At the core of Mukhopadhyay’s work are assistive AI technologies aimed at improving diagnostic accuracy and at delivering more consistent, precise analyses during surgery.
A key project is the Radiological Cooperative Network (RACOON), in which radiology departments from 38 German university hospitals have joined forces, the scientist tells me. This network enables the training of deep learning models without sharing medical data at a central location. In AI research, this approach is known as “federated learning.”
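The idea behind federated learning can be illustrated with a minimal sketch. The data, the linear model, and all parameter values below are hypothetical stand-ins for a deep network and RACOON's real infrastructure: each site trains on its own private data, and only the resulting model weights travel to the server, where they are averaged.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: plain gradient descent on a
    least-squares loss (a linear model standing in for a deep net)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """One FedAvg round: every site trains locally on its private data;
    only the weights are shared and averaged (weighted by dataset size)."""
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    sizes = [len(y) for _, y in clients]
    return np.average(local_ws, axis=0, weights=sizes)

# Three simulated "hospitals", each holding data that never leaves the site
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=50)))

w = np.zeros(2)
for _ in range(30):
    w = federated_round(w, clients)
```

After a few dozen rounds the global model recovers the shared relationship, even though no raw patient data was ever pooled centrally.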
A primary task in Mukhopadhyay’s research is the continuous updating of AI models in the face of ever-changing medical data. He explains that medical knowledge is dynamic and must quickly adapt to new situations such as a pandemic.
The major challenge lies in developing AI models that can be updated regularly without having to repeat the full regulatory approval process each time. This requires an innovative approach at the technical, clinical, and regulatory levels.
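One standard technique for such continual updating is rehearsal: each new batch of data is trained together with a small memory of earlier cases, so the model adapts without silently forgetting what it already learned. The sketch below is a toy illustration with simulated data and a linear model; it is not the lab's actual method.

```python
import numpy as np

def sgd_step(w, X, y, lr=0.05):
    """One gradient step on a least-squares loss (stand-in for a deep model)."""
    return w - lr * 2 * X.T @ (X @ w - y) / len(y)

class ContinualModel:
    """Rehearsal-based continual updating: every incoming batch is trained
    jointly with a bounded memory of past cases."""
    def __init__(self, dim, memory_size=100, seed=0):
        self.w = np.zeros(dim)
        self.mem_X = np.empty((0, dim))
        self.mem_y = np.empty(0)
        self.memory_size = memory_size
        self.rng = np.random.default_rng(seed)

    def update(self, X_new, y_new, steps=100):
        # train on the new batch plus the rehearsal memory
        X = np.vstack([self.mem_X, X_new])
        y = np.concatenate([self.mem_y, y_new])
        for _ in range(steps):
            self.w = sgd_step(self.w, X, y)
        # retain a random subsample of everything seen so far
        keep = self.rng.choice(len(y), size=min(self.memory_size, len(y)),
                               replace=False)
        self.mem_X, self.mem_y = X[keep], y[keep]

# Simulated data stream: batches arrive over time, the model keeps adapting
rng = np.random.default_rng(1)
true_w = np.array([1.5, -0.5])
model = ContinualModel(dim=2)
for _ in range(2):
    X = rng.normal(size=(80, 2))
    model.update(X, X @ true_w + 0.01 * rng.normal(size=80))
```

The open question the lab addresses is not the update mechanism itself but how such a continuously changing model can remain certifiable.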
A look at AI systems authorised in the EU or the USA for medical use shows that they are always static AI. “You train them once with what you have and hope that nothing will happen afterwards. The universe remains the same,” the researcher concludes. “So we are looking at how we can enable technologies that constantly update themselves when the AI model becomes outdated or obsolete.”
In addition to developing better diagnostic tools, Mukhopadhyay and his team are also working on multimodal „No-Line-of-Sight Computer Vision“ for minimally invasive surgery. Specifically, the aim is to improve the accuracy and safety of surgical interventions by combining the electromagnetic tracking of surgical instruments with image signals, such as X-ray images.
These techniques should enable surgeons to operate with sub-millimetre precision even without a direct line of sight to their instruments.
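A simple way to see why combining two sensors helps is inverse-variance fusion: merging an electromagnetic tracker reading with an image-derived position estimate yields a position with lower uncertainty than either source alone. The noise levels and readings below are invented for illustration; real surgical tracking pipelines are far more sophisticated.

```python
import numpy as np

def fuse(mu_em, var_em, mu_img, var_img):
    """Inverse-variance weighted fusion of two independent noisy
    estimates of the same instrument position."""
    w_em, w_img = 1.0 / var_em, 1.0 / var_img
    mu = (w_em * mu_em + w_img * mu_img) / (w_em + w_img)
    var = 1.0 / (w_em + w_img)  # always below the smaller input variance
    return mu, var

# Simulated instrument-tip position (mm) observed by two sensors
rng = np.random.default_rng(1)
true_pos = np.array([12.0, 5.0, 30.0])
em_reading = true_pos + rng.normal(0, 0.5, 3)   # EM tracker: 0.5 mm noise
img_reading = true_pos + rng.normal(0, 0.3, 3)  # X-ray-derived: 0.3 mm noise
fused_pos, fused_var = fuse(em_reading, 0.5**2, img_reading, 0.3**2)
```

Because the fused variance is strictly smaller than that of either sensor, combining modalities is what makes sub-millimetre accuracy plausible in the first place.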
Alongside his research, Mukhopadhyay co-hosts the “AI-ready Healthcare” podcast. With guests ranging from doctors and scientists to industry experts, global healthcare representatives, and patient advocates, the podcast covers topics from technical challenges to societal impacts. Now in its 9th season, it is also supported by Hessian.AI.