Opening of the AI Innovation Lab of hessian.AI, unique in Germany, at the
GSI Helmholtz Centre in Darmstadt provides companies with
application-oriented AI computing infrastructure and AI expertise

Wiesbaden. Hessen’s Minister for Digital Affairs, Prof. Dr. Kristina Sinemus, opened the AI Innovation Lab of the Hessian Centre for Artificial Intelligence (hessian.AI) at the GSI Helmholtz Centre for Heavy Ion Research in Darmstadt today. The TU Darmstadt project, funded with around 10 million euros, serves as a contact point for companies, start-ups, and science, with the central aim of providing access to an AI supercomputer infrastructure. In the laboratory, AI systems and applications can be developed, trained, tested, and evaluated.

Users from research and application receive support in the conception and implementation of AI projects and access to the infrastructure, as well as help in adapting to alternative hardware architectures and performing computationally intensive AI tasks. Companies can thus accelerate processes, make workflows more efficient, and achieve breakthroughs. Industries that benefit from this computing infrastructure include finance, biotechnology, pharmaceuticals, mobility, and logistics.

“Sustainable and cutting-edge AI computing infrastructure is a prerequisite for the long-term economic success of companies. With the AI Innovation Lab, we are creating a unique center nationwide that will increase the startup dynamics in Hessen, enhance the state’s innovation capacity, and provide a competitive advantage,” emphasized Hessen’s Minister for Digital Affairs, Prof. Dr. Kristina Sinemus.

“Many Hessian start-ups are using AI for their innovative business models – from agri-technology to finance and environmental technologies. That’s why the AI Innovation Lab at the Green IT Cube plays a key role in transferring knowledge from science to industry. At the same time, we are strengthening Hessen as a start-up location for sustainable business ideas. And we need these for the economic transformation in Hessen: we want to support the transition to a climate-neutral economy and make Hessen the leading location for green start-ups,” said Hessen’s Minister of Economics Tarek Al-Wazir, pointing out that one third of the start-ups in Hessen are already green start-ups.

The TU Darmstadt concluded a framework agreement with the GSI Helmholtz Centre to house the hardware of hessian.AI’s AI Innovation Lab in the water-cooled Green IT Cube, one of the most sustainable computing infrastructures in the world. Taken as a whole, the AI Innovation Lab will rank among the top 300 AI supercomputers worldwide. With 38 computing nodes, 304 GPUs (graphics processors), and half a petabyte of storage, it provides an excellent infrastructure for research and development. The computers weigh approximately six tons, and several kilometers of cable were installed.

“The establishment of the AI Innovation Lab in the Green IT Cube creates a bridge between cutting-edge research and application, because low energy consumption is a key requirement for the sustainable use of AI and the operation of powerful data centers,” explained Minister of Science Angela Dorn. “Therefore, the Ministry of Science has provided 5.5 million euros from the European Regional Development Fund (ERDF) for the expansion of the Green IT Cube into a research and transfer center for water-cooling of large-scale computers.”

Prof. Dr. Tanja Brühl, President of the TU Darmstadt: “As a forward-looking component of the strong Hessian AI ecosystem, the AI Innovation Lab creates ideal conditions to transfer excellent AI research from the TU Darmstadt and all participating universities at hessian.AI into applications in breadth and depth. With the help of robust, secure, and efficient AI systems, we want to develop solutions for global challenges in exchange with our partners in business and society. I am delighted that we can continue to pursue this goal in hessian.AI thanks to the great support of the Hessian state government.”

Prof. Dr. Dr. h.c. Mira Mezini, Co-Director of hessian.AI: “The AI Innovation Lab of hessian.AI, the Hessian Center for Artificial Intelligence, opens up new opportunities for Hessian companies, start-ups, and science in dealing with AI as a key technology. Access to large compute infrastructures and the offer of individual services from a single source, in close connection with the cutting-edge research of hessian.AI, are necessary prerequisites for unlocking the potential for new AI innovations in the Hessen region and thus promoting AI sovereignty. It is great that with the help of the Hessian state government, we can further advance AI cutting-edge research and application in Hessen.”

“High-performance computing and the use of artificial intelligence play a big role in modern science and are rapidly gaining importance. Our sustainable data center, Green IT Cube, provides the best conditions to drive forward the development of AI and connect it with our research. The funding we received through the REACT-EU program allowed us to expand our available capacity for use by external project partners. These funds also enable the development of important synergies, such as the one we are proud to inaugurate today,” said Professor Dr. Dr. h.c. Paolo Giubellino, Scientific Managing Director of GSI and FAIR.

Dr. Ulrich Breuer, Administrative Managing Director of GSI and FAIR, added: “With the Digital Open Lab real-world laboratory, an environment for the development, testing and upscaling of energy-efficient high-performance computing to the scale of industrial demonstrators has been provided. Through collaborations with academic institutions and companies, particularly startups, we offer a platform to contribute to green computing and the development of AI-based technologies. We are delighted to welcome hessian.AI as a partner on our campus.”

Prof. Dr. rer. nat. Johannes Kabisch, Chief Scientific Officer of Proteineer GmbH: “Proteineer GmbH uses AI on a large scale, for example to find new proteins for the production of mRNA active ingredients in huge datasets for our customers. The graphics processors and computing nodes in the AI Innovation Lab will help us significantly improve and accelerate these developments.”

Michael Wilczynska, CEO of WIANCO OTT Robotics, said, “The advancements of the disruptive cognitive AI solution EMMA include AI modules based on neural networks that require a high level of computing power to train models, for example to classify wood defects in production batches and automatically adjust the resulting evaluation for the process. In addition to excellent AI computing infrastructure, the AI Innovation Lab provides a holistic range of development-related components that make the business location even more attractive and significantly increase its performance.”

“Machine learning and computationally intensive algorithms are at the core of our products and research activities. The GPU cluster in the Green IT Cube provides the necessary computing capacity regionally to further expand our competitive advantage,” said Dr.-Ing. Stéphane Foulard, CEO of Compredict GmbH.

Dr. Andreas Knirsch, Head of Software at Wingcopter GmbH, said: “The computing infrastructure of the AI Innovation Laboratory could help us tremendously in training and testing our AI to the extent necessary for autonomous, yet safe and reliable flights. The initiative strengthens our location and keeps know-how and experts on a central topic of the future in the country.”

“Given the steadily increasing complexity of deep learning models, the requirements for both humans and machines to use the systems are also increasing. The AI Innovation Laboratory addresses both areas and creates a very good starting point for start-ups from the Rhine-Main area and beyond,” said Erik Kaiser, CEO of summetix GmbH.

Interlocking building blocks of the AI future agenda

“Hessen has the potential to become the Silicon Valley of Europe, and as a state government, we are investing in the future technology of AI to make Hessen future-proof in both cities and rural areas. We are convinced that AI can only unleash its potential if people have confidence in the development and use of AI. This applies to existing measures such as the Hessian Center for Artificial Intelligence hessian.AI or the Center for Responsible Digitalization ZEVEDI, as well as our nationally unique “AI Quality & Testing Hub”. And with our funded Center for Applied Quantum Computing, Hessen is already preparing for the use of the next generation of supercomputers,” concluded Sinemus.

About Prof. Dr Michael Guckert

Prof. Dr Michael Guckert conducts research at the Technical University of Central Hesse in the Department of Mathematics, Natural Sciences and Data Processing. He studied mathematics at the Justus Liebig University in Gießen and earned his doctorate in computer science at the Philipps University in Marburg.

After a few years in industry, Guckert moved to the Technical University of Central Hesse, where he now conducts research on artificial intelligence. He is a founding member of hessian.AI.

Prof. Dr Michael Guckert

Guckert brings man and machine together with Deep Learning

The computer scientist juggles with numbers: Time series are his balls, the patterns in them their trajectories. Using deep learning methods, Guckert examines data produced by industrial plants or the human body.

Both have something in common: they provide helpful information for diagnostics. In the case of the human heart, this can be an ECG, for example, which indicates arrhythmias or infarctions. With the machine, it may be an incorrect setting or a missing part that leads to lower performance.

Using AI diagnostics, Guckert predicts how likely a human is to suffer from a disease, or a machine from a fault. His research provides exciting insights: the patterns of humans and machines are similar.
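Guckert’s own models are deep networks, but the underlying idea of spotting anomalous patterns in a time series can be sketched with a much simpler statistic. The following Python snippet is purely illustrative (our own construction, not code from his group): it flags points that deviate sharply from the recent “normal” behaviour, the same principle a learned model applies with far richer features.

```python
# Illustrative sketch: flagging anomalous points in a time series with a
# sliding-window z-score. A deep-learning model replaces this hand-made
# statistic with learned features, but the idea is the same: deviations
# from the "normal" pattern signal a problem (arrhythmia, machine fault).
from statistics import mean, stdev

def zscore_anomalies(series, window=10, threshold=3.0):
    """Return indices whose value deviates strongly from the
    preceding window of normal behaviour."""
    anomalies = []
    for i in range(window, len(series)):
        ref = series[i - window:i]
        mu, sigma = mean(ref), stdev(ref)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# A regular "heartbeat-like" signal with one injected irregularity.
signal = [0, 1, 0, -1] * 10
signal[25] = 8  # simulated arrhythmic spike
print(zscore_anomalies(signal))  # → [25]
```

The same function works unchanged on an ECG trace or a machine’s power-consumption curve, which is exactly the transferability the article describes.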

Guckert and his team have shown, for example, that the same deep-learning methods can be used to analyse ECGs, the noise propagation in a salt dome and the power consumption curve of an aluminium-dyeing machine. “Compared to traditional methods, the DL model achieved equally good results for ECGs,” says Guckert.

Guckert sees a central challenge in the availability of data and regulation: “Where do we get the data? This is particularly difficult in the medical environment because it is especially sensitive personal data that requires careful handling.” The models developed must not be too specific. Only then could they recognise a broad spectrum of diseases. That is why Guckert works with combinations of deep learning models.

When it comes to translating research into applications, Guckert sees a challenge in the requirements for medical devices and the use of AI in them. Not least for this reason, he appreciates the close cooperation of his department with companies.

In his search for sufficient computing capacity for his Deep Learning methods, Guckert found what he was looking for in the hessian.AI network. Meanwhile, he sees the proximity of the research network to industry and Hessian SMEs as a great opportunity for the transfer of AI from research into practice: “The balancing act between basic research at the highest level and application in SMEs – that’s where hessian.AI will play a big role.”

AI helps in emergencies and against the shortage of skilled workers

Guckert sees concrete use cases for artificial intelligence in his field of expertise, particularly in medicine and industry. The patterns that Guckert recognises with his DL methods bring several advantages at once.

About Prof. Dr. Li Zhang

Prof. Dr. Li Zhang is a researcher at the Department of Electrical Engineering and Information Technology at the Technische Universität Darmstadt. She received her Ph.D. from TU Munich, where she was a group leader for Heterogeneous Computing from 2018 to 2022.

Since 2022, she has been Assistant Professor of Hardware for AI at TU Darmstadt.

Prof. Dr. Li Zhang

Understanding why AI works

Artificial intelligence has become an integral part of our daily lives, from recommending products we might like, to detecting anomalies in medical scans, to generating an image from words. But despite the prevalence of AI, there is still much to understand about it.

Particularly with complex and multi-layered models, it can be difficult to understand exactly how an AI system arrives at its final predictions or decisions. Neural networks are often described as “black boxes” because their decision-making process remains partly opaque.

Zhang’s research focuses on understanding the nature of artificial intelligence from a hardware perspective: her hardware and algorithmic designs enable more efficient and advanced models that provide increased interpretability for AI.

“My goal is to explore the relationship between logic design and deep neural networks,” Zhang says, adding that her research focuses on the hardware perspective. Analogue and digital circuit designs play a crucial role in the implementation of AI systems by providing the computational resources needed to run and interpret AI algorithms and models, such as neural networks – and both have their strengths and weaknesses.

Her primary goal is to develop hardware and algorithmic designs that can utilize both circuits’ strengths while mitigating their limitations. Analog circuits, for example, are more energy-efficient and faster, but they lack the accuracy and robustness of digital circuits: “Analog accelerators can be up to 70% faster compared to digital ones, but they are less accurate and not reliable enough”, says Zhang.

However, compute demands are increasing faster than hardware can keep up with: “The development of AI is fast. But hardware for AI cannot catch up with that.”

Paving the way for sustainable AI systems

Increasing the efficiency and computational power of hardware and algorithmic designs would allow researchers to gain better insights into how AI reaches a conclusion, optimize their performance, and develop more advanced algorithms that take advantage of the hardware’s capabilities.

Currently, most AI systems utilize Graphics Processing Units (GPUs) to train neural networks. This approach has one key drawback, according to Zhang: “The high power consumption of GPUs is a very, very large issue”.

Her hardware and algorithmic designs offer researchers and industry a way to train models with better power efficiency, ultimately leading to more sustainable AI systems.

Collaboration is a key aspect of Zhang’s research, and she sees interdisciplinary research as essential to developing sustainable, efficient and robust AI systems. For Zhang, hessian.AI provides a means for researchers from different disciplines to collaborate, find synergies and gain a deeper understanding of AI.

About Prof. Dr. Gemma Roig

Prof. Dr. Gemma Roig started her career in telecommunications engineering, signal processing and mathematics, where she developed her interest in artificial intelligence. From 2011, she deepened her AI research in a PhD programme at ETH Zurich, which she completed in computer vision in 2014.

Prof. Dr. Gemma Roig

In Zurich, Roig began to work on the question of how knowledge about the human brain can be used for better AI. From 2014, she pursued this research at the Massachusetts Institute of Technology, at the Center for Brains, Minds and Machines, with the renowned neuroscientist Tomaso Poggio.

In 2017, Roig accepted an assistant professorship at the Singapore University of Technology and Design. In 2020, she finally accepted an appointment as assistant professor at Goethe University Frankfurt am Main.

Computational vision instead of computer vision

“I research artificial intelligence and its relationship to human intelligence, from both perspectives,” is how Prof. Dr Gemma Roig describes her research focus. Her projects are correspondingly interdisciplinary: Sometimes she tries to use AI to better understand the brain, then she uses findings from cognitive science and her projects to design better AI systems.

With this approach, she wants to narrow down the almost infinite space of possible AI architectures and methods – after all, “the brain seems to work quite well”.

Her research group therefore also bears the name “Computational Vision and Artificial Intelligence” – and thus distinguishes itself from the common term computer vision, which often develops methods for machine image recognition without recourse to cognitive science findings.

Can AI systems predict brain data?

The scientist is researching the relationship between AI and biological brains, among other things as part of the Algonauts 2023 Challenge: The project, which started in 2019, focuses on predicting recorded responses of the human brain when perceiving complex natural visual scenes…

Roig is researching AI models that can predict the progression of MRI recordings. Such models help in understanding the brain and at the same time could enable more robust AI systems, says Roig. The FU Berlin, MIT and the University of Minnesota are also involved in the international project.

One of the sponsors of the Algonauts Challenge: hessian.AI. Roig sees her participation in the centre as an opportunity to collaborate with other AI disciplines: “I want to explore how I can integrate the models that other hessian.AI scientists are developing into my research,” says Roig.

For the Algonauts project, for example, she says integrating continuous learning methods into multimodal systems is interesting. She also appreciates the focus on transparent and interpretable AI systems in hessian.AI.

Progress at the interface of sciences

Multimodality is another of Roig’s research areas. With her team, she develops systems that process not only images, but also text and audio, and is guided by the findings of cognitive science.

She is also a bridge professor between AI and cognitive science in the DFG-funded interdisciplinary research project ARENA, which aims to better understand how knowledge is organised at different levels of abstraction – both in the brain and in AI models.

Roig sees interdisciplinarity as a central challenge of her research: “It takes time to acquire knowledge from other fields.” She is therefore developing a multidisciplinary study programme that will produce researchers in the future who have studied directly at the interface of computer science and cognitive science.

“At the interface of different research fields, we can make many discoveries. This is a promising way to advance in science,” says Roig.

About Dr. Ivan Habernal

Dr Ivan Habernal is a researcher at the Department of Computer Science at Darmstadt University of Technology. He studied and obtained his doctorate at the University of West Bohemia in Pilsen, Czech Republic.

After various positions in industry, he has been a junior research group leader in the research group “Trustworthy Human Language Technologies” since 2021.

Dr Ivan Habernal

His AI methods protect personal data

Natural language processing (NLP) with artificial intelligence has made rapid progress in recent years. Whether it’s voice assistants in smartphones or AI-based text generation with ChatGPT: NLP is now ubiquitous and provides great benefits.

For an AI to learn and process natural language, it needs large amounts of data. Often, the protection of personal data is a particular challenge: reviews of medicines or court rulings can contain sensitive information that should not be included in the AI models.

Habernal wants to optimise these models and is researching how AI can preserve privacy. The simplest form is, for example, the anonymisation of texts. However, sometimes even redacted data contains sensitive information from which conclusions about individuals can be drawn: “There are models that can infer a person’s gender, social class or entire history.”
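The “simplest form” mentioned here, rule-based anonymisation, can be sketched in a few lines of Python. The patterns and placeholder tags below are purely illustrative, not a real anonymisation system; as the article notes, this kind of surface redaction leaves exactly the indirect cues that Habernal’s models aim to detect.

```python
# Minimal sketch of rule-based text anonymisation. The rules below are
# illustrative only: real systems combine many more patterns with trained
# named-entity recognisers -- and even then, indirect cues (writing style,
# context) can still allow re-identification.
import re

REDACTION_RULES = [
    (re.compile(r"\b\d{2}\.\d{2}\.\d{4}\b"), "[DATE]"),      # e.g. 03.05.1987
    (re.compile(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b"), "[NAME]"),  # naive "First Last"
    (re.compile(r"\b\d{5}\b"), "[POSTCODE]"),                # German postcodes
]

def redact(text):
    for pattern, placeholder in REDACTION_RULES:
        text = pattern.sub(placeholder, text)
    return text

print(redact("Anna Schmidt, born 03.05.1987, lives in 64289 Darmstadt."))
# → [NAME], born [DATE], lives in [POSTCODE] Darmstadt.
```

Note that even this output still reveals a city, illustrating why surface redaction alone is rarely sufficient.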

That is why Habernal and his research group are developing their own AI models that automatically recognise indirect correlations of personal data and exclude them.

Habernal sees a central challenge in finding the balance between the accuracy of the AI model and the degree of privacy. The AI should work as well as possible and ideally not process sensitive data.

Interdisciplinary research for more logic in AI

The computer scientist is working in this area of tension in another research project, on legal natural language processing. Habernal and his team are developing a model to facilitate the work of legal scholars.

The AI is supposed to automatically recognise argumentation patterns and logic in court decisions and replace manual annotation as far as possible. For Habernal, this requires close cooperation with lawyers: “Law is not computer science. The approach is different and takes time – but it’s worth it.”

Habernal therefore pursues interdisciplinary research. This is where he sees the strengths of hessian.AI, which provides support through funding programmes and start-up financing, for example. In one funded project, Habernal and lawyers are jointly analysing court hearings in order to use AI to learn more about the arguments, their logic and their influence on judgements – always with an eye on protecting privacy.

About Prof. Dr. Alexander Gepperth

Prof. Dr. Alexander Gepperth is a Professor of Programming and Machine Learning in the Department of Applied Computer Science at Fulda University of Applied Sciences.

Gepperth obtained his Bachelor’s degree in physics from Ludwig-Maximilians-Universität in Munich, and later pursued a doctorate at the Institute for Neuroinformatics at Ruhr-Universität Bochum. During his time there, he conducted research on neural learning methods for visual object recognition in the Driver Assistance department.

Alexander Gepperth

After obtaining his doctorate in 2006, Gepperth joined Honda Research Institute Europe GmbH, where he worked as a senior scientist on basic research related to in-vehicle learning and the development of prototypes.

In 2011, Gepperth returned to university: until 2016, the scientist was a professor at ENSTA ParisTech in France and was responsible, among other things, for the “Intelligent Vehicle” specialisation course.

In 2016, Gepperth followed the call to Fulda University of Applied Sciences.

How artificial neural networks should learn more thoroughly

Gepperth researches neural networks for object and image recognition, a topic that has accompanied him since his doctorate. In the past ten years, he has focused on continuous learning. “This is what we humans do all the time. That’s why it’s difficult to explain to someone that it’s something special,” Gepperth says.

Even the best deep-learning methods today fail at continuous learning for machines. An example: a classifier has learned to recognise cats. If the network is now to learn to recognise dogs as well, it has to be completely retrained – or it forgets everything it has already learned. In research, this is called “catastrophic forgetting”. Continuous learning is supposed to provide a remedy.
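Catastrophic forgetting can be reproduced with even the smallest possible model. The following toy sketch is our own illustration, not Gepperth’s code: a one-input logistic classifier is trained on task A, then training continues on task B alone, after which the decision boundary learned for task A is overwritten.

```python
# Toy demonstration of catastrophic forgetting with a one-input logistic
# classifier trained by plain gradient descent. After sequential training
# on task B only, accuracy on task A degrades, because the single decision
# boundary is simply moved to wherever the most recent data needs it.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, w, b, epochs=3000, lr=0.1):
    """Logistic regression on 1-D inputs, per-sample gradient descent."""
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(w * x + b)
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

def accuracy(data, w, b):
    return sum((sigmoid(w * x + b) > 0.5) == bool(y) for x, y in data) / len(data)

task_a = [(-2, 0), (-1, 0), (1, 1), (2, 1)]  # decision boundary near x = 0
task_b = [(3, 0), (4, 0), (6, 1), (7, 1)]    # decision boundary near x = 5

w, b = train(task_a, w=0.0, b=0.0)
print("accuracy on task A after training on A:", accuracy(task_a, w, b))  # task A learned

w, b = train(task_b, w, b)  # continue training on task B only
print("accuracy on task A after training on B:", accuracy(task_a, w, b))  # task A forgotten
```

Real networks have many parameters rather than one boundary, but the mechanism is the same: nothing in plain gradient descent protects what was learned earlier.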

Gepperth is researching this fundamental problem. With his team, he is developing new concepts for continuous learning. In March 2022, for example, he published a paper on the algorithmic foundations of learning with constant time complexity.

Neural Networks and Time Complexity

Gepperth sees the greatest challenge in time complexity: in biological brains, he says, there are mechanisms that prevent us from forgetting everything when we learn something new – no matter how much we already know.

In machine learning, some methods could prevent catastrophic forgetting, such as freezing certain parts of the neural network. But with these methods, learning over time takes longer the more the network already knows, says Gepperth.

He is therefore looking for methods that protect what has been learned at a cost that does not depend on how much has already been learned. A scientific paper by his team on the topic was recently submitted to CVPR, one of the most important conferences for computer vision.

Robots that learn new faces and the big picture

hessian.AI supports Gepperth in his research. The centre offers exchange with first-class researchers and a scientific community in which young scientists can discuss, for example, questions about the review process or solutions to problems.

The big picture also counts: “Is what I am doing useful at all?” is a question that researchers can answer better thanks to the community.

In the case of continuous learning, the answer is clearly yes. Advances in this technology would enable the widespread use of AI systems, says Gepperth. “Systems that can’t do that will have to be completely retrained at some point.”

Continuous learning systems, on the other hand, require less maintenance and training. Robots, for example, could easily learn to recognise new faces.

The same methods also enable selective forgetting in neural networks – an important capability for data protection and privacy issues.