Connectom Networking and Innovation Fund

The Connectom Networking and Innovation Fund offers seed funding for collaborative research by hessian.AI members and other colleagues at and between the participating universities. In a competitive process, time-limited projects across the entire spectrum of research, teaching, education, and application are funded with a maximum of 40,000 EUR per project. The Connectom Networking Fund is announced once a year. The submitted project outlines are evaluated and selected by a committee consisting of representatives of all universities participating in hessian.AI.

Eligible to apply are employees of the universities participating in hessian.AI who hold at least a doctorate. A founding member of hessian.AI must be part of the applying consortium. External researchers can be involved in the project as cooperation partners but cannot receive funding from the Connectom Networking Fund.

Funded projects in the fourth round of calls (2023)

Smart Assistant for Image-guided Needle Insertion

Dr. Anirban Mukhopadhyay, TUDa, FB Elektrotechnik und Informationstechnik
Prof. Dr. Jan Peters, TUDa, FB Informatik

Ultrasound (U/S) guided percutaneous needle navigation is a common task for clinicians performing diagnosis (biopsy) and treatment (ablation, neoadjuvant therapy). Such procedures require dexterous movement of the needle in tandem with the U/S probe, and depending on the insertion angle, the needle can become barely visible on the U/S image. The task is further complicated by patient and tissue motion, and it demands two steady hands under massive cognitive load. The combination of rising cancer incidence in Europe's aging population and the lack of a skilled clinical workforce makes timely cancer diagnostics and care a critical health problem. Our goal is to develop a smart robotic assistant that can enable unskilled health workers to perform routine cancer diagnostics and care.

Multi-objective Design of Advanced Materials via Compensated Bayesian Neural Networks

Prof. Hongbin Zhang, TUDa, FB Materials Sciences
Prof. Grace Li Zhang, TUDa, FB Elektrotechnik und Informationstechnik

The project aims to develop a neural network-based adaptive design framework and apply it to the multi-objective design of advanced materials. Achieving this goal requires robust construction of latent spaces and uncertainty quantification. However, these two requirements cannot be directly satisfied with conventional weight-based neural networks that produce single-valued predictions. In this project, we will apply error suppression and compensation to enhance the robustness of neural networks in performing dimensionality reduction, and uncertainties will be modeled and evaluated with Bayesian neural networks based on statistical training. These implementations will be used for the multi-objective adaptive design of novel functional materials, including permanent magnets, thermoelectric materials, and high-entropy alloys, using existing databases, and will further be developed into a generic framework for future autonomous experimentation.
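To illustrate the uncertainty-quantification ingredient in the abstract above, here is a generic sketch (not the project's compensated Bayesian networks; all weights and sizes are arbitrary toy values): Monte Carlo dropout approximates a Bayesian neural network by keeping dropout active at prediction time and sampling repeatedly, which yields a predictive mean together with a spread that serves as an uncertainty estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network with fixed random weights.
W1 = rng.normal(size=(1, 32)); b1 = np.zeros(32)
W2 = rng.normal(size=(32, 1)); b2 = np.zeros(1)

def predict_once(x, p_drop=0.2):
    h = np.tanh(x @ W1 + b1)
    mask = rng.random(h.shape) > p_drop   # dropout stays ON at prediction time
    h = h * mask / (1.0 - p_drop)
    return h @ W2 + b2

def predict_with_uncertainty(x, n_samples=200):
    # Each stochastic forward pass is one sample from an approximate
    # posterior predictive; mean and std summarize it.
    samples = np.stack([predict_once(x) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.std(axis=0)

x = np.array([[0.5]])
mean, std = predict_with_uncertainty(x)
```

In an adaptive-design loop, inputs with a high predicted std would be the candidates worth evaluating next.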

hessian.FLY: Sustainable Cereal Cultivation through AI-based Early Detection of Pests

Prof. Dr. Dominik L. Michels, TUDa, FB Computer Science
Prof. Dr. Bernd Freisleben, Philipps-Universität Marburg, FB Mathematics and Computer Science
Prof. Dr. Kristian Kersting, TUDa, Department of Computer Science

Plant protection products are used in agricultural fields to protect crops from pests, diseases, and weeds. They enable the cultivation practice of closely timed, monotonous crop rotations, but at the cost of highly problematic losses in landscape and species diversity. Furthermore, pesticide residues end up in animal feed and in food consumed by humans. Growing grain at the scale currently required (driven by climate change, the war in Ukraine, and other factors) is only possible with enormous quantities of pesticides, herbicides, fungicides, and insecticides. As a result, around 385 million people fall ill each year from pesticide poisoning. To reduce the dramatic consequences of the current form of cereal cultivation for nature, humans, and animals, the use of pesticides must be curtailed substantially. Instead of applying pesticides on a large scale to entire fields, they should be applied in a targeted manner only to those areas where this is really indicated.
This requires automatic early detection of diseases and pests. The objective of this research project is therefore to develop a prototype of an early detection system for pests in cereal crops. First, a flying drone (UAV) takes close-up images of various groups of cereal plants in the field using a high-resolution camera. These images are then examined with computer vision methods for signs of the Hessian fly (Mayetiola destructor, also known as the cereal devastator), one of the most problematic pests in cereal crops. Subsequently, a suitable insecticide can be sprayed on the affected areas in a targeted manner (precision spraying). The machine learning (ML) sub-discipline of AI provides suitable deep neural networks (deep learning) that can be used to detect the yellowed spots caused by the presence of the Hessian fly.
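The detection-then-targeting pipeline can be sketched with a deliberately simple color-threshold baseline (not the deep neural networks the project will use; all threshold values below are invented example numbers): divide an aerial image into patches, estimate the share of yellowish pixels in each, and flag patches that exceed a threshold as candidate targets for precision spraying.

```python
import numpy as np

def yellow_fraction(patch_rgb):
    # Crude stand-in for discoloration detection: a pixel counts as
    # "yellowish" if red and green are high while blue is low.
    r, g, b = patch_rgb[..., 0], patch_rgb[..., 1], patch_rgb[..., 2]
    yellow = (r > 150) & (g > 150) & (b < 100)
    return yellow.mean()

def flag_patches(image_rgb, patch=64, threshold=0.05):
    """Return (row, col) grid indices of patches to target for spraying."""
    flagged = []
    h, w, _ = image_rgb.shape
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            if yellow_fraction(image_rgb[i:i + patch, j:j + patch]) > threshold:
                flagged.append((i // patch, j // patch))
    return flagged

# Synthetic example: a green field with one yellowed area in the upper right.
img = np.zeros((128, 128, 3), dtype=np.uint8)
img[..., 1] = 120                   # greenish background
img[0:64, 64:128] = [200, 200, 40]  # yellowed patch
targets = flag_patches(img)         # -> [(0, 1)]
```

A trained CNN would replace `yellow_fraction` in practice; the surrounding patch-and-flag logic stays the same.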

“AI-ready Healthcare” Podcast

Dr. Anirban Mukhopadhyay, TUDa, FB Elektrotechnik und Informationstechnik

When developing AI-powered technology for healthcare, which is high-risk and intensely human, public engagement and open conversation play a central role in societal readiness for the technology. Knowledge dissemination in conversation with diverse stakeholders is therefore crucial and should not be treated as an afterthought. Yet, without a dedicated channel, the global academic community of medical AI has been underutilizing the full potential of public engagement. The “AI-ready Healthcare” podcast bridges this gap. Podcasts are a great medium for knowledge dissemination, constructive arguments, and deep, insightful discussions. Firmly rooted in the advanced AI technology developed within the academic community, we explore the dynamic landscape of healthcare AI. We converse with international stakeholders from widely different backgrounds: medical AI colleagues, physician scientists, industry colleagues, regulatory personnel, patient representatives, and global-health advocates, to name a few. These conversations lead to deep insights about the translational aspects of AI research in clinical care, which are often not discussed in traditional forms of communication such as peer-reviewed articles. The podcast has a two-fold impact: broadening the horizon of technical problems for academic researchers and significantly increasing the visibility of medical AI research.

Lifelong Explainable Robot Learning

Prof. Dr. Carlo D’Eramo, University of Würzburg and TUDa, FB Computer Science
Prof. Dr. Georgia Chalvatzaki, TUDa, FB Computer Science

Current demographics and reports on the shortage of care staff make the need for intelligent robots that can act as general-purpose assistants imperative. While robot learning holds the promise of endowing robots with complex skills through experience and interaction with the environment, most methods overfit to single tasks and do not generalize well in the non-stationary real world. Humans, in contrast, constantly learn while building on existing knowledge. Lifelong robot learning stipulates that an agent can form representations that are useful for learning a series of tasks continually, avoiding catastrophic forgetting of earlier skills. We propose to study a method that allows robots to learn, through multimodal cues, a series of behaviors that are easily composable into more complex behaviors. We propose adapting large pretrained foundation models for language and vision to robotics-oriented tasks. We will investigate the design of novel parameter-efficient residuals for lifelong reinforcement learning (RL) that allow us to build on previous representations when learning new skills while avoiding the two key issues of task interference and catastrophic forgetting. Crucially, we will investigate forward and backward transfer, and inference, under the lens of explainability, to enable robots to explain to non-expert users the similarities found across tasks throughout their training life, and even to map their actions while executing a task into natural-language explanations. We argue that explainability is a crucial component for increasing the trustworthiness of AI robots during their interaction with non-expert users. We therefore call this line of work Lifelong Explainable Robot Learning (LExRoL), which opens new avenues of research in the field of lifelong RL and robotics.
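One hypothetical reading of "parameter-efficient residuals" (our assumption for illustration, not the project's actual method) is a frozen shared layer plus a small low-rank residual per task: new skills then add only a handful of parameters and cannot overwrite what earlier tasks learned, which directly targets task interference and catastrophic forgetting.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank = 16, 16, 2

# Shared representation, frozen after pretraining: never updated again.
W_frozen = rng.normal(size=(d_in, d_out))

def new_task_adapter():
    # Low-rank residual: d_in*rank + rank*d_out parameters
    # instead of a full d_in*d_out matrix per task.
    return (rng.normal(size=(d_in, rank)) * 0.01,
            rng.normal(size=(rank, d_out)) * 0.01)

def forward(x, adapter):
    A, B = adapter
    # Base behavior plus a task-specific correction.
    return x @ W_frozen + x @ A @ B

# Each new skill gets its own tiny adapter; old adapters stay untouched.
adapters = {task: new_task_adapter() for task in ["reach", "grasp"]}
x = rng.normal(size=(1, d_in))
y_reach = forward(x, adapters["reach"])
```

Because only the adapter of the current task is trained, learning "grasp" cannot degrade "reach", one simple way to realize the backward-transfer guarantee discussed above.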

SPN to AI-Engine Compiler (SAICo)

Prof. Dr.-Ing. Andreas Koch, TUDa, Department of Embedded Systems and Applications (ESA)
Prof. Dr. Kristian Kersting, TUDa, Department of Computer Sciences

The Artificial Intelligence and Machine Learning Lab (AIMLL) and the Department of Embedded Systems and their Applications (ESA) have been working on various aspects of Sum-Product Networks (SPNs) for several years. SPNs are a machine learning model closely related to the class of probabilistic graphical models; they compactly represent multivariate probability distributions and support efficient inference. These properties allow, for example, the augmentation of neural networks, increasing their accuracy and enabling evaluations of their predictions. In the context of SPNs, SPFlow is one of the most relevant software libraries. SPFlow is primarily developed and maintained by AIMLL and makes it quick and easy to create and train different types of SPNs. SPFlow offers various ways to create SPNs: standard training routines can be used as well as custom training approaches. Furthermore, SPFlow is extensible, so that the models, as well as training and inference, can be adapted accordingly. A loose collaboration between AIMLL and ESA already resulted in a project in 2018 that focused on accelerating SPN inference. The know-how related to MLIR and the acceleration of SPNs will be used to extend the SPN compiler (SPNC) to support compilation for AI Engines (AIEs) in the future. The overall goal of the project is to open up AI Engines as a target architecture for SPN inference. In particular, various possibilities for optimizing the models are to be evaluated, since new architectures can often only be used optimally if the peculiarities of the architecture, as well as of the corresponding models, are exploited. The SAICo project thus optimally combines the compiler- and architecture-specific experience of the ESA group with the model-related know-how of AIMLL.
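For readers unfamiliar with SPNs, here is a minimal, self-contained sketch of SPN inference (independent of SPFlow's or SPNC's actual APIs): an SPN is a DAG of sum nodes (weighted mixtures), product nodes (factorizations over disjoint variable scopes), and leaf distributions, and evaluating the joint probability of an assignment is a single bottom-up pass. That regular, feed-forward structure is what makes SPN inference attractive for hardware acceleration.

```python
import math

class Leaf:
    """Categorical leaf: maps the value of one variable to a probability."""
    def __init__(self, var, probs):
        self.var, self.probs = var, probs
    def value(self, x):
        return self.probs[x[self.var]]

class Product:
    """Product node: factorization over children with disjoint scopes."""
    def __init__(self, children):
        self.children = children
    def value(self, x):
        return math.prod(c.value(x) for c in self.children)

class Sum:
    """Sum node: weighted mixture of children over the same scope."""
    def __init__(self, weights, children):
        self.weights, self.children = weights, children
    def value(self, x):
        return sum(w * c.value(x) for w, c in zip(self.weights, self.children))

# P(X0, X1) as a mixture of two fully factorized components.
spn = Sum([0.3, 0.7], [
    Product([Leaf(0, [0.9, 0.1]), Leaf(1, [0.2, 0.8])]),
    Product([Leaf(0, [0.4, 0.6]), Leaf(1, [0.5, 0.5])]),
])

p = spn.value({0: 1, 1: 1})  # joint probability of X0=1, X1=1
```

A compiler such as SPNC essentially flattens this recursive evaluation into straight-line arithmetic, which can then be mapped onto vector units or, as proposed here, AI Engines.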

SyCLeC: Symposium on Continual Learning beyond Classification

Dr. Simone Schaub-Meyer, TUDa, Department of Computer Sciences
Dr. Martin Mundt, TUDa, Department of Computer Sciences

Recent advances in artificial intelligence (AI) are in large part dominated by improving performance numbers or qualitatively appealing examples. In part, this may be due to the way we set up a conventional machine learning workflow. We tend to start by defining a constrained, well-defined task, gather data for it, choose a statistical model to learn it, and later commonly conclude that an approach is successful if it performs well on the corresponding, dedicated testing data. Often this process is repeated several times to improve the outcome, either by tuning the model (a model-centric view) or by collecting or curating more data (a data-centric view). However, well-grounded real-world systems tend to require more than the best performance number on some benchmark. Not only are dedicated test sets limited in that they do not account for the variance in appearance of novel, unknown data during deployment; they also do not account for the various ways in which tasks drift over time. A single good number on a test set is thus not reflective of the experiences and changes a system undergoes in the real world. As much as we seem to know how to tune our popular static benchmarks, we seem to know less about how to formulate general learning processes that can continually learn from endless streams of data, and adapt and generalize to modified scenarios, as we humans do.
Continual, or lifelong, machine learning addresses the crucial questions that arise when aiming to overcome the limitations of single training cycles, rigid inference engines, and fixed datasets. In contrast to traditional machine learning benchmarks, it investigates how learners can continue using, expanding, and adapting their knowledge when experiencing changing and novel tasks over time. At its center, it acknowledges that data selection, models, training algorithms, and evaluation measures are not static. Despite its recent surge in popularity, current research has just started to grasp how we can accommodate these factors in human-like lifelong learning AI systems. Although ongoing efforts are starting to take into account complex sequences of datasets, they focus predominantly on image classification tasks. Unfortunately, such tasks significantly simplify learning, e.g. by only performing image-level recognition, assuming that all data is always labelled, and treating continual learning as consisting mainly of learning to recognize new object types. In this initiative, we thus set out to bring together researchers at the forefront of continual learning and computer vision to jointly catalyze the foundation for continual learning beyond classification. In an inaugural symposium, we will establish common ground and discover synergies towards continual learning for a plethora of relevant computer vision tasks, such as semantic segmentation and learning that does not require labels (unsupervised learning).

Neural cellular automata enable federated cell segmentation (FedNCA)

Dr. Anirban Mukhopadhyay, TUDa, Department of Electrical Engineering and Information Technology
Prof. Dr. Heinz Koeppl, TUDa, Department of Electrical Engineering and Information Technology

The trend of ever more resource-intensive models is orthogonal to the goal of democratizing deep learning for all. Cheap access to AI technology and a low entry barrier for contribution are necessary to empower citizen scientists around the world and fully leverage their potential. Widespread collaboration fosters the collection of diverse data, ultimately enabling the solution of complex problems. These include frugal digital-health innovations, potentially providing access to care for the last billion. Under patient-privacy restrictions, federated learning is a feasible solution, but its high network-bandwidth and computational requirements make it inaccessible for clinics that cannot afford such infrastructure. The recently emerging field of Neural Cellular Automata (NCA), models that converge towards a defined goal through local communication only, stands in contrast: NCAs are lightweight in terms of parameters and computational requirements. Yet training NCAs even in a centralized setting is difficult, and federated training of NCAs has never been attempted before. We propose to (1) combine our expertise in self-organization with our experience of developing the first NCA for medical image segmentation to develop a novel lightweight federated NCA learning and inference algorithm; (2) test the developed algorithms on the segmentation of histopathology images within the established collaboration with Peter Wild from University Hospital Frankfurt; and (3) as an extreme example, demonstrate the capacity of federated learning with NCAs over a vast network of cheap computing devices.
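A conceptual sketch of a single NCA update step (illustrative only, not the FedNCA model; all sizes and weights are toy values): every cell holds a state vector and updates it using only its 3x3 neighborhood and one small update rule shared by all cells. Because the learnable parameters are just that shared rule, the model stays tiny, which is the property that makes NCAs plausible for low-resource federated training.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, C = 16, 16, 8                                  # grid size, channels per cell
state = rng.normal(size=(H, W, C))
update_weights = rng.normal(size=(9 * C, C)) * 0.1   # the ONLY parameters, shared by all cells

def nca_step(state):
    H, W, C = state.shape
    padded = np.pad(state, ((1, 1), (1, 1), (0, 0)))
    new_state = np.empty_like(state)
    for i in range(H):
        for j in range(W):
            # Perception: each cell sees only its flattened 3x3 neighborhood.
            neighborhood = padded[i:i + 3, j:j + 3].reshape(-1)
            # Residual update through the shared rule.
            new_state[i, j] = state[i, j] + np.tanh(neighborhood @ update_weights)
    return new_state

state = nca_step(state)  # iterate many times until the grid converges to the target
```

In a federated setting, clients would only need to exchange `update_weights` (here 9*8*8 = 576 parameters), a tiny fraction of the bandwidth a conventional segmentation network would require.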

The Algonauts Project Demonstrator: Explaining the brain with AI models

Prof. Dr. Gemma Roig, Goethe-University, Department of Computer Sciences

The Algonauts Project is an ongoing effort that aims to explore human and machine intelligence with the latest algorithmic tools (Cichy et al., 2019). In this way, the Algonauts Project serves as a catalyst that brings biological and machine intelligence researchers together on a common platform to exchange ideas and advance both fields through challenges and joint workshops. Here, we propose to leverage the ongoing success of the Algonauts Project to engage and motivate young talent, including high school and bachelor students, to become the future leading scientists in AI from an interdisciplinary perspective. To that end, we aim to build a demonstrator that showcases how AI models, specifically artificial deep neural networks, can be used to unveil the brain functioning that leads to human behavior, as well as how the gained insights can guide the design of brain-inspired AI models that might have desirable properties similar to human cognition, such as modularity of functions. This could inform how to support more transparent and explainable model decisions, as well as the design of models that are robust to perturbations and noise. The demonstrator is intended to be interactive, with a user-friendly interface that walks through the three main steps of using AI models to understand the human brain. Step 1 consists of acquiring brain data from humans watching images or videos; step 2 is selecting existing AI models, or building your own, to explain the brain data; and step 3 is matching the two to gain insights into what is happening inside the brain while humans look at the stimuli. For this purpose, we will integrate at the core of the demonstrator our lab's toolbox, called Net2Brain, whose purpose is the integration of AI models to predict brain data (Bersch et al., 2022). We will enhance and further develop it so that it can later be opened to the scientific community as well.
Importantly, we aim at integrating the AI models that are being developed in hessian.AI, e.g., those with human learning characteristics, such as continual learning models (Dr. Mundt) and embodied models from robotics (Prof. Chalvatzaki), as well as interpretability and explainability algorithms.

Development, Evaluation and Transfer of Data Science Tools for Right-Censored and High-Dimensional Data

Prof. Dr. Antje Jahn, Darmstadt University of Applied Sciences, Department of Mathematics and Natural Sciences
Prof. Dr. Sebastian Döhler, Darmstadt University of Applied Sciences, Department of Mathematics and Natural Sciences
Prof. Dr. Gunter Grieser, Darmstadt University of Applied Sciences, Department of Computer Science
Prof. Dr. Bernhard Humm, Darmstadt University of Applied Sciences, Department of Computer Science

In Machine Learning (ML), open-source and free software packages for common ecosystems such as R and Python drive the transfer into practice and serve as the entry point into Data Science for many beginners. Advanced users make informed choices about their tools based on information from academia, where systematic evaluations of different packages and implementations support this choice for a specific application purpose. The overall goal of this project is to make knowledge about new methods for high-dimensional and right-censored data available to both novice and experienced users. High-dimensional data occurs, for example, in text mining or genetic statistics.
Based on recent research on goodness-of-fit (GOF) tests for high-dimensional data, we aim to create an R package that simplifies the application of these methods in the aforementioned domains. Right-censored data often occurs in medical data or in the field of predictive maintenance; examples are predictions of survival probabilities under different medical interventions or predictions of optimal maintenance times for technical equipment. Right-censored data requires special Data Science methods, for which the support available for making an informed choice of implementation is insufficient in places. This support is provided in the present project. The transfer of the achieved results will be carried out under the aspect of the “Third Mission”, among other things in the form of a multimedia campaign consisting of a short-video channel, a video channel, and a blog.
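To illustrate why right-censored data needs dedicated methods, consider the Kaplan-Meier estimator, a textbook tool sketched here for orientation (it is not the package this project will build): it estimates survival probabilities when some subjects are only known to have survived past a certain time, instead of discarding those observations.

```python
def kaplan_meier(times, events):
    """times: observed durations (assumed distinct here for simplicity);
    events: 1 = event occurred, 0 = right-censored.
    Returns a list of (event time, estimated survival probability)."""
    n = len(times)
    order = sorted(range(n), key=lambda i: times[i])
    at_risk, survival, curve = n, 1.0, []
    for i in order:
        if events[i] == 1:
            # Event: survival drops by the fraction of subjects still at risk.
            survival *= (at_risk - 1) / at_risk
            curve.append((times[i], survival))
        # Censored subjects simply leave the risk set without an event.
        at_risk -= 1
    return curve

# Five subjects; those with event=0 are right-censored.
curve = kaplan_meier([2, 3, 4, 5, 8], [1, 0, 1, 1, 0])
```

A naive estimator that dropped the censored subjects (or treated censoring as an event) would be biased; the evaluations planned in this project compare implementations of exactly such specialized methods.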

KIPP TransferLab – AI in Planning and Production (KI in Planung und Produktion)

Prof. Dr. Michael Guckert, THM, Department of Mathematics, Natural Sciences and Data Processing
Prof. Dr. Thomas Farrenkopf, THM, Department of Mathematics, Natural Sciences and Data Processing
Prof. Dr. Nicolas Stein, THM, Department of Mathematics, Natural Sciences and Data Processing
Prof. Holger Rohn, THM, Department of Industrial Engineering and Management
Prof. Dr. Udo Fiedler, THM, Department of Industrial Engineering and Management
Prof. Dr. Carsten Stroh, THM, Department of Industrial Engineering and Management

Small and medium-sized enterprises (SMEs) are nowadays under increasingly high pressure to innovate. They work with limited financial and human resources, yet have to operate complex manufacturing structures, often with one-off and small-batch production. Increasing efficiency in production is often of existential importance. Limited capacities also force them to make efficient use of the available production resources while meeting the increasing quality requirements of the markets.
Artificial intelligence (AI) in production can be used in a wide range of corporate processes and bring about lasting effects. Systematic, automated collection of data generated directly in the machines during production allows consistent application of AI algorithms and supports more accurate predictions of actual resource utilization.
Insights gained from the data can enable forecasts of output quantities and qualities or of machine availability. Immediate effects of such intelligent machine and process monitoring are higher delivery reliability, more efficient utilization of resources in the company (including energy and resource efficiency), and increased transparency about the condition of the manufacturing equipment in use. To advance the AI maturity level in SMEs, the high potential of the technology is to be demonstrated with the help of demonstrators in a real laboratory environment. Existing momentum towards the introduction of AI can thus be used as leverage, and systematic implementation can begin alongside the already known application possibilities. In addition to the processes themselves, operational use in a laboratory environment is thus also demonstrated with the help of demonstrators.

Automatic classification of toxic and false content for the young generation with advanced AI methods

Prof. Dr. Melanie Siegel, Darmstadt University of Applied Sciences, Department of Computer Science
Prof. Dr. Bernhard Humm, Darmstadt University of Applied Sciences, Department of Computer Science

The idea of social media was originally to enable the most open possible exchange of information and opinions between people and thus to support communication. This idea of social participation is being massively disrupted by current developments: where an open exchange of opinions on political topics was possible, forums are increasingly flooded with hate and threats of violence. Where free access to information was the goal, false factual claims are increasingly being posted and in some cases automatically disseminated. Texts, images and videos are used and semantically linked with each other. It is becoming increasingly difficult for children and young people in particular to classify information. The recognition of toxic information can fundamentally occur in two different ways: intrinsically through the analysis and evaluation of published content or extrinsically through the evaluation of such content in the context of other information. One must be able to classify a post as harmless banter or opinion, insult, discrimination, or even threat.
In addition, a distinction must be made between a harmless private false claim, a socially relevant false claim that should at least be commented on journalistically, and acts of disinformation that are relevant under criminal law. Automatic procedures can help with this classification, as the DeTox project has already shown. Nevertheless, the topics and language of toxic content change continuously, so the models (automatic procedures, i.e. intelligent systems) must keep learning on a regular basis. However, for models based on neural networks, further training can overwrite previously learned content, so that the models no longer work on the original (old) data (“catastrophic forgetting”). Complete retraining is usually not practicable due to the high model complexity and the associated computational effort. False messages are not composed of language (text) alone: to convey opinions, images and text are often taken from another context and placed in a new, fabricated context. This makes both human and automatic recognition particularly difficult. Therefore, approaches are needed that analyze text and image together in context.

Cooperation in the Repeated Prisoner’s Dilemma with Algorithmic Players / How Cooperative are Humans and AI Algorithms?

Prof. Dr. Oliver Hinz, Goethe University Frankfurt, Department of Economics
Prof. Dr. Matthias Blonski, Goethe University Frankfurt, Department of Economics

The goal of this project at the interface between AI and microeconomics is to understand how cooperative behavior changes when interacting repeatedly with learning machines instead of humans. To this end, the following research questions are considered: How does the willingness to cooperate in a repeated prisoner’s dilemma change when one of the players is replaced by an artificially intelligent algorithm? How does this willingness depend on the expected game duration and on the human’s knowledge of the opponent’s identity? Do any differences in cooperative behavior result from the altered nature of the opponent (human or machine) or from deviant strategic behavior?
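The experimental setting can be pictured with a minimal simulation harness for the repeated prisoner's dilemma (illustrative only; the strategies shown are classic textbook examples, not the learning algorithms or experimental design of the study). Payoffs follow the standard ordering T > R > P > S.

```python
# Row/column payoffs: 'C' = cooperate, 'D' = defect.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_hist, opp_hist):
    # Cooperate first, then mirror the opponent's last move.
    return 'C' if not opp_hist else opp_hist[-1]

def always_defect(my_hist, opp_hist):
    return 'D'

def play(strategy_a, strategy_b, rounds):
    """Play a repeated game; a strategy maps (own history, opponent history) to a move."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa; score_b += pb
        hist_a.append(a); hist_b.append(b)
    return score_a, score_b

coop = play(tit_for_tat, tit_for_tat, 10)      # mutual cooperation: (30, 30)
exploit = play(tit_for_tat, always_defect, 10) # exploited only once: (9, 14)
```

In the proposed experiments, one strategy slot would be filled by a human participant and the other by a learning algorithm, allowing the cooperation rate to be compared across opponent types, game durations, and information conditions.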

The virtual doc: An AI-based medical decision support system

Prof. Dr. Dominik Heider, Philipps-Universität Marburg, FB Mathematics/Computer Science
Prof. Dr. Thorsten Papenbrock, Philipps-Universität Marburg, FB Mathematics/Computer Science
Prof. Dr. Bernd Freisleben, Philipps-Universität Marburg, Mathematics/Computer Science

The COVID pandemic has revealed the flaws of health systems worldwide and the immense pressure physicians are exposed to. In addition, the WHO estimates a shortage of 12.9 million healthcare workers by 2035. The virtual doc project aims to support healthcare workers by applying advanced sensor technologies and state-of-the-art artificial intelligence (AI) methods. The virtual doc performs various medical tasks with a patient in an intelligent examination cabin. The cabin sensors measure non-invasive parameters (e.g., BMI, heart rate, pulse) and the computational infrastructure records the medical history of the patient interactively in order to avoid invasive measurements. The clinical parameters are made available to physicians, including advanced disease predictions based on machine learning models for defined (or yet unknown) disease patterns (e.g., diabetes mellitus type 2 (T2DM)). In this way, the virtual doc can relieve the medical staff from performing these tasks, such that capacities for treatment, emergencies, and care are created. With this project proposal, we plan to expand our existing prototype of the virtual doc with further sensors and analysis modules and eliminate potential sources of error. We also aim to strengthen our collaboration in this multi-faceted project by including further research groups and their expertise in the development of the virtual doc. The extent to which such a preliminary AI-based examination is useful and accepted by the population will be investigated in parallel with the help of a survey (cooperation with Prof. Dr. Michael Leyer, School of Business and Economics, Uni Marburg) and on-site testing using a twin-cabin at Bochum University Hospital (cooperation with Prof. Dr. Ali Canbay, UK RUB).

Visual Analysis for Prediction of Relevant Technologies by Neural Networks (VAVTECH)

Prof. Dr. Kawa Nazemi, Hochschule Darmstadt, Department of Computer Sciences
Prof. Dr. Bernhard Humm, Hochschule Darmstadt, Department of Computer Sciences

New technologies, but also existing unused technologies, have the potential to sustainably increase the innovative power of companies and to ensure their future success. However, if these relevant technologies and the associated new application areas are not recognized early enough, competitors can establish themselves in these areas at an early stage. Furthermore, disregarded new technologies bear the risk of disruptively changing the respective market upon market entry and displacing unprepared companies. A valid analysis and prediction of potential future technologies is therefore more important than ever before. The VAVTech project aims to develop a visual analysis system that enables people to recognize relevant technologies as early as possible and predict their potential course.
Scientific publications will serve as the data basis for the analysis system, as they present the respective technologies at a very early stage and are thus suitable for early technology detection. The system will primarily combine neural networks and interactive visualizations, allowing companies, company founders, and strategy consultants to analyze and predict the potential of new and largely unknown technologies. The neural network will be developed in a modular way so that transfer to other domains is guaranteed. The project will create a working demonstrator with real data and lay the foundation for further work in the area of strategic foresight through the application of artificial intelligence methods. The demonstrator will serve the acquisition of third-party funding, networking with other AI researchers, and the visibility of the research through its visualizations.

Women in the Field of AI in Healthcare: “Women AI Days”

Prof. Dr. Barbara Klein, Frankfurt University of Applied Sciences, Department of Social Work and Health
Prof. Dr. Martin Kappes, Frankfurt University of Applied Sciences, Department Computer Sciences and Engineering

The UNESCO Recommendation on the Ethics of Artificial Intelligence formulates globally accepted standards for AI technologies to which 193 member states have committed themselves. Ethical rules are linked to human rights obligations, and a focus is placed on blind spots such as AI and gender, education, and sustainability. For Germany, there is a great need for action in terms of equal treatment and the diversity of AI development teams. Diversity is seen as a prerequisite for ensuring that diverse perspectives are appropriately considered in the development of AI. The field of artificial intelligence in Germany needs, among other things, a higher proportion of women in order to avoid future social bias and gender inequality arising from unconscious bias in algorithms. Particularly in healthcare and medicine, women are insufficiently considered, which has fatal effects on medical care if, for example, drugs are tested only on men.
Access for women to traditionally male domains such as the IT sector is often still difficult. The goal of the project is therefore a three-day impulse workshop (Women AI Days) for networking national female experts, as well as a needs analysis, e.g. on strengthening the share of women and making the research and work areas visible to young talent. Through accompanying social media, a publication, and subsequent public lectures at Frankfurt UAS, the contents are to be made known to the public, with a focus on Hesse.

Systematic evaluation of AI explainability techniques through a case-study in biology

Prof. Dr. Gemma Roig, Goethe University Frankfurt, Department of Computer Sciences
Prof. Dr. Visvanathan Ramesh, Goethe University Frankfurt, Department of Computer Sciences

AI can support field biologists with taxonomic tasks where they determine the species of an animal from images. The challenge in this setting is that the difference between species in the same genus can be very subtle and rely on specific characteristics of the animal or plant. One way of supporting the biologists with this task is to use explainable AI (XAI) along with likely results. Instead of producing one classification output, the AI produces the most likely candidate results and also explains what parts of the image are important for each. This way, the explanations can then help the biologists to find the correct one of the proposed results and to efficiently include the AI’s output into their own decision process to improve their performance and speed. We, therefore, propose to perform a case-study where we use questionnaires and structured interviews to qualitatively evaluate how existing ‘off-the-shelf’ XAI algorithms perform in this specific setting, analyze what specific requirements the domain experts have regarding an AI explanation, and investigate how existing XAI techniques can be adapted to better match their needs. The domain-specific knowledge and data will be provided by Prof. Thomas Wilke and the Systematics & Biodiversity Lab at Justus-Liebig-University Giessen.
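As a concrete illustration of the setting described above, the following numpy sketch shows one simple 'off-the-shelf' XAI technique, occlusion sensitivity: the model reports its top-k candidate species and, for each, a map of image regions its prediction depends on. The classifier here is a toy stand-in (random linear model, three invented species), not the project's actual model.

```python
import numpy as np

def predict(image):
    # Stand-in for a trained species classifier: returns class probabilities
    # for 3 hypothetical species (a toy linear model, for illustration only).
    rng = np.random.default_rng(0)
    w = rng.normal(size=(3, image.size))
    logits = w @ image.ravel()
    e = np.exp(logits - logits.max())
    return e / e.sum()

def top_k_with_occlusion_maps(image, k=2, patch=4):
    """Return the k most likely species and, for each, an occlusion-sensitivity
    map marking the image regions its predicted probability depends on."""
    base = predict(image)
    candidates = [int(c) for c in np.argsort(base)[::-1][:k]]
    maps = {c: np.zeros(image.shape) for c in candidates}
    for i in range(0, image.shape[0], patch):
        for j in range(0, image.shape[1], patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = image.mean()  # grey out one patch
            p = predict(occluded)
            for c in candidates:
                # A large probability drop means this patch mattered for class c.
                maps[c][i:i + patch, j:j + patch] = base[c] - p[c]
    return [(c, float(base[c]), maps[c]) for c in candidates]

image = np.random.default_rng(1).random((16, 16))
results = top_k_with_occlusion_maps(image)
```

A biologist could then compare the highlighted regions against the subtle diagnostic characteristics of each candidate species, which is exactly the kind of interaction the planned case study would evaluate.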

Funded projects in the third round of calls (2022)

Enabling multimodal and multilingual argument mining in court hearings

Dr. Ivan Habernal, TUDa, FB Computer Sciences
Prof. Dr. Christoph Burchard, Goethe University Frankfurt, Faculty of Law

This purposefully interdisciplinary project is designed to provide research for understanding legal reasoning in the hearings of the European Court of Human Rights. The ECtHR hearings are only available as video recordings with mixed languages, including native and non-native speakers. The goal is to create a basic dataset that will 1) serve NLP researchers as a basis for building and evaluating NLP legal models and 2) allow legal scholars to answer empirical legal questions related to the arguments presented before the Court.

Reliable Anonymization Methods for Enabling Language Understanding Models to Assist with Air Traffic Controller Communication

Charles Welch, Ph.D., PU Marburg, FB Computer Sciences
Dr. Ivan Habernal, TUDa, FB Computer Sciences

The goal of the project is to unlock highly sensitive operational speech data from air traffic controllers for research and development purposes, with the underlying goal of improving DFS service performance and safety via effective speech recognition-based controller assistance systems. The planned follow-up project in 2023-2025 aims to develop robust domain-specific speech recognition and semantic annotation models to automate the communication logging tasks of the traffic controllers.
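To give a flavour of what anonymizing such data can involve, here is a minimal rule-based sketch that masks airline callsigns in a transcript. The regex and the airline names are invented for illustration; they are assumptions, not the project's actual anonymization method, which must be far more reliable.

```python
import re

# One simple rule-based step such an anonymization pipeline might contain:
# masking airline callsigns in controller transcripts. Pattern and airline
# names are invented for this sketch.
CALLSIGN = re.compile(r"\b(Lufthansa|Speedbird|Condor)\s+\d+\s*[A-Z]?\b",
                      re.IGNORECASE)

def anonymize(transcript):
    """Replace callsigns with a placeholder so utterances can be shared."""
    return CALLSIGN.sub("[CALLSIGN]", transcript)

masked = anonymize("Lufthansa 123A climb flight level 240")
```

The research challenge the project addresses is precisely that such hand-written rules are brittle; a reliable method must also catch misspellings, non-standard phraseology and speech-recognition errors.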

KIDeR – KI-basierte Detektion von Rissbildung in Bahnschwellen | AI-based detection of cracking in railroad sleepers

Prof. Dr. Michael Guckert, TH Mittelhessen, Department of Mathematics, Natural Sciences and Data Processing, FG Business Informatics – Artificial Intelligence
Prof. Dr. Gerd Manthei, TH Mittelhessen, Department of Mechanical and Power Engineering

The project is intended to investigate cracking in prestressed concrete sleepers in service using acoustic emission (AE) testing in order to gain better insight into the damage behavior. The analysis of the measurement series and their classification is a typical use case for Artificial Intelligence approaches such as Deep Learning (DL) models. The aim of KIDeR is to develop a prototype of an efficient, non-destructive testing method for detecting cracking in railroad sleepers in service. A particular challenge here is the differentiation between real acoustic emissions and ambient noise.
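The burst-versus-noise distinction can be made tangible with a small numpy sketch: acoustic-emission bursts from crack growth are short, high-frequency transients, while much ambient noise sits at low frequencies. The sampling rate, band split and threshold below are invented for illustration; a trained DL model would replace the hand-written rule.

```python
import numpy as np

RATE = 1_000_000  # 1 MHz sampling, a plausible order of magnitude for AE sensors

def spectral_features(signal):
    """Relative energy in 8 frequency bands - the kind of representation a
    DL model could classify, computed here with a plain windowed FFT."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    bands = np.array_split(spectrum ** 2, 8)
    energy = np.array([b.sum() for b in bands])
    return energy / energy.sum()

def looks_like_crack_event(signal, threshold=0.3):
    """Crude stand-in for the learned classifier: genuine acoustic-emission
    bursts carry far more high-frequency energy than low-frequency ambient
    noise (threshold invented for this sketch)."""
    return spectral_features(signal)[4:].sum() > threshold

t = np.arange(2048) / RATE
burst = np.exp(-5e4 * t) * np.sin(2 * np.pi * 300e3 * t)  # decaying 300 kHz burst
rumble = np.sin(2 * np.pi * 5e3 * t)                      # 5 kHz ambient noise
```

In practice real recordings mix both components, which is why a learned classifier rather than a fixed threshold is needed.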

AI4BirdsDemo: A Demonstrator for Robust Bird Species Recognition in Arbitrary Sound Environments

Dr. Markus Mühling, PU Marburg, FB Mathematics & Computer Science
Prof. Dr. Nina Farwig, PU Marburg, FB Biology
Prof. Dr. Bernd Freisleben, PU Marburg, Dept. of Mathematics & Computer Science, Distributed Systems and Intelligent Computing

The project builds on preliminary work, extending AI4Birds with a software application that increases the public visibility of the work and demonstrates the quality of the bird species recognition model. The team will build a real-world bird species recognition model that tackles the previously mentioned challenges, such as out-of-distribution data, to improve the robustness of their approach.

Funded projects in the second round of calls (2022)

Accelerating Cardinality Estimation (ACE)

Prof. Dr.-Ing. Andreas Koch, TUDa, Department of Embedded Systems and Applications (ESA)
Prof. Dr. Carsten Binnig, TUDa, Data Management

Sum-Product Networks (SPNs) belong to the class of probabilistic graphical models and allow the compact representation of multivariate probability distributions. While the ESA group has mainly investigated ways of accelerating SPNs, the DM group has investigated which database applications SPNs can be used for. One example is cardinality estimation, which predicts the result sizes of database queries and thus helps optimize query processing in database management systems (DBMS). The overall goal of the project is to accelerate cardinality estimation using RSPNs (Relational SPNs), to automate the development and training process of RSPNs, and to investigate their potential usability in the context of large databases. The extension of the SPNC, together with the provision of corresponding training processes, promises highly interesting, practically relevant research results that can also feed into other projects of the two participating research groups.
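The core idea of cardinality estimation can be sketched in a few lines: the estimated result size of a query is the table size times the probability that a random row satisfies the predicate. The toy table and predicate below are invented, and the empirical distribution merely stands in for the learned, compact RSPN model.

```python
import numpy as np

rng = np.random.default_rng(42)
n_rows = 100_000
# Toy "orders" table; an RSPN would be trained on a sample of such rows.
table = {
    "price":    rng.exponential(scale=50.0, size=n_rows),
    "quantity": rng.integers(1, 10, size=n_rows),
}

def estimate_cardinality(predicate):
    """Cardinality estimate = |T| * P(predicate holds for a random row).
    A real (R)SPN evaluates this probability from its learned model without
    scanning rows; the empirical distribution here is only a stand-in."""
    return int(n_rows * predicate(table).mean())

# Estimated result size of: SELECT * FROM orders WHERE price < 30 AND quantity >= 5
est = estimate_cardinality(lambda t: (t["price"] < 30) & (t["quantity"] >= 5))
```

A query optimizer would use such estimates to choose join orders and access paths, which is why both their accuracy and their evaluation speed matter.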

AI4Birds: Bird Species Recognition in Complex Soundscape Recordings

Dr. Markus Mühling, PU Marburg, FB Mathematics & Computer Science
Prof. Dr. Nina Farwig, PU Marburg, FB Biology
Prof. Dr. Bernd Freisleben, PU Marburg, Dept. of Mathematics & Computer Science, Distributed Systems and Intelligent Computing

In this project we focus on automatically recognizing bird species in audio recordings. To improve current biodiversity monitoring schemes, AI4Birds will use audio recordings across a forest ecosystem to develop novel transformer models based on self-attention for recognizing bird species in soundscapes. Thus, sustainability regarding biodiversity is at the heart of the project. Sustainability with respect to continuing AI4Birds by acquiring additional financial funding is very likely; it is planned to use AI4Birds to explore the funding opportunities within the federal biodiversity sustainability programs. Furthermore, we plan to contribute our results to Microsoft’s “AI for Earth” initiative.
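The self-attention mechanism at the heart of such transformer models can be sketched in plain numpy: every time frame of a spectrogram attends to every other frame, letting the model relate a bird call to its acoustic context. Weights here are random and untrained; the dimensions are invented for illustration.

```python
import numpy as np

def self_attention(frames, d_k=16, seed=0):
    """Single-head self-attention over spectrogram time frames (numpy sketch
    with random, untrained projection weights)."""
    rng = np.random.default_rng(seed)
    d = frames.shape[1]
    Wq, Wk, Wv = (rng.normal(scale=d ** -0.5, size=(d, d_k)) for _ in range(3))
    Q, K, V = frames @ Wq, frames @ Wk, frames @ Wv
    scores = Q @ K.T / np.sqrt(d_k)                   # frame-to-frame similarity
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)     # softmax over frames
    return weights @ V                                # context-mixed features

spec = np.random.default_rng(1).random((100, 64))  # 100 time frames, 64 mel bins
mixed = self_attention(spec)
```

Stacking such layers, with learned weights and a classification head, yields the kind of model AI4Birds develops for soundscapes.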

AIQTHmed | AI Quality and Testing Hub in Healthcare

Prof. Dr. Martin Hirsch, Artificial Intelligence in Medicine, UMR and Director of the Institute for Artificial Intelligence at UKGM Marburg
Prof. Dr. Thomas Nauss, PU Marburg, FB Geography, Environmental Informatics

In May 2021, the Hessian Minister for Digital Strategy and Development and the VDE agreed on the establishment of a first nationwide “AI Quality & Testing Hub” (AIQTH). In the environment of hessian.AI and the Center for Responsible Digitalization (ZEVEDI), this hub is intended to promote the quality and trustworthiness of AI systems through standardization and certification in the model topic areas of “Mobility”, “Finance” and “Health”, making them verifiable and credible to the population. The aim of the project is to use the EU programme DIGITAL EUROPE to strengthen the model topic area “Health” of the AIQTH of hessian.AI and thus improve the chances of establishing this institution in Hesse.

Memristors – a central Hardware for AI

Prof. Dr. Lambert Alff, TUDa, FB Material Sciences
Prof. Dr.-Ing. Andreas Koch, TUDa, Department of Embedded Systems and Applications (ESA)
Prof. Dr. Christian Hochberger, TUDa, FB ETIT

Artificial Intelligence will find its way almost ubiquitously into the most diverse areas of life. At the same time, this means that the energy efficiency of the associated computing effort will become increasingly important. Therefore, the development of computer architectures for AI is an important field of research. A major new component for AI-adapted computer architectures is the so-called memristor. There are several materials science approaches that can be used to realize (highly energy-efficient) memristors, but these result in different device behaviors, and realistic application scenarios have not yet been fully explored. This project aims to bring together the chain necessary for memristors, from the material to the device to the circuit, for specific AI applications, and to initiate joint research projects in the spirit of this interdisciplinary and holistic approach.
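The reason memristors promise energy-efficient AI hardware can be shown in a few lines: a crossbar of programmable conductances computes a matrix-vector product, the dominant operation in neural networks, in a single analog step via Ohm's and Kirchhoff's laws. The numbers below are purely illustrative.

```python
import numpy as np

# A memristor crossbar stores the network "weights" as conductances G (siemens).
# Applying input voltages V to the rows yields column currents I = G @ V, i.e.
# the weighted sums of a neural-network layer, computed in one analog step.
G = np.array([[1.0e-6, 2.0e-6],
              [0.5e-6, 1.5e-6]])
V = np.array([0.2, 0.1])   # input voltages encode the activations
I = G @ V                  # output currents are the weighted sums
```

The materials challenge the project addresses is making the individual conductance values programmable, stable and uniform enough for this scheme to work at scale.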

Mind the gap! Huddle between Materials and Computer Sciences

Prof. Dr. Leopoldo Molina-Luna, TUDa, FB Material Sciences
Prof. Dr. Kristian Kersting, TUDa, FB Computer Sciences

Many designers of AI algorithms lack the background knowledge needed to keep up with state-of-the-art research in natural science fields such as materials science. Materials science researchers, on the other hand, usually determine the parameters of AI algorithms and tools in an “educated guess” fashion, paying little to no attention to the underlying methods. There is a knowledge gap between the computer science and materials science communities, and more cross-talk at a fundamental level is needed. The project builds up a seeding platform for implementing and consolidating a regular, inclusive exchange between all interested parties. It strengthens the preparation activities for an IRTG application in the field of operando TEM for memristors and ML-based data analysis routines.

Innovative UX for User-Centered AI Systems

Prof. Dr. Bernhard Humm, h_da, FB Computer Sciences
Prof. Dr. Andrea Krajewski, h_da, FB Media

Human-centered AI includes, among other things, the appropriate explanation of decisions or recommendations made by the AI system, e.g., by means of Machine Learning (keyword “Explainable AI”). User Experience (UX), on the other hand, is concerned with the development of products, especially IT systems, that are intended to provide the best possible user experience. In this project, innovative UX concepts will be designed, tuned, implemented and evaluated for three different prototype AI systems that are being developed within the BMBF-funded project “Competence Center for Work and Artificial Intelligence (KompAKI)”. One of the AI systems deals with the provision of Machine Learning (ML) for broad user groups with and without programming skills. Two AI systems are intended for operational use in the manufacturing industry (Industry 4.0). This project ideally complements other AI initiatives and promotes networking between hessian.AI partners and different disciplines.

Funded projects in the first round of calls (2021)

SpeedTram

Dr. Florian Stock, TUDa, Department of Mechanical Engineering, Department of Automotive Engineering (FZD)
Prof. Dr. Andreas Koch, TUDa, Department of Embedded Systems and Applications (ESA)

The focus of research on autonomous driving has so far clearly been on cars; only a few projects have looked at other means of transport. To remedy this, hessian.AI is funding innovative interdisciplinary research with the SpeedTram project, which focuses on autonomous/assisted driving of streetcars. In it, the Department of Automotive Engineering (FZD) and the Department of Embedded Systems and Applications (ESA) at TU Darmstadt are investigating the accelerated execution of the machine learning algorithms required for automated driving of and assistance systems for streetcars. Real data recorded during operation on a test vehicle of the local public transport company HEAG are processed. The evaluation of this growing data set, which now exceeds 140 TB, was no longer reasonably possible with existing methods. The work in SpeedTram made it possible to accelerate the two most time-consuming steps of the data analysis, namely object recognition based on neural networks and the processing of LIDAR sensor data, by factors of three and 24, respectively. SpeedTram thus makes an important contribution to raising the innovation potential of automated streetcar guidance and making it usable for future applications.

AI4Bats: Recognizing Bat Species and Bat Behavior in Audio Recordings of Bat Echolocation Calls

Dr. Nicolas Frieß, PU Marburg, FB Geography, Environmental Informatics
Prof. Dr. Bernd Freisleben, PU Marburg, Dept. of Mathematics & Computer Science, Distributed Systems and Intelligent Computing
Prof. Dr. Thomas Nauss, PU Marburg, FB Geography, Environmental Informatics

Biodiversity is important for various ecosystem services that form the basis of human life. The current decline in biodiversity requires a transformation from manual periodic biodiversity assessment to automated real-time monitoring. Bats are among the most widespread terrestrial mammals and serve as important bioindicators of ecosystem health. Typically, bats are monitored by recording and analyzing their echolocation calls. In this project, AI4Bats, we present a novel AI-based approach to bat echolocation call detection, bat species recognition, and bat behavior detection in audio spectrograms. It is based on a neural transformer architecture and relies on self-attention mechanisms. Our experiments show that our approach outperforms current approaches for detecting bat echolocation calls and recognizing bat species on several publicly available datasets. While our model for detecting bat echolocation calls achieves an average precision of up to 90.2%, our model for recognizing bat species achieves an accuracy of up to 88.7% for 14 bat species found in Germany, some of which are difficult to distinguish even for human experts. AI4Bats lays the foundation for breakthroughs in automated bat monitoring in the field of biodiversity, the potential loss of which is likely to be one of the most significant challenges facing humanity in the near future.

AI@School

Dr. Joachim Bille, TH Mittelhessen, Head of Department FTN
Prof. Dr. Michael Guckert, TH Mittelhessen, Department of Mathematics, Natural Sciences and Data Processing, FG Business Informatics – Artificial Intelligence
Prof. Holger Rohn, TH Mittelhessen, Department of Industrial Engineering and Management, FG Life Cycle Management & Quality Management, Makerspace Friedberg

The goal of the AI@School project was the development of a demonstrator for the vivid communication of basic knowledge of artificial intelligence, which should provide pupils with an early and low-threshold access to AI topics. On the one hand, the demonstrator should contain suitable examples and exhibits for the descriptive transfer of knowledge; on the other hand, an interactive introductory course for the transfer of knowledge should be developed using the exhibits and examples. Based on these offers, a prototypical teaching unit at the advanced course level will also be developed. The project results are to be implemented permanently at hessian.AI; in addition, a Hessian-wide transfer of the concept to suitable institutions in the other parts of the state is planned in the medium to long term.

Robot Learning of Long-Horizon Manipulation bridging Object-centric Representations to Knowledge Graphs

Prof. Dr. Georgia Chalvatzaki, TUDa, FB Informatik, iROSA: Robot Learning of Mobile Manipulation for Intelligent Assistance
Prof. Dr. Iryna Gurevych, TUDa, FB Computer Science, Ubiquitous Knowledge Processing Lab

The goal of this project was to investigate the links between high-level natural language commands and robot manipulation. Humans are able to effectively abstract and decompose natural language commands, e.g. “Make me a coffee”, but such a command is not detailed enough for a robot to execute. The task execution problem in robotics is usually approached as a task and motion planning problem, where a task planner decomposes the abstract goal into a set of logical actions that must be translated into actual actions in the world by a motion generator. The connection between abstract logical actions and real-world descriptions (e.g., in terms of the exact position of objects in the scene) makes task and motion planning a very challenging problem. In this project, we approached this problem from three different directions, looking at sub-problems with respect to our ultimate goal of learning long-horizon manipulation plans using human commonsense and scene graphs:

  1. The association of the object scene with robot manipulation plans using graph neural networks (GNNs) and RL,
  2. Using voice instructions and vision in transformer networks to output subgoals for a low-level planner, and
  3. Translating human instructions into robot plans.

Project results from 2. and 3. are scheduled to be published at a major machine learning conference in the near future. Work from 3. will continue as part of a current collaboration between iROSA and UKP.
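The abstraction gap the project describes can be sketched in a few lines: a hypothetical mapping from a natural-language command to ordered logical task-planner actions. Every action and object name below is invented; in reality each step must still be grounded in concrete robot motions by a motion generator, which is what makes task and motion planning hard.

```python
# Hypothetical task library: abstract command -> logical planner actions.
TASK_LIBRARY = {
    "make coffee": [
        ("pick", "mug"),
        ("place", "mug", "coffee_machine"),
        ("press", "brew_button"),
        ("pick", "mug"),
        ("place", "mug", "table"),
    ],
}

def decompose(command):
    """Task-planning step: normalize the command and look up its action plan."""
    key = command.lower().rstrip(".!")
    key = key.replace("make me a coffee", "make coffee")
    return TASK_LIBRARY.get(key, [])

plan = decompose("Make me a coffee")
```

The project's learned approaches (GNNs, RL and transformer models over scene graphs) aim to replace exactly this brittle hand-written lookup with models that generalize to unseen commands and scenes.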