Connectom Networking and Innovation Fund

Seed Funding

The Connectom Fund offers seed funding for joint research between hessian.AI members and other colleagues from the participating universities. It is financed by the state of Hesse and is announced and administered by the Technical University of Darmstadt as the lead partner of hessian.AI. The fund supports projects across the entire spectrum of research, teaching, demonstrator/prototype development, and application with up to EUR 100,000 per project.

The evaluation and selection of the project outlines to be funded are carried out by a selection committee consisting of representatives of all universities participating in hessian.AI. The submitted outlines are evaluated on the basis of the following criteria:

  • Significance for the research foci of hessian.AI or the goals pursued with hessian.AI
  • Promotion of interdisciplinary and inter-institutional cooperation
  • Plausibility of the sustainability of the funded scientific collaboration
  • Start-up effect or prospects for follow-up funding through third-party funds
  • Appropriate consideration of the relevant scientific discourse, the innovation potential, and the specialist competencies of the applicants

Peer-Reviewed Selection Procedure

Professorial hessian.AI members and DEPTH junior research group leaders are eligible to apply. It is possible to involve other colleagues at the participating universities.

Funded projects 2024


Machine Learning-Driven Approach for Reliable Diseases Prediction Using Comprehensive Imaging Biomarkers

Prof. Dr. Gemma Roig, Goethe-University, Department of Computer Sciences
Dr. Med. Andreas Bucher (KGU)

The project focuses on revolutionizing medical imaging by integrating AI-driven image processing and Explainable AI (XAI). This approach will enable efficient biomarker identification for multiple medical hypotheses, overcoming the time constraints of human analysis. Automated image analysis, or opportunistic screening, enhances the capabilities of radiologists and provides a wider range of diagnostic information beyond traditional methods, which is essential for personalized medicine. The goal is to develop a comprehensive imaging biomarker platform that identifies a wider range of health indicators missed by standard clinical evaluation, thereby transforming diagnostics.


AI4DNA: AI Methods for DNA-based Data Storage

Prof. Dr. Anke Becker, Philipps-Universität Marburg, FB Biology / Synmikro
Prof. Dr. Bernd Freisleben, Philipps-Universität Marburg, FB Mathematics / Computer Science

In this project, called AI4DNA, we focus on developing a new AI-based encoder/decoder (“codec”) approach supported by machine learning (ML) methods capable of adapting to different storage requirements to ensure optimal error tolerance and coding efficiency. In particular, we plan to use deep neural networks (NNs) and leverage recent DNA-related foundation models based on transformer architectures to create novel DNA codecs that will be adaptable to different storage conditions, utilizing a realistic AI-based DNA channel model. Furthermore, we combine fountain codes with the ML methods to obtain a hybrid rule-aware AI approach to provide error correction and data recovery in DNA storage.
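To make the codec idea concrete, the following toy sketch maps every two bits to one nucleotide and back; it is purely illustrative and is not the adaptive, ML-based codec described above, which must additionally respect biochemical constraints (GC content, homopolymer runs) and add fountain-code redundancy for error correction.

```python
# Illustrative toy DNA codec: 2 bits per nucleotide, no error correction.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {b: s for s, b in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

payload = b"hessian.AI"
strand = encode(payload)
assert decode(strand) == payload
print(strand)  # 4 bases per byte of payload
```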


From toddlers to robots: Multimodal object representation learning for multitask robot behaviors

Prof. Dr. Gemma Roig, Goethe-University, Department of Computer Sciences
Prof. Dr. Georgia Chalvatzaki, TUDa, Department of Computer Sciences

How can robots learn robust and generalizable object representations that are useful for tasks such as manipulating objects, grasping an object, or avoiding an object during navigation? One common way to approach this problem is to learn such object representations while the robot is learning to perform a specific task, or to deploy pretrained visual models that do not necessarily provide relevant features for robotic task learning. Such techniques may result in object representations that do not generalize to other scenarios and are not easily transferable to other tasks. Another possibility is to first learn a general object representation from the robot’s perspective, which is then adapted to various tasks. Just like babies and toddlers, robots can ingest sequential information from the environment through different sensory modalities. As we explored in previous work on computational modeling of visual object representation learning, self-supervised objectives following the slowness principle, combined with multi-modal co-occurrence that aligns visual input with sparse and noisy language representations, lead to object representations that contain both categorical and instance information. Moreover, in prior work we have shown how information-theoretic self-supervised objectives can be leveraged to provide sparse object representations that enable predictive modeling and learning for control, while preliminary work on adapting pretrained vision-language models for robot control confirmed that, although pretrained models can help speed up learning, they do not lead to robust and generalizable robot task-oriented learning.

In this project, we propose to analyze the impact of object representation learning on the ability of the robot to perform certain tasks. How does including language alignment improve performance, and to what degree is it robust to noise and sparsity at the input level during training? We will adapt the framework proposed in our prior work. We will first extend the simulation environment, which currently contains only toys, to other object categories. Then, we will use the same strategy to train visual models, as well as vision-language models, with our dataset. It is important to note that, for a thorough analysis of which factors of the pretrained model determine generalization to robot manipulation tasks, we need control over the dataset used for training, the models, and the level of sparsity and noise of the language input used for alignment. Generating a simulated dataset enables fine-grained adjustment of all these parameters.
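As a purely illustrative sketch of the self-supervised objectives mentioned above (slowness principle and alignment of visual input with sparse, noisy language), the following PyTorch snippet combines a temporal-coherence loss with a masked language-alignment loss; the function names, the encoder interface, and the weighting are assumptions for illustration, not the project's actual implementation.

```python
import torch
import torch.nn.functional as F

def slowness_loss(z_t, z_tp1):
    """Slowness principle: embeddings of temporally adjacent views
    of the same object should change slowly."""
    return F.mse_loss(z_t, z_tp1)

def language_alignment_loss(z_visual, z_language, mask):
    """Align visual embeddings with sparse, possibly noisy language
    embeddings; the float `mask` marks frames that actually have a caption."""
    sim = F.cosine_similarity(z_visual, z_language, dim=-1)
    return ((1.0 - sim) * mask).sum() / mask.sum().clamp(min=1.0)

def total_loss(encoder, frames_t, frames_tp1, lang_emb, lang_mask, alpha=0.5):
    # combined objective for one batch of consecutive frames
    z_t, z_tp1 = encoder(frames_t), encoder(frames_tp1)
    return slowness_loss(z_t, z_tp1) + alpha * language_alignment_loss(z_t, lang_emb, lang_mask)
```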


Symposium on Explainable Artificial Intelligence Beyond Simple Attributions

Prof. Stefan Roth, TUDa, Department of Computer Sciences
Prof. Simone Schaub-Meyer, TUDa, Department of Computer Sciences

In recent years, deep learning (DL) has established itself as an integral part of artificial intelligence (AI) for computer vision. Increasingly complex deep neural networks (DNNs) and larger amounts of data have allowed researchers to achieve unrivalled results in various areas, such as semantic segmentation and image generation. While this might be seen as a triumph of DL, DNNs still come with the critical limitation that humans do not comprehend how they actually work. As a consequence, they often receive only limited user trust and should not be applied without careful consideration in safety-critical domains, such as autonomous driving or medical imaging. To solve this problem and open up the “black box”, the field of explainable AI (XAI) aims to better understand DNNs and how they function. In particular, attribution research, i.e., understanding how relevant each input feature is for the classification of a single data point, has been a major focus of existing work. While this simple setup is a necessary first step, it rarely helps to gain a significantly greater understanding of the model under inspection, especially for more complex image and video analysis tasks beyond classification.

With this initiative, we aim to assemble researchers at the forefront of XAI research and computer vision to jointly discuss the foundation, requirements, and expectations for XAI beyond simple attributions. In an inaugural symposium, we will establish common grounds, question established explanation types, and discover synergies towards XAI methods that truly promote the understanding of DNNs. Potential topics include mechanistic interpretability, the proper evaluation of XAI methods, prototype-based explanations, and XAI beyond classification. The insights and personal connections gained from the symposium will serve as a basis for future collaborations, potential grant applications, and internationally visible workshop proposals.


Improving the training efficiency of reinforcement learning agents for controlling interlinked production facilities

Prof. Dr. Horst Zisgen, h_da

tbd


Mitigating Shortcut Learning in Machine Learning Models for Medical Imaging

Prof. Christin Seifert, University of Marburg, Department of Computer Sciences
Prof. Dr. Gemma Roig, Goethe-University, Department of Computer Sciences

tbd


Economics of Optimizing Organizational LLMs (EcOOL)

Prof. Dr. Oliver Hinz, Goethe-University

tbd


Predicting dropouts from psychotherapy

Prof. Dr. Bernhard Humm, Hochschule Darmstadt

tbd


“AI-ready Healthcare” Podcast

Prof. Dr. Anirban Mukhopadhyay, TUDa

When developing AI-powered technology for healthcare, which is high-risk and intensely human, public engagement and open conversation play a central role in the societal readiness for the technology. As such, knowledge dissemination in conversation with diverse stakeholders is crucial and should not be treated as an afterthought. Yet, without any dedicated channel, the global academic community of medical AI was not using the full potential of public engagement.
The “AI-ready Healthcare” podcast bridges this gap. Podcasts are a great medium for knowledge dissemination, constructive arguments, and deep, insightful discussions. Firmly rooted in the advanced AI technology developed within the academic community, we explore the dynamic landscape of healthcare AI. Often joined by my co-host Henry Krumb, we converse with international stakeholders from widely different backgrounds: medical AI colleagues, physician scientists, industry colleagues, regulatory personnel, patient representatives, and global-health advocates, to name a few.
These conversations lead to deep insights about the translational aspects of AI research in clinical care, which are often not discussed in traditional forms of communication such as peer-reviewed articles. The podcast has a two-fold impact: broadening the horizon of technical problems for academic researchers and significantly increasing the visibility of medical AI research.


CovenantAI

Prof. Sascha Steffen, Frankfurt School of Finance & Management

tbd


Medical Digital Twin Control with Artificial Neural Networks

Prof. Lucas Böttcher, Frankfurt School of Finance & Management, Computational Science

The goal of personalized medicine is to tailor interventions for maintaining or restoring an individual’s health based on their unique biology and life situation. Key to this vision are computational models known as medical digital twins (MDTs), which integrate a wide range of health-related data and can be dynamically updated. Medical digital twins play a growing role in predicting health trajectories and optimizing the impact of interventions to guide a patient’s health effectively. While MDTs are increasingly adopted in biomedicine, their high dimensionality, multiscale nature, and stochastic characteristics complicate tasks related to optimization and control, such as the development of treatment protocols.

Recent advancements in neural-network control methods show great promise in addressing difficult control problems. However, their application to biomedical problems is still in its early stages. In this project, our goal is to develop dynamics-informed neural network controllers that leverage existing knowledge about the structural characteristics of MDTs to effectively address optimal control problems in medicine. This encompasses tasks like minimizing side effects while removing pathogens. We will illustrate the effectiveness of the proposed control approaches using an MDT focused on pulmonary aspergillosis, a common respiratory fungal infection.
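The following is a minimal sketch of the idea of training a neural controller by differentiating through a rolled-out dynamics model; the two-state "pathogen/drug" system and all coefficients are hypothetical toy stand-ins for an MDT, not the pulmonary aspergillosis model used in the project.

```python
import torch
import torch.nn as nn

# Toy stand-in for a medical digital twin: pathogen load p and drug
# concentration d, controlled by a dose rate u with a side-effect penalty.
def step(p, d, u, dt=0.1):
    dp = 0.4 * p - 1.2 * d * p   # pathogen grows, drug removes it
    dd = u - 0.5 * d             # drug is dosed and cleared
    return p + dt * dp, d + dt * dd

controller = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1), nn.Softplus())
opt = torch.optim.Adam(controller.parameters(), lr=1e-2)

for _ in range(200):                       # train by differentiating through the rollout
    p, d = torch.tensor([1.0]), torch.tensor([0.0])
    cost = torch.tensor([0.0])
    for _ in range(50):
        u = controller(torch.cat([p, d]))  # state-feedback dose
        p, d = step(p, d, u)
        cost = cost + p + 0.1 * u          # pathogen load + side-effect penalty
    opt.zero_grad()
    cost.backward()
    opt.step()
```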


Funded projects in the third round of calls (2023)


Smart Assistant for Image-guided Needle Insertion

Dr. Anirban Mukhopadhyay, TUDa, FB Electrical Engineering and Information Technology
Prof. Dr. Jan Peters, TUDa, FB Computer Science

Ultrasound-guided percutaneous needle navigation is a common task for clinicians performing diagnoses (biopsy) and treatments (ablation, neoadjuvant therapy). Such procedures require skilful movement of the needle in conjunction with the U/S probe. Depending on the angle of insertion, the needle is barely visible on the U/S image. Moreover, the task is complicated by patient and tissue movement and requires two steady hands under a massive cognitive load. The increasing incidence of cancer in Europe’s ageing population, combined with the lack of qualified clinical staff, makes the timely diagnosis and treatment of cancer a critical health issue. Our goal is to develop an intelligent robotic assistant that enables non-specialist healthcare professionals to perform routine cancer diagnosis and treatment.


Multi-objective Design of Advanced Materials via Compensation

Prof. Hongbin Zhang, TUDa, Department of Materials Sciences
Prof. Grace Li Zhang, TUDa, Department of Electrical Engineering and Information Technology

The project aims to develop an adaptive design framework based on neural networks and to use it for the multi-criteria design of advanced materials. To achieve this goal, robust latent space constructions and uncertainty quantification are required. However, these two requirements cannot be directly met with conventional weight-based neural networks that make single-valued predictions. In this project, we will apply error suppression and compensation to improve the robustness of neural networks in dimensionality reduction, and the uncertainties will be modelled and evaluated using Bayesian neural networks based on statistical training. These implementations will be used to perform adaptive multi-objective design of novel functional materials, including permanent magnets, thermoelectric materials and high-entropy alloys, using existing databases, and will be further developed into a generic framework for future autonomous experiments.
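As a simple illustration of uncertainty-aware adaptive design, the sketch below uses Monte Carlo dropout as a stand-in for Bayesian uncertainty estimates and an upper-confidence-bound style acquisition; the network, the descriptors, and the acquisition rule are illustrative assumptions, not the project's statistical-training approach.

```python
import torch
import torch.nn as nn

# Minimal surrogate with dropout kept active at inference time (MC dropout),
# a simple stand-in for the Bayesian uncertainty estimates described above.
model = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Dropout(0.1),
                      nn.Linear(64, 64), nn.ReLU(), nn.Dropout(0.1),
                      nn.Linear(64, 1))

def predict_with_uncertainty(x, n_samples=100):
    model.train()                                 # keep dropout stochastic
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(0), samples.std(0)        # predictive mean and spread

x_candidates = torch.randn(256, 8)                # hypothetical composition descriptors
mean, std = predict_with_uncertainty(x_candidates)
next_idx = (mean.squeeze() + 2.0 * std.squeeze()).argmax()  # UCB-style pick for the next experiment
```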


Sustainable grain cultivation through AI-based early detection of pests

Prof. Dr. Dominik L. Michels, TUDa, FB Computer Science
Prof. Dr. Bernd Freisleben, Philipps-Universität Marburg, FB Mathematics and Computer Science
Prof. Dr. Kristian Kersting, TUDa, Department of Computer Science

Plant protection products are used in agricultural fields to protect crops from pests, diseases and weeds. On the one hand, this enables cultivation practices based on closely timed, monotonous crop rotations, but it is accompanied by highly problematic losses in landscape and biodiversity. Furthermore, pesticide residues end up in animal feed and human food and are thus also ingested by consumers. Cereals can currently only be grown to the extent required (climate change, war in Ukraine, etc.) with the use of enormous quantities of plant protection products such as pesticides, herbicides, fungicides and insecticides. As a result, however, around 385 million people suffer pesticide poisoning every year. In order to reduce the dramatic consequences of the current form of cereal cultivation for nature, humans and animals, the use of pesticides must be reduced as far as possible. Instead of applying pesticides on a large scale to entire fields, they should only be applied in a targeted manner to those areas where this is really indicated. This requires automatic early detection of diseases and pests. The aim of this research project is therefore to develop a prototype for an early detection system for pests in cereal crops. First, a flying drone (UAV) will record close-up images of various groups of cereal plants in the field using a high-resolution camera. These images will then be analysed using computer vision for signs of the Hessian fly (Mayetiola destructor), one of the most problematic pests in cereal cultivation. A suitable insecticide can then be sprayed on the affected areas in a targeted manner (precision spraying). The AI sub-discipline of machine learning (ML), in particular deep neural networks (deep learning), provides suitable methods for recognising areas yellowed by the presence of the Hessian fly.


Lifelong Explainable Robot Learning

Prof. Dr. Carlo D’Eramo, University of Würzburg and TUDa, FB Computer Science
Prof. Dr. Georgia Chalvatzaki, TUDa, FB Computer Science

Current demographic trends and reports of a shortage of carers make intelligent robots that can act as universal assistants essential. While robot learning promises to equip robots with complex skills through experience and interaction with the environment, most methods are tailored to individual tasks and do not generalise well in the non-stationary real world. Humans, in contrast, are constantly learning and building on existing knowledge. Lifelong learning for robots requires that an agent can form representations that are useful for continually learning a set of tasks while avoiding catastrophic forgetting of previous skills. We propose to investigate a method that enables robots to learn a set of behaviours through multimodal cues that can be easily assembled to synthesise more complex behaviours. We propose to incorporate large pre-trained foundation models for language and vision into robot-oriented tasks. We will explore the design of novel parameter-efficient residuals for lifelong reinforcement learning (RL), which would allow us to build on previous representations to learn new skills while avoiding the two main problems of task interference and catastrophic forgetting. Crucially, we investigate forward and backward transfer and inference from the perspective of explainability, enabling robots to explain to non-experts the similarities they have found across tasks during their training life, and even to translate their actions during the execution of a task into natural language explanations. We argue that explainability is a crucial component for increasing the trustworthiness of AI robots during their interaction with non-expert users. Therefore, we call this area of work Lifelong Explainable Robot Learning (LExRoL), which opens new avenues of research in the field of lifelong learning and robotics.


SPN to AI-Engine Compiler (SAICo)

Prof. Dr.-Ing. Andreas Koch, TUDa, Department of Embedded Systems and Applications (ESA) 
Prof. Dr. Kristian Kersting, TUDa, Department of Computer Sciences

The Artificial Intelligence and Machine Learning Lab (AIMLL) and the Department of Embedded Systems and Applications (ESA) have been working on various aspects of Sum-Product Networks (SPNs) for several years. SPNs are a machine learning model closely related to the class of probabilistic graphical models and allow the compact representation of multivariate probability distributions with efficient inference. These properties allow, for example, neural networks to be augmented in order to increase their accuracy and to allow their predictions to be assessed. In the context of SPNs, SPFlow is one of the most relevant software libraries. SPFlow is primarily developed and maintained by AIMLL and allows different types of SPNs to be created and trained quickly and easily. SPFlow offers a wide range of options for creating SPNs: standard training routines can be used as well as customised training approaches. In addition, SPFlow can be extended so that the models, training and inference can be adapted accordingly. As part of a loose collaboration between AIMLL and ESA, a project dealing with the acceleration of SPN inference was launched back in 2018. The expertise gained there in MLIR and the acceleration of SPNs is now to be used to extend the SPN compiler (SPNC) so that it can also compile for AI engines (AIEs) in the future. The overall aim of the project is to establish AI engines as a possible target architecture for SPN inference. In particular, various options for optimising the models are also to be evaluated, as new architectures can often only be used optimally if the corresponding peculiarities of the architecture and the models are exploited. The SAICo project proposal thus optimally combines the compiler/architecture-specific experience of the ESA research group with the model-related expertise of AIMLL.
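To illustrate what an SPN computes and what a compiler such as the SPNC has to lower, the following hand-built toy SPN over two binary variables shows sum, product, and leaf nodes and an exact marginal query; it deliberately avoids SPFlow and is not part of the SAICo toolchain.

```python
# A tiny hand-built sum-product network over two binary variables X1, X2:
# leaves are univariate Bernoulli distributions, product nodes assume
# independence within a component, sum nodes are mixtures with non-negative weights.

def leaf(p_true, value):                       # Bernoulli leaf
    return p_true if value == 1 else 1.0 - p_true

def spn(x1, x2):
    comp_a = leaf(0.9, x1) * leaf(0.2, x2)     # product node
    comp_b = leaf(0.3, x1) * leaf(0.8, x2)     # product node
    return 0.6 * comp_a + 0.4 * comp_b         # sum node (mixture)

# exact marginal inference by summing out X2
p_x1_is_1 = sum(spn(1, x2) for x2 in (0, 1))
print(p_x1_is_1)   # 0.6 * 0.9 + 0.4 * 0.3 = 0.66
```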


SyCLeC: Symposium on Continual Learning beyond Classification

Dr. Simone Schaub-Meyer, TUDa, Department of Computer Sciences
Dr. Martin Mundt, TUDa, Department of Computer Sciences

Much of the recent progress in artificial intelligence (AI) has been focussed on improving performance numbers or qualitatively appealing examples. This may be due in part to the way we set up a traditional machine learning workflow. Typically, we start by defining a limited, well-defined task, collect data for that task, select a statistical model to learn it, and later conclude that an approach is successful if it performs well on the appropriate test data. This process is often repeated several times to improve the result, either by optimising the model (model-centric view) or by collecting or processing more data (data-centric view). However, well-designed real-world systems require more than just the best performance number on a benchmark. Dedicated test sets are limited not only because they do not account for the variance of new, unknown data appearing during deployment, but also because they ignore the different ways in which tasks change over time. A single good number on a test set therefore does not reflect the experiences and changes that a system undergoes in the real world. As much as we seem to know how to tune our popular static benchmarks, we seem to know less about how to formulate general learning processes that are able to continually learn from endless streams of data, and to adapt and generalise to modified scenarios, as we humans do.

Continual, or lifelong, machine learning addresses the crucial questions that arise when aiming to overcome the limitations of single training cycles, rigid inference engines, and fixed datasets. In contrast to conventional machine learning benchmarks, the project investigates how learners can continue to use, expand and adapt their knowledge when confronted with changing and novel tasks over time. At the heart of this is the realisation that data selection, models, training algorithms and evaluation benchmarks are not static. Despite their recent popularity, current research has only just begun to understand how we can accommodate these factors in human-like, lifelong learning AI systems. Although ongoing efforts are beginning to consider complex sequences of datasets, they are predominantly focused on image classification tasks. Unfortunately, such tasks greatly simplify learning, e.g. by only performing image-level recognition and assuming that all data is always labelled and that continual learning mainly consists of recognising new object types. In this initiative, we therefore want to bring together researchers who are leaders in the fields of continual learning and computer vision to jointly lay the foundations for continual learning beyond classification. In an inaugural symposium, we will identify commonalities and synergies with respect to continual learning for a variety of relevant computer vision tasks, such as semantic segmentation and learning without classification.


Neural cellular automata enables federated cell segmentation (FedNCA)

Dr. Anirban Mukhopadhyay, TUDa, Department of Electrical Engineering and Information Technology
Prof. Dr. Heinz Koeppl, TUDa, Department of Electrical Engineering and Information Technology

The trend towards increasingly resource-intensive models is at odds with the goal of democratising deep learning for all. Favourable access to AI technology and a low barrier to entry for participation are necessary to promote and fully exploit the potential of participation around the world. The possibility of widespread collaboration encourages the collection of diverse data, which ultimately enables complex problems to be solved. This includes frugal digital health innovations that could provide access to healthcare for the last billion people. Federated learning is a viable solution, but it remains inaccessible to clinics that cannot afford the required network bandwidth and computing infrastructure. The recently emerging field of neural cellular automata (NCA), models that converge to a defined target only through local communication, stands in contrast to this, as NCAs are lightweight in terms of parameters and computational requirements. However, training NCAs in a centralised environment is already difficult, and federated training of NCAs has never been attempted. We propose to (1) combine our expertise on self-organisation with the experience from developing the first NCA for medical image segmentation to devise a novel lightweight federated NCA learning and inference algorithm. (2) The developed algorithms will be tested within the established collaboration with Peter Wild from the University Hospital Frankfurt for the segmentation of histopathology images. (3) As an extreme example, the capability of federated learning with NCA over a vast network of low-cost computing devices will be demonstrated.
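A minimal sketch of why NCAs are attractive in this setting: each cell communicates only with its 3x3 neighbourhood and shares a tiny update network, so the whole model has only a few thousand parameters. The layer sizes below are illustrative and not the project's segmentation model.

```python
import torch
import torch.nn as nn

# Minimal neural cellular automaton step: each cell perceives only its 3x3
# neighbourhood (local communication) and updates its state with a small
# shared network, hence the very low parameter count.
class NCAStep(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        self.perceive = nn.Conv2d(channels, 3 * channels, 3, padding=1, groups=channels)
        self.update = nn.Sequential(nn.Conv2d(3 * channels, 64, 1), nn.ReLU(),
                                    nn.Conv2d(64, channels, 1))

    def forward(self, state):
        return state + self.update(self.perceive(state))   # residual local update

nca = NCAStep()
state = torch.zeros(1, 16, 64, 64)
state[:, :, 32, 32] = 1.0                 # seed cell
for _ in range(20):                        # iterate the local rule
    state = nca(state)
print(sum(p.numel() for p in nca.parameters()))  # only a few thousand parameters
```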


Memristors – a Central Hardware Component for AI

Prof. Dr. Lambert Alff, TUDa, FB Material Sciences
Prof. Dr.-Ing. Andreas Koch, TUDa, Department of Embedded Systems and Applications (ESA)
Prof. Dr. Christian Hochberger, TUDa, FB ETIT

Artificial Intelligence will find its way almost ubiquitously into the most diverse areas of life. At the same time, however, this means that the energy efficiency of the associated computing effort will become increasingly important. Therefore, the development of computer architectures used for AI is an important field of research. A major new component for AI-adapted computer architectures is the so-called memristor. There are several materials science approaches that can be used to realize (highly energy-efficient) memristors, but these result in different device behavior, and realistic application scenarios have in some cases not yet been explored. This project aims to bring together the entire chain necessary for memristors, from the material to the device to the circuit, for specific AI applications, and to promote joint research projects in the spirit of this interdisciplinary and holistic approach.


The Algonauts Project Demonstrator: Explaining the brain with AI models

Prof. Dr. Gemma Roig, Goethe-University, FB Computer Science

The Algonauts project is an ongoing project that aims to explore human and machine intelligence using the latest algorithmic tools (Cichy et al., 2019). In this way, the Algonauts project serves as a catalyst to bring together researchers from the fields of biological and machine intelligence on a common platform to exchange ideas and advance both fields in the form of challenges and joint workshops (http://algonauts.csail.mit.edu/). Here we propose to leverage the ongoing success of the Algonauts project to motivate young talents, including high school and undergraduate students, to become the future leaders in the field of AI from an interdisciplinary perspective. To this end, we aim to build a demonstrator that shows how AI models, in particular artificial deep neural networks, can be used to reveal the workings of the brain that lead to human behaviour, and how the insights gained can be used to guide the design of brain-inspired AI models that could have desirable properties similar to human cognition, such as modularity of functions. This could shed light on how to support more transparent and explainable behaviour of model decisions, as well as how to develop models that are robust to perturbations and noise. The demonstrator is intended to be interactive and provide a user-friendly interface to go through the three main steps of using AI models to understand the human brain. Step 1 is to collect brain data from people viewing images or videos, step 2 is to select existing AI models or create your own model to explain the brain data, and the third step is to compare the two to gain insights into what is happening inside the brain while people are viewing the stimuli. To this end, we will integrate into the core of the demonstrator our laboratory toolbox called Net2Brain, whose purpose is to integrate AI models to predict brain data (Bersch et al., 2022). We will improve and further develop it so that it can later be made available to the scientific community. An important goal is the integration of AI models developed in hessian.AI, e.g. those with human learning characteristics, such as continuous learning models (Dr Mundt) and embodied models from robotics (Prof Chalvatzaki), as well as interpretability and explainability algorithms.
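As a simplified illustration of step 3 (comparing model activations with brain responses), the sketch below performs a basic representational similarity analysis on random placeholder data; the array shapes are hypothetical and the snippet does not use the actual Net2Brain API.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Hypothetical placeholder data: responses of 500 voxels and 2048 model units
# to the same 100 stimuli. In practice these would come from fMRI/EEG data
# and from activations of an AI model loaded via a toolbox such as Net2Brain.
rng = np.random.default_rng(0)
brain_responses = rng.normal(size=(100, 500))
model_features = rng.normal(size=(100, 2048))

# Representational dissimilarity matrices: one dissimilarity per stimulus pair.
brain_rdm = pdist(brain_responses, metric="correlation")
model_rdm = pdist(model_features, metric="correlation")

# Correlating the two RDMs quantifies how similarly model and brain
# represent the stimulus set.
rho, _ = spearmanr(brain_rdm, model_rdm)
print(f"model-brain representational similarity: {rho:.3f}")
```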


Development, evaluation and transfer of data science tools for right-censored and high-dimensional data

Prof. Dr. Antje Jahn, Darmstadt University of Applied Sciences, Department of Mathematics and Natural Sciences
Prof. Dr. Sebastian Döhler, Darmstadt University of Applied Sciences, Department of Mathematics and Natural Sciences
Prof. Dr. Gunter Grieser, Darmstadt University of Applied Sciences, Department of Computer Science
Prof. Dr. Bernhard Humm, Darmstadt University of Applied Sciences, Department of Computer Science

In machine learning (ML), open-source and free software packages for common ecosystems such as R and Python drive the transfer of methods into practice and serve as the entry point for many newcomers to the data science field. Advanced users make an informed choice of their tools based on information from academia, where systematic evaluations of different packages and implementations support this choice for a specific application purpose. The overall goal of this project is to make knowledge about new methods for high-dimensional and right-censored data available to beginners and experienced users alike. High-dimensional data occurs, for example, in text mining or genetic statistics. Based on recent research on goodness-of-fit (GOF) tests for high-dimensional data, we aim to create an R package that simplifies the application of these methods in the aforementioned domains. Right-censored data often occurs in medical data or in the field of predictive maintenance. Examples include predictions of survival probabilities under various medical interventions or the prediction of optimal maintenance times for technical devices. Right-censored data requires special data science methods, for which there is sometimes insufficient support for an informed selection of implementations. This project provides that support. The transfer of the results will take place in the form of a multimedia campaign consisting of a short-video channel, a video channel and a blog, in the spirit of the ‘third mission’.


KIPP TransferLab – KI in Planung und Produktion | AI in planning and production

Prof. Dr. Michael Guckert, THM, Department of Mathematics, Natural Sciences and Data Processing
Prof. Dr. Thomas Farrenkopf, THM, Department of Mathematics, Natural Sciences and Data Processing
Prof. Dr. Nicolas Stein, THM, Department of Mathematics, Natural Sciences and Data Processing
Prof. Holger Rohn, THM, Department of Industrial Engineering and Management
Prof. Dr. Udo Fiedler, THM, Department of Industrial Engineering and Management
Prof. Dr. Carsten Stroh, THM, Department of Industrial Engineering and Management

Small and medium-sized enterprises (SMEs) are under increasing pressure to innovate. They operate with limited financial and human resources while having to run complex production structures, often with one-off and small-batch production. Increasing efficiency in production is often of existential importance. Limited capacities also force them to make efficient use of the available production resources while meeting the increasing quality requirements of the markets.
Artificial intelligence (AI) can be used in a wide range of corporate production processes and bring about lasting effects. Systematic, automated collection of data generated directly in the machines during production allows the consistent application of AI algorithms and supports more accurate predictions of actual resource utilization. Insights gained from the data can enable forecasts of output quantities and qualities or machine availability. Immediate effects of such intelligent machine and process monitoring are higher delivery reliability, more efficient utilization of resources in the company (including energy and resource efficiency) and increased transparency about the condition of the manufacturing equipment used. In order to advance the level of AI maturity in SMEs, the high potential of the technology is to be illustrated with the help of demonstrators in a real laboratory environment. The impetus already given for the introduction of AI can be used as a lever, and systematic implementation can begin alongside the already known application possibilities. In addition to the processes themselves, operational use in a laboratory environment is also demonstrated.


Automatic classification of toxic and false content for the younger generation using advanced AI methods

Prof. Dr. Melanie Siegel, Darmstadt University of Applied Sciences, Department of Computer Science
Prof. Dr. Bernhard Humm, Darmstadt University of Applied Sciences, Department of Computer Science

The original idea of social media was to enable people to exchange information and opinions as openly as possible and thus support communication. This idea of social participation is being massively disrupted by current developments: where an open exchange of opinions on political issues was once possible, forums are increasingly being flooded with hate and threats of violence. Where free access to information was the goal, false factual claims are increasingly being made and in some cases automatically disseminated. Texts, images and videos are used and semantically linked to each other. It is becoming increasingly difficult, for children and young people in particular, to categorise information. There are two basic ways in which toxic information can be recognised: intrinsically, by analysing and evaluating the published content itself, or extrinsically, by evaluating such content in the context of other information. One must be able to classify a post as harmless banter or opinion, insult, discrimination, or even threat.
In addition, a distinction must be made between a harmless private false claim, a socially relevant false claim that should at least be commented on journalistically, and acts of disinformation that are relevant under criminal law. Automatic processes can help with this categorisation, as the DeTox project has already shown. However, the topics and language of toxic content are constantly changing, so the models (automatic processes – intelligent systems) need to be regularly retrained. In the case of models based on neural networks, however, further training can lead to previously learned content being overwritten, so that the models no longer work on the original (old) data (‘catastrophic forgetting’). Complete retraining is usually not practicable due to the high model complexity and the associated high computational effort. False reports do not consist of language (text) alone. To convey opinions, images and texts from a different context are often combined and placed in a new, fabricated context. This makes both human and automatic recognition particularly difficult. Therefore, approaches are needed that analyse the text and the image in context.
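One common mitigation for catastrophic forgetting, sketched below, is rehearsal: mixing a small buffer of previously labelled posts into every update on newly collected data. The classifier, loss function, and data handling are placeholders for illustration and are not taken from the DeTox system.

```python
import random
import torch

# Rehearsal-based continual update: each batch mixes new posts with a sample
# from a buffer of old labelled posts, so the classifier is reminded of
# earlier topics while adapting to new ones. `classifier`, `loss_fn` and the
# (tensor, label) tuples are placeholders.
def continual_update(classifier, optimizer, loss_fn, new_data, replay_buffer,
                     replay_ratio=0.5, steps=100, batch_size=32):
    for _ in range(steps):
        n_old = int(batch_size * replay_ratio)
        batch = random.sample(new_data, batch_size - n_old)
        if replay_buffer:
            batch += random.sample(replay_buffer, min(n_old, len(replay_buffer)))
        x = torch.stack([example for example, _ in batch])
        y = torch.tensor([label for _, label in batch])
        optimizer.zero_grad()
        loss_fn(classifier(x), y).backward()
        optimizer.step()
    # keep a small sample of the new data for future rehearsal
    replay_buffer.extend(random.sample(new_data, min(200, len(new_data))))
```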


Cooperation in the Repeated Prisoner’s Dilemma with Algorithmic Players / Wie kooperativ sind Menschen und KI-Algorithmen?

Prof. Dr. Oliver Hinz, Goethe University Frankfurt, Department of Economics
Prof. Dr. Matthias Blonski, Goethe University Frankfurt, Department of Economics

The aim of the project at the interface between AI and microeconomics is to understand how cooperative behaviour changes with repeated interaction with learning machines instead of humans. The following research questions are considered: How does the willingness to cooperate in the repeated prisoner’s dilemma change when one of the players is replaced by an artificially intelligent algorithm? How does this willingness depend on the expected duration of the game and the knowledge of the human about the identity of the opponent? Do any differences in cooperative behaviour result from the changed nature of the opponent (human or machine) or from deviating strategic behaviour?


The virtual doc: An AI-based medical decision support system

Prof. Dr. Dominik Heider, Philipps-Universität Marburg, FB Mathematics/Computer Science
Prof. Dr. Thorsten Papenbrock, Philipps-Universität Marburg, FB Mathematics/Computer Science
Prof. Dr. Bernd Freisleben, Philipps-Universität Marburg, Mathematics/Computer Science

The COVID pandemic has exposed the weaknesses of healthcare systems worldwide and the immense pressure doctors are under. In addition, the WHO estimates that there will be a shortage of 12.9 million healthcare professionals by 2035. The Virtual Doc project aims to support medical staff through the use of advanced sensor technologies and state-of-the-art artificial intelligence (AI) methods. The virtual doctor performs various medical tasks on a patient in an intelligent examination cabin. The sensors in the cabin measure non-invasive parameters (e.g. BMI, heart rate, pulse) and the computer infrastructure interactively records the patient’s medical history to avoid invasive measurements. Clinical parameters are made available to physicians, including advanced disease predictions based on machine learning models for specific (or as yet unknown) conditions, e.g. type 2 diabetes mellitus (T2DM). In this way, the virtual doctor can relieve medical staff of these tasks, freeing up capacity for treatment, emergencies and care. With this project proposal, we want to expand our existing prototype of the virtual doctor with additional sensors and analysis modules and eliminate potential sources of error. We also want to strengthen collaboration in this multi-faceted project by involving other research groups and their expertise in the development of the virtual doctor. The extent to which such an AI-supported preliminary examination makes sense and is accepted by the population will be investigated in parallel with the help of a survey (in cooperation with Prof. Dr. Michael Leyer, Department of Economics, University of Marburg) and an on-site test in a double cabin at Bochum University Hospital (in cooperation with Prof. Dr. Ali Canbay, UK RUB).


Visual analysis for predicting relevant technologies using neural networks (VAVTECH)

Prof. Dr. Kawa Nazemi, Hochschule Darmstadt, Department of Computer Sciences
Prof. Dr. Bernhard Humm, Hochschule Darmstadt, Department of Computer Sciences

New technologies, as well as already existing but unused technologies, have the potential to sustainably increase the innovative capacity of companies and secure their future success. However, if these relevant technologies and the associated new areas of application are not identified early enough, competitors may establish themselves in these fields ahead of time. Furthermore, neglected new technologies carry the risk of disrupting the corresponding market when they are introduced, potentially displacing unprepared companies. A valid analysis and prediction of potential future technologies is therefore more important than ever. The VAVTech project aims to develop a visual analysis system that enables people to recognize relevant technologies as early as possible and predict their potential trajectory. Scientific publications will serve as the data foundation for the analysis system, as they present respective technologies at a very early stage, making them suitable for early technology detection. The system will primarily combine neural networks and interactive visualizations, allowing companies, startups, and strategic consultants to analyze and predict the potential of new and largely unknown technologies. The neural network will be developed in a modular way to ensure its transferability to other domains. As part of the project, a functional demonstrator will be created using real-world data, laying the foundation for further work in the field of strategic foresight through the application of artificial intelligence methods. The demonstrator will serve multiple purposes: acquiring third-party funding, networking with other AI researchers, and increasing the visibility of the research through visualizations.


Women in the Field of AI in Healthcare – Women AI Days

Prof. Dr. Barbara Klein, Frankfurt University of Applied Sciences, Department of Social Work and Health
Prof. Dr. Martin Kappes, Frankfurt University of Applied Sciences, Department Computer Sciences and Engineering

The UNESCO Recommendation on the Ethics of Artificial Intelligence establishes globally accepted standards for AI technologies, to which 193 member states have committed. Ethical guidelines are linked to human rights obligations, with a focus on so-called “blind spots,” such as AI and gender, education, sustainability, and others. For Germany, there is significant need for action in the areas of equal treatment and diversity within AI development teams. Diversity is considered one of the prerequisites for ensuring that such considerations are appropriately reflected in AI programming. The field of artificial intelligence in Germany requires, among other things, a higher proportion of women to avoid future social biases and gender inequalities caused by unconscious biases in algorithms. This aligns with the UNESCO Recommendation, which was adopted on November 23, 2021, as the first globally negotiated legal framework for the ethical development and use of AI. Particularly in healthcare and medicine, women are insufficiently considered, which leads to fatal effects in medical care if, for example, drugs are tested only on men. Access for women to traditionally male domains such as the IT sector is often still difficult. The goal of the initiative is therefore a three-day workshop (Women AI Days) to connect national female experts and analyze needs, such as strengthening the proportion of women and making research and work areas visible to young talents. Through accompanying social media efforts, a publication, and subsequent public lectures at Frankfurt UAS, the content will be made known to the public, with a particular focus on the state of Hesse.


Funded projects in the second round of calls (2022)


Accelerating Cardinality Estimation (ACE)

Prof. Dr.-Ing. Andreas Koch, TUDa, Department of Embedded Systems and Applications (ESA)
Prof. Dr. Carsten Binnig, TUDa, Data Management

Sum-Product Networks (SPNs) belong to the class of graphical probabilistic models and allow the compact representation of multivariate probability distributions. While the ESA group has mainly investigated how SPN inference can be accelerated, the DM group has investigated which database applications SPNs can be used for. One such application is cardinality estimation, which predicts the result sizes of database queries and thus helps database management systems (DBMS) optimize query processing. The overall goal of the project is to accelerate cardinality estimation using RSPNs (Relational SPNs), to automate the development and training process of RSPNs, and to investigate their potential usability in the context of large databases. The extension of the SPNC, together with the provision of corresponding training processes, promises highly interesting, practically relevant research results that can also feed into the other projects of the two participating research areas.
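As a toy illustration of how a learned density model yields a cardinality estimate, the sketch below scales an estimated predicate selectivity by the table size; the `joint_probability` function is a hypothetical placeholder for an evaluated RSPN, not part of the project's implementation.

```python
# Toy cardinality estimation with a learned density model: the model estimates
# the selectivity of a predicate, which is then scaled by the table size.

TABLE_SIZE = 1_000_000

def joint_probability(age_range, city):
    # placeholder for an RSPN evaluated with range predicates and
    # marginalised (summed-out) columns
    return 0.0123

def estimate_cardinality(age_range, city):
    selectivity = joint_probability(age_range, city)
    return selectivity * TABLE_SIZE

# e.g. SELECT COUNT(*) FROM customers WHERE age BETWEEN 30 AND 40 AND city = 'Darmstadt'
print(estimate_cardinality((30, 40), "Darmstadt"))   # approx. 12300 rows
```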


AI4Birds: Bird Species Recognition in Complex Soundscape Recordings

Dr. Markus Mühling, PU Marburg, FB Mathematics & Computer Science
Prof. Dr. Nina Farwig, PU Marburg, FB Biology
Prof. Dr. Bernd Freisleben, PU Marburg, Dept. of Mathematics & Computer Science, Distributed Systems and Intelligent Computing

In this project, we focus on automatically recognizing bird species in audio recordings. To improve current biodiversity monitoring schemes, AI4Birds will use audio recordings from a forest ecosystem to develop novel transformer models based on self-attention for recognizing bird species in soundscapes. Sustainability with regard to biodiversity is thus at the heart of the project. Sustaining AI4Birds beyond the funding period by acquiring additional financial support is very likely; we plan to use AI4Birds to explore funding opportunities within federal biodiversity sustainability programs. Furthermore, we plan to contribute our results to Microsoft’s “AI for Earth” initiative.


AIQTHmed | AI Quality and Testing Hub in Healthcare

Prof. Dr. Martin Hirsch, Artificial Intelligence in Medicine, UMR and Director of the Institute for Artificial Intelligence at UKGM Marburg
Prof. Dr. Thomas Nauss, Department of Mathematics, Natural Sciences and Data Processing, FG Business Informatics – Artificial Intelligence, PU Marburg

In May 2021, the Hessian Minister for Digital Strategy and Development and the VDE agreed on the establishment of a first nationwide “AI Quality & Testing Hub” (AIQTH). In the environment of hessian.AI and the Center for Responsible Digitalization (ZEVEDI), the hub is intended to promote the quality and trustworthiness of AI systems through standardization and certification in the model topic areas of “Mobility”, “Finance” and “Health”, making them verifiable and credible to the population. The aim of the project is to use the EU programme DIGITAL EUROPE to strengthen the model topic area “Health” of the AIQTH within hessian.AI and thus improve the chances of establishing this institution in Hesse.


Memristors – a Central Hardware Component for AI

Prof. Dr. Lambert Alff, TUDa, FB Material Sciences
Prof. Dr.-Ing. Andreas Koch, TUDa, Department of Embedded Systems and Applications (ESA)
Prof. Dr. Christian Hochberger, TUDa, FB ETIT

Artificial Intelligence will find its way almost ubiquitously into the most diverse areas of life. At the same time, however, this means that the energy efficiency of the associated computing effort will become increasingly important. Therefore, the development of computer architectures used for AI is an important field of research. A major new component for AI-adapted computer architectures is the so-called memristor. There are several materials science approaches that can be used to realize (highly energy-efficient) memristors, but these result in different device behavior, and realistic application scenarios have in some cases not yet been explored. This project aims to bring together the entire chain necessary for memristors, from the material to the device to the circuit, for specific AI applications, and to promote joint research projects in the spirit of this interdisciplinary and holistic approach.


Mind the gap! Huddle between Materials and Computer Sciences

Prof. Dr. Leopoldo Molina-Luna, TUDa, FB Material Sciences
Prof. Dr. Kristian Kersting, TUDa, FB Computer Sciences

Many designers of AI algorithms do not have enough background knowledge to keep up with state-of-the-art research in a natural science field such as materials science. Materials science researchers, on the other hand, usually rely on an “educated guess” approach when determining the parameters for developing AI algorithms and tools, and pay little to no attention to the underlying methodology. There is a knowledge gap between the computer science and materials science communities, and more cross-talk at a fundamental level is needed. The project builds a seeding platform for implementing and consolidating an inclusive, regular exchange between all interested parties. It strengthens the preparation activities for an IRTG application in the field of operando TEM for memristors and ML-based data analysis routines.


Innovative UX for User-Centered AI Systems

Prof. Dr. Bernhard Humm, h_da, FB Computer Sciences
Prof. Dr. Andrea Krajewski, h_da, FB Media

Human-centered AI includes, among other things, the appropriate explanation of decisions or recommendations made by the AI system, e.g., by means of Machine Learning (keyword “Explainable AI”). User Experience (UX), on the other hand, is concerned with the development of products, especially IT systems, that are intended to provide the best possible user experience. In this project, innovative UX concepts will be designed, tuned, implemented and evaluated for three different prototype AI systems that are being developed within the BMBF-funded project “Competence Center for Work and Artificial Intelligence (KompAKI)”. One of the AI systems deals with the provision of Machine Learning (ML) for broad user groups with and without programming skills. Two AI systems are intended for operational use in the manufacturing industry (Industry 4.0). This project ideally complements other AI initiatives and promotes networking between hessian.AI partners and different disciplines.


Funded projects in the first round of calls (2021)


SpeedTram

Dr. Florian Stock, TUDa, Department of Mechanical Engineering, Department of Automotive Engineering (FZD)
Prof. Dr. Andreas Koch, TUDa, Department of Embedded Systems and Applications (ESA)

The focus of research on autonomous driving has so far clearly been on cars; only a few projects have looked at other means of transport. To remedy this, hessian.AI is funding innovative interdisciplinary research with the SpeedTram project, which focuses on autonomous/assisted driving of streetcars. In it, the Department of Automotive Engineering (FZD) and the Department of Embedded Systems and Applications (ESA) at TU Darmstadt are investigating the accelerated execution of the machine learning algorithms required for automated driving of and assistance systems for streetcars. Real data recorded during operation on a test vehicle of the local public transport company HEAG is processed. The evaluation of this growing dataset, which now exceeds 140 TB, was no longer reasonably possible with existing methods. The work in SpeedTram made it possible to accelerate the two most time-consuming steps of the data analysis, namely object recognition based on neural networks and the processing of LIDAR sensor data, by factors of three and 24, respectively. SpeedTram thus makes an important contribution to raising the innovation potential of automated streetcar guidance and making it usable for future applications.


AI4Bats: Recognizing Bat Species and Bat Behavior in Audio Recordings of Bat Echolocation Calls

Dr. Nicolas Frieß, PU Marburg, FB Geography, Environmental Informatics
Prof. Dr. Bernd Freisleben, PU Marburg, Dept. of Mathematics & Computer Science, Distributed Systems and Intelligent Computing
Prof. Dr. Thomas Nauss, PU Marburg, FB Geography, Environmental Informatics

Biodiversity is important for various ecosystem services that form the basis of human life. The current decline in biodiversity requires a transformation from manual periodic biodiversity assessment to automated real-time monitoring. Bats are one of the most widespread terrestrial mammal species and serve as important bioindicators of ecosystem health. Typically, bats are monitored by recording and analyzing their echolocation calls. In this project, AI4Bats, we present a novel AI-based approach to bat echolocation call detection, bat species recognition, and bat behavior detection in audio spectrograms. It is based on a neural transformer architecture and relies on self-attention mechanisms. Our experiments show that our approach outperforms current approaches for detecting bat echolocation calls and recognizing bat species in several publicly available datasets. While our model for detecting bat echolocation calls achieves an average precision of up to 90.2%, our model for detecting bat species achieves an accuracy of up to 88.7% for 14 bat species found in Germany, some of which are difficult to distinguish even for human experts. AI4Bats lays the foundation for breakthroughs in automated bat monitoring in the field of biodiversity, the potential loss of which is likely to be one of the most significant challenges facing humanity in the near future.


AI@School

Dr. Joachim Bille, TH Mittelhessen, Head of Department FTN
Prof. Dr. Michael Guckert, TH Mittelhessen, Department of Mathematics, Natural Sciences and Data Processing, FG Business Informatics – Artificial Intelligence
Prof. Holger Rohn, TH Mittelhessen, Department of Industrial Engineering and Management, FG Life Cycle Management & Quality Management, Makerspace Friedberg

The goal of the AI@School project was the development of a demonstrator for vividly communicating basic knowledge of artificial intelligence, providing pupils with early, low-threshold access to AI topics. On the one hand, the demonstrator should contain suitable examples and exhibits for the descriptive transfer of knowledge; on the other hand, an interactive introductory course should be developed using these exhibits and examples. Based on these offers, a prototypical teaching unit at the advanced course level will also be developed. The project results are to be implemented permanently at hessian.AI; in addition, a Hesse-wide transfer of the concept to suitable institutions in other parts of the state is planned in the medium to long term.


Robot Learning of Long-Horizon Manipulation bridging Object-centric Representations to Knowledge Graphs

Prof. Dr. Georgia Chalvatzaki, TUDa, FB Informatik, iROSA: Robot Learning of Mobile Manipulation for Intelligent Assistance
Prof. Dr. Iryna Gurevych, TUDa, FB Computer Science, Ubiquitous Knowledge Processing Lab

The goal of this project was to investigate the links between high-level natural language commands and robot manipulation. Humans are able to effectively abstract and decompose natural language commands, e.g. “Make me a coffee”, but such an action is not detailed enough for a robot to execute. The task execution problem in robotics is usually approached as a task and motion planning problem, where a task planner decomposes the abstract goal into a set of logical actions that must be translated into actual actions in the world by a motion generator. The connection between abstract logical action and real-world description (e.g., in terms of the exact position of objects in the scene) makes task and motion planning a very challenging problem. In this project, we approached this problem from three different directions, looking at sub-problems of the topic with respect to our ultimate goal of learning long time horizon manipulation plans using human commonsense and scene graphs:

  1. The association of the object scene with robot manipulation plans using graph neural networks (GNNs) and RL,
  2. Using voice instructions and vision in transformer networks to output subgoals for a low-level planner, and
  3. Translating human instructions into robot plans.

Project results from 2. and 3. are scheduled to be published at a major machine learning conference in the near future. Work from 3. will continue as part of an ongoing collaboration between iROSA and UKP.