The European Association for Computer Graphics has awarded the prestigious Eurographics Young Researcher Award 2024 to Justus Thies, Professor of Computer Science at TU Darmstadt. The prize is considered one of the most important European awards in computer graphics and is presented annually to two promising young scientists.

Thies was honored in particular for his groundbreaking work in the field of markerless motion capture and synthesis in computer graphics. His research on facial reenactment and video manipulation has been widely recognized.

Thies’ research group works at the intersection of computer graphics, computer vision and machine learning. His new AI-based methods focus on markerless motion capture of facial expressions, human bodies and non-rigid objects in general.

Justus Thies has been a full professor for ‘3D Graphics & Vision’ at the Technical University of Darmstadt since 2023 and independently heads the ‘Neural Capture & Synthesis’ research group at the Max Planck Institute for Intelligent Systems in Tübingen. He received his doctorate from the University of Erlangen-Nuremberg in 2017 with a thesis on markerless motion capture of facial representations and its applications. In addition, Justus Thies is part of hessian.AI’s RAI research collective, which is dedicated to the further development of current DL-based AI systems towards “Reasonable AI”.

Photo: Patrick Bal

About Felix Friedrich

Felix Friedrich is a researcher at the Department of Computer Science at the Technical University of Darmstadt. He studied electrical engineering at TU Dortmund University and holds two master’s degrees from TU Darmstadt in Autonomous Systems and Computer Science with a minor in Psychology.

Since 2021, Friedrich has been doing his doctorate at the Machine Learning Lab at the Department of Computer Science at TU Darmstadt.

Felix Friedrich

AI cognition and machine intelligence

How do humans think and learn? The question of human cognition can be transferred to artificial intelligence and is even a decisive element in the comprehensibility of AI systems. Why does an AI system make a certain decision, for example, and how can the decision-making process be explained so that a human can also understand it?

Friedrich explains that AI systems often take shortcuts, which results in spurious correlations. He cites the machine recognition of dogs and wolves as an example: if the animal to be recognized is shown in a forest, the AI system will probably recognize a wolf, but in a domestic context it will recognize a dog.
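This kind of shortcut learning can be illustrated with a toy classifier on synthetic data (a hypothetical sketch, not taken from Friedrich’s work): a “background” feature that spuriously correlates with the label during training dominates the model, which then fails when that correlation flips at test time.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Training data: label 1 = "wolf", 0 = "dog"
y_train = rng.integers(0, 2, n)
animal_trait = y_train + rng.normal(0, 2.0, n)  # true but noisy signal
background = y_train + rng.normal(0, 0.1, n)    # forest vs. home: a near-perfect proxy in training
X_train = np.column_stack([animal_trait, background])

clf = LogisticRegression().fit(X_train, y_train)

# Test data: the correlation is flipped - wolves at home, dogs in the forest
y_test = rng.integers(0, 2, n)
X_test = np.column_stack([y_test + rng.normal(0, 2.0, n),
                          (1 - y_test) + rng.normal(0, 0.1, n)])

print(clf.score(X_train, y_train))  # high: the shortcut works on training data
print(clf.score(X_test, y_test))    # far below chance: the model learned the background
```

Because the background feature is almost noise-free during training, the classifier relies on it rather than on the weaker animal trait, so its predictions invert as soon as the context changes.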

AI systems are therefore heavily dependent on human feedback – and the better you understand how the machine works and where the errors lie, the better you can improve AI systems and realize their potential.

Friedrich emphasizes the pragmatism that characterizes his research: “Today, we have a good idea of what AI does and why. But AI is not perfect. Even though we understand more and more about how it works, the question remains as to how the systems are used. And they need to be improved.”

Friedrich sees great potential for improvement in the fairness and bias of AI models. Diffusion models such as image generators, for example, often output the same kinds of images for certain age, occupation and gender specifications: the banker is typically a white man in a suit, the nurse a woman, and the child from Africa is shown in poor circumstances.

“The characteristics of appearance in generated images do not necessarily reflect reality,” says Friedrich. So how do you get more diversity in, and prejudices and beauty ideals out?

Friedrich points to an apparent paradox: you have to train AI models on data that reflects the whole picture, including its problematic parts: “A child who is not supposed to use swear words still needs to know and understand them in order to avoid them.”

The same applies to AI models, for example with regard to cultural differences. Targeted prompts can be used to counteract this, but they do not always solve the problem of stereotypes and sometimes even introduce new ones.

Bias reduction is a balancing act

Friedrich emphasizes a core problem: normative terms such as “right” and “wrong” or “good” and “bad” are difficult to pin down in AI models: “It is virtually impossible to train an AI model that has no biases. But you have to ask: where does a bias cause harm, and how can that be prevented?”

A certain amount of influence is possible, says Friedrich: “If you had trained an AI model 20 years ago, a question about whether it is good to fly to New York would probably have been viewed positively. Today, the AI model would probably be more critical of this question because the context has changed. The AI must be taught to understand such contexts independently and to develop an understanding of normative values.”

Training your own AI models is time-consuming, cost-intensive and feasible only for a small number of companies. For Friedrich, one solution is to overwrite moral values in an existing AI model, i.e. to adapt them to one’s own notion of those values.

In technical jargon, this is called a “revision corpus”: the knowledge of the AI model does not change; it is just queried differently. Success depends on how precisely the language is formulated.

Friedrich again recognizes parallels to humans here, for example in explanations and judgments based on similar cases, examples and situations.

“The challenge is the normative question: What is right and what is wrong? Am I perhaps overwriting other values that I don’t want to overwrite? It is incredibly difficult to remove or reduce biases without introducing new biases.”

Error tolerance and pragmatism in AI development

With “Fair Diffusion”, an adapted diffusion model, Friedrich and his team have developed an image generator that aims to reduce feature bias.

AI models often generate blonde, female people, for example, even though these characteristics are underrepresented worldwide. “Fair Diffusion” helps to achieve a fairer gender distribution. At the same time, other attributes such as skin color or hair length can also be influenced.

Removing prejudice from AI models altogether? Probably an impossible undertaking.

These ethical considerations are a central component of Friedrich’s research. The researcher once again appeals to human pragmatism: people accept risks, for example in road traffic. It is therefore wrong to expect machines never to make mistakes.

“You have to have a margin for error in any case. But you should still minimize the risk. That is the top priority,” says Friedrich.

For Friedrich, the question of causality versus correlation has social implications: “Who actually decides what the world that an AI model reflects should look like?” It is a major challenge when a few large companies own the AI models and thereby determine how they function and which values they represent.

Friedrich’s research addresses these questions. A recent paper was accepted by the renowned journal “Nature Machine Intelligence”, a great success for the young researcher.

In any case, his work on fairness and bias, such as Fair Diffusion, has met with very positive feedback in the community: “My work has a social benefit and high relevance. It gives me a lot of meaning in my everyday life,” says Friedrich.

Friedrich appreciates the research network and infrastructure of hessian.AI, the Hessian Center for Artificial Intelligence: “The infrastructure of hessian.AI is unparalleled in Germany. You can conduct cutting-edge research here, supported by a unique network of researchers and professors.”

The “AI Startup Landscape Hessen 2024” is intended to increase the visibility of AI startups in Hesse and thereby also facilitate access for companies, investors, politicians and those interested in startups. Thanks to the excellent cooperation with other partners from the startup ecosystem in Hesse, the AI Startup Rising team has created a data-based tool that provides a comprehensive overview of AI startups in Hesse.

In contrast to other startup maps, the AI Startup Landscape Hessen also depicts startups in the late pre-foundation phase. This area in particular has a lot of potential in Hesse, says Tobias Kehl, the project leader of AI Startup Rising:

“It is important to us to also make projects visible that have not yet been officially founded. Hesse has a great strength here thanks to its many universities, colleges and support programs. In addition, it is immensely important for these projects in particular to come into contact with potential customers and investors at an early stage.”

The database behind the Landscape is to be further expanded in the longer term to provide more detailed information on the teams, funding, completed programs and more. In the future, this collected data could help to identify success factors and stumbling blocks and to feed this knowledge into the funding programs of the ecosystem and authorities.

Take me to the AI Startup Landscape Hessen