AI in Cinema

AI in Cinema: “The Matrix” and Society’s Debate on Artificial Intelligence – An Interview with hessian.AI Professor Dr. Ralph Ewerth

Art is often one or more steps ahead of reality. Ideas and visions must first find their way into people’s minds—in thoughts, stories, and images—before they become a technological reality.

“AI in Cinema” is the theme of an event at the Rex arthouse cinema in Darmstadt, inviting visitors to discover the fascinating world of artificial intelligence from a cinematic and scientific perspective. Together with experts from the fields of computer science, psychology, and media studies, the event will examine how AI is portrayed in pop culture—and what current research has to say about it.

The series kicked off with “The Matrix” – the screening of the cult film from 1999 was followed by a discussion with hessian.AI professor Dr. Ralph Ewerth from the University of Marburg. He is an expert in multimodal modeling, machine learning, and generative AI systems. We spoke with Professor Ewerth about the enduring fascination of the science-fiction dystopia and its connections to current AI research.

Prof. Dr. Ralph Ewerth

From the perspective of an AI researcher, several aspects of the film “The Matrix” are noteworthy: the advanced capabilities of the AI systems, “Agent Smith” and the robots, as well as the fact that humans in the Matrix are almost completely monitored. The difference, perhaps, is that today, in reality, we voluntarily disclose our data and trust that it will not be misused by corporations, government agencies, and those in power.
Prof. Dr. Ralph Ewerth

hessian.AI: The film “The Matrix” was released in 1999, when the internet and mobile phones were making their commercial breakthrough. What fascinates you personally about this film, also from the perspective of an AI researcher?

Ralph Ewerth: I remember watching the film in late summer at the open-air cinema at the Rebstockbad in Frankfurt and staring at the screen, completely spellbound, as the credits rolled. I sat there on my sleeping mat and thought: Wow, what was that! I had the feeling that I had just seen a very special film. Until then, I had thought that I didn’t like action films because they usually tell a simple and superficial story – but The Matrix was different, and it is still one of my favorite films.

Several aspects of the film fascinated me. Technically, there were the innovative action and fight scenes with the bullet-time effect, which, to my knowledge, was also new in terms of film design at the time. But above all, the beginning of the film, or rather of the first part of the trilogy, is so well done that it is exciting from the start and makes you curious about what lies behind the Matrix. I find the way the story is told in part 1 particularly gripping: piece by piece, striking effects and dialogue reveal what lies behind the Matrix.

And, of course, the film also raises philosophical questions: What is reality? Can we trust our senses, or the news, images, and videos we see? These questions are highly relevant today because of AI-generated content and disinformation campaigns, but also because of social media and the power of a few monopolists to control and influence information flows. And there are further questions: What is free will? Can we decide freely?

From the AI researcher’s point of view, several things are interesting: the high level of capability of the AI systems, “Agent Smith” and the robots, and the fact that the people in the Matrix are almost completely monitored. The difference, perhaps, is that today, in reality, we voluntarily disclose our data and trust that it will not be misused by corporations, government agencies, and those in power.

h.AI: Is there a favorite motif or a particular scene that you find particularly exciting?

R.E.: There are several favorite motifs, but I find the following recurring motif amusing: in several scenes, the classic telephone serves as a lifeline for Trinity, Morpheus, Neo, and the others when they are in danger. Applied to everyday life today, it could be ironically interpreted as a call for more digital abstinence.

h.AI: Part of your research work revolves around the automatic understanding and interpretation of images, texts, and videos. Could you tell us a little more about your field of research?

R.E.: My working group conducts research into methods for analyzing multimodal data, and I am particularly interested in the combination of images and text as well as video data. What I find exciting about this is that combining two modalities, such as text and image or sound and moving image, can create new meanings or reverse meanings. In communication research, this is also called meaning multiplication. This can be explained well with memes, which often play with irony. An example: We could create a meme for everyday student life with an image of Morpheus offering the red and blue pills, combined with the text “You decide: study for the computer science exam or watch another episode of Netflix.” Morpheus looks serious in the picture, the text alone is banal—together they create irony and humor.

Methodologically, one of our approaches is to combine data-driven models such as neural networks or pre-trained language models with semantic knowledge representations in a useful way. By adding explicit knowledge from knowledge bases, we aim to improve data-driven models, for example in order to reduce hallucinations in generative AI models. Our work is highly interdisciplinary: we apply AI methods in STEM teaching at schools and universities, to research questions in the digital humanities (for example, computer-assisted film analysis), and in sports informatics (for example, game analysis).
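
To give a concrete, if simplified, picture of this idea, here is a minimal Python sketch – invented for this article, not code from Ewerth’s group. The tiny in-memory knowledge base, the keyword-based retrieval, and the prompt format are all assumptions; it only illustrates how explicit facts can be injected into a generative model’s prompt to ground its answers:

```python
# Illustrative sketch only: grounding a generative model with explicit
# knowledge-base facts to reduce hallucinations. The knowledge base and
# prompt format below are invented for this example.

KNOWLEDGE_BASE = {
    "the matrix": "The Matrix is a 1999 science-fiction film by the Wachowskis.",
    "bullet time": "Bullet time is a visual effect that slows time around a moving camera.",
}

def retrieve_facts(question: str) -> list[str]:
    """Return every stored fact whose key appears in the question."""
    q = question.lower()
    return [fact for key, fact in KNOWLEDGE_BASE.items() if key in q]

def build_grounded_prompt(question: str) -> str:
    """Prepend retrieved facts so the model answers from explicit knowledge."""
    facts = retrieve_facts(question)
    context = "\n".join(f"- {f}" for f in facts) or "- (no facts found)"
    return (
        "Answer using ONLY the facts below; say 'unknown' otherwise.\n"
        f"Facts:\n{context}\n"
        f"Question: {question}\n"
    )

if __name__ == "__main__":
    # In a real system, this prompt would be passed to a language model.
    print(build_grounded_prompt("Who directed The Matrix?"))
```

In a production system, the retrieval step would query a real knowledge base and the grounded prompt would be sent to a language model; the principle of adding explicit knowledge to a data-driven model stays the same.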

h.AI: In “The Matrix,” the film’s protagonist, Neo, realizes that human “reality” is a simulation created by machines. Nowadays, it is indeed becoming increasingly difficult for us humans to distinguish between human-generated and AI-generated content, or even between human and AI conversation partners. How do you assess these developments?

R.E.: This is a complex issue. There are certainly many areas in which we want to know exactly whether content or a product is human-made or AI-made. In the obvious example of schools and universities, we naturally want to know that assignments have been completed by the students themselves and not entirely by AI. At the same time, we need to teach how AI systems can be used for learning. In the field of news and when it comes to the quality of information and disinformation, it is also highly relevant whether news content or a news photo has been manipulated or generated using AI.

Generative AI systems have made it much easier to manipulate content. This can be very problematic, and we urgently need methods or mechanisms to track the origin and editing of news items.
Ralph Ewerth



R.E.: Generative AI systems have made it much easier to manipulate content. This can be very problematic, and we urgently need methods or mechanisms to track the origin and editing of news items. Approaches such as digital watermarks, blockchain-based proof of origin, and mandatory labeling of AI content are currently being discussed and, in some cases, already implemented.
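
As a rough illustration of how such provenance tracking can work in principle, here is a minimal sketch – our example, not a description of any deployed standard. The signing key and record format are invented; it simply hashes a news item and signs the hash so that later edits become detectable:

```python
# Illustrative sketch only: a minimal provenance record for a news item,
# using an HMAC signature over the content hash. Real provenance systems
# are far more elaborate; key handling here is deliberately simplified.

import hashlib
import hmac
import json
import time

SECRET_KEY = b"newsroom-signing-key"  # placeholder; never hard-code real keys

def create_provenance_record(content: bytes, author: str) -> dict:
    """Hash the content and sign the hash so later edits become detectable."""
    digest = hashlib.sha256(content).hexdigest()
    signature = hmac.new(SECRET_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"sha256": digest, "author": author,
            "timestamp": time.time(), "signature": signature}

def verify(content: bytes, record: dict) -> bool:
    """Check that the content still matches its signed provenance record."""
    digest = hashlib.sha256(content).hexdigest()
    expected = hmac.new(SECRET_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == record["sha256"] and hmac.compare_digest(expected, record["signature"])

if __name__ == "__main__":
    article = b"Original news text."
    record = create_provenance_record(article, author="Agency X")
    print(json.dumps(record, indent=2))
    print(verify(article, record))               # True: content unchanged
    print(verify(b"Manipulated text.", record))  # False: edit detected
```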

h.AI: A central motif of the dystopian scenario in The Matrix is the disempowerment of humans by machines (or AI). Although this is presented in an extremely exaggerated way in The Matrix, the question of humans’ ability to make decisions in the face of increasingly complex models seems entirely justified. What do you see as the biggest challenges here?

R.E.: It is a problem that we can no longer understand in detail how large neural models work internally and how they generate their results. One possible solution is to develop methods that explain how the models we use work. Another trend is to build smaller models that are just as powerful – but that doesn’t solve the fundamental problem. Another important aspect is choosing the right AI system for each task: for many problems, generative AI systems are not the first choice, because they do not offer the best possible solution or are too resource-intensive.

It is a problem that we can no longer understand in detail how large neural models work internally and how they generate their results.
Ralph Ewerth
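
One simple family of the explanation methods mentioned above can be sketched in a few lines: permutation importance shuffles one input feature and measures how much the model’s accuracy drops. The toy model and data below are invented stand-ins; real explainability research covers far more sophisticated techniques:

```python
# Illustrative sketch only: permutation importance as one simple example
# of an explanation method. Model and data are toy stand-ins.

import random

def model(x):
    """Toy 'model': predicts 1 if feature 0 exceeds feature 1."""
    return int(x[0] > x[1])

def accuracy(data, labels):
    return sum(model(x) == y for x, y in zip(data, labels)) / len(labels)

def permutation_importance(data, labels, feature: int) -> float:
    """Accuracy drop when one feature's values are shuffled across samples."""
    baseline = accuracy(data, labels)
    shuffled_col = [x[feature] for x in data]
    random.shuffle(shuffled_col)
    permuted = [list(x) for x in data]
    for row, value in zip(permuted, shuffled_col):
        row[feature] = value
    return baseline - accuracy(permuted, labels)

if __name__ == "__main__":
    random.seed(0)
    data = [[random.random(), random.random()] for _ in range(200)]
    labels = [int(x[0] > x[1]) for x in data]
    for f in range(2):
        print(f"feature {f}: importance {permutation_importance(data, labels, f):.2f}")
```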



Another challenge lies in the fact that models can produce systematically biased results, which is obviously undesirable. This is related to the type of data used for training; if it contains biases, discrimination against certain groups, or stereotypes, these will also be learned by the model. People need to be aware of this when making decisions with the help of AI systems. This knowledge could be conveyed indirectly by making the data used for training known, or more directly through targeted evaluations of the models for the respective application or decision.
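
Such a targeted evaluation can be as simple as comparing error rates across groups. The following sketch uses toy predictions, labels, and group assignments invented for illustration; real audits rely on proper datasets and fairness metrics:

```python
# Illustrative sketch only: compare a model's error rate across groups
# to expose systematically biased results. All data below is invented.

from collections import defaultdict

def error_rate_by_group(predictions, labels, groups):
    """Return the per-group error rate of a set of model predictions."""
    errors, counts = defaultdict(int), defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        counts[group] += 1
        errors[group] += int(pred != label)
    return {g: errors[g] / counts[g] for g in counts}

if __name__ == "__main__":
    preds  = [1, 0, 1, 1, 0, 0, 1, 0]
    labels = [1, 0, 0, 1, 1, 0, 0, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
    rates = error_rate_by_group(preds, labels, groups)
    print(rates)  # a large gap between groups signals learned bias
    print("max disparity:", max(rates.values()) - min(rates.values()))
```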

It is important that we view AI as a supporting tool when it comes to critical decisions—such as in medicine, justice, or education—while the final decision should remain with humans. We cannot delegate responsibility for important decisions to machines. The “human-in-the-loop” principle is gaining importance here.
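
The human-in-the-loop principle can be illustrated with a minimal routing rule: the system only acts autonomously above a confidence threshold and defers everything else to a person. The threshold and cases below are invented for the example:

```python
# Illustrative sketch only: human-in-the-loop routing. Predictions below
# a confidence threshold are deferred to a human reviewer, who keeps the
# final decision. Threshold and cases are invented for this example.

CONFIDENCE_THRESHOLD = 0.95

def decide(case_id: str, model_label: str, model_confidence: float) -> str:
    """Route low-confidence cases to a human instead of deciding automatically."""
    if model_confidence >= CONFIDENCE_THRESHOLD:
        return f"{case_id}: auto-accepted suggestion '{model_label}'"
    return f"{case_id}: forwarded to human reviewer (confidence {model_confidence:.2f})"

if __name__ == "__main__":
    print(decide("case-001", "benign", 0.99))
    print(decide("case-002", "malignant", 0.71))  # human makes the final call
```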

h.AI: In her book Supremacy, business journalist Parmy Olson notes that it is precisely the big names in the (AI) tech industry, such as Elon Musk, Sam Altman, and Demis Hassabis, who conjure up the image of “doom AI” for “marketing purposes”—for example, to warn against competitors’ AI solutions or to emphasize the technical possibilities of AI and thus make it interesting for investors.

What do you personally see as the greatest risks of artificial intelligence, and what do you see as its greatest opportunities?



R.E.: Unfortunately, AI systems pose some major risks. First, we are currently seeing AI being used in warfare, costing human lives. At the same time, AI is also protecting human lives in war. A second major risk is that AI methods can be used to monitor and manipulate people and, in the worst case, an entire society.

A third danger posed by AI systems, or rather by differences in access to them—whether at the individual, economic, or societal level—is that the gap between rich and poor and between the powerful and the dependent will widen even further. Another problem is the environmental impact of the energy consumption involved in training large AI models.

One of the greatest opportunities offered by AI systems is that we can use them beneficially in medicine to better understand diseases or treat them more easily. In general, AI systems can also be used to achieve fundamental advances in various fields of research. Another opportunity is that AI systems can significantly support education and training for people, but there is still a lot of research to be done in this area. AI also offers great potential in climate protection, from optimizing energy networks and developing new materials to predicting environmental changes.

h.AI: Finally, let’s delve a little into the realm of (optimistic) science fiction: What breakthroughs would you personally like to see in AI research and application in the coming years or decades?

R.E.: I am less interested in breakthroughs in AI research itself than in breakthroughs in socially relevant areas of application. It would certainly be the fulfillment of a human dream to be able to eradicate diseases such as cancer or dangerous infectious diseases. In addition, I would like to see AI systems help to combat climate change and find sustainable solutions to energy and resource problems. Fairer access to AI systems, but also to education and better opportunities for advancement in general, is also very important in order to reduce economic and social inequality and injustice.

I am less interested in breakthroughs in AI research itself than in breakthroughs in socially relevant areas of application.
Ralph Ewerth

R.E.: Last but not least, we should work to ensure that AI systems are used to safeguard democracy and in its interests, and not to promote, establish, and entrench authoritarian political systems – so that one day we don’t end up living in a Matrix-like world.

h.AI: Thank you very much for the interview!