3AI
The Third Wave of AI

About 3AI

3AI – The Third Wave of Artificial Intelligence aims to explore the Third Wave of Artificial Intelligence: AI systems that acquire human-like communication and reasoning capabilities and can recognize, classify, and adapt to new situations autonomously.


Our research approach goes far beyond the level of performance achieved by AI and machine learning methods in recent decades: AI systems should no longer act merely as tools that execute rules programmed by humans or derive solutions to problems from human-curated datasets, but should be able to act as 'colleagues'. The goal is not to replace human intelligence, but to extend it reliably and for the benefit of society in an increasingly complex world.

The AI systems we research should not only be able to learn, but also to grasp novel facts and link them to forms of abstract thought. They will draw logical conclusions, make contextual decisions, and learn from these in turn. In addition to algorithmic fundamentals, new methods of system design, software engineering, and data management for AI will play a key role here. In the long term, the paradigm of Systems AI should form the foundation for the development of the "third wave of AI": artificial intelligences that learn, reason, build knowledge, and (inter)act in partnership with humans in a context-aware manner. To explore the foundations of Third Wave AI, we work closely together in a research team spanning Computer Science, Artificial Intelligence, Cognitive Science, and the Life Sciences.

3AI is funded as a cluster project by the Hessian Ministry of Higher Education, Research, Science and the Arts from 2021 to 2025. 3AI lays the foundations for an application for a Cluster of Excellence "Reasonable Artificial Intelligence – RAI" (EXC 3057) within the framework of the Excellence Strategy of the German Federal and State Governments.


To meet the grand challenge of Third Wave AI, we need to rethink AI from the ground up and create new foundations that seamlessly integrate machine learning, optimization, and reasoning (from spatial and temporal, to physical and domain-specific, to cognitive models), because one component alone is not enough to develop complex AI systems with human-like capabilities. As a unifying theme and leitmotif for 3AI, we envision a programming paradigm for Systems AI that makes the process of developing complex learning-based AI systems efficient, safe, and easier to reproduce.

3AI is coordinated by Prof. Mira Mezini, Prof. Kristian Kersting, Prof. Jan Peters and Prof. Stefan Roth (all TU Darmstadt).

Project Structure

The research goal of our collaborative program is to establish a solid proof-of-principle for one of the main hypotheses underlying the 3AI vision, namely, that combining learning, reasoning, and optimization within the Systems AI paradigm and instantiating it in a common AI programming framework is key to significantly increasing the effectiveness, robustness, accountability, and reproducibility of AIs, while improving the productivity of those who build the AIs of the future and making them accessible to broader application areas. To generate such a proof-of-principle, we will lay down the computational foundations for third wave AIs and explore initial applications to challenging problems within the life sciences. 3AI is divided into three missions:

1. Systems

In the Foundations of Systems AI Design mission, we bring together probabilistic programming and learning, self-updating computation, AI-supported programming, and deep databases to create the software foundations of Systems AI. We advance the state of the art in computer languages such that they better support the programming of learning and reasoning tasks. We investigate common frameworks for deep neural and probabilistic programming, approaches for hybrid modelling that link differentiable models with other machine learning approaches to improve generalization and robustness, and language-based approaches to increase consistency and faithfulness. Other topics of investigation are programming models for supporting context-oriented, automorphic AIs capable of reasoning in a context-specific way and under incomplete knowledge.

Overall, Foundations of Systems AI Design (SD) investigates research questions around the theme of "Foundations for AI Software Engineering". While we have come a long way in developing the foundations of engineering traditional software, equivalents for AI, and in particular for ML software systems, are, if existent at all, in their infancy. We thereby aim to support not only AI programmers but also domain experts with limited programming expertise, targeting the collaborative construction of Systems AIs.


Systems AI Programming

(SD1) “Systems AI Programming” develops software foundations for a common Systems AI framework.

It is our objective to enable the programming of Systems AIs at a high level of abstraction, so as to automate low-level concerns and avoid accidental complexity. To achieve this goal, we need to lay down programming foundations and provide programming abstractions for treating data-driven models, cognitive models, knowledge domains, and learners as first-class entities: the units of computation, composition, and reasoning of high-level AI languages. Computing languages in use today were conceived with the goal of facilitating the "teaching" of machines that "mechanically" execute predefined prescriptions; new AI languages will be the vehicle to teach third wave AIs.

Research Questions: The vision of Systems AI programming rests on the necessity of high-level data and programming abstractions for systematic and reproducible approaches to developing and debugging robust Systems AIs. Part of our work will focus on advancing the state of the art in systematically developing 2nd wave AIs, which is currently characterized by ad-hoc processes across the steps of data labeling, algorithm selection, architecture tuning, training, testing, etc. Questions we will ask and answer in this context are: What concepts are needed for concept definition, structuring, and evolution? If models are the new data to be produced, used, and composed, how can we properly manage them, and what operations are appropriate to query and process this data? What concepts do we need to modularize models and to compose them from sub-models? What is the equivalent of parametric polymorphism for model functions? How can we systematically debug models and/or data and concepts? Ultimately, the question is how to decouple teaching from learning, in a way that allows teachers to specify problems in a "language" of their domain, which shields them from both the variability of learning algorithms and that of the runtime.
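
To make the notion of models as first-class entities concrete, here is a minimal, hypothetical Python sketch (none of these names belong to an existing 3AI framework): models are ordinary values that can be composed with an operator, and a higher-order "teach" function lets a teacher state the problem as a scoring specification while hiding which learner solves it.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Model:
    """A model as a first-class value: just a wrapped prediction function."""
    predict: Callable[[float], float]

    def __or__(self, other: "Model") -> "Model":
        # Composition operator: pipe this model's output into the next one.
        return Model(lambda x: other.predict(self.predict(x)))

def teach(spec: Callable[[Model], float],
          learners: List[Callable[[], Model]]) -> Model:
    """Decouple teaching from learning: the teacher supplies only a
    scoring function; the choice of learner stays hidden behind it."""
    return min((learner() for learner in learners), key=spec)

# Usage: two toy "learners"; the spec only says what a good model is.
learners = [lambda: Model(lambda x: 2 * x), lambda: Model(lambda x: x + 1)]
spec = lambda m: abs(m.predict(3.0) - 6.0)   # target behavior: f(3) = 6
best = teach(spec, learners)                 # picks the 2*x model
scaled = best | Model(lambda x: x / 2)       # models compose like values
print(best.predict(3.0), scaled.predict(3.0))
```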


Hybrid AI

(SD2) “Hybrid AI” links continuous and combinatorial AI approaches in order to improve generalization and robustness of AI. The hybridization process also provides implicit regularization as, e.g., structured knowledge can help to discard implausible deep models and provide structure.

The expanded interface between continuous and combinatorial optimization has already led to a number of breakthroughs in both areas, including, among many others, novel algorithms for deep networks on and with graphs, combinatorial layers within deep networks, novel approaches for (deep) probabilistic inference in hybrid domains, and novel neuro-symbolic approaches. But the majority of today's deep networks still do not yield reliable confidences alongside their predictions. Moreover, current deep networks in computer vision are still clearly lacking in terms of explainability. Yet, explainability is an important desideratum for the acceptance of artificial intelligence systems in general. It is thus clear that the deep learning community must do more to address this important challenge.

Research Questions: The biggest challenge is to integrate or even unify different (subsets of) representations from logic, databases, probability, constraint-based, and neural models, for learning and reasoning. To this end, we will fully integrate (deep) probabilistic circuits and differentiable maximum satisfiability solvers, as well as backward passes through blackbox combinatorial solvers, into neuro-symbolic approaches and feed them back into Area SD1. Generally, we will ask, akin to current machine architecture models from traditional computing: what are the "machine architectures" of third wave AIs that represent and connect ML models, cognitive models, knowledge spaces, etc., to which Systems AIs are "compiled"? In order to gain insights into the resulting complex AI systems, we will provide explanation methods for hybrid AI models that focus on understanding the rationales, contexts, and interpretations of the resulting models using domain knowledge, rather than relying on the transparency of the internal computational mechanism. To this end, we will seamlessly integrate probabilistic circuits with deep neural networks. Overall, our goal is to strengthen the connections between traditional and deep learning methods toward robustness and explainability.
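
A minimal sketch of one ingredient named above, the backward pass through a blackbox combinatorial solver, following the finite-difference scheme of Vlastelica et al. (2020); the one-hot "solver" and the interpolation constant are illustrative placeholders for a real combinatorial solver.

```python
import torch

class BlackboxSolverLayer(torch.autograd.Function):
    """Differentiates through an opaque combinatorial solver by
    perturbing its continuous input (Vlastelica et al., 2020)."""

    LAMBDA = 10.0  # interpolation strength (a hyperparameter)

    @staticmethod
    def solver(weights):
        # Toy solver: one-hot argmin. Any ILP or shortest-path
        # solver could be dropped in here unchanged.
        idx = torch.argmin(weights, dim=-1, keepdim=True)
        return torch.zeros_like(weights).scatter_(-1, idx, 1.0)

    @staticmethod
    def forward(ctx, weights):
        y = BlackboxSolverLayer.solver(weights)
        ctx.save_for_backward(weights, y)
        return y

    @staticmethod
    def backward(ctx, grad_output):
        weights, y = ctx.saved_tensors
        # Perturb the input with the incoming gradient, re-solve, and
        # return the difference of solutions as a surrogate gradient.
        w_prime = weights + BlackboxSolverLayer.LAMBDA * grad_output
        y_prime = BlackboxSolverLayer.solver(w_prime)
        return -(y - y_prime) / BlackboxSolverLayer.LAMBDA

# Usage: the solver sits between differentiable layers.
weights = torch.randn(4, requires_grad=True)
assignment = BlackboxSolverLayer.apply(weights)
loss = (assignment * torch.tensor([1.0, 0.0, 0.0, 0.0])).sum()
loss.backward()
print(weights.grad)
```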


AI4AI

(SD3) “Second-order AI” explores AI methods “at the meta-level,” i.e., it employs AI methods for automating the process of AI development, thereby automatically ensuring good performance, robustness and trustworthiness of AI systems, especially when built, deployed, maintained and monitored by people with limited AI expertise.

The objective of automated AI is to diligently automate the labor-intensive and error-prone aspects of building AI systems, thereby ensuring their trustworthiness and robustness "by design". Existing work on automated algorithm configuration and algorithm selection has already led to orders-of-magnitude improvements of the state of the art in various reasoning and optimization tasks, such as the Boolean satisfiability problem, mixed integer programming, timetabling, and AI planning. Likewise, Automated ML is starting to be applied outside research laboratories. However, current techniques do not fully address all 3AI concerns. Very little work addresses the full data science loop, including issues of explainability, fairness, and accountability. Very little work integrates learning, optimization, and reasoning. Some approaches use grammars or fixed rules to compose machine learning pipelines, but this has limited exploratory power. Finally, we need Automated AIs that can engage in natural conversational interactions with humans.

Research Questions: For many real-world problems, Automated AI is not yet mature and flexible enough. In particular, current Automated AI systems are not able to syntactically handle many types of "messy" real-world data, such as data with defects and missing values. We will build on the results of Areas SD1 and SD2 to extend our own work on interactive data and automatic density analysis to Automated AI. We will explore programmatic abstractions of the complete data science cycle (feature selection, dimensionality reduction, outlier detection, data augmentation and wrangling, etc.) within Automated AI. Systems AI allows one to search for the right balance between aspects that we would find useful to automate and tasks in which it might remain meaningful for us to participate. It allows one to explore the duality between Automated AI and human knowledge and interaction. Domain knowledge can be naturally injected into Automated AI, as it is one module among others in a Systems AI programming framework and can in turn call other modules. Instead of thinking of Automated AI as the removal of human involvement from AI, we imagine it as the selective inclusion of human participation. Thus, we will develop a conversational AI view on Automated AI using, e.g., reinforcement learning (see e.g. Area AI2) as well as a novel user-centric representation learning approach.
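
To make the automation core tangible, here is a deliberately tiny random-search sketch over scikit-learn pipelines; real Automated AI systems replace the random choice with learned search strategies, and the component lists below are purely illustrative.

```python
import random
from sklearn.datasets import load_breast_cancer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# A miniature search space of pipeline components.
scalers = [StandardScaler(), MinMaxScaler()]
reducers = [PCA(n_components=10), "passthrough"]
models = [LogisticRegression(max_iter=1000), RandomForestClassifier()]

best_score, best_pipe = -1.0, None
for _ in range(10):  # tiny search budget
    pipe = Pipeline([
        ("scale", random.choice(scalers)),
        ("reduce", random.choice(reducers)),
        ("model", random.choice(models)),
    ])
    score = cross_val_score(pipe, X, y, cv=3).mean()
    if score > best_score:
        best_score, best_pipe = score, pipe

print(best_score, best_pipe)
```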


Members of the Foundations of Systems AI Design mission

2. Methods

The goal of the Foundations of AI Methods mission is to show step-by-step how to solve challenging AI/ML tasks in NLP, robotics, and computer vision using the Systems AI paradigm.

Current AI systems are limited to performing only those tasks for which they have been specifically programmed and trained, and are inherently subject to fatal failures when encountering situations outside of those. To address this, we will investigate how to make AI methods more robust by making them less supervised: via reasoning across different modalities, interacting with the environment, and directly programming scientific mechanisms or simulations and blending them with observational data. This can provide very strong knowledge constraints on top of the observational ones, which promise improved generalization and faster training, discard implausible models, and, in turn, can help to create consistent, harmonized, and well-curated datasets in novel domains, avoiding various kinds of biases in observational data.


Learning from Noisy Data & Fewer Labels

Training accurate second wave AIs currently requires lots of labeled and clean data. Most data, however, are unlabeled, heterogeneous, fragmentary, noisy, and often multimodal. To reduce the amount of labeled data required to build robust AI systems, (AI1) "Learning from Noisy Data and Fewer Labels" explores Systems AI for training multiple models jointly across different modalities, together with learnable domain heuristics for labeling and transforming the data.

Research Questions: Our world is multimodal. Like humans, an AI system must be able to learn and reason about the world in a multimodal way. How do humans do this and can machines learn from it? Can we understand human decisions better through new AI methods?

Machine learning models are brittle, in that their performance can degrade severely with small changes in their operating environment. As a result, additional labels are needed after initial training to adapt these models to new environments and data collection conditions. For many problems, however, the amount of labeled data required to adapt models to new environments approaches the amount required to train a new model from scratch. Therefore, we aim to make the process of training AI models more efficient. We will investigate structured (deep) network architectures that take inspiration from traditional computer vision and NLP algorithms and models in order to improve robustness and explainability simultaneously. In particular, we will investigate hybrid deep neural networks for scene analysis that combine generative and conditional generative models with standard supervised learning paradigms. Through this, we aim to enable deep neural networks for scene analysis to take advantage of unlabeled data, which is often available in much larger quantities than labeled data, especially in non-standard tasks. We will investigate multi-modal and multi-task learning methods using, e.g., joint Wasserstein variational inference. We will exploit multi-scale autoregressive priors within the hybrid models (Area SD2), adopt self-supervised learning with domain-specific loss functions and reasoning mechanisms to leverage unlabeled data (Area SD1), and make use of cross-domain tasks for jointly learning representations of different modalities. We will explore how to automatically convert noisy, multimodal data into structured knowledge (entities, relations) and, in turn, automatically derive labels via (probabilistic, deep) reasoning.
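
As a small, self-contained illustration of leveraging unlabeled data, the following sketch implements a SimCLR-style contrastive (InfoNCE) loss between two augmented views of the same unlabeled batch; the encoder, the augmentations, and all dimensions are left abstract and are not specific to 3AI.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.5):
    """Contrastive loss between two views z1, z2 of the same batch:
    matching views attract, all other pairs repel (SimCLR-style)."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    n = z1.size(0)
    z = torch.cat([z1, z2], dim=0)          # (2n, d)
    sim = z @ z.t() / temperature           # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))       # exclude trivial self-pairs
    # Positive for row i is its counterpart view at i+n (and vice versa).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Usage with any encoder producing embeddings of two augmentations:
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
loss = info_nce_loss(z1, z2)
```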


Lifelong & Developmental AI

(AI2) “Developmental & Lifelong Learning AI” will explore Systems AI to realize AI systems that learn to (A) improve during tasks and from interactions with users (e.g. medical experts) and (B) apply previous models and knowledge to new situations (a new disease such as COVID-19), without forgetting previous learning episodes. To this end, it will also explore how general domain knowledge and rich repertoires of physical interaction skills can be learned completely autonomously, by mimicking the learning mechanisms and inductive biases underlying the cognitive development in infants and children. For this, we will let simulated robots “grow up” in virtual environments, where they learn about their bodies and objects and how to interact with them from multimodal sensory information.

Research Questions: We will tackle the challenge of developmental and lifelong learning using (Bayesian/deep) reinforcement learning agents that "grow up" in complex virtual environments with simulated physics (Area AI3). These agents will use multimodal input (Area AI1) and will be driven by knowledge- and competence-based intrinsic motivations to explore their world. They will thus become proficient at interacting with it with less and less external supervision. Ultimately, these agents will be allowed to define their own learning goals and practice how to achieve them. A central driving question will be the self-generation of abstract representations and concepts, in close collaboration with Area AI3, from which the agents can derive their own learning goals.
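
One concrete, minimal form of such knowledge-based intrinsic motivation is prediction-error curiosity in the spirit of the Intrinsic Curiosity Module (Pathak et al., 2017): the agent earns a bonus wherever its forward model is surprised. Network sizes and the mixing weight below are illustrative.

```python
import torch
import torch.nn as nn

class ForwardModelCuriosity(nn.Module):
    """Rewards the agent where its learned forward model mispredicts
    the next state, driving exploration of poorly understood regions."""
    def __init__(self, state_dim=16, action_dim=4):
        super().__init__()
        self.forward_model = nn.Sequential(
            nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
            nn.Linear(64, state_dim),
        )

    def intrinsic_reward(self, state, action, next_state):
        pred = self.forward_model(torch.cat([state, action], dim=-1))
        return ((pred - next_state) ** 2).mean(dim=-1)  # "surprise"

# Usage: add the (detached) bonus to the environment reward.
icm = ForwardModelCuriosity()
s, a, s_next = torch.randn(1, 16), torch.randn(1, 4), torch.randn(1, 16)
env_reward = 0.0
total_reward = env_reward + 0.1 * icm.intrinsic_reward(s, a, s_next).detach()
```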

We will explore cognitive mechanisms that underlie human learning, which will be translated into a new generation of computational architectures, mechanisms, and algorithms within AI. These will then serve as components of complex AI systems (Area SD1). During development, the system may discover ever more abstract concepts of support and stability, thereby autonomously forming a naive understanding of, e.g., elementary physical principles, a component of common sense and world knowledge (Area AI3). To address the problem of small datasets, we will also treat the learning of inductive biases from data as an interactive learning setting (Area SD3), where the machine has to justify its "causal" explanation to the user. Moreover, we will explore reinforcement learning, where the agent can discover factors through actions and interventions, observing their effects, as well as via simulators from Area AI3. In particular, we will make (explanatory) interactive as well as reinforcement learning aware of objects and relations (Area AI3) to make AI systems robust against changes to the environment (either visual or mechanistic changes) unseen in the training phase. Overall, we expect to generate new methodologies that will allow AI systems to learn and improve during tasks, apply previous skills and knowledge to new situations, incorporate innate system limits, and enhance safety in automated assignments.


World-aware AI

(AI3) "World-aware AI" investigates Systems AI for integrating general domain knowledge and achieving consistency by informing AI systems about the governing rules, constraints, and underlying mechanisms of a domain (e.g. cardiac activation mapping that accounts for the underlying wave propagation dynamics).

Programming and training AI systems currently requires intensive human preparatory work to obtain good data, including the integration of technical expertise to classify, evaluate, and describe high-quality data. Simulations based on knowledge, mechanisms, and models from the domain or other scientific disciplines offer an alternative. Simulations are an important tool in many scientific fields and generally provide a data source from which AI algorithms can learn complex relationships and skills and transfer them to reality. Conversely, AI algorithms can also accelerate and complete simulations. Moreover, we will go beyond simulation, grounding AI in computational theories of other scientific disciplines. In particular, we will investigate AI algorithms that interact with (partly human) agents and can optimize their behavior through this interaction. Such collaboration requires humans and AI systems to work in partnership to achieve a common goal by sharing a mutual understanding of each other's capabilities and respective roles. Cooperation on the human level requires the integration of learning, thinking, perception, communication, and interaction on the technical side, involving computational models of human behaviour.

Research Questions: To facilitate a better incorporation of AI into real-world and scientific systems, we will develop and investigate novel AI architectures and methods that "bake in" physics, mechanisms, and prior knowledge into Systems AI. We aim to show that embedding scientific mechanisms and prior knowledge into AI (e.g. via simulations, but also via methods from Area AI2) will help to overcome the challenges of sparse data and will facilitate the development of generative models that are causal and explanatory. This is typically tackled via regression (Gaussian processes and neural networks), but generative models (GANs, normalizing flows) and reinforcement learning could be alternatives.
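
To illustrate "baking in" a mechanism, here is a minimal physics-informed training sketch: a network u(t) is fit to a single sparse observation while also being penalized for violating a known toy ODE, du/dt = -k*u. The ODE, the constant k, and the data are stand-ins for real domain mechanisms such as wave propagation dynamics.

```python
import torch
import torch.nn as nn

k = 1.0  # illustrative mechanism parameter
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

t_data = torch.tensor([[0.0]])   # one sparse observation: u(0) = 1
u_data = torch.tensor([[1.0]])
t_phys = torch.linspace(0, 2, 50).unsqueeze(1).requires_grad_(True)

for _ in range(2000):
    opt.zero_grad()
    u = net(t_phys)
    # Residual of the governing mechanism du/dt + k*u = 0.
    du_dt = torch.autograd.grad(u.sum(), t_phys, create_graph=True)[0]
    loss_phys = ((du_dt + k * u) ** 2).mean()
    loss_data = ((net(t_data) - u_data) ** 2).mean()
    (loss_data + loss_phys).backward()
    opt.step()
```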

More efficient strategies are needed when data possess distinct structures, as in many real-world scenarios. In particular, objects, and the interactions between them, are not only at the core of Systems AI Programming (Area SD1) but also the foundations on which our understanding of the world is built. But what is an "object" in the first place? Abstractions centered around the perception and representation of objects or entities play a key role in building human-like AI, supporting high-level cognitive abilities like causal reasoning, object-centric exploration, and problem solving. Yet, many current ML methods focus on a less structured approach in which objects are only implicitly represented, posing a challenge for interpretability and the reuse of knowledge across tasks. Our first interest is in learning object representations in an unsupervised manner.
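
One established route to such unsupervised object representations is Slot Attention (Locatello et al., 2020), in which a fixed set of slots competes via softmax for input features; the simplified sketch below omits the feature extractor, decoder, and normalization layers of the full method.

```python
import torch
import torch.nn as nn

class SlotAttention(nn.Module):
    """Simplified Slot Attention: slots iteratively compete for input
    features, yielding object-centric representations unsupervised."""
    def __init__(self, num_slots=4, dim=64, iters=3):
        super().__init__()
        self.num_slots, self.iters, self.scale = num_slots, iters, dim ** -0.5
        self.slots_mu = nn.Parameter(torch.randn(1, 1, dim))
        self.slots_sigma = nn.Parameter(0.5 * torch.ones(1, 1, dim))
        self.to_q, self.to_k, self.to_v = (nn.Linear(dim, dim) for _ in range(3))
        self.gru = nn.GRUCell(dim, dim)

    def forward(self, inputs):                       # inputs: (B, N, dim)
        b, _, d = inputs.shape
        slots = self.slots_mu + self.slots_sigma * torch.randn(b, self.num_slots, d)
        k, v = self.to_k(inputs), self.to_v(inputs)
        for _ in range(self.iters):
            q = self.to_q(slots)
            attn = (q @ k.transpose(1, 2) * self.scale).softmax(dim=1)  # compete over slots
            attn = attn / attn.sum(dim=-1, keepdim=True)                # weights per slot
            updates = attn @ v                                          # (B, S, dim)
            slots = self.gru(updates.reshape(-1, d),
                             slots.reshape(-1, d)).view(b, self.num_slots, d)
        return slots

# Usage: 100 unlabeled feature vectors grouped into 4 slots.
slots = SlotAttention()(torch.randn(2, 100, 64))
```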


Members of the Foundations of AI Methods mission

3. Life Sciences

The Life Sciences Tasks mission takes up the Systems AI paradigm developed in the other two missions as a challenge in one of the most important application areas of AI. Powerful systems of the second AI wave are already changing medicine on many fronts. However, second wave approaches in the life sciences are largely based on carefully collected and purpose-bound data, while newly emerging data for deep learning are often less structured and designed for a different purpose, and different AI methods have to be combined to achieve indirect supervision. 3AI can provide important impetus for new research approaches that go beyond targeted, hypothesis-based approaches and special-purpose data.


AI for Medicine Beyond Imaging

(LS1) "Systems AI for Medicine Beyond Image Analysis" will investigate whether Systems AI, reasoning on medical tasks and findings at a meta-level and in the light of complex background knowledge (e.g., ontologies, knowledge graphs, and clinical guidelines), can learn to see cross-connections between medical findings better than second wave AIs in a statistically valid way, under the constraints of privacy, confidentiality, and fairness.

The current state of the art in AI and in particular in machine learning for medicine focuses primarily on the analysis of images from various modalities (MRI, X-ray, ultrasound, microscopic images from pathology, etc.). Another focus has been on omics data, which have been analyzed by machine learning methods for more than twenty years. Despite some progress with automated AI, network architectures and topologies are typically still tailored manually, for each and every new problem. Therefore, we aim for AI approaches going beyond image analysis and statistical quantitative biomedicine.

Research Questions: In line with the main topics of the third wave of AI, we want to explore reasoning on medical tasks and findings at a meta-level and in the light of complex background knowledge (e.g., ontologies, knowledge graphs, and clinical guidelines). The topics are discovery agents that learn to see cross-connections between findings in a statistically valid way, based on Areas SD2 and AI1, as well as drawing high-level analogies, off-policy reinforcement learning, process modeling, and data integration at the level of findings and knowledge (Areas AI2 and AI3), beyond integrating views on different data types in deep neural networks. The challenge lies, amongst others, in the design and development of what we call discovery agents: autonomous agents that analyze available data sources (public, private, and proprietary), pick suitable machine learning tools, extract findings, relate findings to the design of new in-silico experiments, conduct experiments, query databases to interpret results, find other, prototypical instances that will help to explain the further course of events, and so forth. Discovery agents are to mimic the way humans analyze biomedical data, and hence require Areas SD1-3 and AI1-3.
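
Purely to make this loop tangible, here is a toy, runnable caricature of a discovery agent: it scans stand-in data sources, selects an ML tool by validation score, and records findings; the real capabilities (experiment design, knowledge-base queries, analogy drawing) are deliberately omitted.

```python
from sklearn.datasets import load_breast_cancer, load_wine
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Stand-ins for heterogeneous biomedical data sources and ML tools.
sources = {"cohort_a": load_breast_cancer, "cohort_b": load_wine}
tools = [LogisticRegression(max_iter=5000), DecisionTreeClassifier(max_depth=3)]

findings = []
for name, loader in sources.items():
    X, y = loader(return_X_y=True)
    # "Pick suitable machine learning tools": select by validation score.
    scores = {type(t).__name__: cross_val_score(t, X, y, cv=3).mean() for t in tools}
    best_tool, best_score = max(scores.items(), key=lambda kv: kv[1])
    # "Extract findings": record what worked where, for interpretation.
    findings.append((name, best_tool, round(best_score, 3)))

print(findings)  # these findings would next drive new in-silico experiments
```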


Improved Cancer Diagnosis

(LS2) "Systems AI for Improved Cancer Diagnosis" investigates whether Systems AI can increase the precision of diagnostics and clinical decision making for solid cancers of the prostate and the bladder, based on virtual slides along with associated metadata. We will also investigate Automated AI and address the regulatory, legal, and ethical challenges associated with the collection, sharing, and mining of the virtual slides.

The application of AI-based solutions in cancer towards more objective and reproducible cancer subclassification, especially for cases with heterogeneous histomorphology, will turn pathology from a subjective into a more accurate and predictive diagnostic discipline. We want to build on first proofs of concept to advance AI to the next level and to make it usable in the clinic. The aim is to achieve a more precise stratification of patients with cancer. Besides histopathological imaging and molecular omics data, radiology is another important pillar of everyday clinical diagnostics. Radiomic analyses represent a further new strategy to evaluate cancer in a quantitative and computational manner beyond visual perception. The ability of radiomics to support diagnostic decision making has been shown in numerous cancer entities; yet, the understanding of suitable features and classification algorithms is still limited. It also remains unclear how important it is to integrate digital histopathological slide information with clinical assessment categories, radiomic features, and molecular data when comparing predictive models for the differentiation of cancer lesions.

Research Questions: We will address the central questions arising from the above using Systems AI, namely, (i) whether current AI methods can predict relevant molecular changes, patient outcome, and cancer heterogeneity solely from conventional histopathology slides (Area AI1), (ii) whether multimodal integration of various data types (e.g. images and molecular data) (Areas SD2 and AI1) and collaborative AI (Areas SD3 and AI2) improve diagnostic accuracy, and (iii) how AI systems developed with the AI programming framework of 3AI (Area SD1) can be translated into the clinic for patients' and medics' benefit (Area AI2). We will define processes for data quality assurance and normalization of image quality and develop a legal and ethical framework. The main objective is to advance the work of health practitioners, leading to highly personalized diagnostics and patient management based on Systems AI. The initial focus will be on solid cancers of the bladder and prostate, which could serve as a blueprint for other relevant diagnostic use cases.


Precision Medicine

(LS3) “Systems AI for Precision Medicine” considers structured clinical-pathological findings, annotated digital histological images, molecular data as well as known (physical) interactions between gene alterations and drugs. We will investigate whether Systems AI can improve precision medicine by providing predictions that are more specific and better contextualized for each patient.

Despite current AI methods' ability to detect complex, non-linear relationships, AI-based predictions about the distant future have to become significantly more precise before they can be used in practice, for example for an improved categorization of patients and a classification of diseases in medicine (allowing the early detection of deterioration in the course of chronically ill patients or a better subtyping of diseases). In general, improved prediction models are important components of digital assistance systems, for example in clinical decision support for diagnostics and therapy. This area will therefore improve the generalization capabilities of machine learning methods (especially deep learning) in small and big data situations, so that classification and prediction models, especially in medicine and health care, allow a better view into the future.

Research Questions: Although we envision precision medicine as something exploratory rather than targeted, it has to be guided and monitored in a way that statistical validity is always ensured, which will be one of the major research challenges in this endeavor. An agent has to incorporate statistical knowledge, not just about classical statistics, but also about modern ML methods, with a specific focus on the biomedical domain (Area AI3). For that, methods for debiasing datasets (related to methods for fair machine learning) will play a role and need to be employed. To focus the process despite its exploratory character, background knowledge in the form of medical ontologies, fundamental biological knowledge, and knowledge graphs will be helpful to restrict the types of entities that should be considered and, in turn, improve the quality of the predictions. We will investigate how the statistical reliability of biomedical findings can be incorporated into formal knowledge representations like knowledge graphs.
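
As one concrete debiasing building block of the kind alluded to above, the following sketch implements reweighing (Kamiran & Calders, 2012): each (group, label) cell is weighted so that group membership and label become statistically independent. The toy arrays are illustrative.

```python
import numpy as np

def reweigh(groups, labels):
    """Weight w(g, y) = P(g) * P(y) / P(g, y), so that the weighted
    data shows no dependence between group membership and label."""
    groups, labels = np.asarray(groups), np.asarray(labels)
    w = np.empty(len(labels), dtype=float)
    for g in np.unique(groups):
        for y in np.unique(labels):
            cell = (groups == g) & (labels == y)
            if cell.any():
                expected = (groups == g).mean() * (labels == y).mean()
                w[cell] = expected / cell.mean()  # >1 for under-represented cells
    return w

# Usage: group 0 rarely has label 1 here, so that cell gets upweighted.
weights = reweigh(groups=[0, 0, 1, 1, 1], labels=[1, 0, 1, 1, 0])
print(weights)
```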

We will devise novel methods of graph embeddings that preserve this statistical information and will hence lead to more accurate answers to queries on the knowledge graph, e.g. queries relating treatment plans with disease progression or patient outcome. A further development of deep probabilistic programming frameworks (Area SD1) will be required to handle strongly related facts en bloc, and not as simple disconnected facts, to both speed up the process of making the prediction and improve its quality. In the overall process, representations from probabilistic logics, classical ideas like blackboard architectures, and ideas from statistical-relational learning and programming will be instrumental.
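
To indicate what such a graph embedding looks like before statistical reliability is added, here is a minimal TransE sketch (Bordes et al., 2013) trained with a margin loss over corrupted triples; entity counts and the triples themselves are illustrative.

```python
import torch
import torch.nn as nn

class TransE(nn.Module):
    """Embeds a knowledge graph so that head + relation ≈ tail;
    the score of a triple is the distance ||h + r - t||."""
    def __init__(self, n_entities, n_relations, dim=32):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)

    def score(self, h, r, t):
        return (self.ent(h) + self.rel(r) - self.ent(t)).norm(dim=-1)

model = TransE(n_entities=100, n_relations=10)
# One true triple (e.g. drug --targets--> gene) and a corrupted tail.
h, r, t = torch.tensor([0]), torch.tensor([1]), torch.tensor([2])
t_neg = torch.tensor([5])
# Margin ranking loss: true triples should score lower than corrupted ones.
loss = torch.relu(1.0 + model.score(h, r, t) - model.score(h, r, t_neg)).mean()
loss.backward()
```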


Members of the Life Sciences Tasks mission

People

Principal Investigators

Junior Research Group Leaders

Research Assistants

Science Management and Administration

Institutions

Publications


2023

  • Diminishing Return of Value Expansion Methods in Model-Based Reinforcement Learning; Daniel Palenicek, Michael Lutter, Joao Carvalho, Jan Peters; ICLR, 2023
  • Pseudo-Likelihood Inference; Theo Gruner, Boris Belousov, Fabio Muratore, Daniel Palenicek, Jan Peters; NeurIPS, 2023
  • A typology for exploring the mitigation of shortcut behavior; Felix Friedrich, Wolfgang Stammer, Patrick Schramowski, Kristian Kersting; Nature Machine Intelligence, 2023
  • One Explanation does not fit XIL; Felix Friedrich, David Steinmann, Kristian Kersting; ICLR Tiny Papers, 2023
  • Revision Transformers: Instructing Language Models to Change their Values; Felix Friedrich, Wolfgang Stammer, Patrick Schramowski, Kristian Kersting; ECAI, 2023
  • MultiFusion: Fusing Pre-Trained Models for Multi-Lingual, Multi-Modal Image Generation; Marco Bellagente, Hannah Teufel, Manuel Brack, Björn Deiseroth, Felix Friedrich, Constantin Eichenberg, Andrew Dai, Robert Baldock, Souradeep Nanda, Koen Oostermeijer, Andres Felipe Cruz-Salinas, Patrick Schramowski, Kristian Kersting, Samuel Weinbach; NeurIPS, 2023
  • SEGA: Instructing Diffusion using Semantic Dimensions; Manuel Brack, Felix Friedrich, Dominik Hintersdorf, Lukas Struppek, Patrick Schramowski, Kristian Kersting; NeurIPS, 2023
  • Learning sparse graphon mean field games, Christian Fabian, Kai Cui, Heinz Koeppl; International Conference on Artificial Intelligence and Statistics, 2023
  • Multi-Agent Reinforcement Learning via Mean Field Control: Common Noise, Major Agents and Approximation Properties; Kai Cui, Christian Fabian, Heinz Koeppl; arXiv preprint, 2023
  • UAV Swarms for Joint Data Ferrying and Dynamic Cell Coverage via Optimal Transport Descent and Quadratic Assignment; Kai Cui, L Baumgärtner, MB Yilmaz, M Li, C Fabian, B Becker, L Xiang, …; IEEE 48th Conference on Local Computer Networks (LCN), 2023
  • Scalable task-driven robotic swarm control via collision avoidance and learning mean-field control; Kai Cui, Mengguang Li, Christian Fabian, Heinz Koeppl; IEEE International Conference on Robotics and Automation (ICRA), 2023
  • Learning Decentralized Partially Observable Mean Field Control for Artificial Collective Behavior; Kai Cui, Sascha Hauck, Christian Fabian, Heinz Koeppl; arXiv preprint, 2023
  • Like a Good Nearest Neighbor: Practical Content Moderation with Sentence Transformers; Luke Bates, Iryna Gurevych; arXiv preprint, 2023
  • Lessons learned from a Citizen Science project for Natural Language Processing; Jan-Christoph Klie, Ji-Ung Lee, Kevin Stowe, Gözde Gül Şahin, Nafise Sadat Moosavi, Luke Bates, Dominic Petrak, Richard Eckart de Castilho, Iryna Gurevych; EACL, 2023
  • A taxonomy of anti-vaccination arguments from a systematic literature review and text modelling; Angelo Fasce, Philipp Schmid, Dawn L. Holford, Luke Bates, Iryna Gurevych, Stephan Lewandowsky; Nature Human Behaviour, 2023
  • Towards Discriminative and Transferable One-Stage Few-Shot Object Detectors; Karim Guirguis, Mohamed Abdelsamad, George Eskandar, Ahmed Hendawy, Matthias Kayser, Bin Yang, Juergen Beyerer; IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023
  • AlphaZe**: AlphaZero-like baselines for imperfect information games are surprisingly strong; Jannis Blüml, Johannes Czech, Kristian Kersting; Frontiers in Artificial Intelligence 6, 2023
  • OCAtari: Object-Centric Atari 2600 Reinforcement Learning Environments; Quentin Delfosse, Jannis Blüml, Bjarne Gregori, Sebastian Sztwiertnia, Kristian Kersting; arXiv preprint, 2023
  • Representation Matters: The Game of Chess Poses a Challenge to Vision Transformers; Johannes Czech, Jannis Blüml, Kristian Kersting; arXiv preprint, 2023
  • FunnyBirds: A Synthetic Vision Dataset for a Part-Based Analysis of Explainable AI Methods; Robin Hesse, Simone Schaub-Meyer, Stefan Roth; arXiv preprint, 2023
  • Content-Adaptive Downsampling in Convolutional Neural Networks; Robin Hesse, Simone Schaub-Meyer, Stefan Roth; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2023
  • Entropy-driven Unsupervised Keypoint Representation Learning in Videos; Ali Younes, Simone Schaub-Meyer, Georgia Chalvatzaki; International Conference on Machine Learning (ICML), 2023
  • αILP: thinking visual scenes as differentiable logic programs; Hikaru Shindo, Viktor Pfanschilling, Devendra Singh Dhami, Kristian Kersting; Machine Learning 112 (5), 2023
  • Causal parrots: Large language models may talk causality but are not causal; Moritz Willig, Matej Zečević, Devendra Singh Dhami, Kristian Kersting; preprint, 2023
  • Queer In AI: A Case Study in Community-Led Participatory AI; Organizers of Queer in AI; Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2023

2022

  • Revisiting Model-based Value Expansion; Daniel Palenicek, Michael Lutter, Jan Peters; Multi-disciplinary Conference on Reinforcement Learning and Decision Making, 2022
  • Interactively Providing Explanations for Transformer Language Models; Felix Friedrich, Patrick Schramowski, Christopher Tauchmann, Kristian Kersting; International Conference on Hybrid Human-Artificial Intelligence (HHAI), 2022
  • Mean Field Games on Weighted and Directed Graphs via Colored Digraphons; Christian Fabian, Kai Cui, Heinz Koeppl; IEEE Control Systems Letters 7, 2022
  • Efficient few-shot learning without prompts; Lewis Tunstall, Nils Reimers, Unso Eun Seo Jo, Luke Bates, Daniel Korat, Moshe Wasserblat, Oren Pereg; arXiv preprint, 2022
  • CFA: Constraint-based Finetuning Approach for Generalized Few-Shot Object Detection; Karim Guirguis, Ahmed Hendawy, George Eskandar, Mohamed Abdelsamad, Matthias Kayser, Juergen Beyerer; Workshop on Learning with Limited Labelled Data for Image and Video Understanding (L3D-IVU), 2022
  • Efficient Feature Extraction for High-resolution Video Frame Interpolation; Moritz Nottebaum, Stefan Roth, Simone Schaub-Meyer; British Machine Vision Conference, 2022
  • Flow: Joint Semantic and Style Editing of Facial Images; Krishnakant Singh, Simone Schaub-Meyer, Stefan Roth; British Machine Vision Conference, 2022
  • Long-Term Visitation Value for Deep Exploration in Sparse-Reward Reinforcement Learning; Simone Parisi, Davide Tateo, Maximilian Hensel, Carlo D’Eramo, Jan Peters, Joni Pajarinen; Algorithms 15, 2022
  • Curriculum reinforcement learning via constrained optimal transport; Pascal Klink, Haoyi Yang, Carlo D'Eramo, Joni Pajarinen, Jan Peters; ICML, 2022
  • DP-CTGAN: Differentially private medical data generation using CTGANs; Mei Li Fang, Devendra Singh Dhami, Kristian Kersting; International Conference on Artificial Intelligence in Medicine, 2022
  • Neural-probabilistic answer set programming; Arseni Skryagin, Wolfgang Stammer, Daniel Ochs, Devendra Singh Dhami, Kristian Kersting; Proceedings of the International Conference on Principles of Knowledge Representation and Reasoning (KR), 2022
  • Can Foundation Models Talk Causality?; Moritz Willig, Matej Zečević, Devendra Singh Dhami, Kristian Kersting; arXiv preprint, 2022
  • Hanf: Hyperparameter and neural architecture search in federated learning; Jonas Seng, Pooja Prasad, Devendra Singh Dhami, Kristian Kersting; arXiv preprint, 2022
  • Probing for correlations of causal facts: Large language models and causality; Moritz Willig, Matej Zečević, Devendra Singh Dhami, Kristian Kersting; arXiv preprint, 2022
  • Unified Probabilistic Deep Continual Learning through Generative Replay and Open Set Recognition; Martin Mundt, Iuliia Pliushch, Sagnik Majumder, Yongwon Hong, Visvanathan Ramesh; Journal of Imaging, Special Issue Continual Learning in Computer Vision …, 2022
  • CLEVA-Compass: A Continual Learning EValuation Assessment Compass to Promote Research Transparency and Comparability; Martin Mundt, Steven Lang, Quentin Delfosse, Kristian Kersting; International Conference on Learning Representations (ICLR), 2022
  • When deep classifiers agree: Analyzing correlations between learning order and image statistics; Iuliia Pliushch, Martin Mundt, Nicolas Lupp, Visvanathan Ramesh; ECCV, 2022
  • Return of the normal distribution: Flexible deep continual learning with variational auto-encoders; Yongwon Hong, Martin Mundt, Sungho Park, Youngjung Uh, Hyeran Byun; Neural Networks 154, 2022
  • A wholistic view of continual learning with deep neural networks: Forgotten lessons and the bridge to active and open world learning; Martin Mundt, Yongwon Hong, Iuliia Pliushch, Visvanathan Ramesh; Neural Networks, 2022

News

Reasonable Artificial Intelligence

Over the last decade, deep learning (DL) has led to groundbreaking progress in artificial intelligence, but current DL-based AI systems are unreasonable in many ways.