TRR 318-2 - Project C03: Interpretable machine learning—Explaining change
C03 develops methods to explain how and why machine learning models adapt over time. Our project makes AI systems more transparent in dynamic settings by designing efficient, expressive explanations. This includes novel approaches for real-time explanation and higher-order Shapley interactions. In the second funding period, we aim for ex-ante ...
Duration: 01/2026 - 06/2029
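The Shapley framework underlying C03's work attributes a model's output to individual features by averaging each feature's marginal contribution over all coalitions of the remaining features. A minimal, illustrative sketch of the classic (first-order) Shapley value on a toy two-player game — not C03's method, and the game values are invented for illustration:

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values via subset enumeration.

    players: list of player (feature) identifiers
    value:   function mapping a frozenset of players to a payoff
    """
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for r in range(n):
            # Weight of a coalition S of size r that excludes player i
            weight = factorial(r) * factorial(n - r - 1) / factorial(n)
            for S in combinations(others, r):
                S = frozenset(S)
                # Marginal contribution of i when joining coalition S
                total += weight * (value(S | {i}) - value(S))
        phi[i] = total
    return phi

# Toy game: features "a" and "b" jointly yield 10, "a" alone 6, "b" alone 2.
v = lambda S: {frozenset(): 0, frozenset({"a"}): 6,
               frozenset({"b"}): 2, frozenset({"a", "b"}): 10}[frozenset(S)]
print(shapley_values(["a", "b"], v))  # {'a': 7.0, 'b': 3.0}
```

The exact computation enumerates all 2^n coalitions, which is why efficient approximations — and, as in C03, higher-order interaction indices that attribute payoff to groups of features rather than single features — are active research topics.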
TRR 318-2 - Project C02: Interactive learning of explainable, situation-adapted decision models
C02 aims to optimize ML-based decision support systems and human-AI interaction in order to improve decision-making. We focus on explainable and transparent models that take the decision context into account, concentrating on prescriptive decision-making with partial feedback, where we not only consider the long-term impact of decisions but also ...
Duration: 01/2026 - 06/2029
TRR 318-2 - Project C01: Explanations for healthy distrust in large language models
Since ML models have limitations, the human ability to question and distrust their decisions is crucial for human-AI interaction. C01 established a common terminology for distrust, demonstrated that distrust is not easily fostered, and developed novel machine learning algorithms to identify and explain model uncertainty. We will now develop ...
Duration: 01/2026 - 06/2029
TRR 318-2 - Project B07: Communicative practices of requesting information and explanation from LLM-based agents
The project investigates how users engage with LLM-based agents through prompting practices for information requests and explanations, focusing on ongoing sense-making and calibration processes as “situated inquiries.” Users often begin with an unclear understanding of their knowledge gap, which must be explored and refined through interaction. Our ...
Duration: 01/2026 - 06/2029
TRR 318-2 - Project B06: Ethics and normativity of explainable AI
B06 investigates the normative purposes of XAI. In the first funding period, we established that there are many different normative grounds for XAI. To assess them, it is necessary to take the organizational context of XAI into account. To that end, media studies will clarify the organizational context in which XAI is embedded, where this ...
Duration: 01/2026 - 06/2029
TRR 318-2 - Project B05: Co-constructing explainability with an interactively learning robot
The research focus of B05 is the double loop of training and understanding. In the training loop, the robot continuously adapts and refines its movements based on user input. The understanding loop allows the human trainer to develop a deeper comprehension of the robot's learning mechanisms through real-time interaction and explanations provided during ...
Duration: 01/2026 - 06/2029
TRR 318-2 - Project B01: A dialog-based approach to explaining machine-learning models
B01 explores how dialog-based explanations of machine learning (ML) models function in real-world organizational contexts, accounting for organizational structures, roles, and communication styles, focusing on the predictive policing domain. An experimental evaluation showed that dialog-based explanations significantly enhance users’ understanding ...
Duration: 01/2026 - 06/2029
TRR 318-2 - Project A06: Explaining the multimodal display of stress in clinical explanations
We investigated the influence of stress and mental health conditions in explanatory settings. We determined how signals related to understanding differ intra-individually under stress and inter-individually for people with social interaction conditions. In the second phase, we will develop techniques to train clinicians to detect signs of stress ...
Duration: 01/2026 - 06/2029
TRR 318-2 - Project A04: Co-constructing duality-enhanced explanations
Technical artifacts can be explained via their Architecture (e.g., structure and mechanisms) and their Relevance (e.g., functions and goals)—summarized as Duality. We will analyze human-human explanations of digital artifacts with respect to duality-related monitoring and multimodal scaffolding and how it is tailored to EEs’ social roles, and ...
Duration: 01/2026 - 06/2029
TRR 318-2 - Project A03: Co-constructing explanations between AI-explainer and human explainee under arousal or nonarousal
We investigate how arousal affects the processing of explanations. Arousal can arise from the task, from contextual factors, or from the explanation itself. Our goal is to develop an interactive system that co-constructs an explanation enabling the explainee to understand XAI explanations when over- or under-aroused. Both human and the ...
Duration: 01/2026 - 06/2029