
TRR 318-2 - Project C01: Explanations for healthy distrust in large language models

Since ML models have limitations, the human ability to question and distrust their decisions is crucial for human-AI interaction. C01 established a common terminology for distrust, demonstrated that distrust is not easily fostered, and developed novel machine learning algorithms to identify and explain model uncertainty. We will now develop ...

Duration: 01/2026 - 06/2029

TRR 318-2 - Project B07: Communicative practices of requesting information and explanation from LLM-based agents

The project investigates how users engage with LLM-based agents through prompting practices for information requests and explanations, focusing on ongoing sense-making and calibration processes as “situated inquiries.” Users often begin with an unclear understanding of their knowledge gap, which must be explored and refined through interaction. Our ...

Duration: 01/2026 - 06/2029

TRR 318-2 - Project B06: Ethics and normativity of explainable AI

B06 investigates the normative purposes of XAI. In the first funding period, we established that there are many different normative grounds for XAI. Assessing them requires taking the organizational context of XAI into account. To that end, media studies will clarify the organizational context in which XAI is embedded, where this ...

Duration: 01/2026 - 06/2029

TRR 318-2 - Project B05: Co-constructing explainability with an interactively learning robot

The research focus of B05 is the double loop of training and understanding. In the training loop, the robot continuously adapts and refines its movements based on user input. The understanding loop allows the human trainer to develop a deeper comprehension of the robot's learning mechanisms through real-time interaction and explanations provided during ...

Duration: 01/2026 - 06/2029

TRR 318-2 - Project B01: A dialog-based approach to explaining machine-learning models

B01 explores how dialog-based explanations of machine learning (ML) models function in real-world organizational contexts, accounting for organizational structures, roles, and communication styles, with a focus on the predictive policing domain. An experimental evaluation showed that dialog-based explanations significantly enhance users’ understanding ...

Duration: 01/2026 - 06/2029

TRR 318-2 - Project A06: Explaining the multimodal display of stress in clinical explanations

We investigated the influence of stress and mental health conditions in explanatory settings. We determined how signals related to understanding differ intra-individually under stress and inter-individually for people with social interaction conditions. In the second phase, we will develop techniques to train clinicians to detect signs of stress ...

Duration: 01/2026 - 06/2029

TRR 318-2 - Project A04: Co-constructing duality-enhanced explanations

Technical artifacts can be explained via their Architecture (e.g., structure and mechanisms) and their Relevance (e.g., functions and goals), summarized as Duality. We will analyze human-human explanations of digital artifacts with respect to duality-related monitoring and multimodal scaffolding, and how these are tailored to explainees' (EEs') social roles, and ...

Duration: 01/2026 - 06/2029

TRR 318-2 - Project A03: Co-constructing explanations between AI-explainer and human explainee under arousal or nonarousal

We investigate how arousal affects the processing of explanations. Arousal can arise from the task, from contextual factors, or from the explanation itself. Our goal is to develop an interactive system that co-constructs explanations, enabling the explainee to understand XAI explanations even when they are over- or under-aroused. Both the human and the ...

Duration: 01/2026 - 06/2029

AI-based language models at Universität Paderborn: A building block on the path to an AI strategy

The project lays a further building block on the path towards a lasting AI strategy. To this end, Universität Paderborn is developing its existing AI service AI-Chat.upb.de to production maturity and hardening the system for larger numbers of users. The inference infrastructure will be expanded so that AI can be used broadly in data-protection-sensitive as well as ...

Duration: 01/2026 - 12/2027

TRR 318-2 - Constructing Explainability

The scope of the EU right to explanation has fueled the need to improve eXplainable Artificial Intelligence capabilities aimed at strengthening the rights of individuals affected by AI-based recommendations. Among other purposes, explanations serve the right to contest an AI output and protect humans from being left without control. However, ...

Duration: 01/2026 - 06/2029