TRR 318-2 - Project C01: Explanations for healthy distrust in large language models
Overview
Since ML models have limitations, the human ability to question and distrust their decisions is crucial for human-AI interaction. C01 established a common terminology for distrust, demonstrated that distrust is not easily fostered, and developed novel machine learning algorithms to identify and explain model uncertainty. We will now develop interventions that foster healthy distrust in the domain of LLM-supported academic writing, using a novel type of perplexing explanations. This will provide the TRR with a tool to automatically generate explanations that support human agency.
Key Facts
- Project type: Other purpose (Sonstiger Zweck)
- Project duration: 01/2026 - 06/2029