In our digitized society, algorithmic approaches (such as machine learning) are rapidly increasing in complexity, making it difficult for citizens to understand the assistance these systems provide and to accept the decisions they suggest. In response to this societal challenge, research has begun to push forward the idea that algorithms should be explainable, or even able to explain their own output. This has led to lively development of systems that provide explanations in an intelligent way (explainable AI, or XAI). In our interdisciplinary initiative “Constructing Explainability”, which joins forces from the Universities of Paderborn (UPB) and Bielefeld (UBI) across different disciplines (Linguistics, Psychology, Media Studies, Sociology, Economics, and Computer Science), we critically discuss existing approaches and develop a new, co-constructive view on explanations. Our approach promotes humans’ active and mindful participation in sociotechnical settings with AI technologies, thus increasing their informational sovereignty. Our goal is to extend current research in (computer) science with new perspectives from other disciplines and to offer new answers to the aforementioned societal challenge.
URL of this publication: https://ieeexplore.ieee.org/document/9292993
Members of this initiative: Heike M. Buhl (UPB), Hendrik Buschmeier (UBI), Philipp Cimiano (UBI), Elena Esposito (UBI), Angela Grimminger (UPB), Reinhold Häb-Umbach (UPB), Barbara Hammer (UBI), Ilona Horwath (UPB), Eyke Hüllermeier (UPB), Friederike Kern (UBI), Stefan Kopp (UBI), Tobias Matzner (UPB), Axel-Cyrille Ngonga Ngomo (UPB), Katharina J. Rohlfing (UPB), Ingrid Scharlau (UPB), Carsten Schulte (UPB), Kirsten Thommes (UPB), Anna-Lisa Vollmer (UBI), Henning Wachsmuth (UPB), Petra Wagner (UBI), Britta Wrede (UBI)
Contact persons: Katharina J. Rohlfing, firstname.lastname@example.org (UPB) and Philipp Cimiano, email@example.com (UBI)