Today's most advanced machine learning algorithms are often incomprehensible to humans, including those who designed them. How can we produce understandable explanations of their processes? This is currently one of the crucial questions for artificial intelligence research. Should we be able to explain how machines work, or should machines learn to explain themselves? The ambiguity in our conference title captures the coexistence of two possibilities: machines explaining themselves or humans explaining machines – or perhaps both at the same time.
If explaining machines is to have a social and political impact, it is not enough for them to be understandable to computer science experts. Explaining machines must reach socially situated and diverse audiences. The issues are complex and call for multiple skills: computer scientists who design machines must collaborate with social scientists who study understanding (and the lack thereof), the process of explanation, and its conditions. Now more than ever, the challenge of artificial intelligence is as much social as technological, and our conference addresses it by stimulating debate and presenting the variety of perspectives and insights the social sciences have developed in the context of XAI.
• Cansu Canca, AI Ethics Lab
• Dominique Cardon, médialab Sciences Po
• Philipp Cimiano, Bielefeld University
• Elena Esposito, Bielefeld University and University of Bologna
• Mireille Hildebrandt, Radboud University Nijmegen and Free University of Brussels
• Tobias Matzner, Paderborn University
• Frank Pasquale, Brooklyn Law School
• Bernhard Rieder, University of Amsterdam
• Antoinette Rouvroy, University of Namur
• David Weinberger, Berkman Klein Center for Internet & Society at Harvard University
Participation is free of charge. Please sign up by email: email@example.com.
Participants are encouraged to read the papers, which will be sent out in advance of the conference.