When artificial intelligence enters into dialogue


Should a bank give someone a loan? Where should patrol officers go to prevent a crime? And should a brain tumour be operated on or treated conservatively? Artificial intelligence can provide reliable answers to such questions in many cases. However, it usually remains unclear how the system arrived at its result. Researchers at the universities of Bielefeld and Paderborn are working in a sub-project of the Collaborative Research Centre/Transregio "Constructing Explainability" (SFB/TRR 318) to make such results explainable.

Even if artificial intelligence (AI) provides astonishingly reliable answers in many cases, those who use it often do not blindly trust the recommendations but would like to understand how the AI arrived at an assessment. Simple explanations, however, are rarely enough: "Many variables are interrelated in a complex way, and there are interactions between them," says Prof. Dr Axel-Cyrille Ngonga Ngomo from the Institute of Computer Science at Paderborn University. Together with Prof. Dr Philipp Cimiano from the CITEC Research Institute at Bielefeld University and Prof. Dr Elena Esposito from the Faculty of Sociology at Bielefeld University, he is investigating what kind of dialogue system an AI needs in order to explain its answers to a human.

AI-generated explanations require specialised knowledge

Part of the challenge: the reason a person is classified as not creditworthy, for example, need not lie in the individual variables taken on their own, but can result from their interaction. How can a machine meaningfully explain how it arrived at its result in such a case? Such explanations often presuppose a great deal of knowledge about how an AI works, and they would often be far too complex even for AI experts if the entire process had to be understood.

So how can understanding be constructed better? "One way is to work with a counterfactual approach instead of deep explanations," says Ngonga. This means explaining something via its opposite: in this case, a chatbot would illustrate that it would have made a different decision if a few crucial details had been different. In the loan example, the loan would have been granted if the applicant were not already paying off a car.
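To make the idea concrete, here is a minimal sketch of a counterfactual explanation of this kind, written in Python. The toy credit model, the feature names and the single-feature search are illustrative assumptions only, not the system being developed in the project.

```python
# Minimal counterfactual-explanation sketch. The scoring rule and features
# are hypothetical; they only illustrate that interactions between variables
# (an existing car loan combined with a lower income) can drive the decision.

def approve_loan(applicant: dict) -> bool:
    score = applicant["income"] / 1000
    if applicant["has_car_loan"]:
        score -= 15                      # an existing car loan weighs heavily ...
        if applicant["income"] < 40_000:
            score -= 10                  # ... especially combined with a lower income
    return score >= 30

def counterfactual(applicant: dict, candidate_changes: dict):
    """Find a single-feature change that flips the model's decision."""
    original = approve_loan(applicant)
    for feature, new_value in candidate_changes.items():
        changed = {**applicant, feature: new_value}
        if approve_loan(changed) != original:
            return {feature: (applicant[feature], new_value)}
    return None

applicant = {"income": 38_000, "has_car_loan": True}
print(approve_loan(applicant))                       # False: loan refused
print(counterfactual(applicant, {"has_car_loan": False, "income": 65_000}))
# {'has_car_loan': (True, False)} -> the loan would have been granted
# if the applicant were not already paying off a car.
```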

New system to take into account previous communication with the questioner

Although this does not give users insight into the complete decision-making process, it lets them understand the AI's recommendation without having to fully comprehend how it works. "To this end, we are pursuing approaches from co-construction, in which the aim, with a machine as a partner, is not only to exchange explanations but also to meaningfully convey how these explanations came about," says Elena Esposito.

"Such explainable AI would be interesting for many areas - not only for banks, but also for insurance companies, the police, medical personnel and many other areas, for example," adds Esposito. In the project, the researchers are conducting basic research on how such explanations can be translated into a neutral language. To do this, they are also looking at existing systems, but basically they want to develop a completely new system. It is important that this system adapts to the users and their requirements: For example, it should be able to infer the context on the basis of certain signals and indications.

The researchers plan to first develop a system that can be used in radiology. Answers to the same question could then differ depending, for example, on whether it is asked by a physician or by nursing staff. They could also depend on where a question is asked from and whether there has already been communication with this person in the past. In this way, meaningful explanations are generated and repetition in the answers is avoided. "What is important to the questioner can vary a great deal," says Philipp Cimiano.
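As an illustrative sketch of that idea (the roles, the stored explanations and the selection logic are assumptions for illustration, not the project's actual system), a dialogue component could tailor its answer to the asker's role and remember what it has already explained:

```python
# Hypothetical sketch: explanations keyed by (topic, role), plus a per-asker
# memory so that earlier communication shapes the next answer.

EXPLANATIONS = {
    ("finding", "physician"): "The highlighted region drove the classification; "
                              "comparable treated cases are available.",
    ("finding", "nursing"):   "The system has flagged an area that the physician "
                              "will review before deciding on treatment.",
}

class ExplanationDialogue:
    def __init__(self):
        self.history = {}                        # asker -> topics already explained

    def explain(self, asker: str, role: str, topic: str) -> str:
        seen = self.history.setdefault(asker, set())
        text = EXPLANATIONS.get((topic, role), "No explanation available for this role.")
        if topic in seen:
            return "As discussed earlier: " + text   # avoid repeating the full answer
        seen.add(topic)
        return text

dialogue = ExplanationDialogue()
print(dialogue.explain("dr_meyer", "physician", "finding"))   # full explanation
print(dialogue.explain("dr_meyer", "physician", "finding"))   # shortened follow-up
```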

Artificial intelligence as an advisor

In cooperation with the Clinic for Paediatric Surgery and Paediatric and Adolescent Urology at the Protestant Hospital Bethel, the scientists in the research project want to train their system using X-ray images. "Afterwards, we will analyse the test protocol and examine what kind of information the questioners need," says Cimiano. Doctors could then, for example, ask the system to mark the brain region that is relevant for the result. "They could also ask whether there are images of similar tumours that have been treated in this way. Ultimately, the main thing will be to justify a treatment proposal and explain it in a way that makes sense."

In the long run, such systems for explaining decisions could play a role not only for AI applications but also for robots. "Robots use a wide variety of models to make predictions, and they classify a wide variety of situations," says Cimiano. For robots, a dialogue system would have to be adapted to their particular conditions. "Unlike chatbots, they move around in physical space within a given situation," he says. "For that, they need not only to grasp contexts, but also to be able to judge what kind of information is relevant and how deep their explanations should go."

In dialogue with the AI system

The scientists in project B01 of Transregio 318 are working on an artificial intelligence (AI) system that understands questions posed in natural language and can answer them appropriately in dialogue. In medicine, for example, the system should be able to justify a treatment proposal to doctors and clarify patients' questions and uncertainties regarding the treatment plan. The computer scientists and sociologists include the users' perspective in their research: they observe, for example, how clinic staff accept the AI system and what demands they place on it.

Further information: trr318.uni-paderborn.de/projekte/b01

Photo (Bielefeld University, Mike-Dennis Müller): Prof. Dr Axel-Cyrille Ngonga Ngomo from the Institute of Computer Science at Paderborn University focuses his research on the interfaces between humans, machines and data.
Photo (Bielefeld University, Michael Adamski): Prof. Dr Elena Esposito is working on the effects of algorithmic predictions and the conditions necessary to make artificial intelligence explainable.
Photo (Bielefeld University, Michael Adamski): Prof. Dr Philipp Cimiano is investigating how AI systems can process unstructured data sets and make their results understandable.