
The importance of explainable AI: Interview with Klaas Bombeke

Recommendations provided by computer algorithms are often opaque. Team DARROW member Klaas Bombeke is working on building explainable AI that we can trust and accept.

Klaas Bombeke is a senior researcher at imec, a research & development organisation headquartered in Belgium that focuses on digital technologies. Klaas obtained a PhD in cognitive neuroscience and has been working on the interactions between humans and technology for the past six years.

What is imec's task within the DARROW project?

imec will focus on developing reinforcement learning agents for the DARROW system. Reinforcement learning is an artificial intelligence technique in which an agent learns good decisions by trial and error, guided by rewards. But because we also look at the human part, we will try to make these models explainable.
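To make the idea concrete, here is a minimal sketch of tabular Q-learning, one of the simplest reinforcement learning algorithms. The toy environment, state space, and parameters below are hypothetical illustrations only and are not taken from the DARROW system:

```python
import random

N_STATES = 5          # hypothetical: discretised levels of some process variable
ACTIONS = [0, 1]      # hypothetical: decrease / increase an actuator setting
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

# Q-table: the agent's estimate of long-term reward for each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Toy environment (invented for illustration): reward moving toward state 2."""
    next_state = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if next_state == 2 else -0.1
    return next_state, reward

for episode in range(500):
    state = random.randrange(N_STATES)
    for _ in range(20):
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward = step(state, action)
        # Core Q-learning update: nudge the estimate toward
        # the observed reward plus the discounted best future value.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state
```

After training, the agent's learned preferences can be read directly from the Q-table, which is part of what makes simple agents like this one comparatively easy to inspect.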

What aspect is most important to you in the DARROW project?

I think what DARROW stands for is bringing artificial intelligence and automation into the world of water treatment. But of course, we need to make sure that the human side of it is not neglected. There are still human operators working in the water treatment plants, and we need to make sure that the artificial intelligence we introduce is accepted by them and that they are happy to use it as a tool.

What challenges do you foresee?

A big challenge in DARROW is that wastewater treatment is quite a complex system, with a lot of sensors and actuators involved. Artificial intelligence is everywhere nowadays with ChatGPT and all these things, but there are still quite a few challenges, also related to responsibility. So we have to make sure that the human operators working in the plant right now have trust in the system and feel some kind of control over it as well.

What are you personally excited about?

I’m personally really interested in the human side of it all. We need to make sure that people have trust in the artificial intelligence. A big research field nowadays is explainable AI, where we try to make artificial intelligence understandable to humans. There are all kinds of techniques to give insight into the black box that is AI, for example visualisation tools that show the decision tree behind a model. This helps people understand how the artificial agent was reasoning. Of course, the power of artificial intelligence is still that it can perform far more complicated mathematical operations than humans can. So it will never be entirely possible to make artificial intelligence completely explainable, but we should strive to make it more explainable.
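As a concrete illustration of that kind of technique, here is a minimal sketch that trains a small decision tree and prints its learned rules in human-readable form, using scikit-learn's export_text. The iris dataset merely stands in for real plant data; this is not the DARROW toolchain:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Train a small, deliberately shallow decision tree on a toy dataset.
iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(iris.data, iris.target)

# export_text renders the learned tree as nested if/else rules, so a human
# can follow exactly how the model arrives at each prediction.
print(export_text(tree, feature_names=list(iris.feature_names)))
```

Printed rules like these let an operator trace a single prediction step by step, which is exactly the kind of insight into the black box that explainable AI aims for.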