Computer Vision News - December 2022

Mara Graziani and Vincent Andrearczyk

Having arrived at such a clear and sensible result, we have to ask: what took so long? “Our world is very structured – we use definitions, we have a formulation of the problem, we write down maths, and maths is universal!” Mara explains. “But we’re talking to sociologists, ethicists, philosophers. In the paper, we’re talking about religion and God and the interpretation of the Bible. How many years have people spent on that?” she asks, laughing.

Vincent adds: “Also, the review took a long time because there were many different definitions. We were trying to understand the viewpoint of each domain and why they employed these terms. Then we had a big group of experts from different domains, and communication was more difficult than it is when we are all tech people from the same background.”

One of the critical areas for discussion was whether the definition of interpretability had to be linked directly to human understanding. Multi-agent AI systems, operating cooperatively, require interpretability in their exchanges and decision planning at a level that need not involve a human. It was a non-trivial question that took at least a month of back-and-forth debate.

With an agreed definition that Mara, Vincent, and others are all happy with, is the exercise now complete? “Not at all – there are still many things to work on,” Mara responds. “For example, the definition of interpretability in law is still very fuzzy. They see it as privacy. We need to …”

Computer Vision News is happy to circulate this newly agreed definition among the community: an AI system is interpretable if it is possible to translate its working principles and outcomes in human-understandable language without affecting the validity of the system.
