WACV 2024 Daily - Sunday‏

[Rita laughs] I have no order, neither in my life nor in my ideas. In general, they are often goal-oriented because, as I told you, we have many different projects, funded per problem by companies or by the European Community and so on. You have a lot of ideas, and then you have to apply those ideas to a specific problem.

I'll give an example. We have started an interesting European project called the European Lighthouse of AI for Sustainability (ELIAS), which is coordinated in Italy by Nicu Sebe in Trento. My role is to coordinate the individual sustainability part, because this is a project about environmental, societal, and individual sustainability. What does that mean? Privacy-preserving AI and personalization. What I like a lot at the moment is unlearning: understanding that there are some things you can forget, not only learn. Unlearning is a new topic in computer vision; it is just starting now. The idea is, if you have a pre-trained network that does something, what can you do if it was trained on something that you don't want it to remember, because of copyright, because of privacy, but also because it has become obsolete?

Or it might be wrong.

Yeah, or it might be wrong. A simple example: I have a robot that stays in my house, and I ask it, 'Give me my ball.' I play tennis, so my idea of a ball is a tennis ball. I don't play basketball anymore, so I don't want it to remember that that is a ball for me. The idea is to modify the knowledge. Not just remembering everything, which is too easy, but knowing only what is needed is something that humans do. Much research is now devoted to this kind of personalization, which means understanding only what is necessary.

I'll give you another example. Another European project I'm working on is called ELSA. All of them come from ELLIS, so they start with a similar name. ELSA is the European Lighthouse on Secure and Safe AI. In both projects, we are looking at security.
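The unlearning idea described above, making a trained model forget a specific concept while keeping the rest of its knowledge, can be illustrated with a deliberately tiny sketch. This is not the method used in the projects mentioned; it is a hypothetical toy: a linear least-squares model trained on a "retain" set and a "forget" set together, then fine-tuned on the retain set only, a naive forgetting baseline. All data, names, and hyperparameters here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical "concepts", each a linear target function.
X_retain = rng.normal(size=(50, 3))
y_retain = X_retain @ np.array([1.0, -2.0, 0.5])   # knowledge to keep
X_forget = rng.normal(size=(10, 3))
y_forget = X_forget @ np.array([3.0, 3.0, 3.0])    # knowledge to forget

def mse(X, y, w):
    """Mean squared error of the linear model w on (X, y)."""
    return float(np.mean((X @ w - y) ** 2))

w = np.zeros(3)
lr = 0.01

# Phase 1: train on everything (the model "remembers" both concepts).
X_all = np.vstack([X_retain, X_forget])
y_all = np.concatenate([y_retain, y_forget])
for _ in range(500):
    grad = 2 * X_all.T @ (X_all @ w - y_all) / len(y_all)
    w -= lr * grad

forget_before = mse(X_forget, y_forget, w)

# Phase 2: naive unlearning — fine-tune on the retain set only,
# letting the forget-set concept drift away.
for _ in range(300):
    grad = 2 * X_retain.T @ (X_retain @ w - y_retain) / len(y_retain)
    w -= lr * grad

forget_after = mse(X_forget, y_forget, w)
retain_after = mse(X_retain, y_retain, w)
```

After phase 2, the error on the forget set grows while the retain set is still fitted well, which is the behavior unlearning aims for; real methods on deep networks are far more careful about preserving unrelated knowledge.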
First, detection, and also what we are doing now, what we call Safe CLIP: understanding whether a system like CLIP, for instance, has learned from violence, abuse, or, in general, unacceptable concepts, both in images and in text. There are a lot of inadequate, I would say toxic, words and images in what these systems learn. What we are doing now is trying to retrain the system to unlearn the violence. If you would generate an image of a woman raped by a man, no, I would like you to generate a woman talking with a man: understanding the concept and maintaining the theme, but forgetting the violence. These are some new experiments we are doing that, for me, are really very interesting. What we can do now in computer vision and in AI, so language, understanding the world…

Rita Cucchiara
