Computer Vision News - February 2017

Who is using your work today?
I can tell you the fields: fintech and self-driving cars. What concerns them the most is explainability. Think of a car driven by a neural network going down the street: you want to know what it is thinking. You don't want it to be based on a black box.

What is the direction for the development of your work?
Right now, the easiest way customers can benefit from the algorithm is explainability. That's our easiest thing, our lowest-hanging fruit. It turns out that we can take any feedforward network, convert it to ours, and explain what's going on. The conversion process alone already explains better what the network is looking for, but then we can also run ours. We can take patterns that are perhaps problematic, or just any test pattern, run it, and tell you how the network is thinking about that pattern: which inputs it is sure of or unsure of, and so on. On top of that, I get exactly the same kind of performance as a feedforward network.

If you could add one more feature to your model, what would it be?
One feature I don't have that I'd really like to work on is integration with spatial attention. It turns out that the model gives you much more feedback about what it's doing, and that can be used to closely interconnect recognition and attention. Let's say I am supposed to look for an octopus, and from the feedback part I notice there aren't many octopus-like patterns in my current focus location. I can then take a broader view and use the explainability and feedback to latch onto the locations where inputs better suited for an octopus can be found. That's something I would like to do in the future.

How is your model different from SVM?
SVM is also a feedforward method. In SVM, and in all feedforward networks, the weights are determined during learning by an optimization. In my method the weights are still determined during learning, because learning is when you determine the weights, but I don't need an optimization to do that learning. That is one difference.

Why do you find uniqueness to be important?
In one of the descriptions, I say that any network, regardless of whether it is a feedforward network or my network, is determining which features are relevant for a specific recognition problem. It's important for the network to recognize, and to recognize quickly. Say you need to differentiate between a zebra and a horse: most features are the same, except for the stripes. To efficiently tell a horse from a zebra, it is beneficial to focus on the stripes more than on any other feature. This focus is necessary and occurs in both models. The difference is that in the …
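As an editorial illustration of the explainability read-out described above (running any test pattern through a network and reporting which inputs the decision is sure or unsure of), here is a minimal sketch in Python/NumPy. It is not the interviewee's method, which the interview does not spell out; the toy network, the finite-difference sensitivity measure, and all names such as score and input_attributions are assumptions made purely for the example.

    import numpy as np

    # A toy "feedforward network": one hidden ReLU layer and one output score.
    # Weights are random here only so the example runs end to end.
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
    W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)

    def score(x):
        """Scalar class score for input pattern x."""
        h = np.maximum(0.0, W1 @ x + b1)      # hidden ReLU layer
        return float(W2 @ h + b2)

    def input_attributions(x, eps=1e-4):
        """Finite-difference sensitivity of the score to each input.

        A large magnitude means the decision leans heavily on that input
        ("sure of it"); near zero means it barely matters for this pattern.
        """
        base = score(x)
        grads = np.zeros_like(x)
        for i in range(x.size):
            xp = x.copy()
            xp[i] += eps
            grads[i] = (score(xp) - base) / eps
        return grads

    # Run a test pattern and report how the network "thinks" about it.
    pattern = np.array([0.9, 0.1, 0.5, 0.0])
    print("score:", score(pattern))
    print("per-input sensitivity:", input_attributions(pattern))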
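The recognition-attention loop the interviewee sketches (notice there is little octopus-like evidence at the current focus, take a broader view, then latch onto the most promising location) might, under heavy assumptions, look something like the following. The evidence function below is a placeholder, not the model's actual feedback signal; only the loop structure is the point.

    import numpy as np

    def target_evidence(patch):
        """Placeholder 'octopus-likeness' score for an image patch.

        Stands in for whatever per-location feedback the model provides;
        here it is simply the mean intensity so the example runs.
        """
        return float(patch.mean())

    def attend(image, start, patch=16, threshold=0.5, max_steps=5):
        """Feedback-driven attention: refocus until evidence at the focus is high enough."""
        y, x = start
        for _ in range(max_steps):
            fov = image[y:y + patch, x:x + patch]
            if target_evidence(fov) >= threshold:
                return (y, x)                  # enough target-like evidence here
            # Not enough evidence: take a broader view around the current focus
            # and move to the most promising patch inside it.
            y0, x0 = max(0, y - patch), max(0, x - patch)
            broad = image[y0:y0 + 3 * patch, x0:x0 + 3 * patch]
            best, best_pos = -np.inf, (y, x)
            for dy in range(0, broad.shape[0] - patch + 1, patch):
                for dx in range(0, broad.shape[1] - patch + 1, patch):
                    s = target_evidence(broad[dy:dy + patch, dx:dx + patch])
                    if s > best:
                        best, best_pos = s, (y0 + dy, x0 + dx)
            y, x = best_pos
        return (y, x)

    img = np.random.default_rng(1).random((128, 128))
    print("final focus:", attend(img, start=(0, 0)))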
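Finally, the zebra-versus-horse point, that efficient recognition concentrates on the few features that actually differ between the classes, can be illustrated with a small assumed sketch that weights each feature by a Fisher-style separation ratio. The toy data and the weighting rule are inventions for illustration, not the model discussed in the interview.

    import numpy as np

    def discriminative_weights(class_a, class_b, eps=1e-8):
        """Weight each feature by how well it separates the two classes.

        Simple Fisher-style ratio: |mean difference| / pooled standard deviation.
        Features shared by both classes (legs, mane, body shape) score near zero;
        a feature like 'stripes' that differs sharply scores high.
        """
        diff = np.abs(class_a.mean(axis=0) - class_b.mean(axis=0))
        spread = class_a.std(axis=0) + class_b.std(axis=0) + eps
        return diff / spread

    # Toy features: [body_shape, leg_count, mane, stripes]
    horses = np.array([[1.00, 4, 1.0, 0.05],
                       [0.90, 4, 0.9, 0.00]])
    zebras = np.array([[1.00, 4, 0.8, 0.95],
                       [0.95, 4, 0.9, 1.00]])

    w = discriminative_weights(horses, zebras)
    print("feature weights:", w)   # the 'stripes' column dominates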
