Computer Vision News - February 2017

pattern, but then it cannot be used to go back and reveal what it is expecting. Although there have recently been developments that show a bit more information, for example about convolutional networks, fully connected neural networks are not transparent. Those were the two things on Achler's mind when he was trying to figure out the algorithm. Only after he got it working, updating and explaining, did he look at other phenomena, like the bursting mentioned earlier.

There are other brain phenomena associated with this algorithm and its optimization: masking and difficulty with search. Achler gives an intuitive example: you get a similar type of phenomenon when you are solving a jigsaw puzzle. Certain pieces are faster to place than others. You see the same sort of phenomenon in search: how similar the target is to everything around it determines how fast you can search. These are cognitive experiments about the human brain, and the findings can be observed in human or animal reaction-time experiments, by measuring how long recognition takes.

Most researchers, Achler says, are entrenched in the feedforward neural network model, which does not explain this. So instead they try to explain search phenomena in other ways, suggesting these phenomena are a property of spatial processing. But these cognitive phenomena also occur in smell – olfaction. Because of the physics, the sensors for recognizing smell cannot process space very well, yet the neurons responsible for smell still display these cognitive phenomena. The model suggests the phenomenon is not a spatial one, and indeed it is still observed with limited spatial processing. The fact that space does not matter suggests the underlying mechanism happens in all neurons, regardless of whether they process space or not. In summary, this algorithm matches the greatest number of phenomena with the fewest parameters.
It also shows strong recognition performance. The model has weights, but they are different from feedforward weights and are used differently. We asked Achler to put the user's interaction with the system in context, and he told us that what the user can see are the ideal patterns the network is looking for, to better understand what the network is doing – and the user can see that for every layer of the network. Once the user gives the network a pattern to recognize, he or she can also see how well each piece of information from that pattern is recognized: which inputs are well or poorly understood, which are not used enough by the network, and so on. Another breakthrough is that the network is easier to modify: users can add a new node at any point without having to retrain the old nodes. "I designed the network to be flexible," says Achler.
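The ideas above – class nodes that store their ideal patterns directly, recognition that reports how well each input is explained, and new nodes added without retraining – can be illustrated with a small sketch. The update rule, function names, and toy patterns below are our own illustrative assumptions in the spirit of Achler's description, not his exact formulation:

```python
import numpy as np

def recognize(W, x, steps=50, eps=1e-9):
    """Iteratively estimate class activations y so that the stored
    patterns in W (shape: classes x inputs) reconstruct the input x.
    This feedback-style update is an assumed sketch, not Achler's
    published equations."""
    n_classes, _ = W.shape
    y = np.ones(n_classes) / n_classes       # start with uniform activations
    for _ in range(steps):
        recon = W.T @ y                      # network's reconstruction of the input
        ratio = x / (recon + eps)            # how well each input is explained
        y = y * (W @ ratio) / W.sum(axis=1)  # reweight each class by its match
    return y, W.T @ y                        # activations and final reconstruction

# Each row is a class's "ideal pattern". Adding a class is just adding a
# row -- the existing rows are untouched, so nothing is retrained.
W = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0]])
x = np.array([1.0, 1.0, 0.0, 0.0])           # matches the first pattern
y, recon = recognize(W, x)
```

Here `y` tells the user how strongly each stored pattern is recognized, and comparing `recon` against `x` shows which inputs are well or poorly explained, mirroring the kind of inspection described above.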
