Computer Vision News - February 2017
From an algorithmic perspective, the key ideas of Achler's theory are that the feedforward-feedback connections perform an optimization, and that this optimization occurs during recognition, not during learning as feedforward networks suggest. Optimization generates error signals, and evidence of error signals and optimization during recognition is one of the most prevalent findings when neuroscientists record from neurons. It is observed as network-wide neuron bursting: when something changes and a new sound, image, or smell is encountered, the neurons no longer recognize their environment well. This causes an error signal, if you will, and many neurons become active across the network. This is called neuron bursting. The more time passes from that initial change, the better the neurons recognize their new environment; the error signal becomes smaller and neuron activation goes back down. In Tsvi's model, this bursting happens naturally. In the neural networks that are popular now, the feedforward neural networks, you have neither error signals nor bursting during recognition. Achler didn't design the network to show bursting; he designed the network to be flexible.

There are two things that the brain can do that feedforward algorithms can't. One of them is the ability to update: you can immediately learn something new by seeing it once. In feedforward neural networks, if you want to learn something new after seeing it once, you have to rehearse and retrain on everything that you previously learned, interleaved at a fixed frequency and in random order. In other words, you have to randomly shuffle everything that has been previously learned and retrain the feedforward network on all of those patterns. If you then want to add one more new thing, you have to rehearse and retrain on everything again, every time. This approach is not very practical, but it is required if optimization occurs during learning, as feedforward models assume. Achler knew that rehearsal was not practical and was trying to figure out how to make networks more flexible: how to immediately add something new to them.

That is one problem; the other is explanation. People can explain the things that they can recognize. For example, if you are asked to describe an octopus, you can describe what an octopus may look like; if you are good at drawing, you could even draw it from memory. Feedforward neural networks, the basis of machine learning, cannot explain very well: they are known to be a black box. The network can't be used to explain itself, and it is not easy to understand what the machine is expecting for a given pattern. In other words, feedforward networks may be able to recognize a pattern, but they cannot describe what they expect that pattern to look like.
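To make the "optimization during recognition" idea concrete, here is a minimal Python sketch. The class name `FeedbackRecognizer`, the divisive-feedback update rule, and the parameters are illustrative assumptions inspired by the description above and by regulatory-feedback models generally, not Achler's exact published algorithm. The point it demonstrates: class activations start out uniformly active and are settled iteratively against the input at test time, rather than being computed in a single feedforward pass.

```python
import numpy as np

# Illustrative sketch only: the update rule below is an assumption
# inspired by the article's description, not a verbatim implementation
# of Achler's algorithm.

class FeedbackRecognizer:
    def __init__(self):
        self.prototypes = {}  # label -> stored input pattern (the "weights")

    def learn(self, label, pattern):
        # One-shot update: adding a class just stores its pattern.
        # No rehearsal over previously learned classes is required.
        self.prototypes[label] = np.asarray(pattern, dtype=float)

    def explain(self, label):
        # "Drawing from memory": the input the network expects for a class
        # is readable directly from its weights, unlike a black box.
        return self.prototypes[label]

    def recognize(self, x, steps=50, eps=1e-9):
        labels = list(self.prototypes)
        W = np.stack([self.prototypes[l] for l in labels])  # (classes, features)
        x = np.asarray(x, dtype=float)
        y = np.ones(len(labels))  # all classes start active: the initial "burst"
        for _ in range(steps):
            expectation = W.T @ y + eps  # feedback: what the current guess predicts
            error = x / expectation      # large wherever prediction mismatches input
            # Each class rescales itself by how well it accounts for the input;
            # activations settle as the error signal shrinks.
            y = y * (W @ error) / (W.sum(axis=1) + eps)
        return dict(zip(labels, y))
```

Note how the early iterations have every class active at once before the activity settles as the error shrinks, a rough analogue of the bursting-then-settling pattern described above.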
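Under the same assumptions, the two flexibility claims in the article also fall out of this sketch: adding a class is a single `learn()` call with no rehearsal of earlier classes, and the expected pattern for any class can be read back out, loosely mirroring the octopus example.

```python
net = FeedbackRecognizer()
net.learn("cat", [1, 1, 0, 0])
net.learn("dog", [0, 0, 1, 1])
print(net.recognize([1, 1, 0, 0]))  # "cat" dominates once the iterations settle

# Adding a new class later needs no retraining of "cat" or "dog":
net.learn("octopus", [1, 0, 1, 0])
print(net.recognize([1, 0, 1, 0]))  # the new class is usable immediately
print(net.explain("octopus"))       # and the network can say what it expects to see
```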