
Note that visualization at the category level is a technique that, unless special regularization constraints are used, is identical to creating adversarial examples, which we discussed in the May 2018 issue ( Adversarial Examples: Attacks on Deep Learning ). From the spectrum above we will focus on feature visualization at the neuron level; specifically, we will demonstrate studying the interaction between a few neurons, rather than focusing on an individual neuron. We will use a special technique, detailed below, for aligning the visualizations produced, to facilitate interpretation and analysis of the results. We will be using Lucid, the feature visualization library from Google. We’ve already seen some visualizations produced by Lucid in our June 2018 issue ( Focus on: Debug and Analysis Mechanisms for Deep Learning in TensorFlow and Keras ), and I would guess this is not the last time we’ll see them in these articles, since the visualizations are fascinating and make a major contribution to our ability to conceptualize the internal processes of deep neural networks.

As stated, early feature visualization methods studied the effects of single neurons within the network. Later, the technique was extended to groups of neurons, maximizing the overall activation of the group rather than the activation of a single neuron, in order to study, visualize and interpret the effect of the interaction of two or more neurons.

This approach faces a ‘little’ challenge. Because the feature visualization process starts from a random initial state, even though we optimize and visualize the same objective for the same object, the visualization produced will be slightly different on each run (a different output, such as a different angle or spread of features). This was not a problem for classic feature visualization research, which only studied a single neuron at a time, but it becomes a problem when visualizing the interaction among several neurons. If we run the visualization without special constraints, the resulting visualizations won’t align: recognizable visual landmarks crucial to successful interpretation (the eye-like features, in our example) will appear at different locations in each image, which greatly reduces the usefulness of the technique for analysis.
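To make the idea of a joint objective concrete, here is a minimal sketch of how two neuron objectives can be combined in Lucid. The model (InceptionV1 from Lucid's model zoo), the layer name "mixed4a_pre_relu" and the channel indices 476 and 460 are illustrative placeholders of our choosing, not the specific neurons discussed in the article.

```python
# Sketch: jointly visualizing two neurons with Lucid.
# Layer name and channel indices are illustrative, not from the article.
import lucid.modelzoo.vision_models as models
from lucid.optvis import objectives, render

# Load a pre-trained InceptionV1 (GoogLeNet) graph from the Lucid model zoo.
model = models.InceptionV1()
model.load_graphdef()

# Single-neuron objectives: each maximizes the mean activation of one channel.
neuron_a = objectives.channel("mixed4a_pre_relu", 476)
neuron_b = objectives.channel("mixed4a_pre_relu", 460)

# Joint objective: Lucid objectives can be added, so the optimizer now
# maximizes the combined activation of both channels in the same image.
joint = neuron_a + neuron_b

# Optimize from a random initial image; because the start is random,
# repeated runs will yield slightly different (unaligned) visualizations.
images = render.render_vis(model, joint, thresholds=(512,))
```

Run as-is, this sketch reproduces exactly the behavior described above: each run produces a plausible visualization of the two neurons together, but the visual landmarks land in different places from run to run. The alignment technique discussed next is what adds the constraints needed to make the outputs comparable.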
