Computer Vision News - September 2018
A large neural network made up of hundreds of millions of artificial neurons for image classification? How does the network arrive at the specific outcomes it does? This is a big question. One of the most complex issues involved, which is yet to be fully (some would say, even partly) cracked, is understanding the internal decision mechanisms and processes of deep neural networks. Interpreting these internal processes has become one of the hottest areas of research in deep learning.

Early research focused on trying to identify the crucial neuron within the network and understanding what it affects, and how. Later, researchers tried to understand the activity of groups of neurons and the integration between them, recognizing that neuron #123456 being activated five times doesn't really tell us anything truly useful about the network as a whole.

Research into interpreting the decision-making mechanisms of neural networks has focused on three main areas: (1) feature visualization, (2) attribution and (3) dimensionality reduction. The central insight of the latest research in the field is to see these interpretative techniques not in isolation, each standing on its own, but as composable building blocks towards more comprehensive models, each helping foster some insight into the behavior of neural networks. The goal of integrating these building blocks isn't just to explain which features the network identifies, but to understand the mechanisms by which the network combines these small pieces to arrive at decisions further down the line, and why and how it arrives at the specific decisions that it does.

In this 'Focus on' article, we will talk about feature visualization: a very effective technique for understanding the processing of data by and between single neurons.
Feature visualization can take place at various levels: the individual neuron, channel, layer, class logits, or category, as illustrated below.

Focus on: Aligned Feature Visualization Interpolation for Deep Neural Networks Tool, by Assaf Spanier
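To make the idea concrete, here is a minimal NumPy sketch of the optimization principle behind feature visualization (activation maximization): starting from a near-blank input, repeatedly adjust it by gradient ascent so that a chosen unit's activation grows. The tiny linear "neuron" below is purely illustrative, an assumption for the sake of a runnable example, not the article's tool or a real network layer; in practice the same loop runs on an image and a channel of a trained CNN.

```python
import numpy as np

# Toy activation maximization: find the input that most excites one neuron.
# The 16-dimensional linear neuron here stands in for a real network unit.
rng = np.random.default_rng(0)

w = rng.normal(size=16)            # the neuron's (hypothetical) weights


def neuron(x):
    """Activation of our toy linear neuron for input x."""
    return float(w @ x)


x = 0.01 * rng.normal(size=16)     # start from a near-blank "image"
lr = 0.1
for _ in range(200):
    grad = w                       # d(activation)/dx for a linear neuron
    x = x + lr * grad              # gradient ascent step
    x = x / np.linalg.norm(x)      # constrain the input to the unit sphere

# The optimized input aligns with the direction the neuron "prefers".
cos = (w @ x) / (np.linalg.norm(w) * np.linalg.norm(x))
print(round(cos, 3))               # → 1.0
```

For a real network the only changes are that `grad` comes from automatic differentiation and the objective is, for example, the mean activation of a channel; regularizers are usually added so the optimized image stays natural-looking.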