
The figure above demonstrates the basic idea behind the approach: it shows two patterns, X and Y, mixed together to produce a more complex image, Z. The distribution of small image patches in each “pure” pattern (X and Y) is simpler than the distribution of small image patches in the mixed image Z. It is known from information theory that if X and Y are two independent random variables, the entropy of their sum Z = X + Y is greater than the entropy of either one alone.

The figure also shows a graph of the MSE loss for image reconstruction using a single DIP network as a function of time. Three image reconstructions are plotted: (i) the orange line represents the MSE loss when the DIP network is trained to reconstruct image X; (ii) the blue line represents the loss when it is trained to reconstruct image Y; and (iii) the green line represents the loss when it is trained to reconstruct the mixed-transparencies image X+Y. You can see that the higher the starting loss value, the longer it takes the network to converge. The loss of the mixed image is not only higher than the loss of each of its component images; it is in fact higher than the sum of their loss values. This is attributed to the fact that the distribution of small image patches in the mixed image is more complex and diverse (higher entropy, lower internal self-similarity) than in either component image.
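To make the experiment behind that graph concrete, here is a minimal sketch (not the authors' code) of how such loss curves can be produced: a small convolutional network is fed a fixed random-noise input and trained to reproduce a single target image, while the per-iteration MSE is recorded. The network architecture, the helper names `make_dip_net` and `dip_loss_curve`, and the hyperparameters are illustrative assumptions; the actual DIP network is an encoder-decoder with skip connections.

```python
# Sketch: record the DIP reconstruction loss over time for one target image.
# Running this for image X, image Y, and the mixture 0.5*(X+Y) gives the three
# curves described above (hypothetical minimal setup, not the original code).
import torch
import torch.nn as nn

def make_dip_net(channels=64):
    # Deliberately small ConvNet standing in for the real encoder-decoder DIP net.
    return nn.Sequential(
        nn.Conv2d(32, channels, 3, padding=1), nn.ReLU(),
        nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        nn.Conv2d(channels, 3, 3, padding=1), nn.Sigmoid(),
    )

def dip_loss_curve(target, num_iters=2000, lr=1e-3):
    """Fit a DIP-style network to `target` (1x3xHxW, values in [0,1]) starting
    from a fixed noise input, and return the MSE loss at every iteration."""
    net = make_dip_net()
    z = torch.randn(1, 32, target.shape[2], target.shape[3])  # fixed noise input
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    losses = []
    for _ in range(num_iters):
        opt.zero_grad()
        loss = nn.functional.mse_loss(net(z), target)
        loss.backward()
        opt.step()
        losses.append(loss.item())
    return losses

# Usage sketch: with x and y two "pure" pattern images,
#   curve_x = dip_loss_curve(x)
#   curve_y = dip_loss_curve(y)
#   curve_z = dip_loss_curve(0.5 * (x + y))
# plotting the three curves shows the mixed image converging more slowly and
# to a higher loss than either component image.
```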
