Computer Vision News - June 2019

2-dimensional space. This will allow us to plot it using our plotting function. To use the Sklearn t-SNE function, we first define the embedding object, then fit the transformation using gradient descent on the KL divergence, as described above. When defining the object, we need to choose the model's hyperparameters, which affect the final embedding. In our implementation, we first set the dimension of the embedding to 2. To stabilize the optimization, we use PCA to initialize the embedding. Lastly, we choose the perplexity of the Gaussian distribution to be 40. The perplexity in our case is the width of the Gaussian, and it is commonly set between 5 and 50. The above boils down to these two lines:

    tsne = manifold.TSNE(n_components=2, init='pca', perplexity=40.0)
    X_tsne = tsne.fit_transform(X)

There are additional hyperparameters that can be defined, such as the number of iterations, the stopping criteria, the learning rate and more. In our case, we leave them at their default values. These additional parameters can be tuned in case the embedding does not converge to the desired accuracy, i.e. does not look good enough.

We are now ready to see some results. The above code generates the following graph:

[Figure: 2-D t-SNE embedding of the data, showing well-separated clusters]

This figure shows a typical t-SNE clustering and reflects the quality of the embedding. It suggests that our data might be separable by a linear separator. Moreover, each of the clusters is tightly concentrated around its mean. In this case, t-SNE gives us a very good intuition about the underlying data.
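For readers who want to reproduce the workflow end to end, here is a minimal, self-contained sketch. The article does not specify its dataset, so scikit-learn's digits dataset is used here as a stand-in assumption; the hyperparameters match the two lines above.

```python
# Minimal, runnable sketch of the t-SNE pipeline described above.
# Assumption: the digits dataset stands in for the article's (unspecified) data.
from sklearn import datasets, manifold

# Load a small labeled dataset: 8x8 digit images flattened to 64-D vectors.
digits = datasets.load_digits(n_class=5)
X, y = digits.data, digits.target

# Embed into 2-D, initializing with PCA to stabilize the optimization;
# perplexity 40 falls in the commonly recommended 5-50 range.
tsne = manifold.TSNE(n_components=2, init='pca', perplexity=40.0,
                     random_state=0)
X_tsne = tsne.fit_transform(X)

print(X_tsne.shape)  # one 2-D point per input sample

# To visualize, a scatter plot colored by label would follow, e.g.:
#   import matplotlib.pyplot as plt
#   plt.scatter(X_tsne[:, 0], X_tsne[:, 1], c=y)
```

Because t-SNE's optimization is stochastic, fixing `random_state` makes the embedding reproducible across runs; the cluster shapes will still vary if the perplexity or initialization changes.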
