Computer Vision News - September 2016

Datasets: The datasets used for training are Piccadilly Circus in London and the Roman Forum in Rome. On these datasets, VisualSFM was used to generate Structure-from-Motion reconstructions, yielding images that capture the same views under different illumination conditions and from different perspectives. The Piccadilly dataset contains 3384 images and its reconstruction has 59k unique points; the Roman Forum contains 1658 images and 51k unique points. Only the feature points that survive the SfM reconstruction process were used to train the LIFT framework.

The datasets used for testing are the Strecha dataset, which contains 19 images of two scenes; the DTU dataset, which contains 60 sequences of objects under different viewpoints and illumination settings; and the Webcam dataset.

Results: To whet your appetite, we present three qualitative results. As expected, LIFT returns a higher number of correct correspondences across the two images. Correct matches are shown as green lines and the descriptor support regions as red circles. First row: Strecha; second and third rows: DTU.

"Outperforms state-of-the-art methods on a number of benchmark datasets, without the need of retraining"
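To illustrate how correspondences like those in the figures are typically established, here is a minimal sketch of descriptor matching by nearest-neighbour search with Lowe's ratio test. The descriptors below are toy 2-D vectors, not actual LIFT outputs, and the function names are our own for illustration.

```python
def euclidean(a, b):
    """Euclidean distance between two descriptor vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Return (i, j) index pairs matching descriptors of image A to
    image B, keeping only matches whose nearest neighbour is clearly
    closer than the second nearest (Lowe's ratio test)."""
    matches = []
    for i, d in enumerate(desc_a):
        order = sorted(range(len(desc_b)), key=lambda j: euclidean(d, desc_b[j]))
        best, second = order[0], order[1]
        if euclidean(d, desc_b[best]) < ratio * euclidean(d, desc_b[second]):
            matches.append((i, best))
    return matches

# Toy "descriptors": the first two entries of each set correspond;
# the third pair is ambiguous and is rejected by the ratio test.
desc_a = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)]
desc_b = [(0.1, 0.0), (1.0, 0.9), (9.0, 9.0)]
print(match_descriptors(desc_a, desc_b))  # → [(0, 0), (1, 1)]
```

In a real pipeline, the surviving matches would then be filtered geometrically (e.g. with RANSAC) before being drawn as the green lines shown in the figures.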
