Computer Vision News - January 2020

Challenge Summary

In the previous pages, we have learned about EndoVis 2019 and its three sub-challenges. We have asked Daniel Tomer, team leader at RSIP Vision, to comment. Daniel has gained large practical experience in algorithm development for endoscopy, having worked with many clients and developed practical applications in this field. Here is a brief summary of Daniel's review of two of the EndoVis sub-challenges. The full article is available in the Endoscopy section of our website.

Depth estimation from a stereo camera pair

Methods (Read the overview and technical challenges in the full article on our website.)
The more direct approach to this task is to pre-process the data so that it is closer to natural images. After the pre-processing, the frames are fed into one of the classical depth estimation methods. This approach yields good results, but it is not the state of the art. To achieve state-of-the-art results, researchers use a deep learning end-to-end approach. In this method, a deep convolutional neural network (CNN) is developed to analyze a stereo pair (PSMNet, for example). The model is trained on simulated data so that, when later fed with two images, it accurately predicts their corresponding depth map.

Results
In this challenge, the submitted algorithmic models were evaluated on a test set consisting of frames taken from an endoscopic camera with associated ground-truth depth (obtained using a structured light pattern). The metric for evaluation was the per-pixel mean squared error between the ground-truth depth image and the predicted one. First place (Trevor Zeffiro - RediMinds Inc., USA) reached an average of ~3 mm error using the deep learning approach. Second place (Jean-Claude Rosenthal - Fraunhofer Heinrich Hertz Institute, Germany) used the pre-processing approach.
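To make the two ingredients above concrete, here is a minimal sketch of the classical route Daniel describes, using OpenCV's semi-global block matching as one example of a classical stereo method, together with the challenge's per-pixel MSE metric. The camera parameters, file names, and matcher settings are placeholders for illustration only, not values from the challenge or from any team's submission.

import cv2
import numpy as np

def classical_depth_from_stereo(left_gray, right_gray, focal_px, baseline_mm):
    """Estimate a depth map (in mm) from a rectified grayscale stereo pair."""
    # Semi-global block matching: one of the "classical" stereo methods.
    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=128,   # must be divisible by 16
        blockSize=5,
    )
    # StereoSGBM returns disparity scaled by 16 as a fixed-point map.
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan          # mark invalid / occluded pixels
    # Pinhole stereo geometry: depth = focal length * baseline / disparity.
    return focal_px * baseline_mm / disparity

def per_pixel_mse(pred_depth_mm, gt_depth_mm):
    """Challenge metric: mean squared error over pixels with valid ground truth."""
    valid = np.isfinite(pred_depth_mm) & np.isfinite(gt_depth_mm) & (gt_depth_mm > 0)
    return float(np.mean((pred_depth_mm[valid] - gt_depth_mm[valid]) ** 2))

if __name__ == "__main__":
    # Placeholder file names and camera parameters (hypothetical values).
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
    depth = classical_depth_from_stereo(left, right, focal_px=1035.0, baseline_mm=4.2)
    gt = np.load("gt_depth_mm.npy")
    print("per-pixel MSE (mm^2):", per_pixel_mse(depth, gt))

The end-to-end alternative mentioned above would replace the block-matching step with a CNN such as PSMNet that regresses the disparity map directly from the two images; the evaluation metric stays the same.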
Surgical Workflow Analysis and Skill Assessment

Methods (Read the overview and technical challenges in the full article on our EndoVis