CVPR Daily - Wednesday

He explains how this work helps: “The main contribution is a provably convergent solver to tackle this difficult problem. We extend the classical Alternating Direction Method of Multipliers (ADMM), very popular in traditional supervised learning (e.g. the training of SVMs), to handle mixed-integer problems of the above form. In addition, we provide a theoretical proof guaranteeing convergence even under suboptimal label updates. In contrast to GrabCut or k-means, our algorithm is better suited to deep features and, in video object segmentation, can better cope with severe scene changes.”

One of the work’s supervisors, Laura Leal-Taixé, also from the Technical University of Munich, tells us why she thinks this work is important: “Essentially, it works on a completely different machine learning paradigm: instead of the classic training and testing, we don’t have a training step. With transductive learning, you have inference steps where you try to match all the data points to those that you know are labelled, but at test time, not at training time. This is the difference between inductive and transductive. The thing is that if you actually formulate the problem in this way, then it’s really hard to solve. This is where the paper comes in, with a formulation that actually allows you to solve this problem.”

The next step for this work, Emanuel concludes, is to try out different optimisation algorithms. The formulations are quite general and can be applied to other computer vision problems such as tracking and detection.
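To give a rough feel for the ADMM template Emanuel describes, here is a minimal, hypothetical Python sketch, assuming a simple consensus split between a continuous variable and a discrete label variable. The function names grad_f and project_labels, the step sizes, and the toy objective are placeholders for illustration only, not the paper’s actual solver.

import numpy as np

def admm_mixed_integer(grad_f, project_labels, x0, rho=1.0, step=0.1, iters=100):
    # Hypothetical sketch: minimise f(x) + g(z) subject to x = z,
    # where z is restricted to a discrete label set.
    x = x0.copy()            # continuous variable (e.g. scores on deep features)
    z = x0.copy()            # discrete label variable
    u = np.zeros_like(x0)    # scaled dual variable for the constraint x = z
    for _ in range(iters):
        # 1) Continuous update: gradient step on f plus the quadratic penalty.
        x = x - step * (grad_f(x) + rho * (x - z + u))
        # 2) Label update: project onto the discrete feasible set; the paper's
        #    analysis is said to tolerate this step being solved suboptimally.
        z = project_labels(x + u)
        # 3) Dual update: ascent on the consensus constraint x = z.
        u = u + (x - z)
    return x, z

# Toy usage: recover binary labels for a simple quadratic objective.
target = np.array([0.2, 0.9, 0.6])
x, z = admm_mixed_integer(
    grad_f=lambda x: x - target,                                # gradient of 0.5 * ||x - target||^2
    project_labels=lambda v: np.round(np.clip(v, 0.0, 1.0)),    # round to {0, 1}
    x0=np.zeros(3),
)

In this reading, the label-update step is where the “suboptimal label updates” mentioned in the quote would enter: the discrete projection need not be solved exactly for the overall scheme to converge.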
