Computer Vision News - February 2019

There are some advantages in video: temporal features, or even just the continuous movement of the object. You can take advantage of these by annotating videos, and you cannot do that with still images. Also, if you want to track a specific object through a long video, even when it leaves the frame and comes back, you can only do that in video, not in images. We specialize in video, and we are the only ones with automatic tracking in video, which is the current state of the art (a sketch of tracker-assisted annotation follows this article).

Which, again, is very precious for self-driving cars.

Yes, basically because it saves time when annotating video. Somebody, I think from Drive.ai, once said that "for every one hour driven, it takes approximately 800 human hours to label it." Annotation is very time-consuming.

What are your next steps for this platform?

We are very satisfied with the automatic tracking, but it is not enough; we can do better. We can plug machine learning models into our platform that train and infer on new videos while those videos are being annotated. Think of active learning methods: you start by annotating very short segments of video, send the results immediately to your model, train it, and on the next segment you start from the model's inferences. As you proceed this way, the model gets better and the annotation process gets easier (see the loop sketched at the end of this article).

Ron Soferman's comment: Not only is this a nice platform for machine learning, it is also a system that can use machine learning to guess the annotation and let the user fix it where necessary. Such a system can save a lot of time, reserving manual work for the non-trivial cases. Consider an interface where you start the annotation and, after a few hundred images, the system applies transfer learning over networks pretrained on ImageNet. Most features of natural images are already captured by ImageNet pretraining, so the marginal work of retraining should yield a good classifier in most cases.
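To make the automatic-tracking idea concrete, here is a minimal sketch of propagating one manually drawn box across frames. It uses OpenCV's CSRT tracker purely as a generic stand-in; Clay Sciences' actual tracker is proprietary, and the function name and parameters below are illustrative assumptions.

import cv2

def propagate_box(video_path, first_box, max_frames=100):
    """Propagate a hand-drawn box (x, y, w, h) from the first frame forward."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        raise IOError("could not read video")
    # CSRT is one of several trackers shipped with opencv-contrib-python
    tracker = cv2.TrackerCSRT_create()
    tracker.init(frame, first_box)
    boxes = [first_box]
    for _ in range(max_frames - 1):
        ok, frame = cap.read()
        if not ok:
            break
        found, box = tracker.update(frame)
        # None marks frames where the tracker lost the object,
        # e.g. when it leaves the frame; those need manual correction
        boxes.append(tuple(int(v) for v in box) if found else None)
    cap.release()
    return boxes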
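The active-learning loop described in the interview can be sketched as below. The model interface and the annotate() and correct() callbacks are hypothetical placeholders, not Clay Sciences' API; the point is the annotate-train-prefill cycle, in which the human workload shrinks as the model improves.

def active_annotation(segments, model, annotate, correct):
    """Annotate a video segment by segment, retraining after each one."""
    labels = []
    for i, segment in enumerate(segments):
        if i == 0:
            # first short segment: fully manual annotation
            seg_labels = annotate(segment)
        else:
            # later segments: pre-fill from the current model,
            # the annotator only fixes its mistakes
            proposals = model.predict(segment)
            seg_labels = correct(segment, proposals)
        labels.append(seg_labels)
        # retrain on the newly confirmed labels before the next segment
        model.fit(segment, seg_labels)
    return labels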
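Ron Soferman's suggestion of transfer learning over an ImageNet-pretrained network might look like this sketch, using torchvision's ResNet-18 as one possible backbone (an assumption, as is the existence of a labeled data loader). Only the classifier head is retrained, matching the observation that ImageNet features already cover most of what natural images contain.

import torch
import torch.nn as nn
from torchvision import models

def build_classifier(num_classes):
    # backbone features were already learned on ImageNet
    model = models.resnet18(pretrained=True)
    for p in model.parameters():
        p.requires_grad = False          # freeze the backbone
    # replace and retrain only the final classification layer
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

def finetune(model, loader, epochs=5, lr=1e-3):
    opt = torch.optim.Adam(model.fc.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, targets in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), targets)
            loss.backward()
            opt.step()
    return model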
