CVPR Daily - 2018 - Thursday

the probability that this box is going to be accepted, we can know the expected time it will take to annotate an image. We construct a provably optimal strategy – given some assumptions, as always – that results in the minimum annotation time.”

Her second solution is completely model-free: “We use reinforcement learning to train an agent that will act in the best way to minimise the annotation time. Our environment is a human who is annotating images. We have an agent that can act, and the agent’s actions are which modality of annotation we use. Then we receive a reward from the environment, which is the negative time of the annotation. Reaching the end of the episode means we have obtained the bounding box we wanted, and then the reward is zero. In this way, by maximising the reward we naturally minimise the annotation time.”

Ksenia says they were able to train such an agent using DQN, the Deep Q-Network algorithm from a paper by DeepMind. They adapted it to their scenario and trained an agent that learns how to act in these annotation dialogs. She adds that the title of their paper refers to learning intelligent annotation dialogs because the constructed sequence of actions – drawings and verifications – is exactly such a dialog.

Ksenia concludes by telling us about their experiments. In one setting, the conditions are fixed and they test many different scenarios: for example, a weak detector but a need for very precise bounding boxes, or a very fast drawing strategy with no need for precise boxes. They show that they outperform other methods there. The other setting is even more realistic: the detector and the dialog-producing agent are retrained during data collection. As more data is collected, everything can be retrained and becomes better and better over time.

If you would like to find out more about this work, please come along to Ksenia’s spotlight today from 2:50 to 4:30 in Room 255 and her poster from 4:30 to 6:30 in Halls D-E.
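To make the reinforcement-learning formulation concrete, below is a minimal sketch of the annotation-dialog idea. It is not the authors’ code: tabular Q-learning stands in for their DQN agent, and the time costs, confidence buckets and acceptance probabilities are illustrative assumptions. The agent repeatedly chooses between asking the human to verify a detector-proposed box or to draw one from scratch, receives the negative annotation time as reward, and ends the episode once an accepted box is obtained.

"""
Minimal sketch (not the authors' code) of the annotation-dialog idea:
an agent chooses, per image, between asking a human to VERIFY a
detector-proposed box (cheap, may fail) or to DRAW the box from
scratch (expensive, always succeeds). The reward is the negative
annotation time, so maximising reward minimises expected time.
Tabular Q-learning stands in for the paper's DQN; all costs and
probabilities below are illustrative assumptions.
"""
import random

# Illustrative time costs in seconds (assumed, not the paper's measurements).
VERIFY_COST = 2.0   # time for a human to answer yes/no on a proposed box
DRAW_COST = 7.0     # time for a human to draw a box manually

ACTIONS = ("verify", "draw")


class AnnotationDialogEnv:
    """Simulates one annotation episode for a single image.

    The state is a coarse bucket of the detector's confidence in the
    current box proposal; the acceptance probability grows with it.
    """

    def __init__(self, n_buckets=5):
        self.n_buckets = n_buckets

    def reset(self):
        self.bucket = random.randrange(self.n_buckets)
        return self.bucket

    def step(self, action):
        if ACTIONS[action] == "draw":
            # Drawing always yields the box we wanted: episode ends.
            return None, -DRAW_COST, True
        # Verification succeeds with a probability tied to detector confidence.
        p_accept = (self.bucket + 1) / (self.n_buckets + 1)
        if random.random() < p_accept:
            return None, -VERIFY_COST, True
        # Rejected: we paid the verification time and must act again.
        return self.bucket, -VERIFY_COST, False


def train(episodes=20000, alpha=0.1, gamma=1.0, eps=0.1):
    """Q-learning over (confidence bucket) x (verify, draw)."""
    env = AnnotationDialogEnv()
    q = [[0.0, 0.0] for _ in range(env.n_buckets)]
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            # Epsilon-greedy choice between the two annotation modalities.
            a = random.randrange(2) if random.random() < eps else max((0, 1), key=lambda i: q[s][i])
            s2, r, done = env.step(a)
            target = r if done else r + gamma * max(q[s2])
            q[s][a] += alpha * (target - q[s][a])
            if not done:
                s = s2
    return q


if __name__ == "__main__":
    q = train()
    for bucket, (qv, qd) in enumerate(q):
        best = ACTIONS[0] if qv >= qd else ACTIONS[1]
        print(f"confidence bucket {bucket}: "
              f"expected time if verifying first {-qv:.1f}s, if drawing {-qd:.1f}s -> {best}")

Running this sketch, the learned policy asks for verification when the detector is confident enough that the expected cost of (possibly repeated) verification stays below the fixed drawing cost, and falls back to drawing otherwise, which is the trade-off the dialog agent is meant to capture.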
