Robot Learning Research

RAPID EXPLORATION FOR OPEN-WORLD NAVIGATION WITH LATENT GOAL MODELS

by Marica Muffoletto (Twitter)

Dear readers, this month we review a paper from UC Berkeley and Carnegie Mellon University which was recently presented at the 5th Conference on Robot Learning in London! We welcome this new computer vision research, entitled Rapid Exploration for Open-World Navigation with Latent Goal Models (link here). We are indebted to the authors Dhruv Shah, Benjamin Eysenbach, Nicholas Rhinehart, and Sergey Levine for allowing us to use their images to illustrate this review.

Topic

This work focuses on the problem of goal-directed exploration for visual navigation in novel environments. Fundamental to this work is robustness and adaptiveness to visual distractors: who needs a robot that can reach a goal only when it's sunny, or only when there is no obstacle in the way? A compressed representation of perceptual inputs and goal images is core to this research. Like similar state-of-the-art methods, it is built on a topological map, a learned distance function, and a low-level policy. The main difference from other methods lies in the efficiency and robustness of RECON. While recent works reason about the novelty of a state only after visiting it, or require high sample complexity and hence a simulated counterpart, RECON uses prior experience from different environments to accelerate learning in the current one by means of an information bottleneck.
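To give a flavour of the idea, the goal-compression step can be sketched as a variational information bottleneck: the goal image is encoded into a low-dimensional Gaussian latent, and a KL penalty toward a fixed prior discourages the latent from carrying task-irrelevant detail such as lighting or distractors. The sketch below is a minimal toy illustration, not the authors' implementation; the class and function names (`LatentGoalEncoder`, `kl_to_standard_normal`) and the random linear "network" are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class LatentGoalEncoder:
    """Toy information-bottleneck goal encoder (illustrative only).

    Compresses a flattened goal image into a low-dimensional latent
    z ~ N(mu, sigma^2). In training, a KL penalty toward N(0, I)
    pressures the encoder to discard task-irrelevant appearance.
    """

    def __init__(self, obs_dim, latent_dim):
        # Random linear projections stand in for a trained network.
        self.w_mu = rng.normal(scale=0.01, size=(obs_dim, latent_dim))
        self.w_logvar = rng.normal(scale=0.01, size=(obs_dim, latent_dim))

    def encode(self, goal_image):
        # Predict the parameters of the Gaussian posterior over the latent.
        mu = goal_image @ self.w_mu
        logvar = goal_image @ self.w_logvar
        return mu, logvar

    def sample(self, mu, logvar):
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
        eps = rng.normal(size=mu.shape)
        return mu + np.exp(0.5 * logvar) * eps

def kl_to_standard_normal(mu, logvar):
    # KL( N(mu, sigma^2) || N(0, I) ): the bottleneck penalty.
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

enc = LatentGoalEncoder(obs_dim=64, latent_dim=8)
goal = rng.normal(size=64)            # stand-in for a flattened goal image
mu, logvar = enc.encode(goal)
z = enc.sample(mu, logvar)            # compressed goal representation
penalty = kl_to_standard_normal(mu, logvar)
```

Conditioning the distance function and low-level policy on `z` rather than on the raw goal image is what lets such a representation transfer across environments with different visual appearance.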