Computer Vision News - June 2018

Ultimately, we do think that robots and AI will build better robots and AI. Then you can think about rapid accelerations of their capabilities on the cognitive side and on the hardware side for robotics and AI.

In a few weeks you are going to speak for the first time at RE•WORK’s Deep Learning for Robotics Summit in San Francisco. What are we going to learn from you?

There are two main new things that I am going to highlight. The main point of my talk is how we can get robots to adapt to changes in their environment or to damage. That will cover a paper we published with my excellent collaborators - Antoine Cully, Jean-Baptiste Mouret, and Danesh Tarapore - that was on the cover of Nature. That paper discusses how a robot that has become damaged can conduct a small number of experiments and, within one to two minutes, figure out how to continue on with its mission… literally one to two minutes of watch time, and the robot will figure out how to adjust to the situation and carry on with what we’ve asked it to do.

The second thing I want to discuss is what most people in deep learning for robotics and deep reinforcement learning have been focused on, which I call traditional reinforcement learning algorithms, such as DQN (deep Q-network), an algorithm from DeepMind that was on the cover of Nature. We recently showed, in a series of papers that came out at Uber, that evolution is actually a competitive alternative to these traditional reinforcement learning algorithms. That follows on the heels of work at OpenAI, which showed that a different evolutionary algorithm is also a competitive alternative. So between the work at OpenAI and our work at Uber, there are basically two new champions in the tournament, each of which can take on any given dragon (a dragon might be a hard engineering problem). What we’re finding, which I think is very interesting, is that each of these algorithms is very good at different types of problems. One message I will emphasize to the practitioners and people from industry at the RE•WORK Summit is that if you have a particular problem and four candidate algorithms, and one of them is likely to work while the others aren’t, you probably shouldn’t pick one, stick with it, and try really hard to get it to work. You should try all four of them really quickly and see: one of them might work far better than the others. Ahead of time, it’s not clear
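For readers less familiar with what “evolution” means as an alternative to algorithms like DQN, here is a minimal sketch of a simple evolution strategy in Python. It is not the Uber deep-neuroevolution code or the OpenAI ES implementation mentioned above; the toy reward function, parameter names, and hyperparameters are illustrative assumptions, with the quadratic standing in for the episode return of a real reinforcement learning environment.

# Minimal evolution-strategies sketch (illustrative only, assumptions noted above):
# optimize policy parameters by perturb-and-select instead of gradient-based RL.
import numpy as np

def reward(params):
    # Placeholder fitness: higher is better, with the peak at the all-ones vector.
    return -np.sum((params - 1.0) ** 2)

def evolution_strategy(dim=10, pop_size=50, sigma=0.1, lr=0.05, iters=200):
    theta = np.zeros(dim)                       # current policy parameters
    for _ in range(iters):
        noise = np.random.randn(pop_size, dim)  # one perturbation per "offspring"
        returns = np.array([reward(theta + sigma * n) for n in noise])
        # Rank-normalize returns so the update size is scale-free.
        scores = (returns - returns.mean()) / (returns.std() + 1e-8)
        # Move parameters toward perturbations that scored well.
        theta += lr / (pop_size * sigma) * noise.T @ scores
    return theta

if __name__ == "__main__":
    best = evolution_strategy()
    print("final reward:", reward(best))

The design choice being illustrated is the one Clune describes: rather than backpropagating a value estimate as DQN does, the loop simply samples a population of perturbed parameter vectors, evaluates each one, and shifts the parameters toward the better performers.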
