Computer Vision News - December 2022

Women in Computer Vision

Do you give more credit to the robot itself or to the people who program this robot?

All of it comes down to the software and the algorithms that you design and put into the system. We buy robots all the time, and we just buy them off the shelf. While it's very useful to be able to buy robots off the shelf, the hardware itself is not at all capable unless you have algorithms that can power it to do the kinds of things that we're trying to get it to do.

You still have decades to work on robots. What would be your dream result to see robots accomplish?

One really basic example of something that I would love to see is where you can put a robot in a kitchen that it has never been in before and tell it to make a bowl of cereal, and it will be able to make a bowl of cereal. It sounds really basic, like people can do this when they're half asleep, but it involves opening up a package of cereal, opening up a fridge, opening up some milk, getting out a bowl, and pouring.

All in the right order?

Well, the right order actually isn't the hard part. The hard part is the dexterity of opening up packages and closing packages and pouring and all that, and being able to do it in a way that is general to packages and kitchens you maybe haven't quite seen before.

Why should the robot close the package? The instructions were only to prepare a bowl of cereal.

Even task specification is really challenging, as you point out. I mean, I would like my robot to clean up after itself even after I give it a command. If you say to make a bowl of cereal, implicitly, you also want it to clean up after making the bowl of cereal by putting away the milk, because you don't want the milk to go bad. And so, actually, even this problem of specifying what the task is to a robot is challenging in and of itself, beyond the problem of actually executing that task.

I don't like to ask negative questions, but there is one that I need to ask. When you tried to make something happen, and it just didn't work, what was the most frustrating moment?

In research, frustration and failures come up all the time. I don't even think it's necessarily a negative thing. It's just part of what happens, and the times in which the robot fails and the algorithms fail are the times in which you learn. In some ways, this can actually be a positive experience. In terms of examples of frustration, there are too many to count. Along with the task specification that we just talked about, there have been times in which you give a robot a demonstration that illustrates the task, and you try to learn a reward function underlying that demonstration, and it won't actually give you the right result. It will give you a reward function that is consistent with the behavior in that one scenario but doesn't generalize to a new scenario. There have been times in which I've worked on inverse reinforcement learning, where you try to learn the rewards underlying demonstrations, and it ends up being a very challenging problem. There have been times in which we've spent months trying to do something. Another example is that a lot of reinforcement learning algorithms work beautifully in a simulation where you try to have the robot learn how to run, and so it runs, and then it falls down. That's one example.
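The difficulty she describes with inverse reinforcement learning, a learned reward that fits one demonstration but fails to generalize, is the classic reward-ambiguity problem. The toy sketch below (entirely hypothetical, not from the interview) makes it concrete: on a five-state corridor with absorbing ends, two hand-written reward functions both rationalize a demonstration that walks right from the middle, yet they prescribe different actions from a start state the demonstration never visited.

```python
def greedy_policy(reward, gamma=0.9, sweeps=50):
    """Value iteration on a corridor of len(reward) states.

    The first and last states are absorbing; reward[s] is collected
    on *entering* state s. Returns the greedy action ('L' or 'R')
    for each interior state.
    """
    n = len(reward)
    V = [0.0] * n  # values of the absorbing endpoints stay 0
    for _ in range(sweeps):
        for s in range(1, n - 1):
            q_left = reward[s - 1] + gamma * V[s - 1]
            q_right = reward[s + 1] + gamma * V[s + 1]
            V[s] = max(q_left, q_right)
    policy = {}
    for s in range(1, n - 1):
        q_left = reward[s - 1] + gamma * V[s - 1]
        q_right = reward[s + 1] + gamma * V[s + 1]
        policy[s] = 'L' if q_left > q_right else 'R'
    return policy

# Two candidate rewards; the demonstration walks right from state 2.
reward_a = [0.0, 0, 0, 0, 1.0]   # reward only at the right-hand goal
reward_b = [0.9, 0, 0, 0, 1.0]   # also values the opposite end

pi_a = greedy_policy(reward_a)
pi_b = greedy_policy(reward_b)
print(pi_a)  # {1: 'R', 2: 'R', 3: 'R'}
print(pi_b)  # {1: 'L', 2: 'R', 3: 'R'}
```

Both rewards make "go right" optimal at the demonstrated states 2 and 3, so a demonstration starting at state 2 cannot distinguish them; from the unseen start state 1, however, they disagree, which is exactly the failure to generalize she mentions.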
