Computer Vision News - December 2021

Prof. Russell Taylor (JHU)

Are there any advancements that have made you go “wow” in the last few years?

What has been most impressive is the strength of machine learning methods, very broadly, in improving the use of information in this field. We are also beginning to see something I thought for a long time we would be seeing more of: machines are becoming more actively involved in interventional procedures. The first autonomous medical robots were radiation therapy machines that moved around the patient and shot beams of X-rays to kill a tumor – and you hoped not to do too much damage elsewhere. The system we developed at IBM Research, Robodoc, which was then commercialized, did the machining step to prepare bones for orthopedic implants.

[Figure: probably the first human clinical case of the Robodoc system, in 1992. Image: Think Surgical]

With surgical robots there are really two key questions. The first is: how can I tell the machine what it is supposed to do in a way it can understand? Second: how can you be sure it will do what you have told it to do, safely and properly, and is not going to do something else?

At the extreme ends of the spectrum, these questions are fairly straightforward. Stereotactic systems, like radiation therapy and bone preparation systems, have everything planned out from a CT image or another set of medical images. There may be levels of autonomy in the planning process – it is typically a human-machine partnership there – but the actual execution is very straightforward. The only thing left to do is to register all the images and the coordinate systems of those plans to the physical patient. Then the system just carries out the plan, perhaps with feedback during the process in case of patient motion or changes.

The dominant paradigm in medical robots today is teleoperation: the machine moves its instruments in the same way I am moving my control handles. The very successful da Vinci surgical robots are an example of that. Commanding what the robot is supposed to do is very straightforward. There is still a lot that has to be done, and the machine has to make a bunch of decisions internally, but they are very straightforward.

Now, if I want the robot to take over and do a step while I am doing interactive surgery, then the robot and the surgeon both need to agree on what the environment of the patient is. You need richer situational awareness. You need things like real-time modelling, real-time computer vision, a much better understanding of surgical tasks, and of how you interact with humans, and all of that.
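The registration step Taylor describes for stereotactic systems – aligning preoperative image coordinates with the physical patient – is, in its simplest point-based form, a classical least-squares problem. Below is a minimal sketch in Python, not the Robodoc implementation: it assumes matched fiducial points located both in the CT image and on the patient (for example, with a tracked pointer), and recovers the rigid transform with the standard SVD method of Arun et al. / Kabsch.

import numpy as np

def register_rigid(image_pts, patient_pts):
    """Least-squares rigid registration (Arun/Kabsch SVD method).

    image_pts   : (N, 3) fiducial locations in CT image coordinates.
    patient_pts : (N, 3) the same fiducials measured on the physical
                  patient, e.g. with a tracked pointer.
    Returns R (3x3 rotation) and t (3,) such that
    patient_pts[i] ~= R @ image_pts[i] + t.
    """
    ci = image_pts.mean(axis=0)                    # centroid in image space
    cp = patient_pts.mean(axis=0)                  # centroid in patient space
    H = (image_pts - ci).T @ (patient_pts - cp)    # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cp - R @ ci
    return R, t

# Hypothetical example: map a planned target from image to patient space.
image_fiducials = np.array([[0., 0., 0.], [50., 0., 0.],
                            [0., 50., 0.], [0., 0., 50.]])
patient_fiducials = np.array([[10., 2., 5.], [10., 52., 5.],
                              [-40., 2., 5.], [10., 2., 55.]])
R, t = register_rigid(image_fiducials, patient_fiducials)
planned_target_patient = R @ np.array([25., 25., 10.]) + t

In a clinical system, the residual fiducial registration error would of course be checked against a safety threshold before the robot executes anything, and richer surface-based or intensity-based registration is often used instead of a handful of fiducials.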
