AI for Navigation in Robotic Assisted Surgeries

Every robotic-assisted surgery (RAS) requires some level of navigation. In open surgery the target is viewed directly, but in minimally invasive RAS the view comes from inside the body cavity, with a restricted field of view (FOV). In addition, the surgeon's hands are occupied with the tools while the camera is controlled by an assistant, which complicates the procedure and requires close collaboration between them. Anatomical and physiological differences between patients add a further challenge, making it difficult to accurately position surgical tools and recognize target organs. In gastroscopies and colonoscopies, the single wide-angle view is often difficult to interpret, so an objective navigational aid can be beneficial. Recently developed AI technology offers solutions to these challenges, as described below.

RAS Navigation

Autonomous Camera Control

Directing the camera towards the desired FOV can be achieved in several ways. First, the system can be trained to keep pre-selected key points in view and automatically correct the camera angle to maintain their positions in the image. Using deep learning, the system can detect and segment the surgical tools or key anatomical features and use them as anchors; continuously aligning the camera with these detected (or selected) key points keeps it aimed at the desired FOV.
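As a rough illustration, the minimal sketch below turns the centroid of detected keypoints into a pan/tilt correction that would re-center them in the frame. The keypoint coordinates, frame size, and FOV values are illustrative assumptions, not a specific system's API.

```python
import numpy as np

def camera_correction(keypoints_px, image_size, fov_deg=(70.0, 55.0)):
    """Estimate a pan/tilt correction (degrees) that re-centers the
    detected anchor keypoints in the endoscopic image.

    keypoints_px : (N, 2) detected tool/anatomy keypoints, in pixels.
    image_size   : (width, height) of the video frame, in pixels.
    fov_deg      : assumed horizontal/vertical field of view of the camera.
    """
    keypoints_px = np.asarray(keypoints_px, dtype=float)
    width, height = image_size

    # Anchor point: centroid of the detected keypoints.
    cx, cy = keypoints_px.mean(axis=0)

    # Offset from the image center, normalized to roughly [-0.5, 0.5].
    dx = (cx - width / 2.0) / width
    dy = (cy - height / 2.0) / height

    # Convert the normalized offset into an angular correction.
    pan = dx * fov_deg[0]    # positive -> pan right
    tilt = -dy * fov_deg[1]  # positive -> tilt up (image y grows downward)
    return pan, tilt

# Example: keypoints drifted toward the lower-right of a 1920x1080 frame.
pan, tilt = camera_correction([(1400, 800), (1500, 760)], (1920, 1080))
print(f"pan {pan:+.1f} deg, tilt {tilt:+.1f} deg")
```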

Another method uses a dedicated camera to track the surgeon's eye movements. With well-established eye-tracking algorithms, the system can determine where the surgeon is looking and aim the endoscopic camera at the desired FOV. Together, these methods can steer the camera to the ideal position based on the surgical needs.
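A minimal sketch of this idea follows, assuming the eye tracker reports a normalized gaze point on the surgical monitor. The dead-zone and gain values are illustrative assumptions, chosen so that ordinary scanning of the scene does not move the camera.

```python
def gaze_to_camera_command(gaze_norm, dead_zone=0.15, gain_deg_per_s=20.0):
    """Map a normalized gaze point on the monitor to a pan/tilt rate
    command for the scope-holding arm.

    gaze_norm : (x, y) gaze position, each in [0, 1]; (0.5, 0.5) = center.
    dead_zone : no motion while the gaze stays this close to the center.
    """
    gx, gy = gaze_norm
    dx, dy = gx - 0.5, gy - 0.5

    def rate(offset):
        if abs(offset) < dead_zone:
            return 0.0
        # Scale the offset beyond the dead zone to a rate command.
        sign = 1.0 if offset > 0 else -1.0
        return gain_deg_per_s * (offset - sign * dead_zone) / (0.5 - dead_zone)

    return rate(dx), -rate(dy)   # pan rate, tilt rate (deg/s)

# Surgeon looks toward the upper-right corner of the screen.
print(gaze_to_camera_command((0.85, 0.2)))
```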

Intraoperative Guidance Using AI

One of the common challenges in video-guided surgical procedures is navigating to the desired destination. Anatomical differences, an unusual camera point of view, pathologies, lack of depth perception, and other factors all contribute to navigational difficulty. Navigational assistance can be provided at several levels:

  • Anatomical landmark recognition – the system can be trained with deep learning to detect specific organs or key anatomical landmarks and highlight them on screen. These highlights familiarize the surgeon with the anatomy and help lead to the target site.
  • Path suggestion – a further step towards fully autonomous robots is automatic navigation. The system can learn from data recorded in previous procedures and suggest preferred paths. This offloads part of the decision-making from the surgeon and allows faster, smoother navigation.
  • Warning generation – fusing the live image with a pre-op MRI/CT scan or an anatomical atlas can reveal information that is not visible to the surgeon. If the surgical tools come close to vulnerable blood vessels or nerve bundles that could be damaged on contact, the system can detect this and warn the surgeon (a sketch of such a proximity check follows this list). This can significantly reduce surgical complications and adds a layer of quality control to the procedure.
  • Real-time tool tracking – for endoscopic procedures, a family of algorithms known as SLAM (simultaneous localization and mapping) can build a map of the anatomy while tracking the position of the scope within it. This enables accurate localization of the scope along the lumen in real time.
  • Procedural planning – using pre-op scans, the system can plan an ideal path for the procedure. This is especially valuable for endoscopic procedures, where the path is not altered by incisions.
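As referenced in the warning-generation item above, the following minimal sketch assumes the tracked tool tip and the segmented critical structures from the registered pre-op scan share one coordinate frame; the structure names, point samples, and safety margin are illustrative assumptions.

```python
import numpy as np

def proximity_warnings(tool_tip_mm, structures, margin_mm=5.0):
    """Return the critical structures (e.g. vessels, nerve bundles) that the
    tracked tool tip has approached closer than the safety margin.

    tool_tip_mm : (3,) tool-tip position in the registered scan frame (mm).
    structures  : dict mapping structure name -> (M, 3) array of surface
                  points sampled from the segmented pre-op MRI/CT model.
    """
    tip = np.asarray(tool_tip_mm, dtype=float)
    warnings = []
    for name, points in structures.items():
        dists = np.linalg.norm(np.asarray(points, dtype=float) - tip, axis=1)
        closest = dists.min()
        if closest < margin_mm:
            warnings.append((name, closest))
    # Closest (most urgent) structures first.
    return sorted(warnings, key=lambda w: w[1])

# Example with two hypothetical segmented structures.
structures = {
    "renal_artery": np.array([[10.0, 42.0, 7.0], [12.0, 40.0, 9.0]]),
    "ureter":       np.array([[55.0, 80.0, 30.0]]),
}
print(proximity_warnings([11.0, 41.0, 8.0], structures))
```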

These are several of the ways in which navigation during RAS can be improved by building AI into the system.

AI Contributes to RAS

AI can significantly improve RAS: shorter procedures, better outcomes, fewer responsibilities resting solely on the surgeon, and fewer adverse events. RSIP Vision has a multidisciplinary team of engineers and clinicians with vast experience in implementing state-of-the-art algorithms in medical devices. We specialize in custom-tailored solutions and can help you create the ideal solution and speed your time to market.
