
AI-Assisted Tissue Sparing in Urology

Tissue sparing is a common practice in surgery. The approach aims to remove as little of the surrounding healthy tissue as possible during a procedure. Studies have shown that tissue-sparing procedures have fewer complications and faster recovery times.
In urology, tissue sparing is relevant to many surgical procedures, such as prostatectomy and cystectomy. Often a tumor needs to be removed, and it is preferable to preserve as much tissue as possible without leaving cancer cells behind. In prostatectomy, sparing tissue can avoid damage to the neurovascular bundle (NVB), reducing the complication rate. Female cystectomy is usually accompanied by removal of surrounding organs and can also benefit from tissue sparing.

Tissue sparing in urology

Tissue sparing relies on three techniques: (i) minimal incisions for surgical access, (ii) extreme precision in tumor removal, and (iii) steady handling of surgical tools. Below we discuss how artificial intelligence (AI) and computer vision can improve each of these techniques and assist in tissue sparing.

(i) Minimizing the entry port in a laparoscopic or open procedure presents several challenges: both the field of view and the tools’ operating range are limited. With a limited field of view, it is difficult to navigate the surgical scene and recognize anatomical landmarks. Deep learning networks can be trained to detect specific landmarks and highlight them on the screen, assisting recognition during the procedure (see the sketch below). Additionally, using the pre-op CT or MRI scan, a detailed procedure plan can be devised by segmenting and modeling the relevant anatomical structures and determining the ideal incision position. This plan can be registered with intra-op imaging to verify the incision location. Such modules are developed using a combination of classic computer vision algorithms and deep learning.
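
The sketch below illustrates how such landmark highlighting might look in practice: a trained segmentation network is run on each intra-op frame and the predicted landmark masks are blended onto the video. The model file, class indices and video source are hypothetical placeholders, not a description of RSIP Vision's actual implementation.

```python
# Minimal sketch: highlighting anatomical landmarks on intra-op video frames
# with a pre-trained segmentation network. Model path, class ids and video
# source are hypothetical placeholders.
import cv2
import numpy as np
import torch

model = torch.jit.load("landmark_segmenter.pt")  # hypothetical TorchScript model
model.eval()

LANDMARK_COLORS = {1: (0, 255, 0), 2: (0, 0, 255)}  # assumed class ids, e.g. NVB, ureter

def highlight_landmarks(frame_bgr: np.ndarray) -> np.ndarray:
    """Overlay predicted landmark masks on a single laparoscopic frame."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).unsqueeze(0)   # 1x3xHxW
    with torch.no_grad():
        logits = model(tensor)                                     # 1xCxHxW class scores
    labels = logits.argmax(dim=1)[0].numpy()                       # HxW label map
    overlay = frame_bgr.copy()
    for class_id, color in LANDMARK_COLORS.items():
        overlay[labels == class_id] = color
    return cv2.addWeighted(frame_bgr, 0.6, overlay, 0.4, 0.0)

cap = cv2.VideoCapture("laparoscopy.mp4")   # or a live capture device
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("landmarks", highlight_landmarks(frame))
    if cv2.waitKey(1) == 27:                # Esc to quit
        break
```

In a clinical prototype the per-frame inference would typically run on GPU and be temporally smoothed, but the overlay logic stays the same.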

(ii) Removing a tumor, whether by radical or partial resection, requires precise dissection of the tissue. The tumor needs to be removed fully, with minimal damage to surrounding healthy tissue, vasculature, and nerves. Such precision can be enhanced by fusing the pre-op CT or MRI images, together with the tumor segmentation, with the intra-op video or ultrasound (US) images, so that the surgeon can view the surgical plan in real time and operate accordingly. The pre-op segmentation can be obtained with neural networks trained to segment tumors. The registration itself can be achieved with deep learning techniques and classic computer vision, and further improved using existing anatomical landmarks or wearable markers designated for image registration (a registration sketch follows below).
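
As an illustration of the registration step, the sketch below rigidly aligns a pre-op CT volume with an intra-op ultrasound sweep using SimpleITK and a mutual-information metric, then resamples the pre-op tumor mask into the intra-op frame for display. File names are hypothetical, and a real pipeline would typically add landmark-based initialization and deformable refinement.

```python
# Minimal sketch: rigid CT-to-US registration with SimpleITK, then mapping the
# pre-op tumor segmentation into the intra-op coordinate frame.
import SimpleITK as sitk

fixed = sitk.ReadImage("intraop_us.nii.gz", sitk.sitkFloat32)   # intra-op volume
moving = sitk.ReadImage("preop_ct.nii.gz", sitk.sitkFloat32)    # pre-op volume
tumor_seg = sitk.ReadImage("preop_tumor_seg.nii.gz")            # pre-op tumor mask

# Geometric initialization: align volume centers before optimization.
initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)  # multi-modal metric
reg.SetMetricSamplingStrategy(reg.RANDOM)
reg.SetMetricSamplingPercentage(0.2)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
reg.SetOptimizerScalesFromPhysicalShift()
reg.SetInitialTransform(initial, inPlace=False)

transform = reg.Execute(fixed, moving)

# Map the pre-op tumor mask onto the intra-op grid for on-screen guidance;
# nearest-neighbor interpolation preserves the label values.
tumor_in_us = sitk.Resample(tumor_seg, fixed, transform,
                            sitk.sitkNearestNeighbor, 0, tumor_seg.GetPixelID())
sitk.WriteImage(tumor_in_us, "tumor_in_us_frame.nii.gz")
```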

(iii) The solutions described above assist in visualizing the tumor throughout the procedure. However, accuracy still depends on the surgeon’s tool-handling capabilities. To neutralize this effect, robots can be introduced. Robotic-assisted surgeries (RAS) are becoming increasingly common; as expected, the robot’s “hand” is more stable than the human hand and can perform these surgeries with increased accuracy and precision. To exploit this accuracy, key points within the anatomy can be selected and the tool’s position relative to them calculated, providing real-time notifications of proximity to the tissue and warning against undesired resections (see the proximity sketch below). Tool tracking is a well-established method: the tool is segmented in the field of view, and prior knowledge of the camera and tool characteristics is used to accurately position the tool in space. The robot can use this information to maneuver the tool accurately while avoiding unnecessary incisions. The same can be achieved with electro-magnetic (EM) tracking, in which a designated EM sensor is attached to the tool and an external EM field is used to record the tool’s position continuously. Further developments in this field may also register the robot’s coordinate system with the patient’s, providing more accurate positioning relative to the anatomy. The surgeon can then approach the procedure with high accuracy and stability, ensuring minimal collateral damage and sparing healthy tissue.
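
The sketch below shows a minimal version of the proximity-warning logic described above: given a tracked tool-tip position and a set of points sampled on a protected structure (for example, a registered NVB model), it computes the nearest distance and raises a warning when a safety margin is crossed. The coordinate frame, point data and threshold are assumed for illustration, and the tool-tip pose is assumed to come from a tool-tracking or EM-tracking module.

```python
# Minimal sketch: proximity warning between a tracked tool tip and key
# anatomical points, all expressed in the same (registered) coordinate frame.
import numpy as np

SAFETY_MARGIN_MM = 3.0  # assumed safety threshold

def closest_anatomy_distance(tool_tip_mm: np.ndarray,
                             anatomy_points_mm: np.ndarray) -> float:
    """Return the distance (mm) from the tool tip to the nearest anatomy point."""
    return float(np.min(np.linalg.norm(anatomy_points_mm - tool_tip_mm, axis=1)))

def check_proximity(tool_tip_mm: np.ndarray,
                    anatomy_points_mm: np.ndarray) -> None:
    d = closest_anatomy_distance(tool_tip_mm, anatomy_points_mm)
    if d < SAFETY_MARGIN_MM:
        print(f"WARNING: tool {d:.1f} mm from protected tissue - hold resection")

# Example with stand-in data: points sampled on the registered NVB surface and
# a tool-tip position reported by the tracking module (both hypothetical).
nvb_points = np.random.rand(500, 3) * 40.0   # mm
tool_tip = np.array([20.0, 18.5, 21.0])      # mm
check_proximity(tool_tip, nvb_points)
```

In practice the anatomy points would come from the registered pre-op model rather than random data, and the check would run at the tracking frame rate.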

Combining these methodologies can aid tissue sparing during urologic procedures. A smaller port in an ideal position can be achieved without compromising the anatomical understanding of the surgical scene, the surgical target can be viewed accurately in real time, and the introduction of RAS reduces human error, ultimately leading to less damage to healthy tissue. These solutions are challenging to implement, and advanced knowledge in AI and computer vision is essential for developing them. RSIP Vision has vast experience in developing computer vision solutions and the right team for this mission. Contact us for a speedy development process and a faster time-to-market.


Main Field

Urology

Urology is one of the many medical fields being revolutionized by artificial intelligence. The prostate, bladder, urethra, kidneys and other key organs are already benefiting from impressive breakthroughs brought about by new technologies. AI now contributes to more accurate and faster detection, segmentation, classification and diagnosis of many urologic dysfunctions and diseases, including cancer, BPH and urolithiasis. Thanks to the latest generation of AI and computer vision techniques (deep learning in particular), AI provides precious support to the medical team in the operating room as well as during all stages of treatment, drastically improving the quality of therapy and reducing recovery time. You can find below some of RSIP Vision's projects and pioneering research in the field of AI for urology.



Related Content

Improved PCNL with Computer Vision

AI-Assisted Prostate Cancer Diagnosis

Intra-op Prostate Guidance by RSIP Vision

RSIP Neph Announces a Revolutionary Intra-op Solution for Partial Nephrectomy Surgeries

AI for Benign Prostatic Hyperplasia BPH

Stone tracking during kidney stone removal


