Success Rating and Dynamic Feedback
Minimally invasive surgeries (MIS), and robotic-assisted surgeries (RAS) in particular, generally have improved outcomes compared with open surgery. However, they are not free of adverse events (AEs), which can result from human error or equipment malfunction. Early detection of an adverse event is essential for treatment and can significantly reduce mortality and additional hospitalization time.
RAS procedures are normally video guided, whether using standard endoscopic cameras, depth cameras, or a combination of the two. This continuous video feed can be used to detect adverse events.
The state of the art in video analysis relies on deep learning methods trained on annotated datasets. A Transformer model, built on an attention mechanism, can be trained to detect the specific time points at which an adverse event occurs and to classify its nature. The outputs of such a network include a warning signal and the image frame showing the specific event.
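To make the idea concrete, the core of any Transformer is scaled dot-product attention, which lets each video frame's feature vector weigh information from every other frame in the clip. The sketch below is a minimal, generic illustration of that operation in NumPy, not the actual model described here; the frame features and dimensions are hypothetical stand-ins for the output of a real video backbone.

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Core Transformer operation: each frame's query attends to the keys
    of all frames, producing a temporally contextualized feature per frame."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                 # (T, T) frame-to-frame affinities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                            # (T, d) attended features

# Toy example: 5 frames with 8-dim features from a hypothetical CNN backbone.
rng = np.random.default_rng(0)
frames = rng.normal(size=(5, 8))
attended = scaled_dot_product_attention(frames, frames, frames)
print(attended.shape)  # (5, 8)
```

A real AE detector would stack several such attention layers and add a classification head over the per-frame features to mark the time points of an event.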
Real Time Usage Examples
The most common AE occurs when an organ in proximity to the surgical site is injured. Such an injury can be overlooked during the procedure, but an AI model can detect when the surgical tools injure a neighboring organ. This raises a “red flag” in the system and sends a notification to the surgeon, who can repair the injury before further damage is inflicted.
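In practice, raising a red flag from per-frame model outputs usually involves some debouncing so a single noisy frame does not trigger an alert. The sketch below is one plausible way to do this; the function name, probabilities, and thresholds are illustrative assumptions, not part of the described system.

```python
def injury_alert(frame_probs, threshold=0.8, min_consecutive=3):
    """Raise a 'red flag' only after the injury probability stays above
    `threshold` for `min_consecutive` frames, suppressing single-frame noise.
    Returns the index of the first alert frame, or None if no alert fires."""
    run = 0
    for i, p in enumerate(frame_probs):
        run = run + 1 if p >= threshold else 0
        if run >= min_consecutive:
            return i
    return None

# A transient spike (frame 2) is ignored; sustained detections trigger the alert.
probs = [0.1, 0.2, 0.95, 0.1, 0.9, 0.92, 0.97, 0.3]
print(injury_alert(probs))  # 6
```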
Retention of a foreign body is another common AE. The system can be trained to detect the entrance and exit of each element through the trocar and into the peritoneum during the procedure, and to notify the user, prior to closure of the surgical site, when the exit of a specific element is missing from the video. This can significantly reduce the collateral damage of such an AE (infection, inflammation, prolonged hospitalization, etc.).
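The bookkeeping behind this check can be sketched simply: count entries and exits per item and flag any item whose entries outnumber its exits before closure. The event tuples and item names below are hypothetical outputs of a per-frame object detector, assumed for illustration only.

```python
from collections import Counter

def retained_items(events):
    """Track detected entry/exit events through the trocar and report any
    item whose entries outnumber its exits before closure of the site.
    `events` is a sequence of (item_id, 'in' | 'out') tuples."""
    balance = Counter()
    for item, direction in events:
        balance[item] += 1 if direction == "in" else -1
    return sorted(item for item, count in balance.items() if count > 0)

events = [
    ("gauze_1", "in"), ("needle_1", "in"),
    ("needle_1", "out"),            # gauze_1 never exits
]
print(retained_items(events))  # ['gauze_1']
```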
AEs such as bleeding or vascular damage can be analyzed differently. The surgeon easily identifies when bleeding occurs; the challenge is detecting where the bleeding originates. The system can be trained to detect the area and the moment the damage occurred and mark it in the image. The surgeon can then learn the precise location of the bleeding and begin treatment. This feature may reduce operation time and minimize blood loss.
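Marking the origin of bleeding can be reduced to finding the first frame in which a bleeding segmentation mask is non-empty and taking the centroid of that mask. The sketch below assumes per-frame binary masks from a hypothetical segmentation model; it is an illustration of the localization step, not the described system's implementation.

```python
import numpy as np

def first_bleeding_site(masks):
    """Given per-frame binary bleeding masks, return the frame index and
    (row, col) centroid of the first frame in which bleeding appears,
    so the site can be marked in the image for the surgeon."""
    for t, mask in enumerate(masks):
        ys, xs = np.nonzero(mask)
        if ys.size:
            return t, (float(ys.mean()), float(xs.mean()))
    return None

# Toy 4x4 frames: bleeding first appears in frame 1 at two pixels.
masks = [np.zeros((4, 4), dtype=int) for _ in range(3)]
masks[1][2, 1] = 1
masks[1][2, 3] = 1
print(first_bleeding_site(masks))  # (1, (2.0, 2.0))
```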
Misperception of the anatomy can ultimately lead to AEs. Separate AI mechanisms address anatomic labeling and are usually sufficient to overcome this problem. However, if the anatomy is extremely challenging or the labeling feature is unavailable, the AE detector can assist. When the anatomy is falsely recognized, another AE is bound to follow, such as excess bleeding or injury to neighboring organs. The system can detect these AEs, issue warnings, and alert the surgeon that the anatomy may have been misidentified and that they should reassess their position.
We have mentioned the most common AEs, but the list extends further. Almost any AE can be addressed with this system, given adequate training data, and using it can significantly reduce AE-related expenses and mortality.
We have also described the uses of this system in real-time or near real-time environments. Another use is post-operative analysis and training. Accurately detecting AEs can shorten procedural analysis and provide examples for trainees: rather than reviewing surgical videos for hours trying to point out mistakes, the system can automatically detect and extract the “significant” frames, thereby improving the training session.
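Extracting the "significant" frames amounts to merging consecutive detector flags into review clips with a little surrounding context. The following is a minimal sketch of that grouping step under assumed inputs (a boolean flag per frame); the padding value and function name are illustrative.

```python
def significant_clips(flags, pad=2):
    """Merge consecutive flagged frame indices (AE detections) into
    (start, end) clips, padded by `pad` frames of context, for post-op
    review and trainee presentation."""
    clips = []
    for i, flagged in enumerate(flags):
        if not flagged:
            continue
        start, end = max(0, i - pad), min(len(flags) - 1, i + pad)
        if clips and start <= clips[-1][1] + 1:
            clips[-1] = (clips[-1][0], max(clips[-1][1], end))  # extend last clip
        else:
            clips.append((start, end))
    return clips

# Frames 4 and 5 flagged together; frame 12 flagged separately.
flags = [False] * 15
flags[4] = flags[5] = flags[12] = True
print(significant_clips(flags))  # [(2, 7), (10, 14)]
```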
Adding Detection Technology to Your Module
RSIP Vision has a diverse team of engineers and clinicians specializing in introducing AI to RAS. We can implement this Transformer (or an LSTM) to suit the exact needs of your medical device, efficiently and quickly. Adding this feature can significantly improve patient outcomes and increase the value of your product.