Computer Vision News - August 2023

Calibration Techniques for Node Classification

“GNNs are still quite a new topic, and especially for medical image data, it’s difficult to get good graphs because they rely heavily on the segmentations,” Iris tells us. “I focus on the circle of Willis, an anastomosis of blood vessels in the brain. They have a lot of anatomical variety among healthy people as well. The brain vessels form this kind of circle. Only 30% of the people in a healthy population will have this complete circle, and 70% will have some anatomical variance. Blood vessels can be absent, or certain vessels can be duplicated or underdeveloped. To get good graphs, you need good segmentations, especially of the smaller vessels. If they’re underdeveloped, it’s quite difficult.”

Iris’s more observational study focuses on the applicability of calibration techniques to graphs and finds that the methods are indeed effective. However, the segmentation challenge is ongoing for researchers. Current and future efforts, including a MICCAI challenge, are focused on segmenting the intracranial arteries and developing more accurate vessel segmentations and graphs moving forward.

“We used node classification for GNNs,” Iris explains. “We looked at the most vanilla GNN that there is at the moment, but we also focused on higher-order graph convolutional networks, because a limitation in these GNNs is that they focus on local regions. They only learn the embeddings of direct neighboring nodes of a target node. But they fail to capture global patterns in the data. The higher-order graph convolutional networks add information beyond direct neighborhoods, and show better performance in both discriminative power and calibration.”

Calibration is a crucial aspect often overlooked in the pursuit of high accuracy and discriminative power in classification tasks.
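The local-versus-global distinction Iris describes can be sketched in a few lines of NumPy. This is a minimal illustration under assumed conventions, not the implementation from the paper: a vanilla GCN layer propagates features only from 1-hop neighbors, while a higher-order layer (sketched here by mixing successive powers of the normalized adjacency matrix) lets information from 2-hop and 3-hop neighborhoods reach each node. All function names are hypothetical.

```python
import numpy as np

def normalize_adj(A):
    """Symmetrically normalize adjacency with self-loops:
    A_hat = D^{-1/2} (A + I) D^{-1/2}, the standard GCN propagation matrix."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_layer(A_norm, X, W):
    """Vanilla GCN layer: each node aggregates features of its
    direct (1-hop) neighbors only, then applies a ReLU."""
    return np.maximum(A_norm @ X @ W, 0.0)

def higher_order_layer(A_norm, X, Ws):
    """Higher-order layer (sketch): concatenate propagation over
    1-hop, 2-hop, ..., k-hop neighborhoods (powers of A_norm),
    so information beyond direct neighbors reaches each node."""
    outs, P = [], np.eye(A_norm.shape[0])
    for W in Ws:                      # k-th weight handles the k-hop term
        P = P @ A_norm                # P = A_norm ** k
        outs.append(np.maximum(P @ X @ W, 0.0))
    return np.concatenate(outs, axis=1)

# Toy 4-node path graph 0-1-2-3 with one-hot node features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_norm = normalize_adj(A)
X = np.eye(4)
rng = np.random.default_rng(0)
H1 = gcn_layer(A_norm, X, rng.standard_normal((4, 8)))
H2 = higher_order_layer(A_norm, X, [rng.standard_normal((4, 8)) for _ in range(3)])
print(H1.shape, H2.shape)  # (4, 8) (4, 24)
```

On the path graph, the vanilla layer gives node 0 no signal from node 3; in the higher-order layer, the `A_norm ** 3` term connects them.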
It becomes particularly important when the primary goal is not simply distinguishing between two classes but also producing reliable probability estimates. Good calibration means that a model is confident about accurate predictions, while also indicating low confidence when it is likely to be inaccurate. In contrast to most deep neural networks, which are often overconfident in their probability estimates, graph neural networks tend to be underconfident.
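Miscalibration of either kind, overconfidence or the underconfidence typical of GNNs, is commonly quantified with the Expected Calibration Error (ECE): predictions are binned by confidence, and the gap between accuracy and mean confidence is averaged across bins. The sketch below is a minimal NumPy version of this standard metric; it is not code from the study.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """ECE: bin predictions by confidence and average the
    |accuracy - confidence| gap, weighted by bin size.
    A well-calibrated model has confidence ~ accuracy, so ECE ~ 0."""
    conf = probs.max(axis=1)            # confidence in the predicted class
    correct = (probs.argmax(axis=1) == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece

# Perfectly calibrated toy case: one-hot predictions, all correct -> ECE 0.
ece_good = expected_calibration_error(np.eye(3), np.array([0, 1, 2]))

# Underconfident toy case (as with many GNNs): always right,
# but only 60% confident -> ECE of |1.0 - 0.6| = 0.4.
probs_under = np.tile([0.6, 0.4], (10, 1))
ece_under = expected_calibration_error(probs_under, np.zeros(10, dtype=int))
print(ece_good, ece_under)  # 0.0 0.4
```

Calibration methods such as temperature scaling reduce this gap by rescaling the logits; for an underconfident model, the fitted temperature sharpens rather than softens the probabilities.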
