In this paper, we present a multi-class semantic segmentation method for laparoscopic images that uses semantically similar groups. Accurate semantic segmentation is a key problem in computer-assisted surgery. Common segmentation models do not explicitly learn similarities between classes. We propose a model that, in addition to learning to segment an image into classes, also learns to segment it into human-defined semantically similar groups. We modify the LinkNet34 architecture by adding a second decoder with the auxiliary task of segmenting the image into these groups; the feature maps of this second decoder are merged into the final decoder. We validate our method against the base LinkNet34 model and a larger LinkNet50. Our proposed modification improves performance in both mean Dice (+1.5% on average) and mean Intersection over Union (+2.8% on average) on two laparoscopic datasets.
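The abstract describes the architecture only at a high level. The sketch below shows one possible way to realize the idea of a shared encoder with an auxiliary group decoder feeding a main class decoder, written in PyTorch. The class name DualDecoderLinkNet, the concatenation-based fusion of auxiliary features into the main decoder, and all layer sizes and channel counts are assumptions made for illustration; they are not taken from the paper.

```python
# Minimal sketch of a dual-decoder LinkNet34-style network, assuming a
# ResNet-34 encoder, LinkNet-style decoder blocks, and fusion by channel
# concatenation. All design details here are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision.models import resnet34


class DecoderBlock(nn.Module):
    """LinkNet-style decoder block: 1x1 reduce, transposed-conv upsample, 1x1 expand."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        mid = in_ch // 4
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, mid, kernel_size=1),
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(mid, mid, kernel_size=3, stride=2,
                               padding=1, output_padding=1),
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, out_ch, kernel_size=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)


class DualDecoderLinkNet(nn.Module):
    """Shared ResNet-34 encoder, an auxiliary decoder that predicts the
    human-defined semantic groups, and a main decoder that predicts the
    full class set while receiving the auxiliary decoder's feature maps."""
    def __init__(self, num_classes, num_groups):
        super().__init__()
        backbone = resnet34(weights=None)
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1,
                                  backbone.relu, backbone.maxpool)
        self.enc1, self.enc2 = backbone.layer1, backbone.layer2  # 64, 128 ch
        self.enc3, self.enc4 = backbone.layer3, backbone.layer4  # 256, 512 ch

        # Auxiliary decoder: segments the image into semantically similar groups.
        self.aux_dec4 = DecoderBlock(512, 256)
        self.aux_dec3 = DecoderBlock(256, 128)
        self.aux_dec2 = DecoderBlock(128, 64)
        self.aux_head = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=3, stride=2,
                               padding=1, output_padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, num_groups, kernel_size=2, stride=2),
        )

        # Main decoder: each stage consumes its own features concatenated
        # with the auxiliary features at the same resolution.
        self.dec4 = DecoderBlock(512, 256)
        self.dec3 = DecoderBlock(256 + 256, 128)
        self.dec2 = DecoderBlock(128 + 128, 64)
        self.head = nn.Sequential(
            nn.ConvTranspose2d(64 + 64, 32, kernel_size=3, stride=2,
                               padding=1, output_padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, num_classes, kernel_size=2, stride=2),
        )

    def forward(self, x):
        x = self.stem(x)            # H/4,  64 ch
        e1 = self.enc1(x)           # H/4,  64 ch
        e2 = self.enc2(e1)          # H/8,  128 ch
        e3 = self.enc3(e2)          # H/16, 256 ch
        e4 = self.enc4(e3)          # H/32, 512 ch

        # Auxiliary path (groups), with LinkNet-style additive encoder skips.
        a4 = self.aux_dec4(e4) + e3
        a3 = self.aux_dec3(a4) + e2
        a2 = self.aux_dec2(a3) + e1
        group_logits = self.aux_head(a2)

        # Main path (classes), fusing the auxiliary feature maps at each stage.
        d4 = self.dec4(e4) + e3
        d3 = self.dec3(torch.cat([d4, a4], dim=1)) + e2
        d2 = self.dec2(torch.cat([d3, a3], dim=1)) + e1
        class_logits = self.head(torch.cat([d2, a2], dim=1))
        return class_logits, group_logits


if __name__ == "__main__":
    model = DualDecoderLinkNet(num_classes=8, num_groups=3)
    images = torch.randn(2, 3, 256, 256)
    class_logits, group_logits = model(images)
    print(class_logits.shape, group_logits.shape)  # (2, 8, 256, 256) (2, 3, 256, 256)
```

In a setup like this, the two outputs would typically be trained jointly, for example with a class-level cross-entropy loss on class_logits plus a weighted auxiliary cross-entropy loss on group_logits against the group labels; the weighting and loss choice are likewise assumptions, not details from the paper.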
Leo Uramoto, Yuichiro Hayashi, Masahiro Oda, Takayuki Kitasaka, Kazunari Misawa, Kensaku Mori, "A semantic segmentation method for laparoscopic images using semantically similar groups," Proc. SPIE 12466, Medical Imaging 2023: Image-Guided Procedures, Robotic Interventions, and Modeling, 1246605 (3 April 2023); https://doi.org/10.1117/12.2654636