Purpose: Among the conferences comprising the Medical Imaging Symposium is the MI104 conference, currently titled Image-Guided Procedures, Robotic Interventions, and Modeling, although its name has evolved through at least nine iterations over the last 30 years. Here, we discuss the important role that this forum has played for researchers in the field during this time. Approach: The origins of the conference are traced from its roots in Image Capture and Display in the late 1980s, and some of the major themes for which the conference and its proceedings have provided a valuable forum are highlighted. Results: These major themes include image display/visualization, surgical tracking/navigation, surgical robotics, interventional imaging, image registration, and modeling. Exceptional work from the conference is highlighted by summarizing keynote lectures, the top 50 most downloaded proceedings papers over the last 30 years, the most downloaded paper each year, and the papers earning student paper and young scientist awards. Conclusions: Looking forward and considering the burgeoning technologies, algorithms, and markets related to image-guided and robot-assisted interventions, we anticipate growth and ever-increasing quality of the conference as well as increased interaction with sister conferences within the symposium.
1. Introduction

In its first 50 years, the SPIE Medical Imaging Symposium has provided an outstanding forum for scientific communication from researchers in academia and industry, from students and seasoned luminaries, spanning a tremendous breadth and depth of medical imaging research. The MI104 Image-Guided Procedures, Robotic Interventions, and Modeling conference traces its roots to 1989 and has provided a vibrant forum that has become an important feature on the scientific landscape in North America for researchers with interest in image-guided interventions, surgical robotics, and a variety of clinical applications ranging from surgery and interventional radiology to radiation therapy. In this paper, we briefly trace the history of the conference and highlight major scientific themes for which it has served as a venue for many scientists to present their work. These include, but are not limited to, topics on interventional imaging [all modalities, including endoscopy, other optical imaging technologies, radiography/fluoroscopy, ultrasound, computed tomography (CT), magnetic resonance (MR), and nuclear medicine imaging]; landmark-based, feature-/surface-based, and image-based registration for interventional guidance; surgical robotics; and image display/visualization. Some of the noteworthy highlights are also summarized, including top-downloaded papers from the conference proceedings and awards earned by students and early-career scientists.

2. History and Evolution of MI104: “The Image-Guided Procedures Conference”

The MI104 conference, now entitled Image-Guided Procedures, Robotic Interventions, and Modeling, traces its roots back more than 30 years to topics of image capture and display.
As shown in Table 1, the name of the conference has evolved over time, reflecting emerging themes ranging from image capture, display, and visualization in its first 10 years to themes of image-guided procedures (starting in 2001), modeling (in 2008), and robotic interventions (in 2012).

Table 1. The title of the conference has changed over the years, reflecting an evolution in major themes, from “image capture and display” in the late 1980s to “image-guided procedures” representing a consistent thread since the early 2000s, with the addition of “modeling” in 2008 and “robotic interventions” in 2012.
Since its first stand-alone edition in 1989, the MI104 Image-Guided Procedures conference has grown to become the third- or fourth-largest conference under the SPIE Medical Imaging Symposium umbrella, attracting as many as 150 submissions and close to 400 attendees each year, many of whom are students and early-career scientists, some presenting their research at an international forum for the first time. Well integrated with sister conferences throughout the symposium, the MI104 conference has become the premier forum in North America for presentation of cutting-edge research in image-guided procedures. In addition to becoming one of the top-attended conferences, since the mid-to-late 2000s, the Image-Guided Procedures conference has hosted joint sessions with several other conferences in the SPIE Medical Imaging Symposium. A joint session with Ultrasound Imaging and Tomography has been a recurring feature for more than a decade, highlighting contributions on ultrasound-guided interventions. Beginning in 2021 were joint sessions with the Imaging Informatics conference focused on research related to interventional workflow optimization and the use of phantoms for simulation and validation. New in 2022 were joint offerings with the Physics of Medical Imaging conference featuring research in novel imaging technologies for image-guided interventions, including CT and cone-beam CT (CBCT).

3. Major Themes

Over the last 30+ years, the areas of major interest presented at the conference have evolved considerably, with numerous major themes evident in research on image display/visualization, surgical tracking/navigation, surgical robotics, interventional imaging, image registration, and modeling. Some highlights among these major themes are noted in the sections below, also reflected by the topics of keynote lectures and workshops summarized in Table 2.

Table 2. Keynote lectures and workshops associated with the MI104 conference since 2006.
Accounts prior to 2006 were not available from the conference record, and workshop contributors (marked “N/A”) were not reliably recorded in the available conference programs. See the Acknowledgments section for a partial recognition of contributors.
Major themes are also clearly evident in a review of the conference proceedings. Figure 1 shows such themes in the form of a word cloud drawn from the titles of the 50 most downloaded papers, and Figs. 2–7 highlight some of the figures drawn from work therein. The top 50 most downloaded proceedings papers are given in Table 3, and Table 4 shows the most downloaded proceedings paper each year. The excellence of research presented at the meeting, especially by students and early-career scientists, is evident in Table 5, which lists award-winning papers recognized in this conference over the years.

Table 3. Top 50 most frequently downloaded papers from the MI104 conference proceedings.
Table 4. Top downloaded paper from the MI104 conference proceedings each year.
Table 5. Notable conference (and all-conference) awards earned by students and early-career scientists since 2014.
3.1. Image Display and Visualization

From the onset of the conference in 1989 through the end of the 1990s, image capture, formatting, visualization, and display were the primary themes of the conference. Several notable works include, but are not limited to, the development of image data compression techniques,12 a first high-performance floating-point image computing workstation for medical imaging,13 presentation of medical images on cathode ray tube (CRT) displays,14,15 volume rendering of medical images using three-dimensional (3D) texture mapping,16 the use of OpenGL in medical imaging,17 and the characterization of high-resolution liquid crystal displays (LCDs) for medical imaging.18,19 In concert with image capture and display, several platforms and toolkits were developed to assist with the processing, fusion, and integrated visualization of multi-modality imaging data, such as the 3D VIEWNIX platform,20 the medical imaging interaction toolkit framework,21,22 visualization toolkit-Insight toolkit (VTK-ITK) integrated visual programming,1 and 3D Slicer.23 Numerous techniques have leveraged such toolkits for integration of 3D data derived from multi-sensor imagery and anatomical atlases using parallel processing, probabilistic quantification, segmentation, and registration for multi-modality medical image fusion.24,25 Throughout the formative years of the conference, advanced image visualization remained an important theme.
The development of technologies and techniques to enable multi-modal image manipulation, visualization, and display led to the advent of virtual, augmented, and mixed reality applications in medical imaging, with several notable examples being holographic stereograms,26 real-time auto-stereoscopic visualization,27 the use of stereo and kinetic depth cues for augmented reality of brain imaging,28 as well as the use of solid models of patient-specific anatomy generated from computed tomography/magnetic resonance imaging (CT/MRI) images using laser sintering and laminated object manufacturing techniques.29

3.2. Surgical Tracking/Navigation

By the turn of the millennium, a spectrum of infrared, videometric, and electromagnetic surgical tracking systems had emerged and found growing application in surgical navigation, primarily in intracranial neurosurgery and spine surgery. Among such systems were the Polaris Spectra (NDI, Mississauga, Ontario, Canada) infrared tracker, the MicronTracker (Claron, Toronto, Ontario, Canada) videometric tracker, and the Aurora (NDI) electromagnetic tracker.30 The Spectra became a fairly prevalent component of clinical navigation systems, including the StealthStation (Medtronic, Minneapolis, Minnesota, United States) and VectorVision (BrainLab, Munich, Germany) systems. The MicronTracker presented interesting possibilities in producing one’s own marker configurations (easily printed checkerboard patterns) and in fusing registered image or planning information with the video scene. The Aurora eliminated line-of-sight constraints and was amenable to tracking flexible probes or endoscopes inside the body. Later embodiments included the Polaris Vicra (NDI), suitable to lower-cost and laboratory setups; the fusionTrack (Atracsys, Puidoux, Switzerland), for increased geometric precision (e.g., in temporal bone surgery); and even systems originally developed for consumer gaming, such as the Kinect (Microsoft, Seattle, Washington, United States).
Early implementations of such tracking/navigation systems employed point-based registration via colocalization of corresponding “fiducial” points in the tracker (world) and 3D image coordinate frames. The analytical basis for understanding the resulting geometric error in the navigation system was described by Fitzpatrick and West2,31–34 in terms of the fiducial localization error (FLE), fiducial registration error (FRE), and target registration error (TRE), including the effect of the number and geometric arrangement of fiducial markers. The SPIE Image-Guided Procedures conference was an important forum for the development and communication of this quantitative framework, which is now commonly invoked throughout the scientific literature in the development and application of new surgical navigation systems.

3.3. Surgical Robotics

Given the extensive focus of the Image-Guided Procedures conference on technology and techniques for minimally invasive intervention, surgical robotics and robot-assisted interventions became a leading theme. Several pioneering works appeared in the proceedings, including the design of a decoupled MRI-compatible force sensor using fiber Bragg grating sensors for robot-assisted prostate interventions,5 a flexure-based wrist for needle-sized surgical robots,35 surface acquisition methods for intraoperative re-registration toward enabling image-guided partial nephrectomy with the da Vinci robot,36 automatic trajectory and instrument planning for robot-assisted spine surgery,37 a tendon-actuated approach for robot-enabled needle steering in lung biopsy,38 and the development of concentric agonist-antagonist robots for minimally invasive surgeries,39 to name a few.

3.4. Interventional Imaging

The MI104 conference has provided a valuable forum for development and clinical application of new interventional imaging technologies across the full spectrum of modalities.
Among the most prevalent of these is endoscopy, including laparoscopic, endonasal, thoracic, arthroscopic, bronchoscopic, and neuroendoscopic techniques. Especially in relation to computer vision methods for image processing, feature recognition, 3D reconstruction, and registration to other imaging and planning data, advanced methods for endoscopic video guidance have formed an important means to enhance visualization of the interventional scene.40 Such work also aims to extend endoscopic capability by integration with robotic assistance, including the da Vinci stereoscopic system36 as well as a number of emerging robotic systems that could provide a useful platform for controlled manipulation of the endoscope. Similarly prevalent in the Image-Guided Procedures conference is research that expands the use of ultrasound for interventional imaging. Moreover, the conference has held several joint symposia and workshops with the ultrasound conference in recent years. Integration of ultrasound with surgical tracking systems not only enables inter-modality registration and guidance41 but also extends the utility of ultrasound in surgery of the liver, spine, or brain. Systems for transrectal ultrasound have been the subject of considerable research, including a novel robotic assistance system for prostate biopsy or brachytherapy guided by MRI and transrectal ultrasound.42–44 As detailed elsewhere in this special issue, the Physics of Medical Imaging conference was home to the development and reporting of new medical imaging technologies, including flat-panel detectors for x-ray fluoroscopy and CBCT.
After the turn of the millennium, such technology began to find prevalent use in image-guided radiation therapy, image-guided surgery, and interventional radiology, and the Image-Guided Procedures conference provided an important forum for development, integration, and application of such systems, including first clinical applications in areas such as otolaryngology–head and neck surgery45 and registration of intraoperative imaging with preoperative CT and MRI.46 Among the exciting research programs in image-guided interventions over the last 20 years was the AMIGO operating room47 constructed at the Brigham & Women’s Hospital (Boston, Massachusetts, United States) as a clinical research development and proving ground for the use of multi-modality image guidance. The AMIGO comprised surgical navigation, endoscopy, ultrasound, fluoroscopy, CT, and MR imaging (and later CT-positron emission tomography) within a single operating room (OR) to investigate new clinical applications and the potential advantages of increased precision afforded by such technologies. The research environment facilitated numerous projects reported at the conference and helped to refine the vision for the OR of the Future.

3.5. Image Registration: Rigid, Deformable, and Inter-Modality Registration Techniques

Just as image registration is integral to the practice of image-guided interventions, so has it been among the outstanding science presented at the conference.
Point-based registration approaches (and the analytical models describing registration error) are mentioned above in relation to surgical tracking/navigation.2,3,31–34 Numerous methods and applications of image-based 3D-2D registration (alternatively 2D-3D registration, making no claim as to the order or which constitutes the moving or fixed image) have been reported at the MI104 conference, with the term broadly applied to video-to-volume registration (e.g., endoscopy to CT), slice-to-volume registration (e.g., ultrasound to MRI), and projection-to-volume registration (e.g., fluoroscopy to CT).48–57 Such work includes novel methods and implementations for 3D-2D registration with applications ranging from needle interventions to catheter guidance and orthopedic surgery. Prominent among these are methods for registration of 3D CT (or CBCT) to intraoperative 2D fluoroscopy, with many groups reporting research on novel objective functions, motion models (including piecewise rigid registration), and optimization methods.55 Such work has helped CT-to-fluoroscopy registration emerge within the modern standard of surgical image guidance. Ongoing research seeks to accurately register MRI with fluoroscopy and improve robustness and runtime via deep-learning approaches. Similarly, 3D-3D image registration, including inter- and intra-modality images and rigid and nonrigid motion models, presents a major area of research in image-guided interventions, with healthy overlap and shared interest with the Image Processing conference.8,9,58–66 Research in 3D-3D image registration has focused primarily on challenges associated with inter-modality registration (CT, MRI, and ultrasound) and nonrigid registration models. 
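As a concrete illustration of the point-based framework discussed above, the sketch below implements the standard SVD-based least-squares rigid registration together with the well-known closed-form approximation for expected TRE as a function of fiducial configuration (the Fitzpatrick-West model referenced above). This is a minimal sketch only: the fiducial coordinates, noise level, and function names are illustrative and do not correspond to any particular system reported at the conference.

```python
import numpy as np

def rigid_register(moving, fixed):
    """Least-squares rigid registration (SVD solution), returning R, t
    such that fixed ~= moving @ R.T + t."""
    cm, cf = moving.mean(axis=0), fixed.mean(axis=0)
    H = (moving - cm).T @ (fixed - cf)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # reflection guard: enforce det(R) = +1
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cf - R @ cm
    return R, t

def fre_rms(moving, fixed, R, t):
    """Root-mean-square fiducial registration error after alignment."""
    res = fixed - (moving @ R.T + t)
    return np.sqrt((res ** 2).sum(axis=1).mean())

def tre_predicted(fiducials, target, fle_rms):
    """Closed-form expected TRE at a target point (Fitzpatrick-West model):
    TRE^2(r) = FLE^2/N * (1 + (1/3) * sum_k d_k^2 / f_k^2),
    where d_k and f_k are the target distance and RMS fiducial distance
    from principal axis k of the fiducial configuration."""
    c = fiducials.mean(axis=0)
    _, _, Vt = np.linalg.svd(fiducials - c)            # rows = principal axes
    p = (target - c) @ Vt.T                            # target, principal-axis coords
    q2 = (((fiducials - c) @ Vt.T) ** 2).mean(axis=0)  # mean-square spread per axis
    # squared distance from axis k = sum of squared coords along the other two axes
    d2 = np.array([p[1]**2 + p[2]**2, p[0]**2 + p[2]**2, p[0]**2 + p[1]**2])
    f2 = np.array([q2[1] + q2[2], q2[0] + q2[2], q2[0] + q2[1]])
    n = len(fiducials)
    return np.sqrt(fle_rms**2 / n * (1.0 + (d2 / f2).sum() / 3.0))

# Demo: four fiducials, a known rigid motion, and noisy localization.
rng = np.random.default_rng(0)
fids = np.array([[0., 0., 0.], [100., 0., 0.], [0., 100., 0.], [0., 0., 100.]])
theta = np.deg2rad(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.],
                   [np.sin(theta),  np.cos(theta), 0.],
                   [0., 0., 1.]])
t_true = np.array([5., -3., 12.])
fixed = fids @ R_true.T + t_true + rng.normal(0, 0.5, fids.shape)  # 0.5 mm per-axis noise

R, t = rigid_register(fids, fixed)
print("FRE (mm):", fre_rms(fids, fixed, R, t))
print("Predicted TRE at centroid (mm):", tre_predicted(fids, fids.mean(axis=0), 0.5))
print("Predicted TRE 150 mm away (mm):", tre_predicted(fids, np.array([150., 150., 150.]), 0.5))
```

Note how the predicted TRE grows for targets far from the fiducial centroid, reflecting the dependence on the number and geometric arrangement of markers discussed above.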
Methods to handle nonlinearly related image intensities in inter-modality registration primarily focus on novel objective functions, e.g., the modality independent neighborhood descriptor (MIND), and more recently, learned relationships between inter-modality image appearance via convolutional neural networks (CNNs) and generative adversarial networks. Research employing nonrigid motion models, e.g., B-spline, Demons, etc., has sought to bring such capability to applications in image-guided surgery, especially in the context of highly deformable tissues, such as the brain, lungs, and liver. Here again, deep-learning architectures represent an emerging theme that extends previous research based on physics-based, diffeomorphic motion models.

3.6. Modeling for Image-Guided Interventions

In concert with the advances in image computing, manipulation, visualization, and display in the effort to support image-guided interventions, modeling became an integral component of pre-operative treatment planning. One such example is the first assessment of the display accuracy and clinical utility of virtual and solid models of patient anatomy generated from CT/MRI imaging data using rapid prototyping techniques,29 as well as the use of constitutive modeling for the development of a brain phantom.67 Several modeling tools have been used in conjunction with image processing techniques toward improving segmentations, such as a statistical multi-vertebrae shape and pose model for segmentation of CT images,68 or registration for applications such as brain shift estimation and correction, e.g., enhancement of subsurface brain shift model accuracy.69 Although at first modeling methods were solely focused on the generation of faithful geometric representations of patient-specific anatomy from medical images, modeling soon evolved to encompass the integration of functional data (i.e., electrophysiology) and its mapping onto image-derived patient-specific morphology.69 Furthermore, several theoretical modeling approaches
have been used to estimate organ motion when such motion could not be easily measured, such as modeling liver motion and deformation during the respiratory cycle using intensity-based free-form registration of gated MR images,70 or to estimate an organ-specific response to therapy71,72 as a means to predict and optimize treatment outcome. Similarly, other modeling applications include automated detection of specific workflow stages, such as recognition of risk situations based on endoscopic instrument tracking and knowledge-based situation modeling,73 or specific feature detection, e.g., mitotic cell recognition using hidden Markov models.74

4. Notable Papers and Awards

The MI104 conference proceedings have provided a valuable forum for the publication of groundbreaking work, documenting content presented in oral and poster presentations, often including late-breaking results appearing only in the SPIE Proceedings or preliminary to eventual peer-reviewed journal publications. The conference also formed the basis for special sections in the Journal of Medical Imaging on “Image Guidance Technology Platforms” in 201875 and “Interventional Data Science” in 2020.76 The top 50 most downloaded papers from the MI104 conference proceedings are summarized in Table 3, with a relatively recent (2018) paper on deep-learning-based image corrections earning the top spot. Table 4 shows the top downloaded paper each year, with keywords from the titles of these papers pictorially shown in Fig. 1. These papers also demonstrate the importance of the meeting as a forum for student researchers to present their work, with a majority of the papers noted in Tables 3 and 4 having a graduate student as first author.
In recent years, the conference program committee has recognized outstanding papers by early-career scientists via the Young Scientist Award (sponsored by Siemens Healthineers, Princeton, New Jersey, United States), the Best Student Paper Award (sponsored by Intuitive Surgical, Sunnyvale, California, United States), and Best Poster Awards (sponsored by NDI Northern Digital Inc., Waterloo, Ontario, Canada). Student papers are also eligible for the symposium-wide Robert F. Wagner (and previously Michael B. Merickel) Best Student Paper Award. Table 5 summarizes such recognitions earned by papers in the Image-Guided Procedures, Robotic Interventions, and Modeling conference since 2014, when reliable records regarding awards were first available.

5. Conclusions and Outlook: An Important Forum for Advancing Interventional Medicine

As SPIE celebrates the 50th anniversary of the Medical Imaging Symposium, we also celebrate nearly 35 years of the MI104 conference, growing from its roots in the conference on Image Capture and Display to what is now the conference on Image-Guided Procedures, Robotic Interventions, and Modeling. The increasing prevalence of minimally invasive interventional radiology and surgical approaches over the last 20 years has been driven by the need for safer, more precise, and more effective therapies, and the emergence of such therapies has been enabled in large part by technologies that were featured at this conference for the first time during their development. MI104 has provided a valuable forum and ongoing dialog regarding research and translation of technologies for surgical navigation, advanced visualization, intraoperative imaging, robotic assistance, and modeling of tissues, devices, and therapeutic response. Such technologies have been integral to advances in patient care, and their continued adoption will continue to require close partnerships among clinicians and engineers in both academia and industry.
The years ahead are sure to bring further technological advances, studies demonstrating benefits in patient outcomes, and increased attention to cost and value-based care. The MI104 conference on Image-Guided Procedures, Robotic Interventions, and Modeling will continue to provide a valuable forum for research that enables and expands the widespread use of minimally invasive interventions, including, but not limited to, more accurate surgical target localization, precise and accurate registration of multiple data sources and systems, and novel advances in the surgical armamentarium. New paradigms for multi-modality imaging accompanied by intuitive and workflow-compatible visualization will surely advance, and new devices and tools that minimize technology footprint in the interventional suite to facilitate clinical adoption and mitigate cost and resistance to change will be equally important. The continuing theme of open science and open-source data and computational tools is anticipated to grow to facilitate even broader engagement and participation in such advances throughout the scientific community. Numerous additional areas of major challenge loom on the horizon. First are the challenges presented by clinical needs and the engineering of new technologies to meet those needs. These challenges, brought to light by the informed insight of clinical collaboration, have been and will continue to be a driving force for cutting-edge research presented at the meeting. Second are challenges of a logistical and/or financial nature, recognizing the need for improved workflow, integration, and interoperability among technologies entering the circle of care as well as the need to recognize and mitigate cost and to demonstrate clear evidence of improved quality, outcomes, and value. An important emerging theme is the development of frugal image-guided surgical and interventional systems that are suitable to resource-constrained healthcare centers and remote clinical centers.
Such challenges loom in developed and developing countries alike, and there is tremendous opportunity to advance healthcare in such contexts. The community of researchers who regard MI104 as a home for their work in image-guided procedures, robotic assistance, modeling, and data-driven procedural guidance is well positioned to participate in this trend. Third are challenges of a social-scientific nature in the rapidly changing landscape and format of scientific conferences following the pandemic of 2020. At the time of writing, many of us remember SPIE Medical Imaging 2020 as the last meeting attended in person prior to the pandemic. The Medical Imaging 2021 symposium was held entirely online, and SPIE Medical Imaging 2022 marked a return to an in-person symposium. Given the acceleration and evolution in modes of scientific communication in recent years, we anticipate an ongoing evolution in meeting format that will synergize the efficiencies of digital interaction with the vibrancy of personal interaction that has marked the last 50 years of the symposium.

Disclosures

JHS discloses sponsored research and licensing agreements with Siemens Healthineers (Forchheim, Germany), Carestream Health (Rochester, USA), Medtronic (Minneapolis, USA), and Elekta Oncology (Stockholm, Sweden).

Acknowledgments

People Behind the MI104 Conference: The scientists participating in the conference represent the cutting edge of what has become one of the premier gatherings for scientific research in medical imaging. No review of its success would be complete, however, without acknowledging the organizers at SPIE who have contributed their time and expertise to its growth and ongoing success.
Among these are Sandy Hoelterhoff, Robbine Waters, Lillian Dickinson, Kirsten Anderson, and Matt Novak (Conference Program Coordinators); Diane Cline and Marilyn Gorsuch (SPIE Event Managers); Maryellen Giger (SPIE Past President, SPIE Medical Imaging Past Symposium Chair, and SPIE Journal of Medical Imaging Editor-in-Chief); as well as the numerous Symposium Chairs and MI104 Conference Chairs, whose contributions behind the scenes have been paramount in making the annual conference a success and its proceedings a valuable contribution to the scientific literature. While the available conference records did not reliably capture the names of workshop contributors summarized in Table 2, within the authors’ memory are the following individuals, gratefully acknowledged for their valuable contributions (with apologies for unintended omissions): Dr. Michael Ackermann, Dr. Kevin Cleary, Dr. Christian Eusemann, Dr. Maryellen Giger, Dr. Pierre Jannin, Dr. Despina Kontos, Dr. Michael Miga, Dr. Nobuhiko Hata, Dr. David R. Holmes III, Brandon Nelson, Dr. Guy Shechter, Dr. Jonathan Sorger, Dr. Robert Webster III, Dr. Kenneth Wong, and Dr. Terry Yoo. Finally, the continuous and generous sponsorship of several organizations has supported the continued excellence of the conference and recognition of outstanding work by students and early-career scientists. Among these are Siemens Healthineers (Young Scientist Award), Intuitive Surgical (Student Paper Award), and Northern Digital Inc. (NDI poster awards), as well as sponsorship from SPIE in support of the symposium-wide Michael B. Merickel and Robert F. Wagner All-Conference Best Student Paper Awards.

References

M. Koenig et al.,
“Embedding VTK and ITK into a visual programming and rapid prototyping platform,”
Proc. SPIE, 6141 61412O
(2006). https://doi.org/10.1117/12.652102 PSISDG 0277-786X Google Scholar
J. B. West, J. M. Fitzpatrick and P. G. Batchelor,
“Point-based registration under a similarity transform,”
Proc. SPIE, 4322 611
–622
(2001). https://doi.org/10.1117/12.431135 PSISDG 0277-786X Google Scholar
A. D. Wiles, D. G. Thompson and D. D. Frantz,
“Accuracy assessment and interpretation for optical tracking systems,”
Proc. SPIE, 5367 421
–432
(2004). https://doi.org/10.1117/12.536128 PSISDG 0277-786X Google Scholar
S. Speidel et al.,
“Visual tracking of da Vinci instruments for laparoscopic surgery,”
Proc. SPIE, 9036 903608
(2014). https://doi.org/10.1117/12.2042483 PSISDG 0277-786X Google Scholar
R. Monfaredi et al.,
“Design of a decoupled MRI-compatible force sensor using fiber Bragg grating sensors for robot-assisted prostate interventions,”
Proc. SPIE, 8671 867118
(2013). https://doi.org/10.1117/12.2008160 PSISDG 0277-786X Google Scholar
A. Rougee et al.,
“Geometrical calibration for 3D x-ray imaging,”
Proc. SPIE, 1897 161
–169
(1993). https://doi.org/10.1117/12.146963 PSISDG 0277-786X Google Scholar
G. Lu et al.,
“Hyperspectral imaging for cancer surgical margin delineation: registration of hyperspectral and histological images,”
Proc. SPIE, 9036 90360S
(2014). https://doi.org/10.1117/12.2043805 PSISDG 0277-786X Google Scholar
I. Garg et al.,
“Enhancement of subsurface brain shift model accuracy: a preliminary study,”
Proc. SPIE, 7625 76250J
(2010). https://doi.org/10.1117/12.845630 PSISDG 0277-786X Google Scholar
S. Reaungamornrat et al.,
“MIND Demons for MR-to-CT deformable image registration in image-guided spine surgery,”
Proc. SPIE, 9786 97860H
(2016). https://doi.org/10.1117/12.2208621 PSISDG 0277-786X Google Scholar
S. Röhl et al.,
“Real-time surface reconstruction from stereo endoscopic images for intraoperative registration,”
Proc. SPIE, 7964 796414
(2011). https://doi.org/10.1117/12.877662 PSISDG 0277-786X Google Scholar
Z. Tian, L. Liu and B. Fei,
“Deep convolutional neural network for prostate MR segmentation,”
Proc. SPIE, 10135 101351L
(2017). https://doi.org/10.1117/12.2254621 PSISDG 0277-786X Google Scholar
H. Blume and A. Fand,
“Reversible and irreversible image data compression using the S-transform and Lempel–Ziv coding,”
Proc. SPIE, 1091 2
–18
(1989). https://doi.org/10.1117/12.976433 PSISDG 0277-786X Google Scholar
K. S. Mills et al.,
“High-performance floating-point image computing workstation for medical applications,”
Proc. SPIE, 1232 246
–256
(1990). https://doi.org/10.1117/12.18882 PSISDG 0277-786X Google Scholar
H. R. Blume et al.,
“Comparison of the physical performance of high-resolution CRT displays and films recorded by laser image printers and displayed on light-boxes and the need for a display standard,”
Proc. SPIE, 1232 1
–18
(1990). https://doi.org/10.1117/12.18846 PSISDG 0277-786X Google Scholar
E. Muka, H. R. Blume and S. J. Daly,
“Display of medical images on CRT soft-copy displays: a tutorial,”
Proc. SPIE, 2431 341
–359
(1995). https://doi.org/10.1117/12.207628 PSISDG 0277-786X Google Scholar
S.-Y. Guan and R. G. Lipes,
“Innovative volume rendering using 3D texture mapping,”
Proc. SPIE, 2164 382
–392
(1994). https://doi.org/10.1117/12.174021 PSISDG 0277-786X Google Scholar
R. J. Rost, “Using OpenGL for imaging,” Proc. SPIE 2707, 473–484 (1996). https://doi.org/10.1117/12.238478
H. R. Blume et al., “Characterization of high-resolution liquid crystal displays for medical images,” Proc. SPIE 4681, 271–292 (2002). https://doi.org/10.1117/12.466930
H. R. Blume et al., “Characterization of liquid-crystal displays for medical images: II,” Proc. SPIE 5029, 449–473 (2003). https://doi.org/10.1117/12.479774
J. K. Udupa et al., “3DVIEWNIX: an open, transportable, multidimensional, multimodality, multiparametric imaging software system,” Proc. SPIE 2164, 58–73 (1994). https://doi.org/10.1117/12.174042
I. Wolf et al., “The medical imaging interaction toolkit (MITK): a toolkit facilitating the creation of interactive software by extending VTK and ITK,” Proc. SPIE 5367, 16–27 (2004). https://doi.org/10.1117/12.535112
I. Wolf et al., “Curved reformations using the medical imaging interaction toolkit (MITK),” Proc. SPIE 5744, 831–838 (2005). https://doi.org/10.1117/12.595949
S. Pieper, M. Halle and R. Kikinis, “3D Slicer,” in 2nd IEEE Int. Symp. Biomed. Imaging: Nano to Macro (IEEE Cat. No. 04EX821), 632–635 (2004). https://doi.org/10.1109/ISBI.2004.1398617
Y. J. Wang et al., “Multimodality medical image fusion: probabilistic quantification, segmentation, and registration,” Proc. SPIE 3335, 239–249 (1998). https://doi.org/10.1117/12.312497
A. Uneri et al., “Architecture of a high-performance surgical guidance system based on C-arm cone-beam CT: software platform for technical integration and clinical translation,” Proc. SPIE 7964, 796422 (2011). https://doi.org/10.1117/12.878191
C. F. X. de Mendonca et al., “Fast holographic-like stereogram display using shell rendering and a holographic screen,” Proc. SPIE 3658, 1–9 (1999). https://doi.org/10.1117/12.349460
L. Portoni et al., “Real-time auto-stereoscopic visualization of 3D medical images,” Proc. SPIE 3976, 37–44 (2000). https://doi.org/10.1117/12.383054
C. R. Maurer Jr. et al., “Augmented-reality visualization of brain structures with stereo and kinetic depth cues: system description and initial evaluation with head phantom,” Proc. SPIE 4319, 445–456 (2001). https://doi.org/10.1117/12.428086
N. J. Mankovich et al., “Solid models for CT/MR image display: accuracy and utility in surgical planning,” Proc. SPIE 1444, 2–8 (1991). https://doi.org/10.1117/12.45149
E. Wilson et al., “A hardware and software protocol for the evaluation of electromagnetic tracker accuracy in the clinical environment: a multi-center study,” Proc. SPIE 6509, 65092T (2007). https://doi.org/10.1117/12.712701
A. Danilchenko and J. M. Fitzpatrick, “General approach to error prediction in point registration,” Proc. SPIE 7625, 76250F (2010). https://doi.org/10.1117/12.843847
J. B. West and J. M. Fitzpatrick, “Point-based rigid registration: clinical validation of theory,” Proc. SPIE 3979, 353–359 (2000). https://doi.org/10.1117/12.387697
J. M. Fitzpatrick, J. B. West and C. R. Maurer Jr., “Derivation of expected registration error for point-based rigid-body registration,” Proc. SPIE 3338, 1–12 (1998). https://doi.org/10.1117/12.310824
J. M. Fitzpatrick, “Fiducial registration error and target registration error are uncorrelated,” Proc. SPIE 7261, 726102 (2009). https://doi.org/10.1117/12.813601
D. P. Losey et al., “A flexure-based wrist for needle-sized surgical robots,” Proc. SPIE 8671, 86711G (2013). https://doi.org/10.1117/12.2008341
J. M. Ferguson et al., “Toward image-guided partial nephrectomy with the da Vinci robot: exploring surface acquisition methods for intraoperative re-registration,” Proc. SPIE 10576, 1057609 (2018). https://doi.org/10.1117/12.2296464
R. C. Vijayan et al., “Automatic trajectory and instrument planning for robot-assisted spine surgery,” Proc. SPIE 10951, 1095102 (2019). https://doi.org/10.1117/12.2513722
L. B. Kratchman et al., “Toward robotic needle steering in lung biopsy: a tendon-actuated approach,” Proc. SPIE 7964, 79641I (2011). https://doi.org/10.1117/12.878792
K. Oliver-Butler, Z. H. Epps and D. C. Rucker, “Concentric agonist-antagonist robots for minimally invasive surgeries,” Proc. SPIE 10135, 1013511 (2017). https://doi.org/10.1117/12.2255549
L. Rai and W. E. Higgins, “Continuous endoscopic guidance via interleaved video tracking and image-video registration,” Proc. SPIE 6918, 69182R (2008). https://doi.org/10.1117/12.768702
R. Walsh et al., “Design of a tracked ultrasound calibration phantom made of LEGO bricks,” Proc. SPIE 9036, 90362C (2014). https://doi.org/10.1117/12.2043533
J. Bax et al., “3D transrectal ultrasound prostate biopsy using a mechanical imaging and needle-guidance system,” Proc. SPIE 6918, 691825 (2008). https://doi.org/10.1117/12.772759
H. Xu et al., “Accuracy validation for MRI-guided robotic prostate biopsy,” Proc. SPIE 7625, 762517 (2010). https://doi.org/10.1117/12.844251
S.-E. Song et al., “Development and preliminary evaluation of an ultrasonic motor actuated needle guide for 3T MRI-guided transperineal prostate interventions,” Proc. SPIE 8316, 831614 (2012). https://doi.org/10.1117/12.911467
J. H. Siewerdsen et al., “Cone-beam CT with a flat-panel detector on a mobile C-arm: preclinical investigation in image-guided surgery of the head and neck,” Proc. SPIE 5744, 789–797 (2005). https://doi.org/10.1117/12.595690
S. Nithiananthan et al., “Demons deformable registration for cone-beam CT guidance: registration of pre- and intra-operative images,” Proc. SPIE 7625, 76250L (2010). https://doi.org/10.1117/12.845499
S.-E. Song et al., “Workflow assessment of 3T MRI-guided transperineal targeted prostate biopsy using a robotic needle guidance,” Proc. SPIE 9036, 903612 (2014). https://doi.org/10.1117/12.2043766
H. Imamura et al., “Registration of preoperative CTA and intraoperative fluoroscopic image sequence for assisting endovascular stent grafting,” Proc. SPIE 4681, 500–509 (2002). https://doi.org/10.1117/12.466953
J. P. Helferty et al., “CT-video registration accuracy for virtual guidance of bronchoscopy,” Proc. SPIE 5369, 150–164 (2004). https://doi.org/10.1117/12.534125
H. Hong, K. Kim and S. Park, “Fast 2D-3D marker-based registration of CT and x-ray fluoroscopy images for image-guided surgery,” Proc. SPIE 6141, 61412G (2006). https://doi.org/10.1117/12.653480
H. Sundar et al., “A novel 2D-3D registration algorithm for aligning fluoro images with 3D pre-op CT/MR images,” Proc. SPIE 6141, 61412K (2006). https://doi.org/10.1117/12.654251
J. Hummel et al., “Endoscopic navigation system using 2D/3D registration,” Proc. SPIE 6141, 614114 (2006). https://doi.org/10.1117/12.654337
X. Huang et al., “Intra-cardiac 2D US to 3D CT image registration,” Proc. SPIE 6509, 65092E (2007). https://doi.org/10.1117/12.711507
R. H. Gong and P. Abolmaesumi, “2D/3D registration with the CMA-ES method,” Proc. SPIE 6918, 69181M (2008). https://doi.org/10.1117/12.770331
S. Pawiro et al., “A new gold-standard dataset for 2D/3D image registration evaluation,” Proc. SPIE 7625, 76251V (2010). https://doi.org/10.1117/12.844488
P. Steininger et al., “A novel class of machine-learning-driven real-time 2D/3D tracking methods: texture model registration (TMR),” Proc. SPIE 7964, 79640G (2011). https://doi.org/10.1117/12.878147
Y. Otake et al., “Automatic localization of target vertebrae in spine surgery using fast CT-to-fluoroscopy (3D-2D) image registration,” Proc. SPIE 8316, 83160N (2012). https://doi.org/10.1117/12.911308
J. G. Verly et al., “Nonrigid registration and multimodality fusion for 3D image-guided neurosurgical planning and navigation,” Proc. SPIE 5367, 735–746 (2004). https://doi.org/10.1117/12.536959
T. Guo, Y. P. Starreveld and T. M. Peters, “Evaluation and validation methods for intersubject nonrigid 3D image registration of the human brain,” Proc. SPIE 5744, 594–603 (2005). https://doi.org/10.1117/12.594082
S. Tang et al., “Application of nonrigid registration in ablation of liver cancer,” Proc. SPIE 6509, 650931 (2007). https://doi.org/10.1117/12.708423
M. Ferrant et al., “Real-time simulation and visualization of volumetric brain deformation for image-guided neurosurgery,” Proc. SPIE 4319, 366–373 (2001). https://doi.org/10.1117/12.428076
B. K. Lamprich and M. I. Miga, “Analysis of model-updated MR images to correct for brain deformation due to tissue retraction,” Proc. SPIE 5029, 552–560 (2003). https://doi.org/10.1117/12.480217
J. M. Blackall et al., “Tracking alignment of sparse ultrasound with preoperative images of the liver and an interventional plan using models of respiratory motion and deformation,” Proc. SPIE 5367, 218–227 (2004). https://doi.org/10.1117/12.535180
A. Al-Mayah et al., “Effect of heterogeneous material of the lung on deformable image registration,” Proc. SPIE 7261, 72610V (2009). https://doi.org/10.1117/12.813828
X. Fan et al., “Graphical user interfaces for simulation of brain deformation in image-guided neurosurgery,” Proc. SPIE 7625, 762535 (2010). https://doi.org/10.1117/12.844036
R. Han et al., “Deformable MR-CT image registration using an unsupervised synthesis and registration network for neuro-endoscopic surgery,” Proc. SPIE 11598, 1159819 (2021). https://doi.org/10.1117/12.2581567
A. Puzrin et al., “Image guided constitutive modeling of the silicone brain phantom,” Proc. SPIE 5744, 157–164 (2005). https://doi.org/10.1117/12.595689
A. Rasoulian, R. N. Rohling and P. Abolmaesumi, “A statistical multi-vertebrae shape+pose model for segmentation of CT images,” Proc. SPIE 8671, 86710P (2013). https://doi.org/10.1117/12.2007448
Q. Zhang, R. Eagleson and T. M. Peters, “Graphics hardware based volumetric medical dataset visualization and classification,” Proc. SPIE 6141, 61412T (2006). https://doi.org/10.1117/12.653537
C. Villard et al., “Toward realistic radiofrequency ablation of hepatic tumors: 3D simulation and planning,” Proc. SPIE 5367, 586–595 (2004). https://doi.org/10.1117/12.534871
C. A. Linte et al., “Image-based modeling and characterization of RF ablation lesions in cardiac arrhythmia therapy,” Proc. SPIE 8671, 86710E (2013). https://doi.org/10.1117/12.2008529
M. E. Rettmann et al., “Segmentation of left atrial intracardiac ultrasound images for image guided cardiac ablation therapy,” Proc. SPIE 8671, 86712D (2013). https://doi.org/10.1117/12.2008762
S. Speidel et al., “Recognition of risk situations based on endoscopic instrument tracking and knowledge based situation modeling,” Proc. SPIE 6918, 69180X (2008). https://doi.org/10.1117/12.770385
G. M. Gallardo et al., “Mitotic cell recognition with hidden Markov models,” Proc. SPIE 5367, 661–668 (2004). https://doi.org/10.1117/12.535778
A. Simpson and M. Miga, “Special section guest editorial: Technology platforms for treatment and discovery in human systems: novel work in image-guided procedures, robotic interventions, and modeling,” J. Med. Imaging 5(2), 021201 (2018). https://doi.org/10.1117/1.JMI.5.2.021201
A. L. Simpson and M. I. Miga, “Special section guest editorial: interventional and surgical data science for data-driven patient outcomes,” J. Med. Imaging 7(3), 031501 (2020). https://doi.org/10.1117/1.JMI.7.3.031501
Biography

Jeffrey H. Siewerdsen received his PhD in physics from the University of Michigan (Ann Arbor, Michigan) in 1998, where he worked on the early development of flat-panel x-ray detectors. At William Beaumont Hospital (Royal Oak, Michigan, 1998–2002), he was on the team that developed the first systems for CBCT-guided radiation therapy. At the Ontario Cancer Institute and University of Toronto (2002–2009), his research involved intraoperative 3D imaging and registration. At Johns Hopkins University (2009–2022), he was the John C. Malone Professor and vice-chair in Biomedical Engineering and founding co-director of the Carnegie Center for Surgical Innovation and the I-STAR Labs. In 2022, he joined the MD Anderson Cancer Center as faculty and director of surgical data science.

Cristian A. Linte is an associate professor in Biomedical Engineering and the Center for Imaging Science at the Rochester Institute of Technology. His research focuses on the development, implementation, and evaluation of biomedical image computing, visualization, and navigation tools in support of computer-assisted diagnosis and therapy. He has attended and presented his research at SPIE Medical Imaging since 2006, has served on the program committee of the Image-Guided Procedures, Robotic Interventions, and Modeling conference since 2014, and chaired the conference from 2019 to 2023.