In operating rooms (OR), physicians must work in strict compliance with asepsis rules so as not to endanger the health of patients. In laparoscopic minimally invasive surgery, where the surgeon's field of view is restricted, computers are needed to provide the missing information. Physicians must therefore interact with computers either by directly manipulating a mouse and keyboard, which requires removing their gloves or using a protective cover for the devices, or by voice-commanding an assistant to do so. However, in addition to being time-consuming, the first approach may raise hygiene issues and the second a lack of precision. The need for better ways of interacting with computers has driven significant research in this area over the last ten years, especially in Touchless Human-Machine Interaction (THMI). Indeed, THMI, which includes gesture recognition, voice recognition and eye tracking, has a promising future in the medical field, allowing surgeons to interact with devices themselves, thereby avoiding error-prone processes while complying with asepsis rules. In this context, the "Intelligent Touchless Glassless Human-Machine Interface" (ITG-HMI) project aims to provide a new tool for viewing and manipulating 3D objects. In this article, we present how this interface was implemented: the detection and recognition of hand gestures using Deep Learning, the design of a graphical interface to display 3D models, and the mapping of recognized gestures to the actions to be performed.
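To illustrate the last step of the pipeline described above, the mapping of recognized gestures to viewer actions, the following is a minimal sketch. The gesture labels (`swipe_left`, `pinch_out`, etc.) and the viewer state are purely hypothetical assumptions for illustration; they are not the actual vocabulary or implementation of the ITG-HMI project.

```python
# Hypothetical sketch: dispatching recognized gesture labels to 3D-viewer
# actions. Gesture names and state fields are illustrative assumptions.

def dispatch_gesture(label, state):
    """Return an updated viewer state for a recognized gesture label.

    Unknown labels leave the state unchanged, so a misclassified or
    irrelevant gesture cannot put the viewer in an undefined state.
    """
    actions = {
        "swipe_left":  lambda s: {**s, "yaw": s["yaw"] - 15},   # rotate left
        "swipe_right": lambda s: {**s, "yaw": s["yaw"] + 15},   # rotate right
        "pinch_in":    lambda s: {**s, "zoom": s["zoom"] * 0.9},  # zoom out
        "pinch_out":   lambda s: {**s, "zoom": s["zoom"] * 1.1},  # zoom in
    }
    return actions.get(label, lambda s: s)(state)

# Example: apply two recognized gestures in sequence.
state = {"yaw": 0, "zoom": 1.0}
state = dispatch_gesture("swipe_right", state)
state = dispatch_gesture("pinch_out", state)
```

Keeping the recognizer's output (a label) decoupled from the viewer through a dispatch table like this makes it straightforward to retrain or swap the Deep Learning model without touching the interface code.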