Paper
28 February 2014
A hardware and software architecture to deal with multimodal and collaborative interactions in multiuser virtual reality environments
P. Martin, A. Tseu, N. Férey, D. Touraine, P. Bourdot
Proceedings Volume 9012, The Engineering Reality of Virtual Reality 2014; 901209 (2014) https://doi.org/10.1117/12.2042499
Event: IS&T/SPIE Electronic Imaging, 2014, San Francisco, California, United States
Abstract
Most advanced immersive devices provide a collaborative environment in which several users have their own head-tracked stereoscopic point of view. Combined with commonly used interactive features such as voice and gesture recognition, 3D mice, haptic feedback, and spatialized audio rendering, these environments should faithfully reproduce a real context. However, even though many studies have been carried out on multimodal systems, we are far from definitively solving the issue of multimodal fusion, which consists in merging multimodal events coming from users and devices into interpretable commands performed by the application. Multimodality and collaboration have often been studied separately, despite the fact that these two aspects share interesting similarities. We discuss how we address this problem through the design and implementation of a supervisor able to deal with both multimodal fusion and collaborative aspects. The aim of this supervisor is to merge user input from virtual reality devices in order to control immersive multi-user applications. We approach this problem from a practical point of view, because the main requirements of this supervisor were defined according to an industrial task proposed by our automotive partner, which has to be performed with multimodal and collaborative interactions in a co-located multi-user environment. In this task, two co-located workers of a virtual assembly chain have to cooperate to insert a seat into the bodywork of a car, using haptic devices to feel collisions and manipulate objects, and combining speech recognition and two-handed gesture recognition as multimodal instructions. Besides the architectural aspects of this supervisor, we describe how we ensure the modularity of our solution so that it can be applied to different virtual reality platforms, interactive contexts, and virtual contents. A virtual context observer included in this supervisor was specifically designed to be independent of the content of the virtual scene of the targeted application, and is used to report high-level interactive and collaborative events. This context observer allows the supervisor to merge these interactive and collaborative events, but is also used to deal with new issues arising from our observation of two co-located users performing this assembly task in an immersive device. We highlight the fact that when speech recognition features are provided to the two users, it is necessary to automatically detect, according to the interactive context, whether the vocal instructions must be translated into commands to be performed by the machine, or whether they are part of the natural communication necessary for collaboration. Information from this context observer indicating that a user is looking at their collaborator is important for detecting whether the user is talking to their partner. Moreover, as the users are physically co-located and head tracking is used to provide high-fidelity stereoscopic rendering and natural walking navigation in the virtual scene, we have to deal with collisions and screen occlusion between the co-located users in the physical workspace. The working area and focus of each user, computed and reported by the context observer, are necessary to prevent or avoid these situations.
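The following is a minimal sketch, in Python, of how such a supervisor might combine multimodal fusion with the context observer's gaze information to decide whether a spoken instruction is a machine command or communication between the two co-located users. It is not the authors' implementation; all names (Supervisor, ContextObserver, Event, is_looking_at_partner, working_areas_overlap) and the fusion rule itself are illustrative assumptions rather than the paper's actual API.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Event:
    user: str          # e.g. "user_1" or "user_2"
    modality: str      # "speech", "gesture", "haptic", ...
    payload: str       # recognised word or gesture label
    timestamp: float   # seconds

class ContextObserver:
    """Scene-independent observer reporting high-level interactive events
    (gaze direction, working areas, focus) computed from tracking data."""

    def is_looking_at_partner(self, user: str) -> bool:
        # Placeholder: in a real system this would be derived from the
        # head-tracking data of the two co-located users.
        return False

    def working_areas_overlap(self) -> bool:
        # Placeholder: would compare the users' physical working areas to
        # anticipate collisions or screen occlusion.
        return False

class Supervisor:
    def __init__(self, observer: ContextObserver):
        self.observer = observer
        self.pending: list[Event] = []

    def on_event(self, event: Event) -> Optional[dict]:
        # Speech addressed to the partner is treated as inter-user
        # communication, not as a command for the application.
        if event.modality == "speech" and self.observer.is_looking_at_partner(event.user):
            return None
        self.pending.append(event)
        return self.try_fuse()

    def try_fuse(self) -> Optional[dict]:
        # Naive fusion rule: a speech instruction combined with a recent
        # gesture from the same user becomes one application-level command.
        speech = [e for e in self.pending if e.modality == "speech"]
        gesture = [e for e in self.pending if e.modality == "gesture"]
        for s in speech:
            for g in gesture:
                if s.user == g.user and abs(s.timestamp - g.timestamp) < 1.0:
                    self.pending.remove(s)
                    self.pending.remove(g)
                    return {"user": s.user, "action": s.payload, "target": g.payload}
        return None

# Example: a speech event followed closely by a gesture from the same user
sup = Supervisor(ContextObserver())
sup.on_event(Event("user_1", "speech", "insert", timestamp=0.2))
print(sup.on_event(Event("user_1", "gesture", "seat", timestamp=0.6)))
# -> {'user': 'user_1', 'action': 'insert', 'target': 'seat'}

The one-second pairing window is an arbitrary illustrative threshold; the abstract does not specify the fusion parameters actually used.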
© (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
P. Martin, A. Tseu, N. Férey, D. Touraine, and P. Bourdot "A hardware and software architecture to deal with multimodal and collaborative interactions in multiuser virtual reality environments", Proc. SPIE 9012, The Engineering Reality of Virtual Reality 2014, 901209 (28 February 2014); https://doi.org/10.1117/12.2042499
CITATIONS
Cited by 2 scholarly publications.
KEYWORDS
Haptic technology
Virtual reality
Visualization
Gesture recognition
Speech recognition
Computer architecture
Interfaces
RELATED CONTENT

Gestural interfaces for immersive environments
Proceedings of SPIE (February 28 2014)
Is haptic watermarking worth it?
Proceedings of SPIE (February 09 2006)
Museum of Pure Form: preliminary considerations
Proceedings of SPIE (March 02 2001)
Generic force server for haptic devices
Proceedings of SPIE (March 02 2001)
