Multi-projection technology, which creates visual effects across a wide area, requires measuring the projection space, determining the position of each projector, and correcting the distortion that the projection surface introduces into the projected image. These calibration tasks take considerable time and effort, making them difficult to perform easily. In this study, we propose an automated multi-projection system that uses calibrated, integrated ProCam units, each consisting of a projector and a camera, in an indoor space of unknown geometry. Because the 3D measurement of the projection surface for geometric correction yields a highly accurate and dense point cloud, rendering directly from the point cloud data reduces rendering speed. To balance rendering speed against geometric correction accuracy, we propose a rendering method that combines point clouds with planar polygons, specialized for indoor spaces characterized by flat surfaces.
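A natural first step for such a point-cloud/polygon hybrid is to detect the flat regions of the measured cloud and replace them with fitted planes, keeping the raw points only where no plane fits. The abstract does not specify the segmentation algorithm, so the following is only an illustrative sketch using RANSAC plane fitting:

```python
import numpy as np

def fit_plane_ransac(points, n_iters=200, threshold=0.01, rng=None):
    """Fit the dominant plane n.x + d = 0 to a 3D point cloud with RANSAC.

    Returns (normal, d, inlier_mask); inlier points could be collapsed
    into a single polygon, while outliers stay as a point cloud.
    """
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = (np.zeros(3), 0.0)
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:          # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ sample[0]
        inliers = np.abs(points @ n + d) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, d)
    return best_plane[0], best_plane[1], best_inliers

# Toy data: noisy points on the wall z = 0 plus scattered clutter.
rng = np.random.default_rng(0)
wall = np.column_stack([rng.uniform(0, 2, 500),
                        rng.uniform(0, 2, 500),
                        rng.normal(0, 0.002, 500)])
clutter = rng.uniform(0, 2, (50, 3))
pts = np.vstack([wall, clutter])
n, d, mask = fit_plane_ransac(pts, rng=1)
```

Repeating this on the remaining outliers would extract the other walls, floor, and ceiling, which is what makes the method well suited to the flat indoor spaces mentioned above.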
In recent years, numerous techniques have been studied to give images a sense of depth, making them appear as if they were real, and these efforts have made it possible to present high-quality stereoscopic images. However, issues have emerged concerning the unnaturalness of the viewing environment, such as the field-of-view restrictions imposed by head-mounted displays and the intrusive presence of the display itself, which limits the expressive power of the images. In this study, we therefore propose a method for presenting stereoscopic images that can be viewed with the naked eye, using a transparent display with a minimized physical presence, with the aim of creating stereoscopic presentations that blend seamlessly into the real environment.
In this research, we aim to improve the accuracy of radiometric compensation for moving non-rigid bodies. By predicting object motion with deep learning, we reduce the effect on compensation accuracy of the measurement and projection delays introduced by the camera and projector.
In this study, we investigate the weight illusion produced by changing the impression of an arm through visual effects displayed on an avatar's arm superimposed on the user's own arm. We have confirmed that visual changes using virtual arms produce an illusion of weight when different weight conditions are compared. However, it has been pointed out that differences in the perceived weight of the arm itself, caused by differences in the appearance of the virtual arms, may affect weight perception. In this paper, we examine how changes in the impression of arm strength affect weight perception, using visual effects that change the impression of the arm without affecting the impression of the arm's own weight.
Projection mapping, a spatial augmented reality technology that changes the appearance of an object by projecting images onto it, has become widespread. We have achieved projection mapping onto moving objects by estimating the position and orientation of the object, matching its contour acquired with an infrared camera against the contour of a 3D model obtained in advance. However, when the object's contour is occluded, the estimation accuracy degrades, which is a problem for precise projection mapping. In this study, we propose robust position and orientation estimation that uses, in addition to the contour, the distortion of the projected image caused by the motion of the target object. We reproduce the projection distortion in a simulation environment and evaluate the accuracy of the estimated position and orientation.
In this study, we propose projection mapping using a rotating volumetric display. In this proposal, a light source synchronized with the projection target, including its position and shape, is generated dynamically by the simple structure of a rotating volumetric display. The light is projected onto a moving object through a retroreflective optical system. In this paper, we verify the applicability of this approach to projection mapping by implementing a prototype.
In this study, we propose a projection mapping method that uses contours viewed from multiple viewpoints to achieve high-precision tracking of objects that can be grasped and moved freely by hand. Recently, dynamic projection mapping, which changes the appearance of a moving object, has been actively studied. Tracking a grasped object is particularly difficult because the hand occludes the object. We previously proposed a method for high-precision object tracking using only ordinary infrared cameras; nonetheless, challenges remained that made it difficult to maintain tracking. In this study, we achieve object tracking that is more robust to occlusion and more accurate by adding contours obtained from different viewpoints with additional cameras.
In this research, we aim to improve the accuracy of wide-area projection with multiple projectors and to speed up the measurement required for projection in environments surrounded by walls, such as indoor spaces. In previous work, we realized a simple method for large-scale image projection by projecting a pattern into the indoor space and measuring it with a combination of a normal lens and a fish-eye lens. However, when patterns are projected in an indoor space surrounded by walls, indirectly reflected light produces areas that cannot be measured correctly, causing distortion in the projection result. We therefore reduce the influence of indirect reflections by dividing the pattern and projecting its parts separately. In addition, exploiting the fact that indirect reflections lose their high-frequency components in the reflection process, we greatly reduce them with a simple difference operation, based on the similarity of the indirect reflections of a sufficiently divided pattern image. On the other hand, such division of the pattern increases the time required to capture the shape of the space. We therefore design the divided patterns to be as few as possible, using color information and an efficient division scheme, while still removing indirect reflections effectively.
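The difference operation can be sketched as follows. Under the assumption stated above, namely that the low-frequency indirect component is nearly identical in two complementary divided-pattern captures, subtracting one capture from the other cancels it. This toy 1-D illustration is not the actual pipeline, which divides real structured-light patterns and also uses color:

```python
import numpy as np

def remove_indirect(capture_pat, capture_comp):
    """Suppress indirect light by differencing a divided-pattern capture
    against the capture of its complementary pattern.

    Assumes the indirect (low-frequency) component is similar in both
    captures, so the difference isolates the direct component.
    """
    return np.clip(capture_pat.astype(np.float64)
                   - capture_comp.astype(np.float64), 0.0, None)

# Toy scene: a direct stripe of brightness 0.8 plus a smooth indirect glow.
x = np.linspace(0, 1, 100)
indirect = 0.3 * np.exp(-((x - 0.5) ** 2) / 0.1)   # low-frequency glow
pattern = (x < 0.5).astype(float)                   # lit half of the pattern
cap_pat = 0.8 * pattern + indirect
cap_comp = 0.8 * (1 - pattern) + indirect           # complementary pattern
direct = remove_indirect(cap_pat, cap_comp)
```

The glow cancels exactly in the difference, leaving only the directly lit stripe; the design problem discussed above is keeping the number of such complementary pattern pairs small.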
Recently, luminance compensation, which realizes image projection that cancels the influence of patterned projection surfaces, has attracted attention. This technique cancels the influence of surface patterns based on the response function representing the input-output relationship between a projector and a camera. However, because it depends heavily on the pixel-wise correspondence between the projector and the camera, the projection surface is limited to rigid bodies, which greatly restricts where luminance compensation can be applied. In this research, we realize luminance compensation on swinging curtains by estimating the pixel-wise correspondence in real time, using the GPU to run the system in real time. We examine the effectiveness of the constructed system using curtains of various materials.
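The core inversion step of luminance compensation can be illustrated with the simplest possible response model. Assuming a per-pixel linear response, camera = gain · input + ambient, where gain encodes the surface pattern (an assumption for this sketch; the actual system estimates a full response function and, for curtains, a time-varying correspondence), the compensating projector input is obtained by inverting that relation:

```python
import numpy as np

def compensate(target, gain, ambient):
    """Per-pixel luminance compensation under a linear response model:
        camera = gain * projector_input + ambient.
    Inverting it gives the projector input that makes the camera observe
    `target`. Inputs are clipped to the projector's [0, 1] range, so very
    dark surface regions may saturate and remain uncompensated.
    """
    p = (target - ambient) / np.maximum(gain, 1e-6)
    return np.clip(p, 0.0, 1.0)

# Toy surface: left half dark (gain 0.4), right half bright (gain 0.9).
gain = np.where(np.arange(8) < 4, 0.4, 0.9)
ambient = np.full(8, 0.05)
target = np.full(8, 0.35)       # desired uniform appearance
p = compensate(target, gain, ambient)
observed = gain * p + ambient   # what the camera would then see
```

Because gain and ambient are stored per camera pixel, the whole computation is an element-wise map, which is why a GPU implementation reaches real time even when the correspondence must be re-estimated every frame.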
Recently, image projection has attracted a lot of attention and is used in various situations, so a system that can easily realize image projection over any wide area has become necessary. In this research, we propose a geometric correction method for surround projection that combines a standard-lens camera with a fish-eye-lens camera, enabling simple construction of an image projection space in any indoor environment. By capturing the same pattern images with the two cameras, correspondences between them are established without the external parameters of the cameras and projectors, and simplifying the obtained data makes the correspondences highly accurate. We evaluate the accuracy of our geometric correction method and demonstrate image projection with multiple projectors in a wide indoor space.
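Once the pattern captures have linked every projector pixel to a position in the desired content, geometric correction reduces to resampling the content through that correspondence map. The map format below (a per-projector-pixel integer lookup) is hypothetical, chosen only to make the sketch concrete; a practical system would interpolate sub-pixel correspondences:

```python
import numpy as np

def prewarp(content, corr):
    """Build the projector image that makes `content` appear undistorted.

    corr[v, u] = (y, x): the content pixel that projector pixel (v, u)
    should display, as recovered from the projected pattern captures
    (hypothetical integer-map format for this sketch).
    """
    ys = np.clip(corr[..., 0], 0, content.shape[0] - 1)
    xs = np.clip(corr[..., 1], 0, content.shape[1] - 1)
    return content[ys, xs]

# Toy map: the surface shifts the projection 2 px left, so each projector
# pixel must fetch content from 2 px to the right to cancel the shift.
content = np.arange(64).reshape(8, 8)
vv, uu = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
corr = np.stack([vv, uu + 2], axis=-1)
warped = prewarp(content, corr)
```

For multi-projector surround projection, each projector gets its own map, and blending in the overlap regions is handled separately from this geometric step.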