In this paper, we propose a new computational reconstruction technique of integral imaging for depth resolution enhancement using integer-valued, non-uniformly distributed shifting pixels. In a general integral imaging system, 3D objects can be recorded and visualized (or displayed) using a lenslet array. In previous studies, many reconstruction techniques, such as computational volumetric reconstruction and the pixel of elemental images rearrangement technique (PERT), have been reported. However, the conventional computational volumetric reconstruction technique suffers from low visual quality and low depth resolution because low-resolution elemental images and uniformly distributed shifting pixels are used for reconstruction. In contrast, our proposed method uses non-uniformly distributed shifting pixels for reconstruction instead of the uniformly distributed shifting pixels of conventional computational volumetric reconstruction. Thus, the visual quality and depth resolution may be enhanced. Finally, our experimental results show the improvement in depth resolution and visual quality of the reconstructed 3D images.
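The uniform-shift baseline that this abstract improves upon can be sketched as a shift-and-average operation; the sketch below is a minimal 1D-lenslet illustration (not the authors' implementation), where `shift` is an assumed integer pixel shift between adjacent elemental images at a chosen depth plane:

```python
import numpy as np

def volumetric_reconstruction(elemental_images, shift):
    """Shift-and-average computational reconstruction at one depth plane.

    elemental_images: list of 2D arrays, one per lenslet (1D lenslet row).
    shift: integer pixel shift between adjacent elemental images; in the
    conventional method this shift is uniform across all elemental images.
    """
    h, w = elemental_images[0].shape
    n = len(elemental_images)
    width = w + shift * (n - 1)
    plane = np.zeros((h, width))
    count = np.zeros((h, width))
    for k, ei in enumerate(elemental_images):
        x0 = k * shift            # uniform spacing; the paper varies this
        plane[:, x0:x0 + w] += ei
        count[:, x0:x0 + w] += 1
    # average the overlapping back-projected copies
    return plane / np.maximum(count, 1)
```

The depth plane is selected implicitly through `shift`; because `shift` is quantized to integers, only a discrete set of depth planes is distinguishable, which is the depth-resolution limitation the proposed non-uniform shifting addresses.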
In this paper, we propose new passive image sensing and visualization of 3D objects using the concepts of both resolution priority integral imaging (RPII) and depth priority integral imaging (DPII) to improve the lateral and depth resolutions of 3D images simultaneously. We assume that the elemental images are the most important information for the 3D performance of integral imaging, since they encode both the lateral and depth resolutions of 3D objects. Therefore, all resolutions of the reconstructed 3D images are determined by these elemental images in the pickup stage. In this paper, we analyze the lateral and depth resolutions as they depend on the basic parameters of the camera or lens used for pickup. Then, we describe our proposed method and support it with computer simulations. In addition, we analyze how the surface light of 3D objects placed at arbitrary positions can be expressed within the permitted range according to the camera parameter settings. Finally, to evaluate the performance of our method, the peak signal-to-noise ratio (PSNR) is calculated.
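The PSNR metric used for evaluation is standard; a minimal implementation (assuming 8-bit images, so a peak value of 255) looks like this:

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')   # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

Higher values indicate that the reconstructed image is closer to the reference; identical images give infinite PSNR.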
KEYWORDS: Integral imaging, Reconstruction algorithms, 3D image processing, Image resolution, 3D image reconstruction, Visualization, Convolution, 3D image enhancement, Point spread functions, Geometrical optics
In this paper, we propose a visual quality enhancement of the 3D reconstruction algorithm in integral imaging. Conventional integral imaging has a critical problem: the visual quality of 3D objects deteriorates when low-resolution elemental images are used. Although PERT is one solution, the size of the reconstructed 3D scene differs from the optical reconstruction because PERT does not consider the space between back-projected pixels on the reconstruction planes. Therefore, we account for this space using a convolution operator. In particular, the convolution kernel can be designed according to the aperture shape. To support our proposed method, we carry out optical experiments and computer simulations.
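The idea of filling the space between back-projected pixels with an aperture-shaped kernel can be sketched as follows; this is an illustrative version using a square kernel for a square aperture, not the authors' exact operator:

```python
import numpy as np

def upsample_with_gaps(img, m):
    """Place each pixel every m samples on the reconstruction plane,
    leaving zeros in the gaps between back-projected pixels."""
    H, W = img.shape
    grid = np.zeros((H * m, W * m))
    grid[::m, ::m] = img
    return grid

def fill_backprojected_pixels(sparse_plane, aperture):
    """Fill gaps by convolving with an aperture-shaped kernel
    (here a square kernel models a square aperture)."""
    kh, kw = aperture.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(sparse_plane, ((ph, ph), (pw, pw)))
    out = np.zeros_like(sparse_plane, dtype=float)
    H, W = sparse_plane.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * aperture)
    return out
```

A circular aperture would instead use a disk-shaped kernel; the kernel footprint is what determines how the gaps between back-projected pixels are filled.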
In this paper, we propose a new high-resolution depth estimation algorithm for integral imaging, which obtains three-dimensional (3D) images using a lenslet array. In conventional studies, stereo matching is used for depth estimation. However, it is not the best solution for integral imaging, since the 3D images are usually low-resolution. Therefore, we propose a pixel blink rate based algorithm using the pixel of the elemental images rearrangement technique (PERT) in integral imaging. Our optical experiments show that the depth resolution of our technique is dramatically improved compared with the conventional method.
In this paper, we propose optical three-dimensional (3D) visualization under inclement weather conditions, including fog and night environments. For visualization under fog, we treat fog as an unknown scattering medium and use the peplography technique, which estimates the scattering medium by a Gaussian random process and detects ballistic photons from the scattering medium by photon counting imaging. In addition, we use photon counting imaging with Bayesian estimation and adaptive statistical parameters for night vision. In this method, the prior information of the scene is modeled as a Gamma distribution for calculation of the posterior distribution, and the adaptive statistical parameters can be calculated from the reconstructed 3D images. To obtain 3D information under inclement weather conditions, we use a passive 3D imaging technique, integral imaging, together with a computational reconstruction algorithm with 3D point clouds. Finally, we optimize these algorithms for real-time processing and wearable devices. To support our proposed method, we carry out preliminary experiments.
KEYWORDS: Photon counting, Statistical analysis, 3D image reconstruction, 3D image processing, Microscopy, 3D visualizations, Visualization, Microorganisms, Integral imaging, Image quality
We present three-dimensional photon counting microscopy using Bayesian estimation. To record the light intensity information of objects under photon-starved conditions, photon counting imaging can be used. In conventional photon counting imaging, maximum likelihood estimation (MLE) or Bayesian estimation with uniform statistical parameters has been used for 3D visualization. Since MLE does not use prior information about the estimated target, its visual quality is insufficient for recognizing 3D microorganisms when a low number of photons is used. In addition, because Bayesian estimation with uniform statistical parameters applies fixed statistical parameters over the whole image, the estimated image appears merely as an image with boosted light intensity. In contrast, our proposed method uses non-uniform statistical parameters as prior information about the microorganisms to estimate their 3D profiles. Therefore, this method may enhance the visual quality of 3D microscopy results with a low number of photons.
In this paper, we present a novel three-dimensional (3D) sensing system which can demonstrate 3D acquisition. The proposed system uses an electronically tunable liquid crystal (LC) lens with the axially distributed sensing method. Therefore, multiple 2D images with slightly different perspectives can be recorded by varying the focal length of the LC lens, without mechanical movement of the image sensor. The 3D images are then reconstructed using the ray back-projection algorithm. The preliminary functionalities are also demonstrated in this paper. We believe that our proposed system may be useful for a compact 3D sensing camera system.
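Ray back-projection for axially distributed sensing can be sketched as scaling each axial image by its magnification relative to the reconstruction plane and averaging; the magnification model below (`z_plane / z`) and the nearest-neighbor resizing are simplifying assumptions for illustration only:

```python
import numpy as np

def nn_resize(img, out_h, out_w):
    """Nearest-neighbor resize (a stand-in for a proper interpolator)."""
    H, W = img.shape
    rows = np.arange(out_h) * H // out_h
    cols = np.arange(out_w) * W // out_w
    return img[rows[:, None], cols[None, :]]

def axial_backprojection(images, z_positions, z_plane, out_shape):
    """Average axial images after scaling each by its magnification
    relative to the reconstruction plane (minimal sketch)."""
    acc = np.zeros(out_shape, dtype=float)
    for img, z in zip(images, z_positions):
        m = z_plane / z                       # assumed magnification model
        h = max(1, int(round(img.shape[0] * m)))
        w = max(1, int(round(img.shape[1] * m)))
        scaled = nn_resize(img, h, w)
        canvas = np.zeros(out_shape)          # paste onto a common grid
        hh, ww = min(h, out_shape[0]), min(w, out_shape[1])
        canvas[:hh, :ww] = scaled[:hh, :ww]
        acc += canvas
    return acc / len(images)
```

Objects lying on the chosen plane add up coherently across the scaled images, while off-plane content is blurred out by the averaging.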
In this paper, we overview tracking methods of 3D occluded objects in 3D integral imaging. Two methods based on
Summation of Absolute Difference (SAD) algorithm and Bayesian framework, respectively, are presented. For the
tracking method based on SAD, we calculate SAD between pixels of consecutive frames of a moving object for 3D
tracking. For the tracking method based on Bayesian framework, posterior probabilities of the reconstructed scene
background and the 3D objects are calculated by defining their pixel intensities as Gaussian and Gamma distributions,
respectively, and by assuming appropriate prior distributions for the estimated parameters. Multi-object tracking is
achieved by maximizing the geodesic distance between the log-likelihoods of the background and the objects.
Experimental results demonstrate 3D tracking of occluded objects.
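The SAD-based tracking idea is straightforward to sketch: a template from one frame is matched against a search window in the next frame by minimizing the summation of absolute differences. The function below is a minimal 2D illustration of this principle, not the authors' 3D pipeline:

```python
import numpy as np

def sad(patch_a, patch_b):
    """Summation of absolute differences between two same-size patches."""
    return np.abs(patch_a.astype(float) - patch_b.astype(float)).sum()

def track_by_sad(prev_frame, next_frame, top, left, h, w, search=5):
    """Find the prev-frame template (top, left, h, w) in the next frame
    by minimizing SAD over a small search window."""
    template = prev_frame[top:top + h, left:left + w]
    best, best_pos = np.inf, (top, left)
    for di in range(-search, search + 1):
        for dj in range(-search, search + 1):
            i, j = top + di, left + dj
            if i < 0 or j < 0 or i + h > next_frame.shape[0] or j + w > next_frame.shape[1]:
                continue                       # skip out-of-bounds candidates
            s = sad(template, next_frame[i:i + h, j:j + w])
            if s < best:
                best, best_pos = s, (i, j)
    return best_pos
```

In the 3D case, the same matching is applied to computationally reconstructed depth planes, which is what lets the method see through partial occlusion.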
In this paper, an overview of automatic target recognition for three-dimensional (3D) passive photon counting integral
imaging system using maximum average correlation height filters is presented. The Poisson distribution is adopted for
generating photon counting images. For estimation of the 3D scene from the photon counting images, maximum likelihood
estimation is used. The advanced correlation filter is synthesized with ideal training images. Using this filter, we prove
that automatic target recognition may be implemented under photon starved conditions. Since integral imaging may
reduce the effect of occlusion and obscuration, the advanced correlation filter may detect and recognize a 3D object
under photon starved environment. To demonstrate the ability of 3D photon counting automatic target recognition,
experimental results are presented.
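The recognition step rests on correlation filtering: the filter is correlated with the scene and a sharp peak indicates the target location. As a simple stand-in for the MACH filter described above, the sketch below uses a plain zero-mean matched filter; the principle (peak location marks the target) is the same:

```python
import numpy as np

def correlation_peak(scene, template):
    """Cross-correlate a zero-mean template with the scene and return the
    location of the correlation peak (a plain matched filter standing in
    for the trained maximum average correlation height filter)."""
    t = template.astype(float) - template.mean()  # zero-mean template
    th, tw = t.shape
    H, W = scene.shape
    best, pos = -np.inf, (0, 0)
    for i in range(H - th + 1):
        for j in range(W - tw + 1):
            v = np.sum(scene[i:i + th, j:j + tw] * t)
            if v > best:
                best, pos = v, (i, j)
    return pos
```

A MACH filter differs in that it is synthesized from several training images to tolerate distortion, but it is applied to the scene in exactly this correlate-and-find-peak manner.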
KEYWORDS: Sensors, Integral imaging, Cameras, 3D image processing, Image sensors, 3D image reconstruction, Calibration, 3D metrology, 3D modeling, Reconstruction algorithms
Integral imaging is a 3D sensing and imaging technique. Conventional 3D integral imaging systems require that all the
sensor positions in the image capture stage are known. However, in certain image pickup geometries, it may be difficult to
obtain accurate measurements of sensor positions, such as with sensors on moving platforms and/or randomly distributed
sensors. In this paper, we present a 3D integral imaging method with unknown sensor positions. In the proposed method,
all the sensors are randomly distributed on a plane with parallel optical axes. Moreover, only the relative position of one
pair of sensors is needed, whereas all other sensor positions are unknown. We combine image correspondence extraction, the
camera perspective model, two view geometry and computational integral imaging 3D reconstruction techniques to
estimate the unknown sensor positions and reconstruct 3D images. Experiments carried out both in the lab and
outdoors show the feasibility of the proposed method in 3D integral imaging. Furthermore, the experiments indicate that
the quality of reconstructed images by using the proposed sensor position estimation algorithm can be improved
compared to the ones by using the physical measurements of the sensor positions.
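The geometric relation that makes unknown-baseline estimation possible is the pinhole disparity equation for parallel-axis cameras, d = f·b / z: once the depth z of one matched feature is known (e.g. via the one measured sensor pair), the baseline b between any other pair follows from its disparity. A one-line sketch of that relation, with illustrative parameter names:

```python
def baseline_from_disparity(disparity_px, depth, focal_px):
    """For parallel-axis pinhole cameras, disparity d = f * b / z (pixels),
    so an unknown baseline b follows from a known depth and disparity."""
    return disparity_px * depth / focal_px
```

For example, with a focal length of 1000 px and a feature at 2 m depth, a disparity of 50 px implies a 0.1 m baseline between the two sensors.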
In this paper, we propose 3D sensing and visualization of micro-objects using an axially distributed image capture
system. In the proposed method, the micro-object is optically magnified and the axial images of magnified micro-object
are recorded using axially distributed image capture. The recorded images are used to visualize the 3D scene using the
computational reconstruction algorithm based on ray back-projection. To show the usefulness of the proposed method,
we carry out preliminary experiments and present the results.
KEYWORDS: 3D image processing, 3D image reconstruction, Image processing, Light scattering, Scattering, Integral imaging, Visualization, Water, 3D modeling, Image sensors
In this paper, three-dimensional (3D) imaging of objects in scattering medium is presented. Synthetic Aperture Integral
Imaging (SAII) technique is used to record multiple images with different perspectives. Each recorded image is degraded
by light scattering. This degradation function can be modeled as Gaussian. The unknown mean parameter of the
Gaussian distribution can be estimated using maximum likelihood estimation (MLE). The effects of scattering can
be remedied by using estimated degradation function and statistical image processing techniques such as histogram
stretching and matching. 3D scene can be visualized by computational 3D reconstruction algorithms of integral imaging.
To show the ability of 3D object visualization in scattering medium, experimental results are presented.
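Of the statistical image processing steps mentioned above, histogram stretching is the simplest to illustrate: scattering lifts and compresses the intensity range, and a percentile-based linear stretch partly restores contrast. A minimal sketch (percentile limits are an assumed choice):

```python
import numpy as np

def histogram_stretch(img, lo_pct=1, hi_pct=99):
    """Linear contrast stretch between two intensity percentiles.
    Scattering raises the background level and compresses contrast;
    stretching maps the occupied range back onto [0, 1]."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    out = (img.astype(float) - lo) / max(hi - lo, 1e-12)
    return np.clip(out, 0.0, 1.0)
```

Percentile limits rather than the raw min/max make the stretch robust to a few outlier pixels, which matters in noisy scattered-light images.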
In this keynote address paper, an overview of multi-view three-dimensional (3D) imaging with passive sensing for
underwater applications is presented. The 3D Synthetic Aperture Integral Imaging (SAII) technique is adapted for
underwater sensing. The change in apparent object distance caused by the refractive index of water must be accounted
for in computational 3D image reconstructions. An experimental environment with objects in water and SAII system in
air or water is presented. Experimental results are presented to demonstrate the ability of the underwater 3D SAII system.
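The apparent-distance correction noted above follows from refraction at the air-water interface; under the common small-angle approximation, an object in water appears closer by the factor of the refractive index. A one-line sketch:

```python
def apparent_distance(real_distance, n_water=1.33):
    """Small-angle approximation: refraction at the air-water interface
    makes a submerged object appear at real_distance / n."""
    return real_distance / n_water
```

The computational reconstruction must therefore use the corrected (apparent) object distance, or the reconstructed depth planes will not coincide with the true object positions.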
In this paper, we present a method to implement computational three-dimensional (3D) integral imaging (II). This
method is based on the Pixels of the Elemental Image Rearrangement Technique (PERT). In our proposed method for
computational reconstruction of II, the reconstructed 3D image is obtained using all of the elemental images
captured through the lenslet array. Instead of averaging the elemental images, our proposed method rearranges pixels of
each elemental image. Therefore, the reconstructed 3D image has the same number of pixels as the entire elemental
images' pixels. To verify this computational reconstruction method, we have implemented optical experiments.
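The rearrangement idea can be illustrated in 1D: instead of averaging overlapping back-projected copies, the pixels of the K elemental images are interleaved so that every recorded pixel survives into the reconstruction. The interleaving order in practice depends on the depth plane and lenslet geometry; the fixed-order sketch below only shows the pixel-count-preserving principle:

```python
import numpy as np

def pert_1d(elemental_images):
    """1D sketch of pixel rearrangement: interleave the pixels of K
    elemental images of N pixels each into a K*N-pixel reconstruction,
    so no recorded pixel is averaged away."""
    eis = np.asarray(elemental_images, dtype=float)  # shape (K, N)
    K, N = eis.shape
    out = np.empty(K * N)
    for i in range(N):
        for k in range(K):
            out[i * K + k] = eis[k, i]   # pixel i of lenslet k
    return out
```

The output length K*N matches the total number of recorded elemental-image pixels, which is exactly the resolution claim made in the abstract.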