In this paper, we propose a novel multi-view generation framework that considers the spatiotemporal consistency of the synthesized multi-view images. Rather than independently filling the holes in each generated image, the proposed framework gathers hole information from every synthesized view into a reference viewpoint. The method then constructs a hole map and an SVRL (single-view reference layer) at the reference viewpoint before restoring the holes in the SVRL, thereby generating spatiotemporally consistent views. The hole map is constructed from the depth information of the reference viewpoint and the input/output baseline-length ratio, so the holes in the SVRL can also represent the holes in the other multi-view images. To achieve temporally consistent hole filling in the SVRL, the holes in the current SVRL are restored by propagating pixel values from the previous SVRL. Remaining holes are then filled using a depth- and exemplar-based inpainting method. Experimental results show that the proposed method generates high-quality, spatiotemporally consistent multi-view images in various input/output environments. In addition, the proposed framework reduces the complexity of the hole-filling process by avoiding repeated hole filling.
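As a rough illustration of how depth and the baseline ratio determine disocclusion holes, the following sketch shifts a 1-D row of a reference view by a per-pixel disparity and marks the pixels that no source pixel covers. The disparity model (`disparity = k * baseline_ratio / depth`) and the constant `k` are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def hole_map(depth, baseline_ratio, k=2.0):
    """Mark disocclusion holes in a synthesized 1-D view.

    Each reference pixel x is forward-warped to x + disparity(x), with
    disparity assumed proportional to baseline_ratio / depth (hypothetical
    model). Target positions that no source pixel reaches are holes.
    """
    w = depth.shape[0]
    covered = np.zeros(w, dtype=bool)
    disparity = np.round(k * baseline_ratio / depth).astype(int)
    for x in range(w):
        tx = x + disparity[x]
        if 0 <= tx < w:
            covered[tx] = True
    return ~covered  # True where a hole appears in the synthesized view

# A near object (depth 2) in front of a far background (depth 10):
depth = np.array([10.0, 10.0, 2.0, 2.0, 10.0, 10.0])
holes = hole_map(depth, baseline_ratio=1.0)
# A hole opens just behind the shifted foreground object.
```

Because the hole positions depend only on depth and the baseline ratio, the same map can be rescaled to describe the holes of every output view, which is what lets a single SVRL stand in for all of them.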
This study aims to enhance the cubic (depth) effect by reproducing images with depth perception using chromostereopsis in human visual perception. Psychophysical experiments, based on the theory that the cubic effect depends on the lightness of the background in the chromostereoptic effect and the chromostereoptic reversal effect, showed that the cubic effect differs depending on the lightness of the background and the hue combination of the neighboring colors.
Drawing on these experimental results, the cubic-effect-enhancing algorithm classifies the input image into foreground, middle, and background layers according to depth. For each classified layer, the color factors identified through the psychophysical experiments are adaptively controlled to produce an enhanced cubic effect appropriate to the properties of human visual perception and the characteristics of the input image.
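The layer classification and per-layer adjustment can be sketched as follows. The depth thresholds and per-layer gains are placeholders, and saturation is used as a stand-in for whichever color factors the experiments identified; the actual factors and values would come from the psychophysical results.

```python
import numpy as np

def classify_layers(depth, near_t, far_t):
    """Split a depth map into foreground / middle / background masks
    (thresholds are illustrative, not from the paper)."""
    fg = depth < near_t
    bg = depth >= far_t
    mid = ~(fg | bg)
    return fg, mid, bg

def enhance_cubic_effect(saturation, depth, near_t=3.0, far_t=7.0,
                         gains=(1.2, 1.0, 0.8)):
    """Boost a color factor (here, saturation in [0, 1]) in the foreground
    and reduce it in the background so near regions appear to advance.
    The gain triple is an assumed example."""
    fg, mid, bg = classify_layers(depth, near_t, far_t)
    out = saturation.copy()
    out[fg] *= gains[0]
    out[mid] *= gains[1]
    out[bg] *= gains[2]
    return np.clip(out, 0.0, 1.0)
```

In practice the gains would be chosen per layer from the background-lightness and hue-combination dependencies found in the experiments, rather than fixed constants.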
In the field of computer vision, image mosaicking is achieved using image features, such as textures, colors, and shapes between corresponding images, or local descriptors representing neighborhoods of feature points extracted from corresponding images. Among these approaches, image mosaicking based on feature points has attracted recent attention due to the simplicity of the geometric transformation, regardless of the distortion and intensity differences generated by camera motion in consecutive images. Yet, since most feature-point matching algorithms extract feature points using gray values, identifying corresponding points becomes difficult under changing illumination or in images with similar intensities. To solve these problems, this paper proposes an image-mosaicking method based on feature points that uses the color information of the images. First, the digital values acquired from a real digital color camera are converted to the values of a virtual camera with distinct narrow bands. Values that depend on the surface reflectance and are invariant to the chromaticity of various illuminations are then derived from the virtual-camera values and defined as color-invariant values. The validity of these color-invariant values is verified in a test using a Macbeth ColorChecker under simulated illuminations; the test also compares the proposed method with the conventional SIFT algorithm. The matching accuracy between feature points extracted using the proposed method is increased, while image mosaicking using color information is also achieved.
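To see why narrow sensor bands help, note that under a narrow-band sensor model an illumination change acts approximately as a per-channel (von Kries diagonal) scaling, which cancels in within-channel ratios between nearby pixels. The sketch below shows this cancellation with per-channel log-ratios of adjacent pixels; it is a generic invariant of this family, not the paper's exact derivation, and it assumes strictly positive sensor responses.

```python
import numpy as np

def channel_ratio_invariants(img):
    """Per-channel log-ratios between horizontally adjacent pixels.

    Under a narrow-band sensor and a diagonal illumination change (each
    channel multiplied by a constant), the constant cancels:
    log(s * r[x]) - log(s * r[x+1]) = log(r[x]) - log(r[x+1]).
    `img` is an (H, W, 3) array of positive values.
    """
    log_img = np.log(img)
    return log_img[:, 1:, :] - log_img[:, :-1, :]

rng = np.random.default_rng(0)
scene = rng.uniform(0.1, 1.0, size=(4, 5, 3))
tinted = scene * np.array([2.0, 0.5, 1.3])  # simulated illuminant change
# channel_ratio_invariants(scene) and channel_ratio_invariants(tinted) agree.
```

Matching feature points on such invariant values rather than raw gray values is what makes the correspondences robust to illumination changes.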
Image acquisition devices do not inherently have a color constancy mechanism like that of the human visual system. The machine color constancy problem can be circumvented using a white-balancing technique based on accurate illumination estimation. Unfortunately, previous studies cannot give satisfactory results for both accuracy and stability under various conditions. To overcome these problems, we suggest a new method: spatial and temporal illumination estimation. This method, an evolution of the Retinex and Color by Correlation methods, predicts an initial illuminant point and estimates the scene illumination between this point and sub-gamuts derived from luminance levels. The proposed method can raise the estimation probability not only by detecting the motion of scene reflectances but also by finding valid scenes using difference information between sequential scenes. The proposed method outperforms recently developed algorithms.
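A minimal sketch of the spatial-plus-temporal idea: estimate the illuminant per frame with a simple Retinex-style (max-RGB) estimator, recursively smooth the estimate across the sequence for stability, then apply a von Kries white balance. The max-RGB estimator and the smoothing factor are stand-ins for the paper's combined Retinex / Color by Correlation scheme, used here only to show the structure.

```python
import numpy as np

def estimate_illuminant(frame):
    """Retinex-style (max-RGB) per-frame illuminant estimate: assumes the
    brightest response in each channel reflects the illuminant."""
    return frame.reshape(-1, 3).max(axis=0)

def temporal_estimate(frames, alpha=0.7):
    """Recursively blend per-frame estimates over the sequence to
    stabilize the result (alpha is an assumed smoothing factor)."""
    est = estimate_illuminant(frames[0])
    for f in frames[1:]:
        est = alpha * est + (1 - alpha) * estimate_illuminant(f)
    return est

def white_balance(frame, illuminant):
    """Von Kries correction: divide each channel by its illuminant value."""
    return frame / illuminant
```

If a scene contains a white patch, the recovered illuminant matches the simulated one and balancing restores the scene colors; the paper's contribution lies in making the per-frame estimate and the temporal validation far more robust than this sketch.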
This paper proposes a method of filtering a digital sensor image to efficiently reduce noise and improve the sharpness of the image. To reduce the noise in an image captured by a conventional image sensor, the proposed noise reduction filter selectively outputs one of the results obtained by recursive temporal and spatial noise filtering. With the proposed noise-filtering method, image detail is well preserved, and the filtering artifacts that temporal noise filtering can generate along moving-object boundaries in image sequences are prevented. Since the sharpness of a noise-filtered image is inevitably deteriorated by noise filtering, an adaptive noise-suppressed sharpening filter is also proposed. The proposed sharpening filter generates its filter mask adaptively according to the pixel-similarity information within the mask, and it achieves consistent image quality through an easily controllable gain-control algorithm without noise boost-up in smooth regions.
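The selection between temporal and spatial filtering can be sketched per pixel: where the frame difference is small, a recursive temporal filter is safe and preserves detail; where it is large, the scene is assumed to be moving and a spatial result is used instead, avoiding ghosting along moving-object boundaries. The threshold, blend weight, and 3-tap spatial filter below are illustrative stand-ins, not the paper's actual filters.

```python
import numpy as np

def box3(row):
    """3-tap spatial mean filter with edge replication, as a simple
    spatial-denoiser stand-in (operates on a 1-D row)."""
    p = np.pad(row, 1, mode='edge')
    return (p[:-2] + p[1:-1] + p[2:]) / 3.0

def denoise(cur, prev_filtered, motion_t=0.5, alpha=0.5):
    """Per-pixel choice between recursive temporal and spatial filtering.

    temporal: recursive blend with the previously filtered frame.
    spatial:  local mean of the current frame.
    Pixels whose frame difference exceeds motion_t are treated as moving
    and take the spatial result, preventing temporal-filter ghosting.
    """
    temporal = alpha * prev_filtered + (1 - alpha) * cur
    spatial = box3(cur)
    moving = np.abs(cur - prev_filtered) > motion_t
    return np.where(moving, spatial, temporal)
```

Static pixels keep the stronger temporal smoothing, while the pixels on a moving edge fall back to the spatial filter.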
On a plasma display panel (PDP), the luminous elements of red, green, and blue have different time responses. As a result, colored trails and edges appear behind and in front of moving objects. To reduce these color artifacts, this paper proposes a motion-based discoloring method. The discoloring values are modeled as linear functions of the motion vector to reduce hardware complexity. Experimental results show that the proposed method effectively removes the colored trails and edges of moving objects, and clearer image sequences are observed compared to conventional ones.
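The linear model keeps the hardware simple: each channel's correction is just a multiply-add on the motion-vector magnitude. The sketch below applies such a correction to one pixel; the coefficients are hypothetical placeholders, whereas in practice they would be fit to each phosphor's measured time response.

```python
def discolor(pixel, mv, coeff_a=(0.5, 0.2, 0.1), coeff_b=(0.0, 0.0, 0.0)):
    """Subtract a motion-proportional discoloring value from the R, G, B
    sub-pixels: correction = a * |mv| + b per channel (linear in the
    motion vector, so only a multiplier and adder are needed per channel).
    Coefficient values here are illustrative, not from the paper."""
    out = []
    for value, a, b in zip(pixel, coeff_a, coeff_b):
        correction = a * abs(mv) + b
        out.append(min(max(value - correction, 0.0), 255.0))  # clamp to 8-bit range
    return out
```

A slower phosphor gets a larger coefficient, so faster motion produces a proportionally stronger pre-correction on that channel.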