Accurate measurement of sound speed is important for calibrating a sound velocity profiler, which provides real-time sound velocity to sonar equipment in oceanographic surveys. The sound velocity profiler calculates the sound speed by measuring the time-of-flight of a single 1 MHz acoustic pulse over a path of about 300 mm. A standard sound velocimeter was used to calibrate the sound velocity profiler in pure water at temperatures of 278, 283, 288, 293, 298, 303 and 308 K in a thermostatic vessel at one atmosphere. The sound velocity profiler was deployed in the thermostatic vessel alongside the standard sound velocimeter and two platinum resistance thermometers (PRTs), which were calibrated to 0.002 K by comparison with a standard PRT. A time-of-flight circuit board was used to measure the time-of-flight with 22 ps precision. The sound speed measured by the sound velocity profiler was compared with the standard sound speed calculated from the UNESCO equation to give the laboratory calibration coefficients, and, after removal of a bias, it demonstrated agreement with CTD-derived sound speed using Del Grosso's seawater equation.
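The underlying computation is simple: speed is path length over time-of-flight, and the timing precision bounds the speed resolution. A minimal sketch, with illustrative numbers rather than the paper's actual data:

```python
def sound_speed(path_length_m, tof_s):
    """Sound speed from a single-pulse time-of-flight measurement."""
    return path_length_m / tof_s

# Illustrative numbers (not the paper's data): a 0.300 m path and a
# ~200.4 us flight time give ~1497 m/s, close to pure water near 298 K.
c = sound_speed(0.300, 200.4e-6)

# Sensitivity of the result to the stated 22 ps timing precision:
# a fractional timing error dt/t maps directly to dc/c.
dc = c * 22e-12 / 200.4e-6  # on the order of 1e-4 m/s
```

This shows why picosecond-level timing matters: over a ~300 mm path, a 22 ps resolution corresponds to a sound-speed resolution well below 1 mm/s.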
Traditional matching algorithms cannot be applied directly to fisheye image matching because of the large distortion present in fisheye images. Therefore, a matching algorithm based on uncorrected fisheye images is proposed. The algorithm adopts a local feature description method that combines the MSER detector with the CS-LBP descriptor to obtain image features. First, the two uncorrected fisheye images captured by a binocular vision system are related through the epipolar constraint. Then region detection is performed with MSER, and ellipse fitting is applied to the detected regions. The MSER regions are subsequently described with CS-LBP. Finally, to exclude mismatched points from the initial match, the random sample consensus (RANSAC) algorithm is adopted to achieve an exact match. Experiments show that the method performs well on uncorrected fisheye image matching.
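The RANSAC step's structure is independent of the geometric model being fitted. The sketch below uses a plain 2-D translation as a stand-in for the epipolar/homography model of the paper, purely to show the hypothesize-and-verify loop that rejects mismatches; all names and parameters are illustrative:

```python
import numpy as np

def ransac_translation(p1, p2, iters=200, tol=2.0, rng=None):
    """Generic RANSAC loop: fit a 2-D translation between matched
    point sets and keep the largest consensus set (inlier mask).
    A stand-in for the homography/epipolar model used in practice."""
    rng = np.random.default_rng(rng)
    best_mask = np.zeros(len(p1), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(p1))          # minimal sample: one match
        t = p2[i] - p1[i]                  # candidate translation
        mask = np.linalg.norm(p1 + t - p2, axis=1) < tol
        if mask.sum() > best_mask.sum():   # keep largest consensus set
            best_mask = mask
    return best_mask
```

Matches outside the consensus set are discarded as mismatches; the surviving inliers form the "exact match" used for further processing.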
Uncorrected fish-eye lens images are characterized by resolution that decreases away from the image center and by severe non-linear distortion, so traditional feature matching methods, which first correct the distortion and then match features in the image, cannot achieve good performance in fish-eye lens applications. The Center-Symmetric Local Binary Pattern (CS-LBP) is a descriptor based on grayscale information from a pixel's neighborhood, with strong grayscale invariance and rotation invariance. In this paper, CS-LBP is combined with the Scale Invariant Feature Transform (SIFT) to solve the problem of feature point matching on uncorrected fish-eye images. We first extract interest points in a pair of fish-eye images with SIFT and then describe the regions around the interest points with CS-LBP. Finally, the similarity of the regions is evaluated using the chi-square distance to obtain a unique pair of matched points: for a specified interest point, the corresponding point in the other image can be found. The experimental results show that the proposed method achieves satisfying matching performance on uncorrected fish-eye lens images. This study should help extend the applications of fish-eye lenses in the fields of 3D reconstruction and panorama restoration.
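A minimal numpy sketch of the two ingredients named above, the CS-LBP code (4 center-symmetric comparisons on a 3×3 neighborhood, giving a 4-bit code) and the chi-square distance between descriptor histograms; the threshold value is illustrative:

```python
import numpy as np

def cs_lbp(img, threshold=0.01):
    """CS-LBP on a 3x3 neighbourhood: compare the 4 center-symmetric
    pixel pairs, yielding a 4-bit code (0..15) per interior pixel."""
    g = img.astype(float)
    c = g[1:-1, 1:-1]  # interior pixels (codes undefined on the border)
    pairs = [
        (g[1:-1, 2:],  g[1:-1, :-2]),   # east  vs west
        (g[2:,  2:],   g[:-2, :-2]),    # SE    vs NW
        (g[2:,  1:-1], g[:-2, 1:-1]),   # south vs north
        (g[2:,  :-2],  g[:-2, 2:]),     # SW    vs NE
    ]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (a, b) in enumerate(pairs):
        code |= ((a - b) > threshold).astype(np.uint8) << bit
    return code

def chi_square(h1, h2, eps=1e-10):
    """Chi-square distance between two descriptor histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))
```

In practice the codes inside each interest region are binned into a histogram, and the chi-square distance between histograms ranks candidate matches.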
KEYWORDS: Calibration, Stereo vision systems, Cameras, Spherical lenses, Lenses, Visual process modeling, 3D modeling, Image processing, 3D image processing, Digital signal processing
A fish-eye lens is a short-focal-length (f = 6–16 mm) lens whose field of view (FOV) approaches or even exceeds 180×180 degrees. Much of the literature shows that a multiple-view geometry system built with fish-eye lenses obtains a larger stereo field than a traditional stereo vision system based on a pair of perspective projection images. Since a fish-eye camera usually has a wider-than-hemispherical FOV, most image processing approaches based on the pinhole camera model of conventional stereo vision are unsuitable for this category of stereo vision built with fish-eye lenses. This paper focuses on the calibration and epipolar rectification method for a novel machine vision system set up with four fish-eye lenses, called the Special Stereo Vision System (SSVS). The characteristic of SSVS is that it can produce 3D coordinate information over the whole observation space and simultaneously acquire a 360°×360° panoramic image with no blind area, using a single vision device with one static shot. Parameter calibration and epipolar rectification are the basis for SSVS to realize 3D reconstruction and panoramic image generation.
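To see why the pinhole model fails here, compare it with the equidistance fish-eye model commonly used for such lenses: a pinhole projection diverges as the ray angle approaches 90°, while the equidistance model keeps every ray at a finite image radius. A hedged sketch of that idealized model (real lenses add distortion terms on top):

```python
import numpy as np

def fisheye_project(X, f):
    """Equidistance fish-eye model: image radius r = f * theta, where
    theta is the angle between the ray and the optical axis (z).
    An idealization; a calibrated lens needs extra distortion terms."""
    x, y, z = np.asarray(X, dtype=float)
    theta = np.arctan2(np.hypot(x, y), z)   # angle off the optical axis
    phi = np.arctan2(y, x)                  # azimuth in the image plane
    r = f * theta
    return np.array([r * np.cos(phi), r * np.sin(phi)])
```

A ray at 90° off-axis (impossible for a pinhole camera) maps to the finite radius f·π/2, which is exactly what makes hemispherical and wider FOVs representable.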
KEYWORDS: Cameras, Calibration, Visual process modeling, Spherical lenses, Lenses, Mathematical modeling, Stereo vision systems, 3D modeling, 3D vision, Imaging systems
In the geometric calibration of stereoscopic cameras, the objective is to determine a set of parameters that describe the mapping from 3D reference coordinates to 2D image coordinates and that indicate the geometric relationships between the cameras. While various methods for stereo cameras with ordinary lenses can be found in the literature, stereoscopic vision with extremely wide-angle lenses has been discussed much less. Spherical stereoscopic vision is increasingly attractive in computer vision applications, but its use for 3D measurement purposes is limited by the lack of an accurate, general, and easy-to-use calibration procedure. Hence, we present a geometric model for spherical stereoscopic vision equipped with extremely wide-angle lenses, build a corresponding generic mathematical model, and propose a method for calibrating the model's parameters. This paper shows practical results from the calibration of two high-quality panomorph lenses mounted on cameras with 2048×1536 resolution. The stereoscopic vision system here is flexible: the position and orientation of the cameras can be adjusted freely. The calibration results include the interior orientation, the exterior orientation, and the geometric relationships between the two cameras. The achieved level of calibration accuracy is very satisfying.
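Calibration quality of the interior and exterior orientation is typically judged by reprojection error. The sketch below uses a plain pinhole interior orientation for brevity (the panomorph model of the paper would replace the projection line); all parameter names are illustrative:

```python
import numpy as np

def reprojection_rmse(points3d, points2d, R, t, f, c):
    """RMS reprojection error: map 3-D reference points through the
    exterior orientation (R, t) and a simple pinhole interior
    orientation (focal length f, principal point c), then compare
    with the observed 2-D image points. Calibration minimizes this."""
    Xc = points3d @ R.T + t                  # world -> camera frame
    proj = f * Xc[:, :2] / Xc[:, 2:3] + c    # perspective projection
    return np.sqrt(np.mean(np.sum((proj - points2d) ** 2, axis=1)))
```

Swapping the projection line for a wide-angle lens model (such as the equidistance model) turns this into the residual actually minimized in extreme wide-angle calibration.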
Omnidirectional vision is of definite significance because of its ability to acquire a full 360° horizontal field of vision simultaneously. In this paper, an embedded original omnidirectional vision navigator (EOVN) based on a fish-eye lens and embedded technology is investigated. A fish-eye lens is one of the special ways to establish omnidirectional vision; however, it suffers from an unavoidable, inherent and severe distortion. A unique integrated navigation method conducted on the basis of target tracking is proposed. It is composed of multi-target recognition and tracking, distortion rectification, spatial location, and navigation control, and is called RTRLN. In order to adapt to different indoor and outdoor navigation environments, we embed mean-shift and dynamic threshold adjustment into the particle filter algorithm to improve the efficiency and robustness of tracking. RTRLN has been implemented on an independently developed embedded platform; the EOVN works like a smart camera based on CMOS+FPGA+DSP. It can guide various vehicles in outdoor environments by tracking diverse marks hung in the air. The experiments prove that the EOVN is particularly suitable for guidance applications with high requirements on precision and repeatability, and the research results have been validated in practical applications.
This paper aims to promote the application of fish-eye lenses. Accurate parameter calibration and effective distortion rectification of an imaging device are of utmost importance in machine vision. A fish-eye lens produces a hemispherical field of view of an environment, which is significant because it provides a panoramic sight in a single compact visual scene; but a fish-eye image has an unavoidable, inherently severe distortion. A precise optical center is the precondition for calibrating the other parameters and correcting the distortion, so three different optical center calibration methods are investigated for diverse applications. The Support Vector Machine (SVM) and a Spherical Equidistance Projection Algorithm (SEPA) are integrated to replace traditional rectification methods. SVM is a machine learning method based on statistical learning theory with good fitting, regression, and classification capabilities; in this research, SVM provides a mapping table between the fish-eye image and a standard image as seen by the human eye. Two novel training models are designed, and SEPA is applied to improve the rectification near the edge of the fish-eye image. The validity and effectiveness of these methods are demonstrated on real images.
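The essence of such rectification is a backward mapping: for every pixel of the desired perspective (rectified) image, find the fish-eye pixel to sample from. A sketch under an ideal equidistance assumption (the paper replaces this analytic mapping with an SVM-learned table plus SEPA; focal lengths below are illustrative):

```python
import numpy as np

def rectify_coords(u, v, f_fish, f_persp):
    """Map a perspective (rectified) pixel offset (u, v) from the
    principal point back to the fish-eye pixel offset it should be
    sampled from, assuming r_fish = f_fish * theta (equidistance)."""
    r_p = np.hypot(u, v)
    theta = np.arctan2(r_p, f_persp)       # ray angle for this pixel
    r_f = f_fish * theta                   # equidistance image radius
    scale = r_f / np.maximum(r_p, 1e-12)   # radial compression factor
    return u * scale, v * scale
```

Applying this to every output pixel and interpolating the fish-eye image at the returned coordinates produces the corrected view; the learned mapping table plays the same role without assuming an ideal lens.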
A stereo visual odometer for a vision-based navigation system is proposed in this paper. The stereo visual odometer obtains motion data to estimate the position and attitude of an ALV (Autonomous Land Vehicle). Two key technologies of the stereo visual odometer are discussed. The first is using SIFT (Scale Invariant Feature Transform) to extract suitable features, match point pairs between them, and track features of the same object point across consecutive frames. The second is using the matching and tracking results to obtain the 3-D coordinates of the feature points at different times and to compute the motion parameters by motion estimation. Experiments were conducted in an unknown outdoor environment. The results show that the stereo visual odometer is accurate, and its measurement error does not increase as the travelled distance increases; it can serve as an important supplement to a conventional odometer.
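The second step above, recovering motion parameters from matched 3-D points of consecutive frames, is classically solved in closed form by the SVD (Kabsch) method. A sketch, assuming the correspondences have already been cleaned of outliers (real odometry wraps this in a robust loop):

```python
import numpy as np

def estimate_motion(P, Q):
    """Least-squares rigid motion (R, t) with Q ~= P @ R.T + t from
    matched 3-D feature points of two consecutive frames (Kabsch)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)              # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])             # guard against reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

Chaining the per-frame (R, t) estimates yields the vehicle trajectory; because each estimate comes from absolute 3-D structure, the error need not accumulate with distance in the way wheel-odometry drift does.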
Omnidirectional vision (omni-vision) can capture an extremely wide view simultaneously. While the omni-image provides a hemispherical field of view, it carries a highly unavoidable inherent distortion; in this paper, a method called Spherical Perspective Projection is used to correct such distorted images. Omni-vision target recognition and tracking with a fisheye lens for AGVs is significant because all visual information of the three-dimensional space can be acquired at once. A novel beacon model and an omni-vision tracker for mobile robots are described. At present, target-model research faces several problems, such as outdoor illumination, target occlusion, and target loss; outdoor illumination and beacon occlusion, in particular, are key problems in need of an effective solution. The new beacon model, which features a distinctive topological shape, can be recognized outdoors even when part of the object is occluded. In this paper an improved omni-vision object tracking method based on the mean shift algorithm is proposed. The mean shift algorithm, a powerful technique for tracking objects in image sequences with complex backgrounds, has proved successful for fast computation and effective tracking. The recognition and tracking functions have been demonstrated on an experimental platform.
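The core of mean-shift tracking can be sketched in a few lines: given a back-projection map (the likelihood of the target's appearance at each pixel), the search window is repeatedly moved to the weighted centroid of the pixels it covers until it stops moving. A minimal version, with a synthetic map standing in for a real colour back-projection:

```python
import numpy as np

def mean_shift(weights, window, max_iter=20):
    """Mean-shift search on a back-projection map `weights`:
    window = (y, x, h, w); move the window to the weighted centroid
    of the pixels inside it until convergence."""
    y, x, h, w = window
    ys, xs = np.mgrid[0:weights.shape[0], 0:weights.shape[1]]
    for _ in range(max_iter):
        ww = np.zeros_like(weights)
        ww[y:y + h, x:x + w] = weights[y:y + h, x:x + w]
        if ww.sum() == 0:
            break                          # window lost the target
        cy = int(round((ys * ww).sum() / ww.sum() - h / 2))
        cx = int(round((xs * ww).sum() / ww.sum() - w / 2))
        cy = min(max(cy, 0), weights.shape[0] - h)
        cx = min(max(cx, 0), weights.shape[1] - w)
        if (cy, cx) == (y, x):
            break                          # converged
        y, x = cy, cx
    return y, x, h, w
```

The speed of this loop, a handful of window-local sums per frame, is what makes mean-shift attractive for real-time tracking on complex backgrounds.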
An image feature extraction method for an image pattern recognition algorithm is studied in this paper, and a high-speed, real-time method of image locating and feature extraction is presented. The method consists of two key techniques: one locates the target area of the measured object using a mask matrix, and the other extracts the edge features by template matching. The experimental results show that the method is a high-speed, high-precision image recognition algorithm that satisfies the high-speed, real-time requirements of on-line detection.
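Template matching of the kind described is usually scored with normalized cross-correlation. A brute-force sketch (real-time implementations would restrict the search to the mask-located target area and use integral-image tricks; the loop here is kept naive for clarity):

```python
import numpy as np

def match_template(img, tmpl):
    """Normalized cross-correlation of a template against every
    position in the image; returns the best (row, col) and its score."""
    ih, iw = img.shape
    th, tw = tmpl.shape
    t = tmpl - tmpl.mean()
    best, pos = -np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            w = img[r:r + th, c:c + tw]
            w = w - w.mean()
            denom = np.sqrt((w ** 2).sum() * (t ** 2).sum())
            if denom > 0:
                score = (w * t).sum() / denom   # in [-1, 1]
                if score > best:
                    best, pos = score, (r, c)
    return pos, best
```

Restricting this search to the mask-matrix target region is exactly what makes the two-stage locate-then-match pipeline fast enough for on-line detection.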
KEYWORDS: Wavelets, Digital signal processing, Image enhancement, Signal processing, Wavelet transforms, Image processing, Multimedia, Software development, Control systems, Roads
This paper presents a new adaptive-gain calculation approach for the adaptive image enhancement algorithm based on the wavelet transform. The basic technique is to select two thresholds that divide the input into three parts after the wavelet coefficients are normalized. Wavelet coefficients smaller than the lower threshold are set to zero, which provides a denoising effect. Coefficients greater than the larger threshold are left unchanged. For the middle part, a function is chosen that gives the output an S-shaped curve, so that the algorithm yields clear contrast; this function is the key. The final goal is to use this method in on-line vision measurement, so we have chosen the TI TMS320DM642 digital signal processor (DSP) for its powerful multimedia processing capability. Moreover, TI provides a variety of software development libraries as well as third-party tools, all of which make development faster and more convenient. After the technique presented in this paper is implemented on the DSP, a series of optimizations will be performed to make it suitable for industrial real-time use.
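The three-part gain rule above can be sketched directly. The smoothstep curve used for the middle band is one illustrative choice of S-shaped function, not necessarily the one in the paper:

```python
import numpy as np

def adaptive_gain(coef, t1, t2):
    """Piecewise gain on normalized wavelet coefficients (|coef| <= 1):
    zero magnitudes below t1 (denoising), pass magnitudes above t2
    unchanged, and push the middle band through an S-shaped curve
    to stretch contrast. Requires t1 < t2."""
    a = np.abs(coef)
    s = np.sign(coef)
    # smoothstep-style S-curve on [t1, t2], rescaled back to [t1, t2];
    # one illustrative choice of S-shaped mapping
    u = (a - t1) / (t2 - t1)
    mid = t1 + (t2 - t1) * (3 * u ** 2 - 2 * u ** 3)
    return np.where(a < t1, 0.0, np.where(a > t2, coef, s * mid))
```

The mapping is continuous at both thresholds (the S-curve meets t1 and t2 at its endpoints), which avoids introducing artificial edges into the enhanced image.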
A binocular vision measurement system with two CCDs whose optical axes are mutually perpendicular is adopted to obtain the attitude of a dynamic missile. Two methods to obtain the axis vector of the measured missile are discussed in this paper. One derives the linear equation of the projection of the missile axis in the spatial coordinate system using the axis-line method; the other obtains the axis vector of the measured object based on the perpendicularity of the two CCDs' optical axes. Finally, using the missile attitude measurement principle, the derivation of the moving attitude from the axis vector obtained above is also discussed. Simulation experiments and analysis show that the measurement method has high precision and can meet the measurement requirements for the dynamic parameters of a moving object at long distance.
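Once the axis vector is recovered, the attitude angles follow from elementary trigonometry. A sketch under one common axis convention (conventions and names here are illustrative; roll about the axis is unobservable from the axis vector alone):

```python
import numpy as np

def attitude_from_axis(v):
    """Pitch and yaw angles (radians) of an axis direction vector
    v = (x, y, z), with x forward and z up (assumed convention)."""
    x, y, z = np.asarray(v, dtype=float)
    yaw = np.arctan2(y, x)                  # rotation about the vertical
    pitch = np.arctan2(z, np.hypot(x, y))   # elevation above horizontal
    return pitch, yaw
```

This is the final step of both axis-vector methods described above: the two angles, tracked over frames, give the moving attitude of the missile.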
The performance of a typical machine vision system depends on the quality of the images, which is directly affected by the illumination. To obtain high-contrast images, the target must be illuminated in such a way that the distinctions between target areas and peripheral areas are as clear as possible. A good design can improve image resolution and make software programming easier, while a bad design will inevitably cause many problems: for example, specular reflection and overexposure may hide important information, and shadowing will blur the edges in images. We therefore present a new light source design, called 24-phase colour light, together with various lighting techniques to improve the quality of the illumination.