In many automated machine vision inspection applications, a line scan camera is used to acquire image data from a moving object as it passes beneath a controlled light source. The geometry between the line scan camera and the light source is chosen to emphasize certain object characteristics. In most cases the object is translated linearly, and because of this motion, image blurring may occur if the camera's exposure is not properly controlled. This paper describes a method by which an ordinary line scan camera can be 'tricked' into providing independent exposure control.
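To see why exposure control matters, consider the blur budget: the image smears by the distance the object travels during the exposure. The sketch below is a back-of-the-envelope illustration with assumed parameter names, not a formula from the paper.

```python
# Hypothetical illustration: how long may a line scan exposure last before
# motion blur exceeds a given fraction of a pixel? All names are assumptions.
def max_exposure_s(web_speed_mm_s: float, pixel_size_mm: float,
                   max_blur_px: float = 0.5) -> float:
    """Blur in pixels = (v * t_exp) / p, so t_exp <= max_blur_px * p / v."""
    return max_blur_px * pixel_size_mm / web_speed_mm_s

# Example: a 500 mm/s web imaged with 0.1 mm object-plane pixels must be
# exposed for under 100 us to keep blur below half a pixel -- half the 200 us
# line period implied by a matching 5 kHz line rate.
print(max_exposure_s(500.0, 0.1))   # 0.0001 s
```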
Scanning cameras have been widely used in machine inspection. Color sensors are required to provide sufficient scene information for developing intelligent recognition algorithms. This paper describes the performance of a color linear sensor that can provide photographic-quality images. We have also used a digital scanner with this color sensor to evaluate its potential for traffic line recognition.
High speed web inspection applications, such as those in the paper and metal production industries, present a unique set of problems for the required optics and lighting. Typically, web inspection is done with a linear array camera viewing a single line across the web, so the lighting and viewing system needs to cover only a narrow strip of the material under inspection. However, converting most light sources into a line of light can be very inefficient. Typical sources have included banks of halogen lamps, apertured fluorescent lamps, and laser lines, yet obtaining the high light levels desired for high speed inspection remains difficult with many of these sources. This paper describes the design of an optical system using high pressure sodium lamps along with specialized optical components to produce a very bright and efficient line of light. Considerations of reflector design, apertures, and the use of holographic optical elements are discussed. The paper presents the results of both computer simulation and experimental investigations of this general lighting design problem and suggests designs that could be employed in a variety of applications to achieve better light efficiency in shaped lighting.
A non-error Synthetic Discriminant Function Binary Phase-Only Filter (SDF-BPOF) is constructed with a digital synthesis method and a corrected FFT algorithm for optical pattern recognition (OPR) invariant to in-plane rotation, scale, and shift distortions. Computer simulation of the correlation shows that this method obtains a very high S/N ratio, a sharp correlation peak, and low sidelobes compared with the common phase-quantized SDF-BPOF. The construction of this BPOF is simplified, and its space-bandwidth product (SBW) is decreased by using the corrected FFT algorithm, which is faster than the DFT, so the filter could be fabricated easily and precisely using modern VLSI technology or programmable spatial light modulators (SLMs).
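For readers unfamiliar with BPOFs, the following is a minimal sketch assuming one common binarization rule (the sign of the real part of the spectrum); the authors' synthesis procedure is not reproduced here.

```python
# Minimal sketch of a basic binary phase-only filter, not the authors' method.
import numpy as np

def bpof(reference: np.ndarray) -> np.ndarray:
    """Binary phase-only filter: quantize the phase of the reference
    spectrum to two levels, +1 and -1."""
    return np.where(np.fft.fft2(reference).real >= 0.0, 1.0, -1.0)

def correlate(scene: np.ndarray, filt: np.ndarray) -> np.ndarray:
    """Frequency-domain correlation of the scene with the binary filter;
    a sharp peak marks the location of the reference pattern."""
    return np.abs(np.fft.ifft2(np.fft.fft2(scene) * filt))
```

Because the filter takes only two values, it can be realized directly on a binary spatial light modulator, which is the practical appeal the abstract points to.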
An improved pattern recognition method based on morphological transformations is described, and simulation results are presented. An incoherent optical morphological transformation processor is used to implement this kind of pattern recognition, and experimental results are given.
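The abstract does not spell out which transformations are used; as background, here is a minimal sketch of one classical morphological matcher, the hit-or-miss transform.

```python
# Minimal sketch of hit-or-miss template matching (assumed background, not
# necessarily the paper's transformations).
import numpy as np
from scipy.ndimage import binary_erosion

def hit_or_miss(image: np.ndarray, fg: np.ndarray, bg: np.ndarray) -> np.ndarray:
    """image, fg, bg are boolean arrays. A pixel matches when the foreground
    template fits inside the object AND the background template fits inside
    its complement."""
    return binary_erosion(image, fg) & binary_erosion(~image, bg)
```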
One of the most common assumptions for recovering object features in computer vision and for rendering objects in computer graphics is that the radiance distribution of diffuse reflection from materials is Lambertian. We propose a reflectance model for diffuse reflection from smooth inhomogeneous dielectric surfaces that is empirically shown to be significantly more accurate than the Lambertian model. The resulting reflected diffuse radiance distribution has a simple mathematical form. The proposed model utilizes results of radiative transfer theory for subsurface multiple scattering. For an optically smooth surface boundary, this subsurface intensity distribution is altered by Fresnel attenuation and Snell refraction, making it significantly non-Lambertian. We present a striking diffuse reflection effect at occluding contours of dielectric objects that deviates strongly from Lambertian behavior and yet is explained by our diffuse reflection model. The proposed model for optically smooth surfaces can also describe diffuse reflection from rough dielectric surfaces by serving as the diffuse reflection law for optically smooth microfacets.
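The abstract only names the ingredients (subsurface multiple scattering, Fresnel attenuation, Snell refraction), so the sketch below is one plausible reading rather than the paper's published formula: a Lambertian lobe attenuated by Fresnel transmittance once on entry and once on exit.

```python
# Hedged sketch of a Fresnel-modified Lambertian lobe -- an illustration of
# the described mechanism, not the authors' exact model.
import numpy as np

def fresnel_transmittance(cos_i: float, n: float) -> float:
    """Unpolarized Fresnel power transmittance from air into index n."""
    sin_t2 = (1.0 - cos_i**2) / n**2            # Snell: sin(t) = sin(i)/n
    cos_t = np.sqrt(max(0.0, 1.0 - sin_t2))
    rs = ((cos_i - n * cos_t) / (cos_i + n * cos_t)) ** 2
    rp = ((n * cos_i - cos_t) / (n * cos_i + cos_t)) ** 2
    return 1.0 - 0.5 * (rs + rp)

def diffuse_radiance(theta_i: float, theta_r: float, rho: float, n: float) -> float:
    """Lambertian term rho/pi * cos(theta_i), attenuated once where light
    enters the dielectric and once where the multiply scattered light exits
    toward theta_r (by reciprocity the exit term can be evaluated at the
    external angle)."""
    return ((rho / np.pi) * np.cos(theta_i)
            * fresnel_transmittance(np.cos(theta_i), n)
            * fresnel_transmittance(np.cos(theta_r), n))
```

Because the exit transmittance collapses as theta_r approaches grazing, such a model darkens near occluding contours, which is qualitatively the non-Lambertian effect the abstract reports.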
It is shown theoretically and experimentally that an offset of the image center does not significantly affect the determination of the position and orientation of a coordinate frame or the accuracy of measurement. We also present results showing that lens distortions do not significantly affect the location of an object or the accuracy of measurement. We have developed a method of estimating initial values, which are in general close to the final results, for the iterative camera calibration process. Finally, experimental results are given to demonstrate the theoretical analysis presented in this paper.
A simple and accurate camera calibration method is presented, and the relation between the accuracy of the calibrated TV camera parameters and the calibration conditions is examined by applying the law of error propagation. An optimal calibration condition is proposed, under which an iterative method is applied to calibrate the parameter values. Furthermore, the variance of the estimated 3-D information is determined quantitatively for the optimal calibration condition. These results are confirmed through experiments.
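As background, here is a minimal sketch of the standard law of error propagation for a linear least-squares model; the variable names are assumptions, not the paper's.

```python
# First-order error propagation for least squares (assumed background).
import numpy as np

def parameter_covariance(A: np.ndarray, sigma: float) -> np.ndarray:
    """For observations b = A x + noise with i.i.d. std sigma, the
    least-squares estimate has covariance sigma^2 (A^T A)^-1; its diagonal
    gives the variance of each calibrated parameter."""
    return sigma**2 * np.linalg.inv(A.T @ A)

# A 'good' calibration condition is one that keeps this matrix small --
# e.g. well-spread calibration points make A^T A well conditioned.
```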
The Direct Linear Transform (DLT) is widely used in calibration and reconstruction problems in computer vision. The calibration/reconstruction problem can be written as [Y] = [X][B] + [e], where [Y] is a known vector, [X] is a known matrix, [B] is a vector of unknowns, and [e] is a vector of unknown errors. In this paper we present methods for detecting outliers in the observations that compose our set of linear equations and apply them to experimental data. One set of methods follows the philosophy of 'identification': outlying or influential cases are identified by determining the variation they induce in the obtained results, and points of high influence are removed from the data set. Another set of methods (robust statistics) tries to minimize the effect of errors that arise from idealized statistical assumptions; robust methods tolerate contaminated data without removing it and thus fall into the class of 'accommodation' methods. Both the 'identification' and 'accommodation' methods were applied to experimentally observed data. The methods are compared for accuracy, and it is found that they may be chosen on the basis of computational considerations, since their accuracies are comparable.
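The following is a minimal sketch of the two philosophies on [Y] = [X][B] + [e]; the paper's specific diagnostics are not reproduced, and the estimators shown (leverage, Huber weighting) are standard stand-ins.

```python
# 'Identification' versus 'accommodation' on Y = X B + e (assumed estimators).
import numpy as np

def leverage(X: np.ndarray) -> np.ndarray:
    """Identification: diagonal of the hat matrix X (X^T X)^-1 X^T.
    High-leverage rows dominate the fit and are candidates for removal."""
    G = np.linalg.inv(X.T @ X)
    return np.einsum('ij,jk,ik->i', X, G, X)

def huber_fit(X: np.ndarray, Y: np.ndarray, delta: float = 1.345,
              iters: int = 20) -> np.ndarray:
    """Accommodation: iteratively reweighted least squares with Huber
    weights, which keeps contaminated rows but caps their influence."""
    B = np.linalg.lstsq(X, Y, rcond=None)[0]
    for _ in range(iters):
        r = Y - X @ B
        s = 1.4826 * np.median(np.abs(r)) + 1e-12     # robust scale (MAD)
        w = np.clip(delta * s / (np.abs(r) + 1e-12), None, 1.0)
        Xw = X * w[:, None]
        B = np.linalg.solve(Xw.T @ X, Xw.T @ Y)
    return B
```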
Optical sensor simulations based on high fidelity models of illumination, scenes, cameras, and signal processors can accurately predict the performance of machine vision systems. These simulations typically render images of the scene from solid models that define each object in the sensor's field of view using a ray casting algorithm, then pass the image through models of the camera (receiver) and the signal processor. Conventional ray casting algorithms cast a uniformly spaced grid of rays toward the scene from the camera and add the returns computed for each ray to the appropriate pixel of the image. This paper describes an adaptive ray casting (ARC) algorithm that dynamically adjusts the resolution of the ray grid, within bounds set by the user, to match the level of detail present in each part of the image. The ARC algorithm generates a resolution map for the scene specifying the resolution required in each pixel, then dynamically adjusts the spacing of the ray grid to match the required resolution during the rendering process. The resolution map is stored in the same array as the image, allowing the algorithm to run efficiently on systems with limited memory. The ARC algorithm renders images of very high fidelity without extreme execution times.
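A minimal sketch of the idea, assuming a user-supplied ray function and a precomputed resolution map (the paper's in-place storage trick and bounds handling are not reproduced):

```python
# Per-pixel adaptive supersampling driven by a resolution map (illustration,
# not the paper's implementation; cast_ray is an assumed callback).
import numpy as np

def render_adaptive(res_map: np.ndarray, cast_ray, min_n: int = 1,
                    max_n: int = 4) -> np.ndarray:
    """res_map[y, x] in [0, 1] requests sampling density for each pixel;
    cast_ray(x, y) returns the radiance carried by one sub-pixel ray."""
    h, w = res_map.shape
    image = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            # n x n sub-pixel grid, denser where the map demands detail
            n = int(round(min_n + res_map[y, x] * (max_n - min_n)))
            offsets = (np.arange(n) + 0.5) / n
            rays = [cast_ray(x + u, y + v) for u in offsets for v in offsets]
            image[y, x] = np.mean(rays)      # accumulate into the pixel
    return image
```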
A computer model is presented for the formation and sensing of defocused images in a typical CCD camera system. A computer simulation system named IDS has been developed based on this model. IDS takes the camera parameters and the scene parameters as input and produces as output a digital image of the scene as sensed by the camera. IDS consists of a number of distinct modules, each implementing one step in the computer model. The modules are independent and can easily be modified to enhance the simulation system. IDS is being used by our group for research on methods and systems for determining depth from defocused images, and also for research on image restoration. It can easily be extended and used as a research tool in other areas of machine vision. IDS is machine independent and hence portable. It provides a friendly user interface that gives the user full access to, and control of, the parameters and intermediate results.
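A minimal sketch of the physics such a simulator must capture (names and structure assumed, not the actual IDS modules): geometric-optics defocus as convolution with a blur disk whose size follows from the lens equation.

```python
# Defocus simulation sketch: blur-circle size from the lens equation, then
# convolution with a pillbox PSF. Illustration only, not the IDS code.
import numpy as np
from scipy.signal import fftconvolve

def blur_radius_px(f: float, D: float, s: float, u: float, pitch: float) -> float:
    """Blur-circle radius in pixels for focal length f, aperture diameter D,
    lens-to-sensor distance s, object distance u, pixel pitch (all mm).
    A point at u focuses at v = f*u/(u - f); the mismatch |s - v| defocuses it."""
    v = f * u / (u - f)
    return 0.5 * D * abs(s - v) / v / pitch

def defocus(image: np.ndarray, radius_px: float) -> np.ndarray:
    """Apply a pillbox PSF of the given radius (crude grid rasterization)."""
    r = max(1, int(round(radius_px)))
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    psf = (xx**2 + yy**2 <= r**2).astype(float)
    return fftconvolve(image, psf / psf.sum(), mode='same')
```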
Subpixel-accuracy gauging with solid state cameras has been of great interest in recent years. Efforts to reduce errors in subpixel edge locations have been directed at the subpixel interpolation technique or at the physical structure of the sensor itself. In this paper we present data supporting the view that the major error is caused by the sampling technique. We examine the nonlinearity of the subpixel edge location when the edge is moved in equidistant steps in the horizontal and vertical directions relative to the solid state sensor. The relation between virtual and physical pixels and the influence of the edge shift on camera calibration and robot guidance are also briefly discussed.
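For context, a minimal sketch of one common subpixel estimator (the gradient centroid); the paper's point is that any such estimator inherits a sampling-induced nonlinearity.

```python
# Gradient-centroid (moment) subpixel edge locator -- one common scheme,
# not necessarily the one the authors tested.
import numpy as np

def subpixel_edge(profile: np.ndarray) -> float:
    """Subpixel location of a step edge in a 1-D intensity profile."""
    g = np.abs(np.diff(profile.astype(float)))
    x = np.arange(g.size) + 0.5       # gradient samples sit between pixels
    return float((x * g).sum() / g.sum())

# Sweeping a synthetic edge in equidistant sub-pixel steps and plotting the
# estimate against the true position exposes the periodic error studied here.
```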
Photometric accuracy is a measure of how well a detector signal voltage represents the surface reflectance of the corresponding object region under a specific illumination condition. A linear relationship should exist between the signal voltage and the product of the object reflectance and the source irradiance at a point of interest. Furthermore, the measured voltage should represent only the point of interest, with negligible contribution from surrounding points. This paper describes the class of machine vision applications for which photometric accuracy is important, including color and height measurement. Recognizing that a monochrome or color CCD array is the usual choice of image sensor, the error sources are reviewed. One error that has not received much attention but can severely degrade performance is CCD veiling glare. Experimental results quantifying this error are shown, and an empirical model is developed to represent the degradation of photometric measurement performance.
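The paper's fitted model is not reproduced here; as a hedged illustration of the mechanism, the simplest veiling-glare approximation mixes a small fraction of the total scene flux into every pixel.

```python
# Crude uniform-pedestal glare model (assumption for illustration only).
import numpy as np

def add_veiling_glare(ideal: np.ndarray, k: float = 0.02) -> np.ndarray:
    """Each pixel receives (1-k) of its own signal plus k times the scene
    mean. Real glare kernels vary spatially, but even this crude form breaks
    the per-point linearity that photometry relies on."""
    return (1.0 - k) * ideal + k * float(ideal.mean())

# Dark regions adjacent to bright ones gain spurious signal, biasing measured
# reflectance upward exactly where accuracy matters most.
```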
A fundamental limit for the distance uncertainty of coherent 3D sensors is presented. The minimum distance uncertainty is given by δz = λ/(2π·sin²u), with observation aperture sin u and wavelength λ. This distance uncertainty can be derived via speckle statistics for different sensing principles, and surprisingly the same result can be obtained directly from Heisenberg's uncertainty principle for a single photon. Because speckle is the main source of distance uncertainty, possibilities for overcoming the speckle problem are discussed. This leads to an uncertainty principle between lateral resolution and longitudinal distance uncertainty. One way to improve the distance uncertainty without sacrificing lateral resolution is the use of temporally incoherent light.
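A worked example of the stated limit, with illustrative numbers of our own choosing:

```python
# Evaluate delta_z = lambda / (2 pi sin^2 u) for assumed example values.
import math

def delta_z(wavelength: float, sin_u: float) -> float:
    """Minimum distance uncertainty, in the units of the wavelength."""
    return wavelength / (2.0 * math.pi * sin_u**2)

# HeNe light (0.633 um) observed through an aperture of sin(u) = 0.1:
print(delta_z(0.633, 0.1))   # ~10 um -- a hard floor set by speckle
```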
This paper describes the application of a new spatial-domain convolution/deconvolution transform (S transform) to determining the distance of objects and to rapid autofocusing of camera systems using image defocus. The method of determining distance, named STM, involves simple local operations on only a few (about two to four) images and can easily be implemented in parallel. STM has been implemented on an actual camera system named SPARCS. Experiments on the performance of STM with real-world objects are presented, and the results indicate that STM is useful in practical applications; in particular, its utility is demonstrated for rapid autofocusing of electronic cameras. STM is computationally more efficient than other methods, but on our camera system it is somewhat less robust to noise than a Fourier-transform-based approach.
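The sketch below is a schematic illustration of the generic depth-from-defocus idea only; it is emphatically not the authors' S transform, whose local operators are defined in the paper itself.

```python
# Generic depth-from-defocus ingredient (NOT STM): compare the local blur of
# two images taken with different camera settings.
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def relative_sharpness(img1: np.ndarray, img2: np.ndarray, win: int = 9):
    """Per-pixel map that is True where img1 is locally sharper than img2;
    defocus methods turn such relative blur measures into distance via the
    lens equation."""
    e1 = uniform_filter(laplace(img1.astype(float))**2, win)
    e2 = uniform_filter(laplace(img2.astype(float))**2, win)
    return e1 > e2
```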
The development and design of a computer-synchronized 3D triangulation sensor is presented. By combining the high resolution of an electrooptical position detector with that of a galvanometer deflector located in the observation beam, nearly perfect synchronization of the illumination and observation beams for arbitrary contours is achieved, yielding high resolution over large measurement ranges.
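As background, a sketch of standard active triangulation geometry (assumed here for context, not quoted from the paper):

```python
# Standard triangulation range equation for a deflected beam (background sketch).
import math

def range_mm(baseline_mm: float, focal_mm: float, spot_mm: float,
             deflection_rad: float) -> float:
    """Distance to the illuminated point: with the beam deflected by
    deflection_rad and the spot imaged at spot_mm on the position detector,
    z = b / (tan(deflection) + spot/f)."""
    return baseline_mm / (math.tan(deflection_rad) + spot_mm / focal_mm)

# Synchronizing the observation deflector with the illumination keeps the spot
# near the detector's center, preserving resolution over a large range.
```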
A method for isolating three-dimensional features of known height in the presence of noisy data is presented. The approach is founded upon observing the locations of a single light stripe in the image planes of two spatially separated cameras. Knowledge relating to the heights of sought features is used to define regions of interest in each image which are searched in order to isolate the light stripe. This approach is advantageous since spurious features that may result from random reflections or refractions in the region of interest of one image usually do not appear in the corresponding region of interest of the other image. It is shown that such a system is capable of robustly locating features such as very thin vertical dividers even in the presence of spurious or noisy image data that would normally cause conventional single camera light striping systems to fail. The discussion that follows summarizes the advantages of the methodology in relation to conventional passive stereoscopic systems as well as light striped triangulation systems. Results that characterize the approach in noisy images are also provided.
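A minimal sketch of the two-view consistency idea, with thresholds and interfaces that are assumptions rather than the paper's:

```python
# Accept a stripe feature only if it appears in the height-predicted ROI of
# both cameras (illustration with made-up thresholds).
import numpy as np

def stripe_row(column: np.ndarray, roi: slice):
    """Brightest pixel inside the ROI, or None if it barely exceeds background."""
    window = column[roi]
    i = int(np.argmax(window))
    return roi.start + i if window[i] > 3.0 * float(np.median(column)) else None

def feature_confirmed(col_a: np.ndarray, roi_a: slice,
                      col_b: np.ndarray, roi_b: slice) -> bool:
    """Spurious reflections rarely appear at consistent places in both views,
    so requiring a detection in each ROI rejects most false features."""
    return (stripe_row(col_a, roi_a) is not None
            and stripe_row(col_b, roi_b) is not None)
```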
Flexible handling by robots and form inspection require fast and precise three-dimensional (3D) information. We have therefore developed a fast and precise range-image sensor based on triangulation and the coded light approach. We project Gray-coded light patterns with a programmable LCD line projector. The stroboscopic flash of the projector is synchronized with the electronic shutter of the camera, so the 3D sensor is insensitive to normal ambient light. Using special image processing hardware based on the synchronous dataflow paradigm, we are able to acquire and process a range image within 220 ms. We have used this 3D sensor in conjunction with a robot to sort postal parcels of arbitrary size and orientation from a pile. A cycle time (image acquisition, processing, grasping, and placing) of 3 s has been reached, limited by the speed of the robot. Some details of the image processing and the methods used to achieve high robustness of the system are presented. Furthermore, we discuss the accuracy achieved with this 3D sensor, and present the results of tests under high ambient light together with the system calibration methods.
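A minimal sketch of binary-reflected Gray coding as used in coded-light sensors (the projector's exact pattern set is an assumption here):

```python
# Gray-code stripe indexing for coded-light triangulation (background sketch).
def int_to_gray(n: int) -> int:
    """Gray code of n; neighboring stripe indices differ in exactly one bit."""
    return n ^ (n >> 1)

def gray_to_int(g: int) -> int:
    """Invert the Gray code to recover the projector stripe index."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Each camera pixel thresholds k successive patterns into k bits and decodes
# them to a stripe index, i.e. a projector angle for triangulation. Since only
# one bit changes at each stripe boundary, an ambiguous threshold there costs
# at most one stripe of error -- the source of the method's robustness.
```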
We introduce a 3-D sensor with extraordinary features: it supplies an accuracy that is limited only by the roughness of the object surface. This differs from other coherent sensors, whose depth accuracy when viewing optically rough surfaces is limited by the aperture of observation. As a consequence, our sensor supplies high accuracy even with a small aperture (we can look into narrow holes), and it supplies high distance accuracy at volume scatterers as well. The sensor is based essentially on a Michelson interferometer, with the rough object surface serving as one 'mirror'. This is possible because, instead of the phase of the interferogram, only the occurrence of interference is detected, exploiting the low coherence of the source. We call the method 'coherence radar'.
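A minimal sketch of the signal processing such a sensor implies (assumed, not the authors' electronics): per-pixel depth from the peak of the interference envelope during a z scan.

```python
# White-light interferometry envelope-peak detection (illustration only).
import numpy as np

def depth_from_scan(intensity: np.ndarray, z_positions: np.ndarray) -> float:
    """Fringes occur only where object and reference path lengths agree to
    within the (short) coherence length; the z maximizing local fringe
    contrast is taken as the surface height at this pixel."""
    ac = intensity - intensity.mean()                  # strip the DC level
    contrast = np.convolve(ac**2, np.ones(9) / 9.0, mode='same')
    return float(z_positions[int(np.argmax(contrast))])
```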
A high lateral resolution and a large depth of field cannot be obtained simultaneously with conventional optical focusing elements: combining Abbe's and Rayleigh's formulas shows that the focal depth is proportional to the square of the focus spot size. By using a holographic optical focusing lens, on the other hand, one can obtain an axial focal line that confers both high lateral resolution and a large depth of field on a 3-D active vision system.
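A worked illustration of the quoted scaling with textbook diffraction formulas (constant factors omitted; not the paper's derivation): spot size d ~ λ/NA and focal depth DOF ~ λ/NA² together give DOF ~ d²/λ.

```python
# Numerically confirm DOF ~ spot^2 / lambda for conventional focusing optics.
wavelength_um = 0.633
for na in (0.05, 0.1, 0.2):
    spot = wavelength_um / na            # Abbe-type lateral spot size
    dof = wavelength_um / na**2          # Rayleigh-type focal depth
    print(f"NA={na}: spot ~ {spot:.1f} um, DOF ~ {dof:.0f} um,"
          f" DOF*lambda/spot^2 = {dof * wavelength_um / spot**2:.2f}")
# Halving the spot size cuts the focal depth by four -- the trade-off a
# holographic axial focus line is designed to escape.
```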
A high speed scanning system designed for three-dimensional robot vision is presented, consisting of a very compact sensor head and a separate signal processing unit. Using the light-section method, the sensor is two-dimensional without any moving parts; the third dimension can be covered by moving the robot hand. The sensor head contains: (1) the CCD array (IA-D1 Turbosensor with 256 x 256 elements, Dalsa Inc., CDN), (2) the camera electronics and optics, (3) the laser source (0.5 - 20 mW, 780 nm CW), and (4) the light-plane optics (Schafter & Kirchhoff, Hamburg, Germany). The assembly of these elements in a single box provides a robust sensor system. By considering the so-called Scheimpflug condition, some a priori information about the system can easily be described and used in two different criteria for the separation of unwanted interference. As a result, signal processing is facilitated, and on-line signal processing is possible even for very high data rates. The signal processing unit contains: (1) a special custom circuit for analogue and digital on-line preprocessing, (2) a TMS 320 C 30 DSP card, (3) several interfaces for high speed data transfer, and (4) the power supply. Since the signal processing unit may be placed up to 1.5 m away from the compact sensor head, the system is well suited for robot vision purposes. The system provides high accuracy (approximately 0.4%), high speed (up to 250 full frames per second), and on-line signal processing.
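For readers unfamiliar with the Scheimpflug condition, here is a background sketch of its common form (assumed context, not a quotation from the paper): tilt the sensor so the whole laser light plane images in focus.

```python
# Scheimpflug tilt relation (background sketch).
import math

def sensor_tilt_rad(object_tilt_rad: float, magnification: float) -> float:
    """tan(theta_image) = m * tan(theta_object): object plane, lens plane,
    and image plane then intersect in one line, so every point of the tilted
    light plane is simultaneously sharp."""
    return math.atan(magnification * math.tan(object_tilt_rad))
```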
In this paper we describe the basic characteristics of an optical correlator and draw attention to the specific joint transform design currently being marketed by SORL. This system incorporates liquid crystal TVs as input devices, providing a high degree of feature discrimination. We address two issues. The first is a detailed explanation of an alignment procedure that can be used with most correlators; the wavefront errors introduced by the correlator's components are also measured using interferometric techniques. This is an important step in the setup of any correlator in order to provide high quality Fourier information. The second issue is how to maintain performance while reducing the size of the correlator architecture. A discussion of compact correlators is given and two new designs are described. To work effectively, these architectures require that careful attention be paid to the initial alignment and setup procedures described here.
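As background, a minimal numerical analogue of a joint transform correlator (layout assumed): in the optical system the two Fourier transforms are performed by lenses, with a square-law detector supplying the joint power spectrum in between.

```python
# Classical joint transform correlator, simulated numerically (sketch).
import numpy as np

def jtc(reference: np.ndarray, scene: np.ndarray) -> np.ndarray:
    """Transform the side-by-side input, detect its power spectrum, and
    transform again. Cross-correlation terms appear as off-axis peaks whose
    positions encode the target location."""
    joint = np.concatenate([reference, scene], axis=1)
    power = np.abs(np.fft.fft2(joint)) ** 2      # what the detector records
    return np.abs(np.fft.fftshift(np.fft.fft2(power)))
```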