There is a lack of commercially available white light sources for machine vision applications. Current commercial sources are typically expensive and designed primarily for benchtop use, so they cannot be easily integrated into an inspection system. In most cases a light source must be custom designed and built to suit the needs of the particular machine vision application. The materials being inspected can vary from highly specular to highly diffuse, requiring a broad range of illumination levels. Other important issues in machine vision light sources include efficiency, light divergence, spectral content, source size, and packaging. This paper discusses the issues that must be overcome when designing a light source for machine vision applications, and describes the work done by ITI to produce an efficient white light source with a computer-controlled illumination level.
Illumination characteristics play an important part in machine vision inspection by making critical features conspicuous. Frequently, the angle of illumination is important in enhancing the detection of features that have height or depth: the highlights and shadows produced by grazing-incidence illumination create contrast differences that are easily detected by an appropriate vision system. This paper explores the role of this lighting technique in machine vision applications. Three examples are presented that describe the application of the technique to paper documents. In the first example, grazing-incidence illumination is used to detect the presence of an erased signature on a forged personal check. In the second example, the technique is used to detect the presence of an address label on a piece of mail. In the third example, grazing-incidence illumination is used to verify the impression signature on a financial document.
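As a rough illustration of why grazing light helps (a hypothetical sketch, not from the paper): relief features that are nearly invisible under diffuse lighting produce strong local contrast under grazing light, so comparing local contrast between the two lighting conditions flags them. The window size and thresholds below are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast(img, size=9):
    """Local standard deviation: sqrt(E[x^2] - E[x]^2) within a window."""
    mean = uniform_filter(img.astype(float), size)
    mean_sq = uniform_filter(img.astype(float) ** 2, size)
    return np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))

def relief_mask(diffuse_img, grazing_img, gain=3.0, floor=2.0):
    """Mark pixels where grazing light raises local contrast well above
    the diffuse baseline -- a signature of physical relief (indentations
    from an erased signature, a raised label) rather than flat print."""
    c_d = local_contrast(diffuse_img)
    c_g = local_contrast(grazing_img)
    return c_g > np.maximum(gain * c_d, floor)
```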
In typical machine vision problems such as quality control or object location, the elements of interest are often small protuberances on a surface. We present an innovative and robust approach for detecting such protuberances. Its basic ideas are to detect the shadows produced by the protuberances and to use several light sources simultaneously to enhance detection. Each light source produces a different set of shadows; combining the shadows produced by all light sources helps to locate the protuberance, because these shadows are the only significantly varying patterns between views. Rather than using several white light sources in sequence, it is possible to use simultaneous color sources with appropriate filters to separate the image into independent channels. The approach has been validated on a concrete problem with highly variable protuberances and nonplanar surfaces. The results confirm the robustness of this approach, which could be used for other problems as well.
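A minimal sketch of the shadow-combination idea (our variable names and threshold, not the authors' code): since the shadows are the only patterns that vary significantly between views, the per-pixel variation across the differently lit images highlights them.

```python
import numpy as np

def shadow_map(images, thresh=30):
    """images: grayscale views of the same scene, one per light source,
    taken from a fixed camera position. Shadows move with the light,
    so pixels whose intensity varies strongly across views are likely
    shadow (hence protuberance) pixels."""
    stack = np.stack([im.astype(float) for im in images])
    variation = stack.max(axis=0) - stack.min(axis=0)  # per-pixel range
    return variation > thresh

# With simultaneous colored sources and filters, the "views" are simply
# the channels of a single frame:
# mask = shadow_map([frame[..., 0], frame[..., 1], frame[..., 2]])
```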
In this report we describe an optical scattering measurement instrument developed for the design of machine vision illumination. The instrument can measure the radiance of surfaces as a function of the illumination and viewing angles, and it can be used to measure the relative bidirectional reflectance distribution function (BRDF) of materials. Two constructions were used: the traditional mechanically scanned single-detector solution and a new video-camera-based solution. The benefits of the video camera solution are rapid measurement over a large range of scattering angles and the possibility of viewing scattering patterns on a monitor in real time. The experimental results of the BRDF measurements on copper and galvanized steel are modeled using reflectance models based on geometrical optics and wave theory.
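For reference, the quantity being measured is the standard BRDF, the ratio of reflected radiance to incident irradiance (textbook definition and notation, not necessarily the paper's):

```latex
f_r(\theta_i,\phi_i;\theta_r,\phi_r)
  = \frac{dL_r(\theta_r,\phi_r)}{dE_i(\theta_i,\phi_i)}
  = \frac{dL_r(\theta_r,\phi_r)}{L_i(\theta_i,\phi_i)\cos\theta_i \, d\omega_i}
  \quad [\mathrm{sr}^{-1}]
```

A relative measurement determines this function only up to an instrument constant, which is fixed by normalizing against a reference surface.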
We carried out a comprehensive characterization of liquid crystal displays, with the aim of selecting the most suitable one for the development of an adaptive light pattern projector for use in industrial profilometry. Super-twisted nematic displays were chosen for this application. Active matrix displays proved superior to passive matrix displays in terms of contrast, spatial characteristics of transmission, transient behavior of the transmission, and the temperature dependence of the transmission curves. The choice of a suitable liquid crystal display made it possible to define the design criteria for the development of the projector unit. The main features of this projector and its use with profilometry for automatic quality control in an industrial framework are presented.
Distinguishing shadow boundaries from object boundaries is a difficult task for machine vision systems. A new edge detector is presented which produces qualitatively distinguishable edge signals at shadow penumbras and abrupt object edges. The detector requires the use of spatially extended light sources and sufficient video resolution to resolve the shadow penumbras of interest. A novel approach to high-frequency noise suppression is employed which requires no image-dependent adjustment of signal thresholds. The ability of the operator to distinguish shadow penumbras from abrupt object boundaries while suppressing responses to high-frequency noise and texture is illustrated with a number of video images. Similarities between this approach and the perception of shadow boundaries by the human visual system are discussed.
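One plausible way to obtain such a qualitative distinction (our sketch, not the paper's operator): with an extended source and sufficient resolution, a penumbra appears as a wide run of moderate gradient, while an abrupt object edge appears as a narrow gradient spike, so the width of each gradient run separates the two. A 1-D illustration with hypothetical thresholds:

```python
import numpy as np

def classify_edges_1d(profile, grad_thresh=5.0, penumbra_width=4):
    """Classify edges along a 1-D intensity profile. A penumbra cast by
    an extended source shows up as a wide run of moderate gradient; an
    abrupt object edge as a narrow gradient spike. Thresholds are
    illustrative, not the paper's."""
    g = np.abs(np.gradient(profile.astype(float)))
    strong = g > grad_thresh
    edges, i = [], 0
    while i < len(strong):
        if strong[i]:
            j = i
            while j < len(strong) and strong[j]:
                j += 1
            kind = "penumbra" if (j - i) >= penumbra_width else "step"
            edges.append((i, j, kind))
            i = j
        else:
            i += 1
    return edges
```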
We introduce a novel methodology for the accurate determination of surface normals and light source location from depth and reflectance data. Estimating local surface orientation from depth data alone, using range finders with standard depth errors, can produce significant error. On the other hand, shape-from-shading using reflectance data alone produces approximate surface orientation results that depend heavily on good initial surface orientation estimates as well as on regularization parameters. Combining these two sources of information gives vastly more accurate surface orientation estimates under general conditions than either one alone, even when the light source location is not initially known. Apart from increased knowledge of local orientation, this can also provide better knowledge of local curvature. We propose novel iterative methods which enforce satisfaction of the image irradiance equation and surface integrability without using regularization. These iterative methods work in the case where the light source is any finite distance from the object, producing variable incident light orientation over the object. These are realistic machine vision conditions in a laboratory setting.
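The two constraints the iterative methods enforce can be written, in standard shape-from-shading notation (ours, not necessarily the authors'), as the image irradiance equation and the integrability condition:

```latex
E(x,y) = R\bigl(p(x,y),\,q(x,y)\bigr), \qquad
p = \frac{\partial z}{\partial x}, \quad q = \frac{\partial z}{\partial y},
\qquad \frac{\partial p}{\partial y} = \frac{\partial q}{\partial x}
```

Here E is the measured image irradiance and R the reflectance map; integrability guarantees that the gradient field (p, q) corresponds to a single-valued surface z(x, y). With a light source at finite distance, the incident direction inside R varies from point to point, as the abstract notes.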
A CMOS circuit has been developed which integrates sensors and processing circuitry aimed at implementing autonomous robot perception and control functions. The sensors are an array of photodetectors, and the processing circuitry analyzes the array data to extract a basic set of sensory primitives. In addition, the processing circuitry provides a low-resolution determination of the location of any brightness edges which cross the array. Ultimately, this sensor-processor circuitry will be used as part of an overall integrated sensorimotor system for autonomous robots. In the complete system, individual sensorimotor units will produce motion requests for the robot as a whole, and an operating system, serving in part as a motion request handler, will arbitrate among suggested motions. The nature of the motion requests will depend both on sensor input and on the current goals of the robot. Ideally, the entire set of sensors, the processing circuitry, and the operating system will reside on a single VLSI chip. The current chip achieves many of the objectives of the complete integrated sensorimotor system: it acquires sensory information, manipulates that data, and ultimately provides a digital output signal set which could serve as a motor signal set. Much of the on-chip processing is done by sensory primitive modules which calculate spatial convolutions of the sensor array data. The convolution kernels actually implemented were chosen primarily for their usefulness in solving low-level vision problems. Specific kernels on the current chip include discrete approximations to the x-direction first derivative operator, the y-direction first derivative operator, and the Laplacian operator. The spatial convolution function is achieved using current-mode analog signal processing techniques. The output of the spatial convolution modules is piped into a higher level module which generates an estimate of the location of brightness edges crossing the array. This location estimate, which takes the form of a set of digital signals, can be readily translated into a (motor system dependent) motion request format, if indeed it cannot be used directly for this purpose. Location estimation, although it is the only higher level function implemented on the current chip, is just one example of a useful sensory-primitive-based function. Additional higher level modules could be used to implement alternative functions which estimate other important environmental properties.
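The three kernels named above are commonly realized by the following discrete approximations (the usual 1 × 3 and 3 × 3 choices; the exact on-chip weights are not given here), sketched in software for clarity:

```python
import numpy as np
from scipy.signal import convolve2d

# Standard discrete approximations; the chip computes these as analog
# current-mode convolutions rather than in software.
DDX = np.array([[-1, 0, 1]])                # x-direction first derivative
DDY = DDX.T                                 # y-direction first derivative
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]])

def primitives(sensor_array):
    """Compute the three sensory-primitive maps from photodetector data."""
    img = sensor_array.astype(float)
    return {name: convolve2d(img, k, mode="same")
            for name, k in (("dx", DDX), ("dy", DDY), ("lap", LAPLACIAN))}
```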
It is important to select the right solid-state camera for every machine vision application. The camera's technical specifications, such as spectral response, signal-to-noise ratio, dynamic range, sensitivity, sensor type and size, and horizontal and vertical resolution, are the leading criteria for sensor selection. In general, it is expected that a camera with better specifications will improve the gaging accuracy of a vision system. Yet the result often does not meet this expectation, and in some cases the system performance even decreases. The reason is that the analog output voltage from the camera is sampled asynchronously by common industrial machine vision systems, which can result in worse edge deformation and mislocation for a camera with higher resolution and better signal-to-noise ratio than for a camera of lower performance. In this paper we examine this effect with particular emphasis on edge detection performance. Video sampling timing charts, together with the subpixel accuracy and repeatability achieved when using several common solid-state cameras with the same frame grabber, are presented. Guidelines for selecting a solid-state camera to match the frame grabber are provided.
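A toy simulation of the effect (hypothetical numbers and a simple centroid edge locator, not the paper's measurement setup): if the frame grabber's sampling clock is not locked to the camera's pixel clock, each resampled line carries a random subpixel phase offset, which shows up directly as edge-location jitter.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_line(edge_pos, n=64, jitter=0.5):
    """Ideal step edge at edge_pos, resampled with a random subpixel
    phase offset to mimic an unlocked frame-grabber clock."""
    phase = rng.uniform(-jitter, jitter)
    x = np.arange(n) + phase
    return np.clip(x - edge_pos + 0.5, 0.0, 1.0)  # linearly blurred step

def locate_edge(line):
    """Subpixel edge location from the centroid of the gradient."""
    g = np.diff(line)
    return np.sum(np.arange(len(g)) * g) / np.sum(g)

locs = [locate_edge(sample_line(31.3)) for _ in range(1000)]
print(np.std(locs))  # repeatability limited by the sampling jitter
```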
Changes in measured image irradiance have many physical causes and are the primary cue for several visual processes such as edge detection and shape from shading. Using physical models for CCD image sensors and material reflectance, we quantify the variation in digitized pixel values that is due to sensor noise and reflectance variation. This analysis forms the basis of algorithms for sensor characterization and calibration and for scene description. Specifically, algorithms are developed for estimating the parameters of sensor noise and for calibrating a camera to remove the effects of fixed pattern nonuniformity and spatial variation in dark current. While these techniques have many potential uses, we describe in particular how they can be used to estimate a measure of scene variation. This measure is independent of image irradiance and can be used to identify a surface from a single sensor band over a range of situations. Experimental results obtained using these algorithms are presented.
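The calibration portion corresponds to the familiar two-image correction; a generic sketch (the paper's estimators are statistical and per-pixel, so this is only the skeleton of the procedure):

```python
import numpy as np

def calibrate(dark_frames, flat_frames):
    """Per-pixel dark level from averaged dark frames; per-pixel gain
    (fixed pattern nonuniformity) from averaged flat-field frames."""
    dark = np.mean(dark_frames, axis=0)
    flat = np.mean(flat_frames, axis=0) - dark
    gain = flat / flat.mean()
    return dark, gain

def correct(raw, dark, gain):
    """Remove dark current and fixed pattern nonuniformity from a frame."""
    return (raw - dark) / gain
```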
In this paper we propose a new technique for calibrating a camera with very high accuracy and low computational cost. The geometric camera parameters considered include camera position, orientation, focal length, radial lens distortion, pixel size, and the optical axis piercing point. With our method, the camera parameters to be estimated are divided into two parts: the radial lens distortion coefficient κ and a composite parameter vector c composed of all the above geometric camera parameters other than κ. Instead of using nonlinear optimization techniques, the estimation of κ is transformed into an eigenvalue problem for an 8 × 8 matrix. Our method is fast since it requires only linear computation. It is accurate since the effect of lens distortion is considered and all the information contained in the calibration points is used. Computer simulations and real experiments have shown that the performance of our calibration method is better than that of the well-known method proposed by Tsai.
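For concreteness, a single-coefficient radial model of the kind κ parameterizes is commonly written r_d = r_u(1 + κ r_u²); a sketch of the forward map and its fixed-point inverse (standard form, possibly differing in detail from the paper's parameterization):

```python
def distort(xu, yu, kappa):
    """Map undistorted image coordinates to distorted ones under the
    single-coefficient radial model r_d = r_u * (1 + kappa * r_u^2)."""
    r2 = xu * xu + yu * yu
    s = 1.0 + kappa * r2
    return xu * s, yu * s

def undistort(xd, yd, kappa, iters=10):
    """Invert the radial model by fixed-point iteration."""
    xu, yu = xd, yd
    for _ in range(iters):
        r2 = xu * xu + yu * yu
        xu = xd / (1.0 + kappa * r2)
        yu = yd / (1.0 + kappa * r2)
    return xu, yu
```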
This paper describes the use of a photogrammetric technique for the calibration of a coordinate imaging system based on stereoscopic TV cameras and structured lighting. Experiments with this system showed that the generated 3-D coordinate images were prone to inaccuracies caused by optical distortions in the lenses and also by general geometrical distortions due to the positioning of the components of the vision system in three-dimensional space. This necessitated the use of a powerful calibration method which took account of all potential sources of error. A particular photogrammetric technique known as direct linear transformation (DLT) was used for this purpose, resulting in a worst-case accuracy of about 4 mm at a range of 1.8 - 2.2 meters. A discussion justifying this choice of calibration technique is presented together with the complete procedure used. The results obtained are analyzed, and possible methods for further improving the accuracy of the coordinate data are discussed.
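As a reminder of what DLT solves (textbook formulation, not the authors' code): each calibration point contributes two equations that are linear in the 11 transformation parameters, so six or more known 3-D points give a least-squares solution per camera.

```python
import numpy as np

def dlt_calibrate(world_pts, image_pts):
    """world_pts: (N, 3) known 3-D coordinates; image_pts: (N, 2)
    measured pixel coordinates, N >= 6. Returns the 11 DLT parameters
    of u = (L1 X + L2 Y + L3 Z + L4) / (L9 X + L10 Y + L11 Z + 1),
    v = (L5 X + L6 Y + L7 Z + L8) / (L9 X + L10 Y + L11 Z + 1)."""
    A, b = [], []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z])
        b.append(u)
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z])
        b.append(v)
    L, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float),
                            rcond=None)
    return L
```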
In this paper, we propose a new method for reconstructing a scene from different views taken through a high-distortion lens camera. Unlike other approaches, no a priori calibration or specific test pattern is required. Several pairs of correspondences between input images are used to estimate intrinsic parameters such as the focal length and distortion coefficients. From these correspondences, the relative movement of the camera between input images is computed as rotation matrices. We assume radial lens distortion, modeled by a third-order polynomial with two distortion coefficients, which covers highly distorted zoom lenses. Since we allow the two distortion coefficients and the focal length to be unknown, it is not easy to obtain these three parameters explicitly from the correspondences alone. To avoid excessive computation time and the problem of local minima, we take the following steps: uniform searching in a reduced dimension; fitting a function to get a better guess of the focal length; and polishing the solution by repeating the uniform search to get the final distortion coefficients. The total number of evaluations is remarkably reduced by this multistage optimization. Some experimental results are presented, showing that lens distortion of more than 5% is reduced and the rotation of the camera is recovered, and we show a registration of four outdoor pictures.
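A generic coarse-to-fine sketch of the multistage idea (our structure; the paper additionally reduces the search dimension and fits a function to refine the focal-length guess): repeat a uniform grid search over (f, k1, k2), shrinking the ranges around the best candidate each round. The `cost` function, which measures how badly the undistorted correspondences violate a pure camera rotation, is assumed supplied by the caller.

```python
import numpy as np

def multistage_search(cost, f_range, k1_range, k2_range, n=11, rounds=3):
    """Coarse-to-fine uniform search over focal length and two radial
    distortion coefficients; all ranges are (lo, hi) tuples."""
    for _ in range(rounds):
        grid = [(f, k1, k2)
                for f in np.linspace(*f_range, n)
                for k1 in np.linspace(*k1_range, n)
                for k2 in np.linspace(*k2_range, n)]
        f, k1, k2 = min(grid, key=lambda p: cost(*p))

        def shrink(r, center):       # narrow each range around the best
            w = (r[1] - r[0]) / 4
            return (center - w, center + w)

        f_range = shrink(f_range, f)
        k1_range = shrink(k1_range, k1)
        k2_range = shrink(k2_range, k2)
    return f, k1, k2
```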
Many impressive developments in image simulation technology have led to extensive use of synthetic images in the motion picture industry for special effects and animation, and in applications such as aircraft flight simulators. Although these images appear correct to the human eye, they are generally not suitable for the development of image processing and machine vision applications, because the logarithmic response of the human eye does not match the linear response of most electronic detectors. To be useful for the development and analysis of image processing and machine vision systems, synthetic images must accurately represent the effects present in detected images, whether produced by the source(s) of illumination, the scene itself, the medium through which the sensor views the scene, the sensor system, or the electronic circuits between the detector array and the processing system. Recent developments have led to the use of laser sensors for various machine vision applications, including collision avoidance, wire detection and avoidance, intrusion detection, and underwater imaging systems. With the advent of low-cost laser systems, the use of these sensors for applications related to machine vision is likely to continue expanding for the foreseeable future. SPARTA's work in the area of image synthesis began with the development of a coherent laser radar simulation running on IBM and compatible personal computers, and has since branched into the modeling of incoherent active and passive systems as well. SPARTA's current optical imaging sensor simulation, SENSORSIM, is written in ANSI-standard FORTRAN 77 to ensure portability.
Morphological processing combined with other techniques is used to analyze disordered structures. A disordered structure can consist of a number of objects of a given shape, in which case the task is to determine the number of objects and their length and orientation distributions, or of a texture (which occurs when the number of particles is large), in which case the task is to describe the texture and to discriminate between different textures. To solve such problems, we employ a new morphological skeletonization algorithm, a Hough transform combined with morphological operations, and morphological erosions with directional structuring elements, and we develop new parameters to describe and distinguish textures. Our algorithms can be implemented in digital or optical processors.
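A minimal sketch of erosion with directional structuring elements (our descriptor and parameters, for illustration, not the paper's algorithm): eroding the binary image with line-shaped elements at varying orientations and counting the surviving pixels yields an orientation signature for elongated objects.

```python
import numpy as np
from scipy.ndimage import binary_erosion

def directional_strel(length, angle_deg):
    """Line-shaped structuring element at a given orientation."""
    se = np.zeros((length, length), dtype=bool)
    c = length // 2
    t = np.deg2rad(angle_deg)
    for r in range(-c, c + 1):
        se[c + int(round(r * np.sin(t))), c + int(round(r * np.cos(t)))] = True
    return se

def orientation_signature(binary_img, length=15, angles=range(0, 180, 15)):
    """Pixel count surviving erosion with line elements of varying
    orientation -- a simple orientation-distribution descriptor."""
    return {a: binary_erosion(binary_img, directional_strel(length, a)).sum()
            for a in angles}
```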
The development of efficient high-speed techniques to recognize, locate, and quantify damage is vitally important for successful automated inspection systems, such as those used for the inspection of undersea pipelines. Two critical problems must be solved to achieve these goals: reducing the nonuseful information present in the video image, and automatically recognizing and quantifying the extent and location of damage. Moire profilometry processed by artificial neural networks appears to be a promising technique for accomplishing this. Real-time video moire techniques have been developed which clearly distinguish damaged and undamaged areas on structures, thus reducing the amount of extraneous information input into an inspection system. Artificial neural networks have demonstrated advantages for image processing, since they can learn the desired response to a given input and are inherently fast when implemented in hardware due to their parallel computing architecture. Video moire images of pipes with dents of different depths were used to train a neural network, with the desired output being the location and severity of the damage. The system was then successfully tested with a second series of moire images. The techniques employed and the results obtained are discussed.
Two optimized magneto-optical spatial light modulators (MOSLMs) are used in a coherent optical correlator, making it possible to rapidly change both the analyzed image and the test picture recorded in the spectral plane. Experimental results and theoretical modeling are presented.
The successful use of a neural-like processor for distorted image recognition, based on two magneto-optical spatial light modulators (MOSLMs), is demonstrated. The processor operates by dosed (stepwise) correction of the distorted image, recorded on the first MOSLM, with the help of the ideal image spectrum recorded on the second MOSLM, which is placed in the processor's spectral plane. It is shown that the neural-like algorithm provides, in some cases, essentially enhanced image distinguishability.
Intensity images are a rich source of information about a scene. However, it is often difficult to distinguish whether an observed attribute reflects the textural or the spatial character of an object. Range images, on the other hand, give full information about the spatial character of the scene. In range images, however, because of the very strong light deflection at edges, it is almost impossible to extract exact information about edge positions. This paper describes the construction of a laser range finder which provides both kinds of images, range and intensity. Thanks to its special optical construction, the two images are fully correlated with each other. Such image pairs can solve problems common in 3-D image processing systems. We show how problems often met by laser range finders, such as edge detection and tangential surfaces, can be treated using a system equipped with this kind of sensor.
In this paper we describe a new 3-D imaging technique using a line-scan camera. The technique is based on the intensity ratio sensing principle that was first introduced by J. Schwartz in 1983. In this technique a 2-D image of the scene is acquired twice, each time with a different illumination profile. It is shown by Schwartz that if the two light profiles are created by sources emanating from essentially the same direction, the 3-D image of the scene can be reconstructed from the ratio of the gray levels in the two images. The two light profiles can be created, for example, by using a single light source and filters that can be mechanically switched. Though our scheme is based on the same fundamental principle, it is significantly different in its embodiment. Instead of a 2-D camera we use a line-scan camera, and instead of filters we use lines of light to create the illumination gradients. We analyze the new technique and show that it has significant advantages over Schwartz's embodiment mainly in intensity dynamic range and simplicity of calibration. We also describe a line-scan vision system that was used to test the feasibility of this technique and show 3-D images produced by it.
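A minimal sketch of the ratio principle (illustrative; the real system calibrates the ratio-to-position relation against a reference target): because surface reflectance multiplies both images identically, the per-pixel gray-level ratio depends only on the two illumination profiles and hence on the surface position within them.

```python
import numpy as np

def ratio_image(img_a, img_b, eps=1e-6):
    """Per-pixel intensity ratio of the two differently lit images.
    Surface reflectance cancels, since it scales both images alike."""
    return img_a.astype(float) / (img_b.astype(float) + eps)

def ratio_to_coordinate(ratio, lut_ratios, lut_coords):
    """Look up the calibrated position for each ratio value.
    lut_ratios (monotonically increasing) and lut_coords come from
    imaging a reference target of known geometry."""
    return np.interp(ratio, lut_ratios, lut_coords)
```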
In this paper, a hybrid optical-computer system for 3-D object recognition is presented. The system consists of a Twyman-Green interferometer, a He-Ne laser, a computer, a TV camera, and an image processor. The structured light produced by the Twyman-Green interferometer is split and illuminates the object from two directions at the same time, so that a moire contour is formed on the surface of the object. To eliminate unwanted patterns, we do not use the moire contour formed directly on the surface of the object. Instead, we place a TV camera along the bisector of the angle between the two illuminating directions and capture the two groups of deformed fringes on the surface of the object. The two groups of deformed fringes are combined by XOR logic in the computer's digital image processing system, and the moire fringes are then extracted from the complicated background. The 3-D coordinates of points on the object are obtained after each moire fringe is traced, and points belonging to the same fringe are assigned the same altitude. The object is described by its projected drawings in three coordinate planes. The projected drawings of known objects are stored in a judgment library, and an object is recognized by querying this library.
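The fringe-combination step reduces to a simple binary operation; a sketch with an illustrative threshold (the paper's binarization details are not given here):

```python
import numpy as np

def xor_moire(fringes_a, fringes_b, thresh=128):
    """Binarize the two deformed fringe images and XOR them; the beat
    pattern that survives is the moire contour."""
    a = fringes_a >= thresh
    b = fringes_b >= thresh
    return np.logical_xor(a, b)
```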
Phase-shifted moire interferometry is one of the most effective tools for obtaining a full-field depth map. The major drawback of the technique is the two-pi ambiguity, which limits the measurement depth range to one fringe or requires counting fringes across the image. In either case, only a relative measurement is obtained, and no information is available about the absolute distance to the camera. By moving the moire projection system (field shifting), the period of the moire pattern is changed, allowing extraction of absolute depth information. We have built an instrument employing field-shifted moire to produce a full-field depth map with 12 bits of depth resolution. The performance and applications of this instrument are discussed.
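The way a changed moire period resolves the ambiguity is analogous to two-wavelength interferometry (our gloss and notation, not the paper's derivation): phases φ₁, φ₂ measured at two periods Λ₁, Λ₂ give a coarse absolute depth over a much larger synthetic period, which then fixes the fringe order of the fine measurement:

```latex
z_{\text{coarse}} = \frac{\Lambda_{\mathrm{eq}}}{2\pi}\,(\phi_1 - \phi_2),
\qquad
\Lambda_{\mathrm{eq}} = \frac{\Lambda_1 \Lambda_2}{\lvert \Lambda_2 - \Lambda_1 \rvert},
\qquad
z = \Lambda_1\!\left(\frac{\phi_1}{2\pi}
    + \operatorname{round}\!\left(\frac{z_{\text{coarse}}}{\Lambda_1}
    - \frac{\phi_1}{2\pi}\right)\right)
```

Here the phase difference φ₁ − φ₂ is taken modulo 2π; the fine phase φ₁ supplies the high depth resolution, while z_coarse removes the two-pi ambiguity.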
Moire techniques can be a powerful tool for determining surface shape, or the deviation of a shape in progress from a final or desired shape. The presence of the high-contrast viewing grating and of the distorted grating in the final image plane makes the moire pattern hard to see. Moving-grating techniques have been developed to improve the visibility of the moire pattern, but at the expense of complex moving parts. We have developed several variable-resolution projection moire techniques that either move the grating or eliminate its presence electronically, and have neither mechanical moving parts nor any physical gratings. One system uses an acousto-optic (A-O) cell to generate, project, and move the gratings, while the moire is viewed through a second synchronized A-O cell. The second system uses an interferometer to generate and project variable-spacing gratings which are made to move across the target and across a reference surface by an A-O beam deflector; video processing of the reference image generates the transmissive filter which produces the moire pattern. A third system removes the grating presence electronically but retains high-contrast moire contours. Noise reduction is shown in moire images of targets ranging in size from 1 to 700 cm.
Typical construction and performance data for a pulsed time-of-flight laser rangefinding device intended for industrial measurements are presented. It is shown that by using a laser diode transmitter with a peak power of 5 - 15 W, a measurement range of a few tens of meters can be attained with respect to a noncooperative target. The available single-shot resolution reaches the millimetre level in a fraction of a second. Accuracy depends greatly on the construction and adjustment of the device, and levels better than +/- 3 mm can be achieved over the above measurement range. Various construction details and other factors affecting the achievable resolution and accuracy are discussed.
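A back-of-the-envelope check of what millimetre resolution demands of the timing electronics (our arithmetic, with illustrative numbers, not the paper's specifications):

```python
C = 299_792_458.0  # speed of light, m/s

def range_from_time(t_seconds):
    """Round-trip pulsed TOF: distance = c * t / 2."""
    return C * t_seconds / 2

# 1 mm of range corresponds to about 6.7 ps of round-trip time:
dt_per_mm = 2 * 1e-3 / C
print(f"{dt_per_mm * 1e12:.2f} ps per mm")

# Averaging N single shots improves resolution roughly as sqrt(N):
# e.g. a hypothetical 100 mm single-shot resolution at a 10 kHz pulse
# rate gives, after 0.1 s of averaging,
n = 10_000 * 0.1
print(100 / n ** 0.5, "mm")  # ~3.2 mm, millimetre level
```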
Time-of-flight laser rangefinders based on the propagation speed of light and direct detection may be roughly divided into two classes depending on whether their light emission is continuous or pulsed. In the modulated continuous-wave (CW) rangefinding technique, an amplitude-modulated light carrier is emitted and the distance information is extracted from the received signal by comparing its modulation phase to that of the emitted signal. In the pulsed time-of-flight (TOF) rangefinding method, the distance is obtained by measuring the time interval between the transmitted and received light pulses. A comparison is made in this paper between the TOF and CW techniques in terms of the achievable range measurement resolution. The basis for comparison is that the actual measurement or averaging time is the same for both techniques. The comparison is made for the case where the average optical power level used is the same, and also for the case in which an equal unambiguous measurement distance is set for both techniques. With an equal average optical power level and a noise contribution dominated by the signal itself, it is shown that the ratio of achievable range resolution between the TOF and CW techniques is directly proportional to the ratio of the modulation frequency of the CW method to the receiver bandwidth of the TOF method. When the average optical power level is reduced and the photon-noise-limited condition is not achieved, the TOF-mode rangefinder gains an advantage over the CW mode, because the resolution is directly proportional to the optical power level and the available energy in the TOF technique can be concentrated at the moment of timing. This enables a TOF rangefinder to be much faster than a similar CW rangefinder in many practical applications.
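In hedged textbook form (our notation, not the paper's), the stated proportionality follows from the standard single-measurement resolution expressions for the two techniques:

```latex
\delta R_{\mathrm{CW}} \sim \frac{c}{4\pi f_{\mathrm{mod}}\sqrt{\mathrm{SNR}}},
\qquad
\delta R_{\mathrm{TOF}} \sim \frac{c}{2B\sqrt{\mathrm{SNR}}},
\qquad
\frac{\delta R_{\mathrm{TOF}}}{\delta R_{\mathrm{CW}}}
  \sim \frac{2\pi f_{\mathrm{mod}}}{B}
```

so with equal SNR the two methods reach comparable resolution when the TOF receiver bandwidth B is of the order of 2π times the CW modulation frequency.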
Progress in the development of 3D systems for inspection and measurement has resulted in new systems using several imaging techniques. Requirements for sub-pixel inspection accuracy are now common throughout the industry, mandating a thorough examination of sensor performance limits. The biggest challenge for any 3D system is accurate measurement of object location and height when the intrascene dynamic range is large. This paper examines several fundamental sources of error in 3D systems, particularly imaging errors found near object edges. The results are important for development of 3D metrology system specifications.
Moire contouring methods have been shown to be able to delineate the full surfaces of gently curved parts. One limitation of most interferometry-based methods is that the resulting fringe pattern does not differentiate between positive- and negative-slope surfaces (peaks from valleys). Many dynamic analysis methods, such as phase shifting, do permit slope determination through the use of multiple images. This paper presents a method of moire fringe generation which discriminates peaks from valleys through the use of phase gratings. A phase grating creates a change in a static moire pattern whenever the sign of the surface slope changes. By selecting the sign of one slope on a continuous surface, the sign of any other region can be determined. The slopes are determined by the phase of the moire fringe pattern without moving the fringes. The method does not provide absolute slope determination, only slope relative to any other slope on the same continuous surface. We discuss the basic theory of moire fringe generation using phase gratings, and relate the fringe creation mechanism to an explanation of the slope effect. Finally, we present experimental results for some simple examples of practical applications of this method.
An optical method for on-line hot steel surface quality inspection is proposed. The method is based on four laser sources illuminating the surface at grazing incidence from different directions. A CCD line array camera, located perpendicularly over the surface, captures the scattered light generated by each source separately. Surface defects induce shadow and bright-field patterns which are related to each source's direction and angle of incidence. Such a multiple-source configuration is required to discriminate real surface defects from normal reflectivity and surface geometry variations. The patterns are digitized for each source separately and then combined using a simple and fast algorithm. The method was tested in our laboratory on cold steel billet samples. Characteristic signatures were obtained for most surface defects, such as blisters, scale embedments, large cracks, and pinholes. A scanning rate of several thousand lines per second is planned for the industrial prototype by selecting a different wavelength for each laser source and by using a four-wavelength CCD line array camera.
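One simple combination consistent with the stated goal (our sketch; the abstract does not detail the paper's "simple and fast algorithm"): normalizing each directional scan by the mean over all four directions cancels reflectivity variations, which scale all directions alike, while real relief defects leave direction-dependent residuals.

```python
import numpy as np

def defect_signature(lines, eps=1e-6):
    """lines: (4, N) array holding one digitized scan line per laser
    direction. Reflectivity variations scale all four directions
    together and cancel in the ratio; relief defects do not."""
    stack = np.asarray(lines, dtype=float)
    mean = stack.mean(axis=0) + eps
    residual = stack / mean - 1.0           # direction-dependent part
    return np.abs(residual).max(axis=0)     # strength of the anisotropy
```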
In this paper we consider the inspection of surface imperfections in ceramics, in particular porcelain products. An optical system is presented which addresses three critical aspects of the detection: the characteristics of the surface, the right choice of illumination system, and the collection of the diffuse light coming from the porcelain. Furthermore, a computer-based approach founded on binary morphological image analysis is used to support the processing and quality control. The motivation for using morphology is that we can acquire a low-gradient image which is ideal for thresholding, resulting in an effective binary image. The porcelain we analyzed has a glazed surface and consequently exhibits a significant coefficient of specular reflection. An extended white light source is used to illuminate a quarter of the surface of the plate. A variable-aperture diaphragm mounted along the axis of the beam controls the amount of light reaching the surface, and two linear dichroic polarizers in the path create the shading field. This kind of illumination isolates an appropriate portion of the plate being inspected, with low gradient contrast over the part being observed. By adjusting the relative orientations of the polarizers we obtain a contrast-optimized image when the axes of the polarizers are nearly perpendicular. The diffuse beam enters the CCD camera objective after passing through a large-diameter positive lens. The method improves the performance of the inspection process: the inspection time per plate drops substantially, from 15 seconds to 2 seconds, and the automation eliminates the subjective element introduced by human visual perception. To improve the effectiveness of the method further, we are making successive refinements aimed at strongly attenuating the specular component of reflection; to this end, we are considering the use of a different medium whose refractive index is close to that of the glaze used on the porcelain.
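The image-analysis stage described above maps naturally onto a threshold-plus-morphology pipeline; a minimal sketch with illustrative parameters (not the authors' exact procedure):

```python
import numpy as np
from scipy.ndimage import binary_opening, label

def find_imperfections(low_gradient_img, thresh=None, min_size=5):
    """Threshold the low-gradient image, clean the binary result with a
    morphological opening, and label the surviving blobs as candidate
    surface imperfections."""
    img = low_gradient_img.astype(float)
    if thresh is None:
        thresh = img.mean() - 3 * img.std()   # dark defects on bright glaze
    binary = img < thresh
    binary = binary_opening(binary, np.ones((3, 3), bool))  # remove specks
    labels, n = label(binary)
    sizes = np.bincount(labels.ravel())[1:]
    return n, sizes[sizes >= min_size]
```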