We are motivated by the question of whether scanning laser projection with low speckle noise is possible. Scanning laser projection requires “instantaneous” speckle reduction, within a few nanoseconds, meaning that no moving diffusers can be used. We will argue that instantaneous speckle reduction is possible by converting spatial coherence into spatial incoherence, but nature demands compensation. The cost can be estimated via the information-theoretical concept of “channel capacity”, which incorporates the etendue as well as the signal-to-noise ratio. We will show that an optical system with low spatial coherence (i.e., low speckle noise) must provide significantly more degrees of freedom than a coherent imaging system. The consequence for the technical optical system is serious: significant speckle reduction can only be achieved with an excessively large projection aperture. This is not merely an academic consideration; it seriously restricts the design of scanning laser projectors.
In principle, PMD needs the two components of the local surface gradient. Therefore, a sequence of two orthogonal sinusoidal fringe patterns has to be displayed and captured separately. This is easy and convenient with a digital display, but much more difficult in a PMD system with mechanical gratings. In this paper, we present a novel phase-shift technique using a cross fringe pattern, in which a one-dimensional N-step phase shift allows the acquisition of the two orthogonal phases with only N exposures instead of 2N. PMD can thus be implemented with a one-dimensional translation of the fringe pattern instead of the common two-dimensional translation, which is quite useful for certain applications.
With electroencephalography (EEG), a person’s brain activity can be monitored over time and sources of activity localized. With this information, brain regions showing pathological activity, such as epileptic spikes, can be delineated. In cases of severe drug-resistant epilepsy, surgical resection of these brain regions may be the only treatment option. This requires a precise localization of the responsible seizure generators. They can be reconstructed from EEG data when the electrode positions are known. The standard method employs a "digitization pen" and has severe drawbacks: it is time-consuming, the result is user-dependent, and the patient has to hold still. We present a novel method which overcomes these drawbacks. It is based on the optical "Flying Triangulation" (FlyTri) sensor, which allows a motion-robust acquisition of precise 3D data. To compare the two methods, the electrode positions of a real-sized head model with EEG electrodes were determined with each method, and their deviations from the ground-truth data were calculated. The standard deviation was 3.39 mm for the digitization pen and 0.98 mm for the new method. The influence of these results on the final EEG source localization was investigated by simulating EEG data. The digitization-pen result deviates substantially from the true source location and time series. In contrast, the FlyTri result agrees with the original information. Our findings suggest that FlyTri might become a valuable tool in the field of medical brain research, because of its improved precision and contactless handling. Future applications might include co-registration of multimodal information.
Quantitative deflectometry is a new tool to measure specular surfaces. The spectrum of measurable surfaces ranges from flat to freeform surfaces with steep slopes, with sizes ranging from millimeters to several meters. We illustrate this with several applications: eyeglass measurements, measurements of big mirrors, and in-line measurements in ultra-precision manufacturing without unclamping the sample. We describe important properties of deflectometry and compare its potentials and limitations with interferometry. We discuss which method is superior for which application and how the potential of deflectometry may develop in the future.
Deflectometric methods that are capable of providing full-field topography data for specular freeform surfaces have been
around for more than a decade. They have proven successful in various fields of application, such as the measurement of
progressive power eyeglasses, painted car body panels, or windshields. However, up to now deflectometry has not been
considered a viable competitor to interferometry, especially for the qualification of optical components. The reason is
that, despite the unparalleled local sensitivity provided by deflectometric methods, the global height accuracy attainable
with this measurement technique used to be limited to several microns over a field of 100 mm. Moreover, spurious
reflections at the rear surface of transparent objects could easily mess up the measured signal completely. Due to new
calibration and evaluation procedures, this situation has changed lately. We will give a comparative assessment of the
strengths and – now partly revised – weaknesses of both measurement principles from the current perspective. By
presenting recent developments and measurement examples from different applications, we will show that deflectometry
is now on its way to becoming a serious competitor to interferometry.
Structured-illumination microscopy is an incoherent method to measure the microtopography of rough and smooth
objects. The principle: A sinusoidal fringe pattern is projected into the focal plane of a microscope. While the object is
scanned axially, the contrast evaluation of the observed pattern delivers the 3D topography with a height uncertainty of
only a few nanometers. By means of a high aperture the system can measure steep slopes: +/- 50 degrees on smooth
objects (NA=0.8) and +/- 80 degrees on rough surfaces are possible. For industrial applications a fast measurement is
one of the most desired aspects. We meet this demand by exploiting the physical and information-theoretical limits of the
sensor and by giving rules for a trade-off between accuracy and efficiency. We further present a new method for data
acquisition and evaluation which allows for a fast mechanical scanning without "stop-and-go".
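The contrast-evaluation step described above can be sketched in a few lines: per pixel, the fringe contrast is sampled along the axial scan, and the height is taken as the position of the contrast maximum, refined by a parabolic fit through the peak. This is a minimal illustration, not the authors' implementation; the interpolation scheme and sampling are assumptions.

```python
import numpy as np

def height_from_contrast(z, contrast):
    """Locate the axial contrast maximum for one pixel; a parabola
    fitted through the peak sample and its two neighbours gives
    sub-step height resolution (illustrative sketch only)."""
    i = int(np.argmax(contrast))
    i = min(max(i, 1), len(z) - 2)          # keep both neighbours in range
    y0, y1, y2 = contrast[i - 1], contrast[i], contrast[i + 1]
    denom = y0 - 2 * y1 + y2
    shift = 0.0 if denom == 0 else 0.5 * (y0 - y2) / denom
    return z[i] + shift * (z[1] - z[0])     # assumes uniform z sampling
```

In a full measurement this would run per camera pixel over the whole axial scan, yielding the 3D topography.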
In white-light interferometry at rough surfaces ("coherence radar"), the measuring uncertainty is physically limited by the arbitrary phase of the individual speckle interferograms. As a consequence, the standard deviation of the measured shape data is inevitably given by the (optically unresolved) roughness of the surface. The statistical error in each measuring point depends on the brightness of the corresponding speckle; a dark speckle yields a more uncertain measurement than a bright one. If the brightness is below the noise threshold of the camera, the measurement fails completely and an outlier occurs. We present a new method to significantly reduce the measuring uncertainty and the number of outliers. We achieve this by generating several statistically independent speckle patterns through different illumination directions. We evaluate the different measurements and either select the best one or assign more weight to brighter speckles.
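The brightness-weighted fusion described above can be sketched as follows; the weighting rule and noise threshold are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def fuse_speckle_measurements(heights, brightness, noise_floor=5.0):
    """Fuse K independent height measurements per pixel (one per
    illumination direction): discard speckles below the camera noise
    floor (outlier-prone), weight the rest by speckle brightness.
    Threshold and linear weighting are assumptions for illustration."""
    heights = np.asarray(heights, float)     # shape (K, n_pixels)
    w = np.asarray(brightness, float)
    w[w < noise_floor] = 0.0                 # failed measurements get no vote
    wsum = w.sum(axis=0)
    valid = wsum > 0
    fused = np.full(heights.shape[1:], np.nan)
    fused[valid] = (w * heights).sum(axis=0)[valid] / wsum[valid]
    return fused
```

Pixels where every illumination direction yields a dark speckle remain NaN, i.e. flagged as outliers rather than silently wrong.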
We present a new method to measure specular free-form surfaces within seconds. We call the measuring principle 'Phase Measuring Deflectometry' (PMD). With a stereo-based enhancement of PMD we are able to measure both the height and the slope of the surface. The basic principle is to project sinusoidal fringe patterns onto a screen located remotely from the surface under test and to observe the fringe patterns reflected via the surface. Any slope variation of the surface leads to distortions of the patterns. Using well-known phase-shift algorithms, we can precisely measure these distortions and thus calculate the surface normal at each pixel. We will deduce the method's diffraction-theoretical limits and explain how to reach them. A major challenge is the necessary calibration. We solved this task by combining various photogrammetric methods. We reach a repeatability of the local slope down to a few arc seconds and an absolute accuracy of a few arc minutes. One important field of application is the measurement of the local curvature of progressive eyeglass lenses. We will present experimental results and compare them with the theoretical limits.
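The well-known N-step phase-shift evaluation mentioned above can be sketched as follows; this is the textbook synchronous-detection formula, not code from the paper:

```python
import numpy as np

def n_step_phase(images):
    """Recover the fringe phase per pixel from N equally spaced
    phase-shifted patterns I_n = a + b*cos(phi + 2*pi*n/N),
    via synchronous detection (the classic N-step formula)."""
    N = len(images)
    deltas = 2 * np.pi * np.arange(N) / N
    s = sum(I * np.sin(d) for I, d in zip(images, deltas))
    c = sum(I * np.cos(d) for I, d in zip(images, deltas))
    return np.arctan2(-s, c)   # wrapped phase in (-pi, pi]
```

The wrapped phase still has to be unwrapped and, via the calibrated geometry, converted into a surface normal per pixel.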
Commonly, optical systems are called coherent if a laser is used (right), and incoherent if other sources come into play (wrong). Most practitioners of optics are not aware that parasitic spatial coherence is ubiquitous, even where it is not obvious. The supposedly incoherent approach may lead to significant quantitative measuring errors of illumination or reflectivity, 3D shape, size, or distance. On the other hand, a favorable property of spatial coherence is that within the "speckle noise" we may reveal useful information about the object, by white-light interferometry. This report will discuss simple rules to estimate the occurring errors and how to reduce spatial coherence. We will further discuss the complex signal formation in white-light interferometry and roughness measurements far beyond the bandwidth limit of the observing optics.
The combination of optical 3D sensing, CAD/CAM, and reverse engineering offers new opportunities for medical diagnosis and therapy, as well as for art conservation. We report our activities in the fields of dental CAD/CAM, face surgery, restoration, and others. We will discuss essential problems and solutions, specifically addressing the physical and technological limits for these applications. One example: speckle noise limits the dynamic range of optical sensors to about 5000:1, but a good visualization needs a dynamic range of 100,000:1. A technical (and commercial) limitation of rapid prototyping is the high cost that prevents mass production. As a consequence, for the "real" applications mentioned above, we not only need optical 3D sensors that work at the limits given by physics, we also need perfect surface-reconstruction techniques that automatically smooth the sensor data without destroying edges. We have to register many views automatically and to visualize the data. The processing chain is only complete with effective technology for rapid prototyping, such as fused deposition modeling or laser sintering.
In laser material processing, surfaces have to be measured at low apertures within the rough environment generated by the production process. As it is hardly possible to measure the material wear through the plasma at the working zone (at temperatures above 3000 K), common sensors would perform quite poorly. The ablation sensor presented in this paper solves that problem by utilizing the plasma spot itself as the source of a signal from which we evaluate the distance between sensor and workpiece. The specific features of this sensor are: the measurement is not distorted by coherent noise and is insensitive to the (strongly varying) spot size and shape. Hence, the sensor displays extreme accuracy, even at low aperture and in the presence of strong turbulence. The achievable on-line measuring uncertainty within the ablation process is about 3 µm using a CO2 laser. A demand for even finer structures in laser ablation leads to a change from the laser-caving process (using a CO2 laser) to ablation by sublimation (using a Nd:YAG laser). The intention is to decrease the thickness of each ablated layer and thus to generate finer structures. In order to keep the ablation rates at an economically interesting value, the relative speed between laser and workpiece surface has to be increased. This new ablation process tightens the requirements on the sensor performance even more. In the paper we explain the basic ideas of the sensor as well as the technology of implementation and a couple of successful applications.
'Spectral radar' combines a white light interferometer with a spectrometer. It is an optical sensor for the acquisition of skin morphology based on OCT techniques. The scattering amplitude along one vertical axis from the surface into the bulk can be measured within one exposure. We will discuss some essentials of signal formation and a new method of signal evaluation that significantly reduces artifacts from some source imperfections. We will further demonstrate new measurements.
Important aims in dermatology are the measurement of pathological alterations of human skin and, on the other hand, the quantification of the influences caused by pharmaceutical and cosmetic products. We present modifications of the well-established coherence radar that allow in vivo measurement of human skin, in spite of involuntary body movements and blood flow. The measuring field can be varied from 100 × 100 µm² to 5 × 5 mm². The measuring time is 5 to 15 s and the longitudinal measuring uncertainty is about 2 µm. A fiber-optical implementation allows the separation of the sensor head from the mechanical scan. The mobile and compact sensor head can now be freely positioned and adjusted to each part of the patient's skin. Disturbances caused by unavoidable movement of the patient can be compensated by modified setups of the coherence radar. We show measurements of clinical and cosmetic relevance.
Spectral radar is an optical sensor for tomography, working in the Fourier domain rather than in the time domain. The scattering amplitude a(z) along one vertical axis from the surface into the bulk can be measured within one exposure. No reference-arm scanning is necessary. One important property of optical coherence tomography (OCT) sensors is the dynamic range. We will compare the dynamic range of spectral radar with standard OCT. The influence of the Fourier transformation on the dynamic range of spectral radar will be discussed. The clinical relevance of the in vivo measurements will be demonstrated.
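The Fourier-domain principle can be illustrated with a toy model: a single reflector at depth z0 modulates the spectrum as cos(2·k·z0), and one Fourier transform of the spectrum over wavenumber recovers a peak at that depth, with no reference-arm scan. All numbers below are illustrative, not values from the paper.

```python
import numpy as np

# Single reflector at depth z0 -> spectral interference fringe cos(2*k*z0).
z0 = 150e-6                                   # reflector depth, 150 um
k = np.linspace(7.0e6, 8.0e6, 2048)           # sampled wavenumbers (1/m)
spectrum = 1.0 + 0.3 * np.cos(2 * k * z0)     # reference + object light

# One FFT over k yields the scattering amplitude a(z) in a single shot.
a_z = np.abs(np.fft.rfft(spectrum - spectrum.mean()))
dz = np.pi / (k[-1] - k[0])                   # depth sampling after the FFT
depth = dz * np.argmax(a_z)                   # peak position recovers z0
```

The depth resolution dz is set by the spectral bandwidth, which is why a broad-bandwidth source matters for this sensor class.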
The 'coherence radar' was introduced as a method to measure the topology of optically rough surfaces. The basic principle is white-light interferometry in individual speckles. We will discuss the potentials and limitations of the coherence radar to measure the microtopology, the roughness parameters, and the out-of-plane deformation of smooth and rough object surfaces. We have to distinguish objects with optically smooth (polished) surfaces from those with optically rough surfaces. Measurements at polished surfaces with simple shapes (flats, spheres) are the domain of classical interferometry. We demonstrate new methods to evaluate white-light interferograms and compare them to the standard Fourier evaluation. We achieve standard deviations of the measured signals of a few nanometers. We further demonstrate that we can determine the roughness parameters of a surface with the coherence radar. We use principally two approaches: with a very high aperture, the surface topology is laterally resolved. From the data we determine the roughness parameters according to standardized evaluation procedures and compare them with mechanically acquired data. The second approach is low-aperture observation (unresolved topology). Here the coherence radar supplies a statistical distance signal from which we can determine the standard deviation of the surface height variations. We will further discuss a new method to measure the deformation of optically rough surfaces, based on the coherence radar. Unlike standard speckle interferometry, the new method yields absolute deformation. For small out-of-plane deformations (correlated speckle), the potential sensitivity is in the nanometer regime. Large deformations (uncorrelated speckle) can be measured with an uncertainty equal to the surface roughness.
KEYWORDS: Data modeling, Smoothing, Digital filtering, 3D modeling, Calibration, Reverse modeling, Filtering (signal processing), 3D image processing, Visualization, Electronic filtering
In order to digitize the whole surface of a three-dimensional object by means of an optical range sensor, usually multiple range images are acquired from different viewpoints and merged into a single surface description. The simplest and most accurate way is to generate a polyhedral surface. The data are usually distorted by measuring errors like noise, aliasing, outliers, calibration and registration errors, etc., so that they have to be filtered. Calibration and registration errors first appear after merging of different views. As the merged data are no longer represented on a grid, conventional filters for digital signal processing are not applicable. We introduce a new approach for modeling and smoothing scattered data based on an approximation of a mesh of circular arcs. This new method enables interpolation of curved surfaces using solely the vertex positions and the associated vertex normals of a polyhedral mesh. The new smoothing filter is specifically adapted to the requirements of geometric data, as it minimizes curvature variations. In contrast to linear filters, undesired surface undulations are avoided, which is an important precondition for NC milling and rendering.
KEYWORDS: Sensors, Calibration, Reverse modeling, Data modeling, Image registration, 3D metrology, Reverse engineering, 3D acquisition, 3D modeling, 3D image processing
Optical 3D sensors are used as tools for reverse engineering: first, the shape of an object is digitized by acquisition of multiple range images from different viewpoints. Then the range images are registered and the data are turned into a CAD description, e.g. tensor-product surfaces, by surface-modeling software. For many applications, however, it is sufficient to generate a polyhedral surface. We present a nearly automatic procedure covering the complete task of data acquisition, calibration, surface registration, and surface reconstruction using a mesh of triangles. A couple of measurements, such as teeth, works of art, and cutting tools, are shown.
KEYWORDS: Skin, Tissue optics, Coherence (optics), In vivo imaging, 3D metrology, Biomedical optics, Medical diagnostics, Radar, Time metrology, Fiber optics sensors
Optical coherence profilometry (OCP) may be a useful tool for medical diagnosis of human skin. Different medical indications show distinct alterations of the skin surface. We measure the 3D shape of the surface of the skin by use of the 'coherence radar', which is based on short-coherence interferometry. The measuring uncertainty is less than 3 µm. The measurement takes about 4 seconds. We perform in vivo 3D skin mapping of naked skin without preparation. We describe methods to compensate for the movement of the patient during the measurement. In order to ready the sensor for clinical application, a fiber-optical implementation is introduced.
The 'spectral radar' is an optical sensor for the acquisition of skin morphology. The scattering amplitude a(z) along one vertical axis from the surface into the bulk can be measured within one exposure. No reference-arm scanning is necessary. We discuss the theory of the sensor, including the dynamic range, and we show in vivo measurements of human skin by a fiber-optical implementation of the sensor.
We discuss different modifications of white-light interferometry for the acquisition of human skin morphology. In a first experiment we display the diffusion of light within tissue versus time. Light is focused onto the surface of the sample, penetrates the sample, is scattered, and partly emerges from the surface again. For each point of the surface we can measure a run-time profile of the emerging photons via the speckle contrast. The local scattering behavior of the skin is encoded in the run-time profile. Further, we present a sensor for the acquisition of cross-sectional images of volume scatterers; we call it 'spectral radar'. The scattering amplitude a(z) along one vertical axis from the surface into the bulk can be measured within one exposure. No reference-arm scanning is necessary; hence a short measurement time is possible. The depth uncertainty within a range of 1000 µm is about 10 µm. In first measurements we distinguished a malignant melanoma from healthy skin in vitro, and we measured the thickness of a fingernail in vivo. We further demonstrate a third method, the 'coherence radar', for in vivo measurements of skin surface topology, with an accuracy of a few micrometers and a field of 512 by 512 pixels.
We adapted a method, the 'coherence radar', that was originally developed for the precise measurement of surface topology, to measure bulk properties within strongly scattering media. The sensor is based on short-coherence interferometry. It enables the 2D observation of light propagation in scattering media with high temporal resolution. The measurements are carried out by observing photons that traveled from an entrance focus through the bulk of the sample and back to the surface. The source of information is the speckle contrast. One important result is that during the propagation a sharp photon horizon evolves. This photon horizon can be used for the detection of inhomogeneities in the scattering properties. In solid samples we measured absorbing obstacles at a depth of 320 µm with a depth uncertainty of < 5 percent. The measuring time is about 30 seconds. The observation of the photon horizon can also be realized in 'live' volume scatterers with moving scattering particles. First in vivo measurements of human skin have been successful.
We present a sensor for the acquisition of cross-sectional images of volume scatterers; we call it 'spectral radar'. Medical and technical applications are possible. The sensor is a modified Michelson interferometer with a broad-bandwidth light source. The scattering amplitude a(z) along one vertical axis from the surface into the bulk can be measured within one exposure. No reference-arm scanning is necessary. Measurement results of stationary and non-stationary scattering phantoms, human skin, and a fish eye in vitro are shown.
Object segmentation, recognition, and localization are challenging because of the large amount of input data and because of the invariances required. We discuss strategies to overcome these problems, considering sensors, algorithms, and architectures. Specifically, we address neural nets and Hough strategies. The ability of implicit learning makes neural nets interesting for industrial inspection: compared to classical methods they promise robustness against variations of the input data. Furthermore, no expert is necessary for supervision. The inherent parallelism simplifies the design of algorithms. However, the advantages are counterbalanced by a serious drawback: the high computational complexity -- if images are considered. The ability of optics to help through its inherent parallelism is limited, because neural architectures are usually space-variant and cannot simply be implemented optically. We discuss approaching these problems by feature extraction, by sparse algorithms, and by space-invariant architectures. A competitive strategy for object recognition and localization is based on probability tables, such as the Hough transform uses: a couple of weak but independent hypotheses can give a safe decision about the kind and the locus of an object. This method requires a learning phase prior to the working phase, as the neural strategy does. In that sense it is similar; however, the computational complexity can be much smaller. This makes it possible to segment, localize, and recognize objects invariant to shift, rotation, and scale.
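The probability-table voting idea can be illustrated with the classic straight-line Hough transform: each point casts one weak hypothesis per angle, and a clear peak in the accumulator is the joint, robust decision. A minimal sketch, where the parameterization and bin counts are arbitrary choices:

```python
import numpy as np

def hough_lines(points, n_theta=180, n_rho=100, rho_max=10.0):
    """Accumulate votes in (theta, rho) space. Each point votes for
    every line (theta, rho = x*cos(theta) + y*sin(theta)) that could
    pass through it; collinear points pile up in one accumulator cell."""
    acc = np.zeros((n_theta, n_rho), dtype=int)
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)   # distance per angle
        idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        ok = (idx >= 0) & (idx < n_rho)
        acc[np.arange(n_theta)[ok], idx[ok]] += 1
    return acc, thetas
```

Each individual vote is weak, but the accumulator maximum aggregates many independent hypotheses, which is exactly the robustness argument made above.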
A video image circulating in a loop with a local nonlinearity and a convolution operator can be considered as a high dimensional nonlinear dynamical system, or as a specific neural net. The evolving image displays spatiotemporal deterministic chaos, oscillations, and stable patterns. The specific behavior of the system is strongly determined by the coupling of pixels, i.e., by the synaptic pattern and by the nonlinearity. The system can be trained to display a certain given image as a fixed point. Thus, it is able to associatively restore perturbed input images, independently of a spatial shift.
We introduce a 3D sensor with extraordinary features. It supplies an accuracy which is limited only by the roughness of the object surface. This differs from other coherent sensors, where the depth accuracy is limited by the aperture of observation when they look onto optically rough surfaces. As a consequence, our sensor supplies high accuracy even with a small aperture (we can look into narrow holes). The sensor supplies high distance accuracy at volume scatterers as well. The sensor is based essentially on a Michelson interferometer, with the rough object surface serving as one 'mirror'. This is possible because, instead of the phase of the interferogram, only the occurrence of interference is detected, via the short coherence of the source. We call the method 'coherence radar'.
A fundamental limit for the distance uncertainty of coherent 3D sensors is presented. The minimum distance uncertainty is given by δz = λ/(2π sin²u), with the aperture of observation sin u and the wavelength λ. This distance uncertainty can be derived via speckle statistics for different sensing principles, and surprisingly the same result can be obtained directly from Heisenberg's uncertainty principle for a single photon. Because speckles are the main reason for distance uncertainty, possibilities to overcome the speckle problem are discussed. This leads to an uncertainty principle between lateral resolution and longitudinal distance uncertainty. A way to improve the distance uncertainty without sacrificing lateral resolution is the use of temporally incoherent light.
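Plugging example numbers into the limit above shows its practical weight; the wavelength and aperture below are illustrative choices, not values from the text:

```python
import numpy as np

# Speckle-imposed distance uncertainty of a coherent 3D sensor:
#   delta_z = lambda / (2 * pi * sin^2 u)
wavelength = 0.633e-6        # e.g. HeNe laser, 633 nm
sin_u = 0.1                  # observation aperture (illustrative)
delta_z = wavelength / (2 * np.pi * sin_u ** 2)
# With sin u = 0.1 the limit is about 10 micrometers; because of the
# sin^2 u dependence, halving the aperture quadruples the uncertainty.
```

This quadratic aperture dependence is why small-aperture coherent sensors on rough surfaces are fundamentally imprecise, as the abstract argues.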
We introduce two new invariant transforms. The first one is invariant under shift, rotation, and scaling of the input pattern simultaneously. The second transform, based on the Rapid Transform, is fast, shift-invariant, and invertible. Both transforms are useful for signal classification and recognition.
We show a system for sequential distance measurement which works as a 3D sensor. One point of the object is illuminated via an xy scanning system. The scattered wave carries the distance information in its curvature at the location of the sensor. We collinearly measure the radius of this wave in the sensor plane by shearing interferometry. Heterodyne modulation of the interference pattern is performed by a photoelastic modulator, for high speed and robustness against environmental light. The sensor requires very small apertures for illumination and detection, so shading effects and hidden points are minimized.
We will discuss some physical limits of 3D sensors that work on 'rough' surfaces. An important result is that the most frequently used principles (laser triangulation and focus sensing with projected light spots) cannot achieve a better depth resolution than given by the Rayleigh depth of field. However, we can achieve 'superresolution' in depth by sacrificing lateral resolution: there is an 'uncertainty relation' between lateral and longitudinal resolution.
We present a feedback system where a picture u(t) circulates under successive application of a convolution operation with the kernel h and of a nonlinearity NL:

u(t + 1) = NL(u(t) * h) .  (1)

This system represents a special class of neural networks: it is space invariant. In comparison to space-variant neural networks it can be implemented much more easily, for example by Fast Fourier Transform, or even optically. Nevertheless, it exhibits a broad spectrum of behavior: there may be deterministic chaos in space and time, i.e. the system is unpredictable in principle and displays no fixed points, or stable structures may evolve [1]. Certain convolution kernels lead to the evolution of stable structures, i.e. fixed points, that look like patterns from nature, for example like crystals or magnetic domains [1]. In experiments described in [2] we observed that the stable structures can be disturbed quite heavily and are yet autoassociatively restored within a couple of iteration cycles. Here we show how to adjust the kernel h in order to obtain a certain desired stable state u of eq. (1), and how to apply the system to shift-invariant pattern recognition.
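The loop of eq. (1) is easy to emulate numerically: the convolution is done by FFT (exploiting the space invariance), followed by a pointwise nonlinearity. The hard threshold used here is an illustrative choice for NL, not necessarily the authors':

```python
import numpy as np

def iterate(u, h_fft, steps, threshold=0.5):
    """Iterate u(t+1) = NL(u(t) * h): circular convolution via FFT
    (cheap because the coupling is space invariant), then a pointwise
    hard threshold as the nonlinearity NL (illustrative choice)."""
    for _ in range(steps):
        conv = np.real(np.fft.ifft2(np.fft.fft2(u) * h_fft))
        u = (conv > threshold).astype(float)   # pointwise nonlinearity NL
    return u
```

Depending on the kernel h, repeated application settles into a fixed point, oscillates, or wanders chaotically, which is the spectrum of behavior described above.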
A 'diffraction-free' light distribution does not change its transverse intensity
pattern as it propagates through a homogeneous medium. Hence, it would be extremely
useful as a probe in optical metrology, specifically for 'light sectioning'. Is such a
'light plane' possible? How tightly can light be confined without being spread by
diffraction? We discuss these questions. It turns out that a light plane cannot be
generated by monochromatic, coherent illumination. However, it is possible to generate a
light plane incoherently, by a scanning axicon beam.
Optical 3D sensors are useful tools for automatic inspection, but frequently they are complex and expensive. We introduce a simple 'point sensor' for the acquisition of 3D object data, based on a modification of 'confocal focus sensing'. This principle was described by Shamir et al. for sensing microscopic, mirror-like objects. Our modification works for macroscopic objects with rough surfaces. Rough surfaces introduce problems because, with coherent illumination, we get speckled spot images. To reduce speckle effects, often a very high illumination aperture is used. But when measuring macroscopic objects one needs small apertures to get enough depth of field and to avoid shading. We reduce speckle by using a broadband laser diode or a white-light source.

The sensor works as follows (Fig. 1): a light spot is projected onto the object surface. Two images of this light spot are created by the lens L and two beamsplitters. Two pinholes are located on the optical axis, at different distances. The flux Φ1 and Φ2 behind the pinholes is measured. The ratio Φ1/Φ2 encodes the object distance and is independent of the object reflectivity.
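The ratio principle can be sketched numerically. The Lorentzian axial response below is a stand-in assumption; the point is that the flux ratio varies monotonically with distance between the two pinhole foci while the unknown reflectivity cancels:

```python
import numpy as np

def axial_response(z, z_focus, fwhm=50.0):
    """Lorentzian stand-in for the confocal axial response (assumption)."""
    return 1.0 / (1.0 + ((z - z_focus) / (fwhm / 2)) ** 2)

# Two pinholes at different conjugate distances: their focus curves are
# axially displaced, so the flux ratio encodes the object distance and
# the unknown reflectivity R drops out of the ratio.
z = np.linspace(-20.0, 20.0, 201)            # object distance (a.u.)
R = 0.37                                     # arbitrary reflectivity
phi1 = R * axial_response(z, -25.0)          # flux behind pinhole 1
phi2 = R * axial_response(z, +25.0)          # flux behind pinhole 2
ratio = phi1 / phi2                          # reflectivity-free distance code
```

Between the two foci the ratio is strictly monotone, so a lookup of the measured ratio yields the distance regardless of how bright the spot is.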