This article [Opt. Eng. 51, 097001 (2012)] was originally published on 13 Sep. 2012 with an error on p. 6, column 2, line 1. There was a stray multiplication symbol before the equation. The corrected line should appear as follows:
"...function {where L(x,y) = 100 exp[((x − 128)/220)^2 + ((y − 128)/220)^2]}."
The paper was corrected online on 18 Sep 2012. The article appears correctly in print.
Uneven illumination is a common problem in real optical systems for machine vision applications, and it introduces significant errors when phase-shifting algorithms (PSAs) are used to reconstruct the surface of a moving object. Here, we propose an illumination-reflectivity-focus (IRF) model to characterize this uneven-illumination effect on phase-measuring profilometry. With this model, we separate the illumination factor effectively and then formulate the phase reconstruction as an optimization problem. To simplify the optimization, we calibrate the uneven illumination distribution beforehand and use the calibrated illumination information during surface profilometry. After calibration, the number of degrees of freedom is reduced. Accordingly, we develop a novel illumination-invariant phase-shifting algorithm (II-PSA) to reconstruct the surface of a moving object under an uneven illumination environment. Experimental results show that the proposed algorithm improves the reconstruction quality both visually and numerically. Therefore, using this IRF model and the corresponding II-PSA, we can not only handle uneven illumination in a real optical system with a large field of view (FOV), but also reconstruct the surface of a moving object robustly and efficiently.
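As background to the II-PSA described above, the conventional phase-shifting computation it builds on can be sketched as follows. This is the standard textbook four-step formula, not the authors' illumination-invariant variant; the synthetic phase ramp and amplitudes are illustrative assumptions.

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Recover the wrapped phase from four fringe images shifted by pi/2.

    Standard four-step PSA: with I_k = A + B*cos(phi + k*pi/2), k = 0..3,
    the wrapped phase is phi = atan2(I4 - I2, I1 - I3).
    """
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic example: a known phase ramp recovered from four shifted frames.
x = np.linspace(0.0, 4.0 * np.pi, 256)
phi_true = np.angle(np.exp(1j * x))                  # wrapped ground truth
frames = [100.0 + 50.0 * np.cos(x + k * np.pi / 2.0) for k in range(4)]
phi_est = four_step_phase(*frames)
print(np.allclose(phi_est, phi_true, atol=1e-8))     # recovered phase matches
```

Uneven illumination makes the background term A and modulation B vary across the field of view, which is exactly the situation the IRF model is designed to handle; the formula above assumes they are constant.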
A challenge in the semiconductor industry is the 3D inspection of solder bumps grown on wafers for direct die-to-die bonding. In earlier work, we proposed a novel mechanism for reconstructing the wafer-bump surface in 3D, based upon projecting a binary pattern onto the surface and capturing an image of the illuminated scene. By shifting the binary pattern in space and taking a separate image of the illuminated surface at each shift, every position on the surface is assigned a binary code in the sequence of captured images. 3D information about the bump surface can then be obtained at these coded points via triangulation. However, when a binary pattern is projected onto the inspected surface through projection lenses, the high-order harmonics of the pattern are often attenuated by the lens's limited bandwidth. This blurs the projected fringe boundaries in the captured image data and makes it difficult to distinguish dark fringes from bright ones. In addition, the different compositions of the target surface, some metallic (the solder surface) and some not (the substrate surface of the wafer), have different reflectance functions (including both specular and Lambertian components). This makes fringe-boundary detection in the image data an even more challenging problem. This paper proposes a solution: it uses the spatial-temporal image volume over the target surface to tackle the inhomogeneous reflectance function. We show that the observed intensity profile across the images at a fixed point has the same up-and-down profile as the original binary grating, regardless of the reflectance of the target surface, so edges can be detected using classical methods such as gradient-based ones. A preliminary study through theoretical analysis and empirical experiments on real image data demonstrates the feasibility of the proposed approach.
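A minimal sketch of the spatial-temporal idea, under invented data (the reflectance values, frame counts, and threshold are illustrative, not from the paper): a fixed pixel's intensity across the image stack follows the binary grating's up-and-down profile scaled by its unknown reflectance, so a gradient along the temporal axis locates the fringe transitions regardless of that scale.

```python
import numpy as np

# Original binary grating sampled over the shift sequence:
# 8 frames of "dark", then 8 of "bright", repeated -> 32 frames.
profile = np.tile(np.repeat([0.0, 1.0], 8), 2)

# Two surface points with very different (unknown) reflectances and offsets,
# e.g. a specular solder bump vs. a dull substrate region.
pixel_a = 10.0 + 200.0 * profile        # bright, high contrast
pixel_b = 3.0 + 5.0 * profile           # dim, low contrast

def temporal_edges(intensity):
    """Indices of fringe transitions found from the temporal gradient.

    The positions of the gradient peaks are invariant to the affine
    reflectance scaling, so both pixels yield the same transition frames.
    """
    g = np.abs(np.diff(intensity))
    return np.flatnonzero(g > 0.5 * g.max())

print(temporal_edges(pixel_a))          # same transition indices for both
print(temporal_edges(pixel_b))
```

Both calls report transitions at frames 7, 15, and 23, where the code flips, despite the two pixels differing in brightness by more than an order of magnitude.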
As the electronics industry advances rapidly, shrinking device dimensions lead to more stringent requirements on process control and quality assurance. For instance, the tiny size of the solder bumps grown on wafers for direct die-to-die bonding poses a great challenge to the inspection of the bumps' 3D quality. The traditional pattern-projection method of recovering 3D shape projects a light pattern onto the inspected surface and images the illuminated surface from one or more points of view. However, image saturation and the specular nature of the bump surface are problematic. This paper proposes a new 3D reconstruction mechanism for inspecting the surface of such wafer bumps. It is still based upon the light-pattern-projection framework, but uses the Ronchi pattern, a binary pattern that contrasts with the traditionally used gray-level one. A parallel or point light source combined with a binary grating allows a discrete pattern to be projected onto the inspected surface. As the projected pattern is binary, the image information is binary as well. With such a bright-or-dark world at each image position, the difficult issues mentioned above are avoided. A preliminary study shows that the mechanism holds promise that existing approaches do not.
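A Ronchi grating of the kind described above, binary vertical stripes with a 50% duty cycle, can be synthesized in a few lines. The resolution and period here are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

def ronchi_pattern(width, height, period):
    """Binary Ronchi grating: vertical stripes with a 50% duty cycle.

    Within each period, the first half of the columns is bright (1)
    and the second half dark (0).
    """
    cols = np.arange(width)
    stripe = (cols % period) < (period // 2)          # True = bright fringe
    return np.broadcast_to(stripe, (height, width)).astype(np.uint8)

pattern = ronchi_pattern(640, 480, period=32)
print(pattern.shape)                    # (480, 640)
print(np.unique(pattern).tolist())      # only two levels: [0, 1]
```

Because every pixel of the projected pattern is strictly bright or dark, the captured image can be binarized by a single threshold, which is what sidesteps the saturation and gray-level calibration issues of sinusoidal patterns.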
This paper develops a novel defect-detection algorithm for the semiconductor assembly process based on image analysis of a single captured image, without reference to another image during inspection. The integrated-circuit (IC) pattern is usually periodic and regular. Therefore, we can implement a classification scheme whereby the regular pattern in the die image is classified as the acceptable circuit pattern and a die defect is modeled as an irregularity in the image. Detecting irregularities in the image is thus equivalent to detecting die defects. We propose a method in which the defect-detection algorithm first segments the die image into different regions according to the circuit pattern, using a set of morphological segmentations with different structuring-element sizes. Then a feature vector, consisting of several image attributes, is calculated for each segmented region. Lastly, defective regions are extracted by classifying the feature vectors.
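The shape of that pipeline can be sketched as follows, under heavy assumptions: the synthetic die image, the grey-opening scales, the residue threshold, and the toy two-feature rule all stand in for the paper's segmentation and feature classification, which are not specified here.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

# Synthetic die image: periodic circuit-like stripes plus one bright
# defect blob and mild sensor noise.
y, x = np.mgrid[0:64, 0:64]
die = 50.0 + 40.0 * (np.sin(2 * np.pi * x / 8) > 0)   # regular pattern
die[30:34, 30:34] += 120.0                            # irregular "defect"
die += rng.normal(0.0, 1.0, die.shape)

# Step 1: grey openings at increasing structuring-element sizes suppress
# structures smaller than each size; the largest scale is used here as a
# model of the regular background pattern.
openings = [ndimage.grey_opening(die, size=s) for s in (3, 7, 15)]

# Step 2: the residue against that background highlights structures the
# regular pattern cannot explain; label its connected components.
residue = die - openings[-1]
labels, n = ndimage.label(residue > 60.0)

# Step 3: a feature vector per region (area, mean residue) and a toy
# threshold classifier standing in for the feature-vector classification.
defects = []
for region in range(1, n + 1):
    mask = labels == region
    area, mean_res = int(mask.sum()), float(residue[mask].mean())
    if area >= 4 and mean_res > 80.0:
        defects.append((region, area, mean_res))

print(n, len(defects))
```

On this synthetic image the 4x4 blob is the only region surviving both the residue threshold and the feature rule, while the periodic stripes are absorbed into the background model.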