The application of overlay run-to-run control in high-mix production fabs (such as development fabs, ASIC fabs, or foundries) encounters some unique problems. A new observer algorithm called JADE™ (Just-in-Time Adaptive Disturbance Estimation) was developed to solve the high-mix run-to-run control problem. JADE uses recursive weighted least-squares parameter estimation to identify the contributions to variation that depend on tool, product, reference tool, and reference reticle. With the JADE observer algorithm, run-to-run controllers use all available feedback data, independent of the length of time since a particular product was last processed. The application of JADE, compared to traditional control techniques, is demonstrated on high- and low-mix fab lithography overlay data. This comparison illustrates the degradation of the typical streamline observer formulation under high-mix operation, with an actual worsening of overlay control relative to open-loop (no run-to-run control) operation. In contrast, the JADE algorithm's control performance, while matching the typical streamline formulation under low-mix operation, is shown to be unaffected under very high-mix photolithography operation.
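As background, the core of such a context-based observer can be sketched with standard recursive weighted least squares: each run's overlay error is modeled as a sum of offsets for the active tool and product contexts, and the estimates are refreshed with a forgetting factor. This is a generic illustration, not the proprietary JADE formulation; the context structure, forgetting factor, and noise level below are assumptions.

```python
import numpy as np

def rwls_update(theta, P, x, y, lam=0.98):
    """One recursive weighted least-squares step with forgetting factor lam.
    theta: context-offset estimates, P: covariance matrix,
    x: 0/1 indicator vector for the active tool/product contexts,
    y: observed overlay error for this run."""
    x = x.reshape(-1, 1)
    Px = P @ x
    k = Px / (lam + float(x.T @ Px))       # Kalman-style gain
    err = y - float(x.T @ theta)           # innovation (prediction error)
    theta = theta + (k * err).ravel()
    P = (P - k @ Px.T) / lam               # covariance update with forgetting
    return theta, P

# 2 tools x 2 products, with fixed true disturbance contributions
true_tool = np.array([1.0, -0.5])
true_prod = np.array([0.3, 0.8])
rng = np.random.default_rng(0)
theta = np.zeros(4)                        # [tool0, tool1, prod0, prod1]
P = np.eye(4) * 100.0
for _ in range(500):
    t, p = rng.integers(2), rng.integers(2)
    x = np.zeros(4)
    x[t] = 1.0
    x[2 + p] = 1.0
    y = true_tool[t] + true_prod[p] + 0.05 * rng.normal()
    theta, P = rwls_update(theta, P, x, y)
print(abs((theta[0] + theta[2]) - 1.3) < 0.15)
```

Note that only context sums (for example, tool plus product) are identifiable from the data; the split between tool and product offsets has a gauge freedom that any practical implementation must pin down separately.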
The main objective of the CMP run-to-run controller is to reduce the lot-to-lot variation in the post-polish oxide film thickness. Besides tool-induced variation, product-induced variation is also a significant source of variation in the CMP process, but the need to compensate for device-pattern dependencies has only recently been addressed. In this work, two removal-rate models (the topography factor model and the sheet-film equivalent model) are compared, and the equivalence between them is derived. A new control method is proposed based on the sheet-film equivalent model, which shows significant improvement over the traditional control method based on the topography factor model. The performance of the proposed control algorithm is demonstrated using both simulated and industrial examples.
We developed an advanced process control (APC) system based on run-by-run model-based process control (RbR MBPC) for deep sub-100 nm gate fabrication of CMOS logic chips, designed to achieve lot-to-lot gate linewidth variation within ±1 nm using a critical dimension scanning electron microscope (CD-SEM). Using the lot-mean resist linewidth (pre-etch CD), the gate-etch plasma conditions are modified to keep the polysilicon gate linewidth (post-etch CD) on target. Using the etch shift measured on a pilot wafer within each processing lot, the model in the MBPC, a linear equation, is updated to track changes in its intercept. The MBPC system was applied to deep sub-100 nm gate fabrication and evaluated on 73 test lots. For an initial lot-mean pre-etch CD spread of 9.31 nm, the lot-mean post-etch CD spread was reduced to a range of 2.49 nm, with a 1σ variation of 0.55 nm. The range of the linear-equation intercept was 8.12 nm, and the range of the feedback-control prediction errors was 2.26 nm, originating from both the first-wafer effect of the pilot wafer and CD measurement errors. We found the prediction error to be the largest error source in the MBPC system. Accurate prediction of the model intercept is therefore crucial for achieving lot-to-lot gate linewidth variation within ±1 nm in gate-etch fabrication.
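The feedback structure described, a linear pre-etch to post-etch model whose intercept is refreshed from a pilot-wafer etch shift, can be sketched as follows. The slope, target, and blending weight are hypothetical; the abstract does not specify the actual update rule.

```python
def etch_trim(pre_cd, target_cd, slope, intercept):
    """CD trim to apply via the etch condition so the post-etch CD hits
    target, for the linear model post_cd = slope*pre_cd + intercept - trim."""
    return slope * pre_cd + intercept - target_cd

def update_intercept(intercept, pre_cd, post_cd_pilot, trim, slope, w=0.5):
    """Refresh the model intercept from a pilot-wafer measurement
    (EWMA-style blend of the old and the observed intercept)."""
    observed = post_cd_pilot + trim - slope * pre_cd
    return (1 - w) * intercept + w * observed

slope, intercept, target = 1.0, 8.0, 90.0   # hypothetical model and target (nm)
pre_cd = 95.0                               # lot-mean resist CD
trim = etch_trim(pre_cd, target, slope, intercept)
# pilot wafer prints 1 nm low: the true intercept has drifted toward 7 nm
intercept = update_intercept(intercept, pre_cd, 89.0, trim, slope)
print(trim, intercept)   # 13.0 7.5
```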
In this paper, a stability analysis is conducted for several feedback controllers of photolithography processes. We emphasize the stability of process controllers in the presence of model mismatch and other uncertainties such as system drift and unknown noise. Real critical dimension (CD) data from shallow trench isolation areas in an Intel manufacturing fab are used for model analysis. The feedback controllers studied in this paper include a controller based on an adaptive model and several controllers based on existing estimation methods such as EWMA, extended EWMA, and d-EWMA. Both theoretical analysis and computer simulations are presented to show the stability of the controlled process under these feedback schemes.
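The baseline EWMA scheme among these controllers exponentially smooths an estimate of the process intercept and inverts the model to choose the next recipe. A minimal noise-free simulation, with assumed gain, drift rate, and smoothing weight:

```python
def ewma_controller(target, b, w=0.3, n=50, drift=0.02):
    """Simulate EWMA run-to-run control of y_t = a_t + b*u_t with a slowly
    drifting intercept a_t. Returns the final absolute error from target."""
    a_true, a_hat = 5.0, 0.0
    errs = []
    for _ in range(n):
        u = (target - a_hat) / b                    # recipe from model inverse
        y = a_true + b * u                          # process response
        a_hat = w * (y - b * u) + (1 - w) * a_hat   # EWMA intercept update
        a_true += drift                             # slow tool drift
        errs.append(abs(y - target))
    return errs[-1]

print(ewma_controller(target=100.0, b=2.0) < 0.5)   # True
```

With a deterministic drift of δ per run, the EWMA tracks with a steady-state offset of roughly δ(1-w)/w, which is one reason the extended variants exist.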
Nowadays, the advanced use of simulation tools for optical lithography requires substantial knowledge about the impact of model parameters and process conditions on simulation results. In many cases, up to 30 or 40 parameters have to be tuned against different experimental data in order to obtain reliable simulation results. Consequently, investigating the impact of all model and process parameters on simulation results can be very time consuming. We therefore applied correlation analysis, a well-known statistical method that allows a sensitivity analysis of simulation parameters. We compared the results of this sensitivity analysis with the outcome of a standard one-factor-at-a-time method and discuss the advantages and disadvantages of both methodologies. A calibrated ArF photoresist model has been examined with both sensitivity analysis methods.
Pattern matching has long been a cornerstone of industrial inspection. For example, in order to obtain high accuracy, modern overlay metrology tool optics are optimized to ensure symmetry around the central axis. To obtain the best performance, the metrology target should be as close as possible to that axis; hence a pattern recognition stage is usually used to verify target position before measurement. However, most of the work performed to date has concentrated on situations where the imaging process can be described by simple ray-tracing, with the image formed by albedo difference between surfaces rather than by interference. Current semiconductor technology requires optical identification of targets less than 30 microns (i.e. about 50 wavelengths) across and on the order of 1 wavelength deep, for which this description is no longer valid; interference and focusing effects become dominant. In this paper we examine these effects and their impact on a number of different techniques. We compare image-based and CAD-derived models in the training of the pattern recognition system; CAD-derived models are of particular interest due to their use in “imageless” recipe-creation techniques. Our chief metrics are precision and reliability. We show that for both types of pattern matching, submicron precision and high reliability are achievable even in very challenging optical environments. We also show that models derived from design data, while generally inferior to image-based models, are more robust to changes caused by process variation, namely changes in illumination, contrast, and focus.
The oxide etch rate of a single chamber of a plasma etch tool is estimated from plasma impedance data collected during the etch process. The etch rate is estimated using a linear statistical model and etch rate measurements performed on special test wafers. Stepwise regression is used to select possible predictors from a large pool of summary statistics calculated from the plasma impedance waveforms. The relationship of the estimated mean etch rate to yield and potential yield optimization is explored. An example application of an advanced process controller to optimize the yield of the wafers processed by the etch tool in the presence of varying chamber conditions is also presented.
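Forward stepwise selection of predictors from a large statistics pool can be sketched as below. This is a greedy, RSS-based variant; the paper's actual entry and exit criteria are not given in the abstract, and the data are synthetic.

```python
import numpy as np

def forward_stepwise(X, y, max_terms=3):
    """Greedy forward selection: at each step, add the column that most
    reduces the residual sum of squares (a minimal stepwise-regression sketch)."""
    n, p = X.shape
    chosen, resid = [], y - y.mean()
    for _ in range(max_terms):
        best, best_rss = None, np.sum(resid**2)
        for j in range(p):
            if j in chosen:
                continue
            cols = np.column_stack([np.ones(n)] + [X[:, k] for k in chosen + [j]])
            beta, *_ = np.linalg.lstsq(cols, y, rcond=None)
            rss = np.sum((y - cols @ beta) ** 2)
            if rss < best_rss:
                best, best_rss = j, rss
        if best is None:        # no candidate improves the fit; stop
            break
        chosen.append(best)
        cols = np.column_stack([np.ones(n)] + [X[:, k] for k in chosen])
        beta, *_ = np.linalg.lstsq(cols, y, rcond=None)
        resid = y - cols @ beta
    return chosen

# 10 candidate summary statistics; only columns 4 and 7 carry signal
rng = np.random.default_rng(2)
X = rng.normal(size=(100, 10))
y = 3.0 * X[:, 4] - 2.0 * X[:, 7] + 0.1 * rng.normal(size=100)
picked = sorted(forward_stepwise(X, y, max_terms=2))
print(picked)   # [4, 7]
```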
Yield forecasting is a key component in running a successful semiconductor fab. It is also a significant challenge for facilities such as ASIC houses, which fabricate a wide range of devices using multiple technologies. Yield forecasting takes on increased significance in these environments, with new products introduced frequently and many products running only in small numbers. An accurate yield prediction system can greatly accelerate the process of identifying design bugs, test program issues and process integration problems. To this end, we have constructed a forecasting model geared for our ASIC manufacturing line. The model will accommodate an arbitrary number of design and/or process elements, each with an associated defectivity term. In addition, we have automated the generation of the yield forecast by passively linking to the existing EDA design tools and scripts used by LSI Logic. Once the model is constructed, an automated query engine can extract the design and process parameters for any requested device, insert the data into the forecasting model, and deliver the resulting yield prediction. The actual yield for any lot or group of lots may thus be compared to the forecast, greatly assisting yield enhancement activities. This is especially useful for prototype lots and low-volume devices, for which it eliminates a great deal of manual computation and searching of design files. Using the model in conjunction with the query engine, any deviations from expected yield performance are generated automatically, quickly and efficiently highlighting opportunities for improvement.
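A forecasting model with an arbitrary number of elements, each carrying a defectivity term, is commonly built on a Poisson yield limit in which each element contributes a multiplicative factor. The sketch below assumes that standard form; the element areas and defect densities are invented, not taken from the paper.

```python
import math

def forecast_yield(elements, base_yield=0.98):
    """Poisson-model yield forecast: each (area_cm2, defect_density_per_cm2)
    element contributes exp(-A*D); base_yield covers systematic loss."""
    y = base_yield
    for area, d0 in elements:
        y *= math.exp(-area * d0)
    return y

# hypothetical device: logic area, SRAM area, and a via-chain area-equivalent
device = [(0.8, 0.10), (0.3, 0.25), (0.1, 0.05)]
print(round(forecast_yield(device), 4))   # 0.8351
```

Because the model is multiplicative, per-element yield-loss attributions fall out directly, which is what makes forecast-versus-actual comparisons useful for isolating a problem element.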
In this paper, we present a systematic methodology to extract a two-transistor (2T) split-gate flash memory cell model for accurate DC simulation of SoC designs. Since measuring the device characteristics would require re-design of test structures with floating-gate (FG) contacts, we used a technology CAD (TCAD) based methodology to develop 2T-cell models for sub-0.18 μm split-gate flash memory cells.
We describe a new method of estimating the systematic spatial variation across wafers. Current methods for this task share some common deficiencies. For example, few of these techniques are able to decompose the systematic variation into components that can be assigned to different types of tools. Most of these methods are also sensitive to outliers and require that the outliers be manually removed before the model can be estimated. Almost none of the previous methods can account for high-frequency effects caused by reticle non-uniformity. Our method is based on a linear regression model with various components to account for the systematic variation that occurs in practice. Polynomial components model the smooth variation caused by tools that cannot process the wafer uniformly. Reticle components model the variation that occurs due to non-uniformities in the microlithography and etch tools. To generate distinct patterns, we apply QR orthogonalization to the systematic patterns prior to regression. To limit the effects of outliers, we employ robust regression. We demonstrate the performance of our technique with an example on data collected from production wafers.
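A minimal version of the regression machinery described, polynomial systematic components orthogonalized with QR and fit by robust (iteratively reweighted) regression, might look like this. The polynomial degree, Huber-style weight function, and data are illustrative assumptions, and the reticle components are omitted for brevity.

```python
import numpy as np

def decompose_wafer(x, y, z, deg=2, iters=5, c=1.345):
    """Fit smooth across-wafer variation with QR-orthogonalized polynomial
    terms and Huber-style robust reweighting; returns fit and residuals."""
    # polynomial design matrix in normalized wafer coordinates
    cols = [x**i * y**j for i in range(deg + 1) for j in range(deg + 1 - i)]
    A = np.column_stack(cols)
    Q, _ = np.linalg.qr(A)               # orthogonalized systematic patterns
    w = np.ones(len(z))
    for _ in range(iters):               # iteratively reweighted least squares
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(Q * sw[:, None], z * sw, rcond=None)
        r = z - Q @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust scale (MAD)
        w = np.minimum(1.0, c / (np.abs(r) / s + 1e-12))  # down-weight outliers
    return Q @ beta, r

rng = np.random.default_rng(1)
x, y = rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200)
z = 1.0 + 0.5 * x - 0.3 * (x**2 + y**2) + 0.01 * rng.normal(size=200)
z[:5] += 5.0                              # gross outliers, left in the data
fit, resid = decompose_wafer(x, y, z)
print(np.median(np.abs(resid[5:])) < 0.05)
```

The point of the robust reweighting is exactly what the abstract claims: the outliers do not need to be removed by hand before the model is estimated.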
The ability to control the critical dimensions of structures on semiconductor devices is paramount to improving die yield and device performance. Historical methods of in-line metrology, scanning electron microscopy (SEM), ellipsometry, and scatterometry provide the ability to monitor critical dimensions and film thickness. These methods, with some challenges at smaller technology nodes, have proven effective in identifying changes in critical measurements: they let you know that something has changed. The next step in factory performance is to improve the ability to quickly identify the root cause of the variation and to address it, minimizing the impact on revenue. Our focus has been on some novel means of characterizing tool performance. In this paper, we outline our methods of system fingerprinting using real-time temperature measurements. This trace represents what the wafer is subjected to during normal processing. Through periodic monitoring, we are able to determine whether the efficacy of thermal conductance between the plasma and the chuck is normal. In addition, when variations in downstream measurements (CDs and film thickness) arise, we are able to quickly identify whether our production tools are operating normally. If there are abnormalities in tool performance, we can quickly identify where and when the problem is taking place. The real-time aspect of monitoring plasma-on temperature provides an added level of resolution that aids in troubleshooting tool performance.
Meeting a specific CD uniformity roadmap becomes more and more difficult as different budget components affecting CD uniformity fail to meet their requirements. For example, reticle manufacturing is at the edge of its potential, and hotplates impact CD uniformity by design. Also, etch processes must be balanced between optimal settings for varying structures. While work continues to enhance the performance of individual budget components, applying local exposure dose compensation with a scanner can provide a near-term solution for improving CD uniformity. Within the wafer processing chain, only the scanner has the unique capability to influence the final quality across-field and field-to-field in a controlled manner, making it the most effective tool for compensation. This paper describes the subsystems required for dose compensation and presents a solution that allows full integration into an automated fabrication environment. Examples will show that both the reticle contribution as well as the process-induced across-wafer fingerprint, including etch, can be improved by up to 50 percent. This improvement is demonstrated both on test structures and on memory device layers.
Currently, overlay measurements are characterized by a “recipe”, which defines both physical parameters, such as focus and illumination, and software parameters, such as the algorithm to be used and the regions of interest. Setting up these recipes requires both engineering time and wafer availability on an overlay tool, so reducing these requirements will result in higher tool productivity.
One of the significant challenges to automating this process is that the parameters are highly and complexly correlated. At the same time, a high level of traceability and transparency is required in the recipe creation process, so a technique that maintains its decisions in terms of well-defined physical parameters is desirable. Running time should be short, given that the system (automatic recipe creation) is being implemented to reduce overheads. Finally, a failure of the system to determine acceptable parameters should be obvious, so a certainty metric is also desirable. The complex, nonlinear interactions make solution by an expert system difficult at best, especially in the verification of the resulting decision network. The transparency requirements tend to preclude classical neural networks and similar techniques. Genetic algorithms and other “global minimization” techniques require too much computational power (given system footprint and cost requirements). A Bayesian network, however, provides a solution to these requirements. Such a network, with appropriate priors, can be used during recipe creation/optimization not just to select a good set of parameters, but also to guide the direction of search, by evaluating the network state while only incomplete information is available. As a Bayesian network maintains an estimate of the probability distribution of nodal values, a maximum-entropy approach can be utilized to obtain a working recipe in a minimum or near-minimum number of steps. In this paper we discuss the potential use of a Bayesian network in such a capacity, reducing the amount of engineering intervention. We discuss the benefits of this approach, especially improved repeatability and traceability of the learning process, and quantification of uncertainty in decisions made.
We also consider the problems associated with this approach, especially in detailed construction of network topology, validation of the Bayesian network and the recipes it generates, and issues arising from the integration of a Bayesian network with a complex multithreaded application; these primarily relate to maintaining Bayesian network and system architecture integrity.
Most process window analysis applications are capable of deriving the functional focus-dose workspace available to any set of device specifications. Previous work in this area has concentrated on calculating the superpositioned optimum operating points of various combinations of feature orientations or feature types. These studies invariably result in an average performance calculation that is biased by the perturbations contributed by the substrate, reticle, and exposure tool. Many SEMs and optical metrology tools now provide full-feature profile information for multiple points in the exposure field. The inclusion of field spatial information in the process window analysis results in a calculation of greater accuracy and process understanding, because the capabilities of each exposure tool can now be individually modeled and optimized. Such an analysis provides the added benefit that, after the exposure tool is characterized, its process perturbations can be removed from the analysis to provide greater understanding of the true process performance. Process window variables are shown to vary significantly across the exposure field of the scanner. Evaluating the depth of focus and optimum focus-dose at each point in the exposure field yields additional information on the imaging response of the reticle and the scan linearity of the exposure tool's reticle stage. The optimal focus response of the reticle is then removed from a full-wafer exposure, and the results are modeled to obtain the true process response and performance.
Lot-to-lot ADI CD data are generally used to tighten the variation in exposure energy of an exposure tool through an APC feedback system. With decreasing device size, the process window of an exposure tool becomes smaller and smaller; whether the ADI CD reveals the real behavior of a scanner therefore becomes an increasingly critical question, especially for the polysilicon gate layer. The CD-SEM has generally been chosen as the metrology tool for this purpose. Because of the limitations of top-down CD-SEMs, an APC system can easily be misled by improper ADI CD data if the CD is measured on a T-topped photoresist. ArF resist shrinkage and line edge roughness are also traditional causes of improper CD feedback if the user does not operate the CD-SEM carefully. Another candidate for this APC application is spectroscopic-ellipsometry-based scatterometry technology, commonly referred to as SpectraCD. In recent studies, SpectraCD was proven able to reveal profile variation with excellent stability. The feasibility of improving a CD-SEM-based APC system with a SpectraCD-based system in a high-volume manufacturing fab is therefore worth studying.
This study starts from an analysis of historical data for the polysilicon ADI CD of a 130 nm product. Two sets of CDs, measured by the two different metrology tools, were analyzed. In the fab, the CD-SEM was the metrology tool chosen for APC feedback. The CD data measured by SpectraCD over a two-month timeframe were plotted as a CD trend chart for the specific exposure tool. Several trend-ups and trend-downs are observed, even though the overall CD range is small. After a series of analyses, the exposure tool was proven to be quite stable, and the CD data measured by SpectraCD correctly reveal its real behavior. The scanner is shown to have been misled by improper CD feedback. Compared with the CD-SEM, the linearity of the correlation between ADI and AEI CDs, which represents the consistency of the etch bias, can also be improved from 0.4 to 0.8 by SpectraCD. The root causes are still under investigation, but one suspected reason is related to the resist profile. All the analysis results are reported in this paper. The data provided sufficient motivation for switching the fab's APC feedback system from a CD-SEM-based system to a SpectraCD-based system. The results of the new APC system are also discussed.
The quality of automated process control (APC) depends strongly on the number of relevant measurement data points. APC quality for low-volume products is lower than for high-volume products, since there is not enough data to respond to tool-parameter drift or incoming variations. To improve control of low-volume runners, it is proposed to use high-volume runner data to generate feedback for low-volume runners. Product-to-product differences can be minimized by applying a bias. This bias does not remain stable, owing to tool-parameter drift and incoming variations. This paper addresses these issues and reviews different methods for bias control and, if needed, bias change. Intel litho APC uses EWMA time-based weighting for parameters such as overlay, focus, and dose control. The data for each feedback list are segmented by several partition variables (tool, operation, etc.) within a defined expiration period. For low-volume runners, it is possible to widen the partition by adding main-runner data with an applied bias. Historical data show possible bias variability following process or tool drift over time. Different cases of partition bias are reviewed, based on examples of litho parameters. Various algorithms for bias control and bias calculation are reviewed, and simulation studies are performed to predict the impact of deploying this strategy in production.
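The widened-partition idea can be sketched as an EWMA over the dense main-runner history plus a bias estimated from the sparse low-volume readings. The weighting and bias estimator below are illustrative assumptions, not Intel's production algorithm.

```python
def biased_feedback(main_history, paired_low, w=0.3):
    """Hypothetical low-volume feedback: EWMA of the high-volume runner's
    parameter plus an offset (bias) estimated from sparse low-volume data.
    paired_low holds (low_reading, concurrent_main_reading) pairs."""
    ewma = main_history[0]
    for v in main_history[1:]:
        ewma = w * v + (1 - w) * ewma       # dense, up-to-date main-runner state
    if paired_low:                          # sparse product-to-product bias
        bias = sum(l - m for l, m in paired_low) / len(paired_low)
    else:
        bias = 0.0
    return ewma + bias

# main runner drifts upward; the low-volume product reads about +2.1 vs main
main = [10.0, 10.2, 10.4, 10.6]
paired = [(12.1, 10.0), (12.3, 10.2)]
print(round(biased_feedback(main, paired), 3))   # 12.393
```

The benefit is that the drift information comes from the frequently measured product, so the low-volume feedback stays current even when the product itself has not run recently; the cost is the assumption that the bias itself is stable, which is exactly the issue the paper examines.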
This paper investigates the subthreshold behavior of the fin field-effect transistor (FinFET) by solving the 3D Laplace and Poisson equations. Based on the potential distribution inside the fin, the band bending and the change in the band bending (∂ψs) were calculated. Three-dimensional analysis indicates that ∂ψs is smaller (by ~20% for a channel width Tfin of 20 nm) in the middle of the channel than at the Si-SiO2 interface. This decrease in ∂ψs toward the middle of the channel indicates that gate control weakens there. Simulation results show that the S-factor of the device increases as Tfin increases. The S-factors calculated from the Laplace and Poisson equations differ by ~7% for a device with Tfin = 50 nm; this difference gradually decreases, and for smaller channel widths the S-factors calculated using the Laplace and Poisson equations are the same. A comparison of the S-factors obtained from the Laplace and Poisson equations shows that the S-factor obtained from the Poisson equation agrees very well with the reported experimental results. This systematic study of the subthreshold behavior of the FinFET thus shows that it is most appropriate to determine the S-factor of wider-channel devices by solving the 3D Poisson equation with the appropriate doping concentration.
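For reference, the S-factor discussed here is the standard subthreshold swing, tied to the band-bending response the authors analyze (a textbook relation, not this paper's derivation):

```latex
S \;=\; \frac{\partial V_G}{\partial \left(\log_{10} I_D\right)}
\;\approx\; \ln(10)\,\frac{k_B T}{q}
\left(\frac{\partial \psi_s}{\partial V_G}\right)^{-1}
```

At 300 K this gives the ideal limit of about 60 mV/decade when the surface potential tracks the gate voltage perfectly (∂ψs/∂VG → 1); the reduced ∂ψs toward the middle of a wide fin therefore translates directly into a larger S-factor.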
The overriding motivation for yield engineering is profitability. This is achieved through the application of yield management. The first application is to continually reduce waste in the form of yield loss. New products, new technologies, and the dynamic state of the process and equipment keep introducing new ways to cause yield loss. In response, the yield management effort has to continually come up with new solutions to minimize it. The second application of yield engineering is to aid in accurate product pricing. This is achieved by predicting future results of the yield engineering effort. The more accurate the yield prediction, the more accurate the wafer start volume, and the more accurate the wafer pricing. Another aspect of yield prediction pertains to gauging the impact of a yield problem and predicting how long it will last. The ability to predict such impacts again feeds into wafer start calculations and wafer pricing. The question, then, is this: if the stakes in yield management are so high, why are most yield management efforts run like science and engineering projects rather than like manufacturing? In the eighties, manufacturing put the theory of constraints [1] into practice and placed a premium on stability and predictability in manufacturing activities; why can't the same be done for yield management activities? This line of introspection led us to define and implement a business process to manage yield engineering activities. We analyzed the best known methods (BKM) and deployed a workflow tool to make them the standard operating procedure (SOP) for yield management. We present a case study in deploying a Business Process Management solution for semiconductor yield engineering in a high-mix ASIC environment, with a description of the situation prior to deployment, a window into the development process, and a valuation of the benefits.
In this paper, we present a novel broadband radio frequency (RF) sensor technology that can be used for plasma process control, including Fault Detection and Classification (FDC). A plasma is a nonlinear, complex electrical load and therefore generates harmonics of the driving frequency in the electrical circuit. Plasma etch processes depend on chamber pressure, delivered power, wall and substrate temperatures, gas-phase and surface chemistry, chamber geometry, particles, and many other second-order contributions. Any change that affects the plasma's complex impedance will be reflected in the Fourier spectrum of the driving RF power source.
We have found that high-resolution broadband sensing, up to 1 GHz or more than 50 harmonics (for a fundamental frequency of 13.56 MHz), greatly increases the effectiveness of RF sensing for process-state monitoring. This paper describes the measurement sampling technique and the broadband RF sensor, and presents monitoring data from a commercial plasma etch tool.
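The harmonic-spectrum idea described above can be illustrated with a short numerical sketch: sample a toy nonlinear-load waveform and read the amplitudes of the first few harmonics of the 13.56 MHz fundamental directly from its Fourier spectrum. The waveform model, sample rate, and harmonic roll-off here are assumptions for illustration, not the sensor's actual signal chain.

```python
import numpy as np

F0 = 13.56e6   # fundamental drive frequency, Hz
FS = 2.1696e9  # sample rate: exactly 160 samples per RF cycle (> 2 GHz)
N = 160 * 1000 # integer number of cycles -> leakage-free FFT bins

t = np.arange(N) / FS
# Toy nonlinear plasma-like load: fundamental plus geometrically
# decaying harmonics (amplitude 0.5**k at harmonic k).
signal = sum((0.5 ** k) * np.sin(2 * np.pi * k * F0 * t) for k in range(1, 6))

# Normalize so each bin reads the sinusoid's amplitude directly.
spectrum = np.abs(np.fft.rfft(signal)) / (N / 2)

# Amplitude at each harmonic (bins are exact because FS/F0 is an integer).
harmonics = {k: spectrum[int(round(k * F0 * N / FS))] for k in range(1, 6)}
```

Because the capture spans an integer number of RF cycles, each harmonic falls on an exact FFT bin and no windowing is needed; a real sensor would have to handle spectral leakage and frequency drift.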
Critical dimension (CD), or linewidth, is one of the most critical variables in the lithography process, with the most direct impact on device speed and integrated-circuit performance. The absorption coefficient is one of the photoresist properties that can affect CD uniformity: it determines the exposure dose required to print the features, so nonuniformity in the absorption coefficient across the substrate leads to nonuniformity in the linewidth. This paper presents an innovative approach to controlling within-wafer photoresist absorption coefficient uniformity. Previous work in the literature could only control the average uniformity of the absorption coefficient. Our approach uses an array of spectrometers positioned above a multizone bakeplate to monitor the absorption coefficient, which can be extracted from the spectrometer data using standard optimization algorithms. With these in-situ measurements, the temperature profile of the bakeplate is controlled in real time by manipulating the heater power distribution with a conventional proportional-integral (PI) control algorithm. We have experimentally obtained a repeatable improvement in absorption coefficient uniformity from wafer to wafer and within wafer; a 50% improvement in absorption coefficient uniformity is achieved.
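A minimal sketch of the per-zone PI loop the abstract describes (not the authors' implementation): each bakeplate zone adjusts its heater power to drive the measured absorption coefficient toward a common target. The first-order zone model, its gain and time constant, and the PI gains are all assumptions for illustration.

```python
class PIController:
    """Textbook proportional-integral controller."""

    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral


def simulate_zone(target, gain=1.0, tau=5.0, dt=0.1, steps=500):
    """Toy zone: absorption coefficient relaxes toward gain * heater power."""
    ctrl = PIController(kp=2.0, ki=1.0, dt=dt)
    alpha = 0.0
    for _ in range(steps):
        power = ctrl.update(target, alpha)
        alpha += dt / tau * (gain * power - alpha)  # first-order lag plant
    return alpha


final = simulate_zone(target=0.8)  # converges to the target setpoint
```

With the integral term, the zone settles on the setpoint with zero steady-state error even though the plant gain is unknown to the controller; in the real system one such loop would run per heater zone against its own spectrometer reading.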
In today’s highly competitive markets, it is imperative for a manufacturing fab to adapt to the latest technologies, remain flexible enough to change part mix to meet dynamic market demands, and at the same time achieve high throughput to maximize productivity. Adjusting in this multi-part, mixed-technology environment requires advanced control mechanisms that ensure the correct control settings are used for each wafer processed. These circumstances are compelling fabs globally to seek the ability to introduce a multitude of parts with the correct control settings without having to relearn before processing. In this paper, the concept of controller state is explored to address these issues by studying mechanisms to predict post-event settings. The concept of inheritance is explored to expand the controller-state capability to predict optimized process settings across parts and tools. This paper describes the development and implementation of these mechanisms at Infineon Technologies, Richmond, to reduce send-ahead (SAHD) and rework events. The benefits of controller state for handling tool events and part introductions are illustrated using events such as scheduled preventive maintenance, hardware upgrades, porting of parts across tools, and reintroduction of low-runner parts into production.
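One way to picture controller-state inheritance is as a fallback lookup: when a part has no recent history on a tool (a new part, a reintroduced low runner, or a tool fresh from maintenance), the controller seeds its state from progressively coarser contexts instead of forcing a send-ahead wafer. The function, the fallback order, and the numeric states below are hypothetical illustrations, not the mechanism deployed at Infineon.

```python
def seed_state(states, tool, part):
    """Return a starting controller state for (tool, part), inheriting
    from coarser contexts when the exact one has no history."""
    for key in [(tool, part),   # exact context: use it directly
                (None, part),   # same part averaged across tools
                (tool, None),   # tool baseline across parts
                (None, None)]:  # fab-wide default
        if key in states:
            return states[key]
    raise KeyError("no state available to inherit from")


# Hypothetical learned states (e.g. multiplicative process offsets).
states = {
    ("etch01", "partA"): 1.05,  # partA has run on etch01 recently
    (None, "partB"): 0.98,      # partB only has cross-tool history
    ("etch01", None): 1.00,     # etch01 tool baseline
}

exact = seed_state(states, "etch01", "partA")      # direct history
inherited = seed_state(states, "etch01", "partB")  # inherits cross-tool state
new_part = seed_state(states, "etch01", "partC")   # falls back to tool baseline
```

The appeal of this shape is that every context the fab has ever learned contributes a prior, so a low-runner part reintroduced after months still starts from something better than a blind default.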
To attain quick turnaround time (TAT) and high yield, it is very important to remove all problems affecting a semiconductor volume production line. For this purpose, we have used a lithography management system (LMS) as an advanced process control system. The LMS stores critical dimension and overlay inspection results, as well as the log data of the exposure tool, in a relational database. This enables a quick and efficient grasp of current productivity and helps to identify the causes of errors. Furthermore, we developed a mining tool for factor analysis on the LMS, called the log data extraction and correlation miner (LMS-LEC). Even when correlation across all the data is low, a high correlation may exist between parameters within a certain data domain; LMS-LEC can mine such correlations easily. With this tool, we can discover previously unknown error sources buried in the vast quantity of data handled by the LMS and thereby increase the effectiveness of the exposure and inspection tools. LMS-LEC is an extremely useful software mining tool for “equipment health” monitoring, advanced fault detection, and sophisticated data analysis.
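The core observation, that a correlation invisible in the pooled data can be strong within one domain, is easy to demonstrate numerically. This sketch is only an illustration of that statistical effect, not the LMS-LEC mining algorithm; the synthetic domains and their trends are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Domain A: y tracks x; domain B: y runs opposite to x. Pooled together,
# the two trends cancel and the overall correlation looks negligible.
x_a = rng.normal(size=200)
y_a = x_a + 0.1 * rng.normal(size=200)
x_b = rng.normal(size=200)
y_b = -x_b + 0.1 * rng.normal(size=200)


def pearson(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    return float(np.corrcoef(x, y)[0, 1])


pooled = pearson(np.concatenate([x_a, x_b]), np.concatenate([y_a, y_b]))
domain_a = pearson(x_a, y_a)  # strong positive within domain A
domain_b = pearson(x_b, y_b)  # strong negative within domain B
```

A miner that only screens pooled correlations would discard this parameter pair; conditioning on the data domain (here, the subgroup label) is what exposes the relationship.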
In-line measurements are used to monitor semiconductor manufacturing processes for excessive variation using statistical process control (SPC) chart techniques. Systematic spatial wafer variation often occurs in a recognizable pattern across the wafer that is characteristic of a particular manufacturing step, and visualization tools are used to associate these patterns with the specific manufacturing steps preceding the measurement. Acquiring the measurements is an expensive and slow process, so the number of sites measured on a wafer must be minimized while still providing sufficient data to monitor the process. We address two key challenges to effective wafer-level monitoring. The first is to select a small sample of inspection sites that maximizes detection sensitivity to the patterns of interest while minimizing the confounding effects of other types of wafer variation. The second is to develop a detection algorithm that maximizes sensitivity to the patterns of interest without exceeding a user-specified false-positive rate. We propose new sampling and detection methods, both based on a linear regression model with distinct and orthogonal components. The model is flexible enough to include many types of systematic spatial variation across the wafer. Because the components are orthogonal, the degree of each type of variation can be estimated and detected independently with very few samples. A formal hypothesis test can then be used to determine whether specific patterns are present. This approach enables one to determine the sensitivity of a sample plan to the patterns of interest and the minimum number of measurements necessary to adequately monitor the process.
Advanced Process Control (APC) is now mainstream practice in the semiconductor manufacturing industry. Over the past decade and a half, APC has evolved from a “good idea” and “wouldn’t it be great” concept into mandatory manufacturing practice. APC development has dealt primarily with two major thrusts, algorithms and infrastructure, and the line between them has often been blurred. The algorithms have evolved from very simple single-variable solutions to sophisticated, cutting-edge adaptive multivariable (input and output) solutions. Recent spending patterns demand that the economics of a comprehensive APC infrastructure be completely justified for any cost-conscious manufacturer. Some studies suggest integration costs as high as 60% of total APC solution costs; such prohibitive figures clearly diminish the return on APC investments and have limited the acceptance and development of pure APC infrastructure solutions in many fabs. Modern APC solution architectures must satisfy a wide array of requirements, from very manual R&D environments to very advanced, automated “lights out” manufacturing facilities.
A majority of commercially available control solutions, and most in-house developed solutions, lack the important attributes of scalability, flexibility, and adaptability, and hence require significant resources for integration, deployment, and maintenance. Many APC improvement efforts have been abandoned or delayed due to legacy systems and inadequate architectural design. Recent advancements in the software industry (service-oriented architectures) have delivered technologies well suited to delivering scalable, flexible, and reliable solutions that can integrate seamlessly into any fab’s existing systems and business practices. In this publication we evaluate the various architectural attributes fabs require and illustrate the benefits of a service-oriented architecture in satisfying these requirements. Blue Control Technologies has developed an advanced service-oriented run-to-run control system that addresses these requirements.
Edge defocus on a Nikon S204 scanner can be reduced by several techniques: using main software MCSV version 3.44 or above, setting the disable range, and choosing the scan direction. These measures reduce edge defocus but do not eliminate it at the 3 o'clock and 4 o'clock positions. Wafer flatness was studied to investigate the cause of this residual defocus, and the chuck shape was found to be the key to avoiding defocus at these two positions. Modifying the chuck shape can reduce yield loss.
Critical dimension (CD), or linewidth, is one of the most critical variables in the lithography process, with the most direct impact on device speed and integrated-circuit performance. The resist development step is one of the critical steps in the lithography process that can affect CD uniformity. The development rate influences CD uniformity both from wafer to wafer and within wafer: nonuniformity in the time to reach endpoint results from nonuniformity in film thickness, exposure dose, and resist chemistry, which in turn leads to nonuniformity in the linewidth. Conventional approaches to controlling this process include monitoring the endpoint of the develop process and adjusting the development time or developer concentration from wafer to wafer or run to run. This paper presents an innovative approach to controlling the photoresist development rate in real time by monitoring the photoresist thickness. Our approach uses a spectrometer positioned above a bakeplate to monitor the development rate; the film thickness can be extracted from the spectrometer data using standard optimization algorithms. With these in-situ measurements, the temperature profile of the bakeplate is controlled in real time by manipulating the heater power distribution using a conventional proportional-integral (PI) control algorithm. We have experimentally obtained a repeatable improvement in the time to reach endpoint for the develop process from wafer to wafer; nonuniformity of less than 5% in the time to reach endpoint has been achieved.
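A hedged sketch of the endpoint idea: the develop endpoint can be taken as the moment the monitored resist thickness trace stops changing. The exponential thinning model, sample spacing, and rate threshold below are assumptions for illustration, not the authors' extraction algorithm.

```python
import math


def develop_endpoint(thickness, times, rate_tol=1.0):
    """Return the first time the thinning rate (nm/s) falls below rate_tol."""
    for i in range(1, len(thickness)):
        rate = (thickness[i - 1] - thickness[i]) / (times[i] - times[i - 1])
        if rate < rate_tol:
            return times[i]
    return times[-1]  # endpoint not reached within the trace


# Toy trace: 500 nm of resist clearing exponentially with a 10 s time
# constant, sampled every 0.1 s.
times = [0.1 * i for i in range(600)]
trace = [500.0 * math.exp(-t / 10.0) for t in times]
endpoint = develop_endpoint(trace, times)  # roughly 10 * ln(50) ~ 39 s
```

Given per-site thickness traces from an array of such measurements, the spread of these endpoint times is exactly the wafer-to-wafer and within-wafer nonuniformity the controller in the abstract is trying to reduce.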
Tight control of the critical dimensions (CDs) of integrated circuits (ICs) is required to achieve the desired circuit performance, and it becomes more and more important as IC CDs shrink. Phenomena and solutions for inter-field and intra-field CD errors have been widely studied for years. One well-known intra-field CD error is the so-called developer micro-loading effect, caused by different pattern-density loadings across the exposure field; in other words, the greater the difference in pattern density, the larger the expected CD errors. Some circuit layouts, e.g. thick gate oxide layers of dual-gate-oxide processes and gate layers of embedded memory products, raise this kind of across-field pattern-density concern because they contain areas of differing pattern density. Some studies have shown that eliminating the by-products formed during development can reduce the developer micro-loading effect. With a multi-step development process (puddle, static development, dry, puddle, static development, rinse/dry), the by-products can be removed and better CD uniformity achieved. In this paper, optimization of the first puddle time in the multi-step development process is found to be the most critical factor in achieving uniform intra-field CDs. The purpose of the first puddle step is not only to remove the by-products but also to control their influence so as to achieve uniform intra-field CDs. Once most of the by-products generated during the whole development process have been carried away by the first puddle step, the optimum static development time is needed to obtain the minimum intra-field CD difference. However, photoresists with different chemical formulations are not expected to have identical optimum puddle times, because each by-product species undergoes different chemical reactions, e.g. i-line PRs vs. DUV PRs, or annealing-type DUV PRs vs. acetal-type DUV PRs. These comparisons are explained in detail in this paper.
Finally, the source of the by-products during development was also identified to validate the multi-step development process.
Focus and exposure dose control in lithography is a key challenge for CD (critical dimension) control at the 90 nm technology node and beyond. In particular, more accurate focus control will be necessary for low-power MOS devices. The focus and dose line navigator (FDLN) is one candidate for an in-line controller. The FDLN methodology involves two steps. First, a focus-exposure matrix (FEM) is created on a test wafer to build a library as supervised data; the library relates the topography of the photoresist patterns (linewidth CD, height HT, and sidewall angle SWA) to the FEM exposure conditions. Second, a standard production wafer is measured and the raw data are fed into the library, which extrapolates the focus and dose and provides them to the user. Using FDLN, the deviation of current volume production from the best focus and dose conditions can be obtained. In this work, we evaluated FDLN using an optical CD measurement tool and process wafers; STI, Cu-CMP, and metal wafers were used as actual processes. We acquired several FEM sets of image features from wafers exposed with an ArF scanner. According to our experiments, the estimation precision is below 24 nm for focus and below 1.7% for dose, and the CD difference within a chip can be reduced to one third compared with the conventional QC method. These results suggest that FDLN can serve as an in-line focus controller for volume production, enabling progression toward Advanced Process Control (APC).