In today’s advanced semiconductor industry, with its multitude of new products and device requirements, lithography process changes are inevitable due to tighter overlay specifications and the increasing influence of process effects. To minimize the overlay impact of wafer-to-wafer and lot-to-lot variation, lithography engineers often need to adjust wafer alignment and overlay strategies to optimize product yield. Overlay simulation methods are often used to estimate whether new alignment and/or overlay strategies can achieve better overlay performance. By removing the old corrections from the original alignment and overlay data and then applying new corrections, the overlay performance can be readily predicted, saving significant time. However, available software suites only simulate the current-layer overlay. Overlay risks therefore remain on subsequent layers, with no effective simulation method to predict the impact of changes to the alignment and/or overlay strategies on the current layer. In our study, we demonstrate a new method to simulate the overlay performance on subsequent layers by re-calculating the current-layer exposure grid. Our dataset has three layers: L0, L1, and L2. On L1 (the current layer), we apply a new alignment model and overlay control strategy with a feedforward solution. Then, we re-calculate the exposure grid to generate a new (virtual) alignment and overlay dataset for L2 (the following layer). Finally, we estimate the overlay impact on L2 with the new alignment and overlay dataset and summarize the benefits of this new cross-layer overlay simulation method.
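A minimal sketch of the de-correct/re-correct idea behind such a simulation, assuming a simple 6-parameter linear overlay model; the function names, parameter layout, and sign conventions are illustrative assumptions, not the tool's API:

```python
# Hedged sketch: names and sign conventions are illustrative assumptions.
import numpy as np

def linear_model(params, x, y):
    """Evaluate a 6-parameter linear overlay model (translation,
    magnification, rotation/non-orthogonality) at wafer coordinates."""
    tx, ty, mx, my, rx, ry = params
    dx = tx + mx * x - ry * y
    dy = ty + my * y + rx * x
    return dx, dy

def simulate_cross_layer(ovl_l1, x, y, old_params, new_params):
    """Replace the old L1 correction with a new one: return the predicted
    L1 overlay and the exposure-grid shift that L2 would inherit."""
    old_dx, old_dy = linear_model(old_params, x, y)
    new_dx, new_dy = linear_model(new_params, x, y)
    # De-correct with the old model, then apply the new model
    # (assumed sign convention: corrections subtract from raw overlay).
    ovl_new = ovl_l1 + np.stack([old_dx - new_dx, old_dy - new_dy], axis=1)
    # The change of the L1 grid propagates into the virtual L2 dataset.
    grid_shift = np.stack([new_dx - old_dx, new_dy - old_dy], axis=1)
    return ovl_new, grid_shift
```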
The semiconductor industry has largely used the mean+3sigma overlay dispositioning metric for over 20 years. As technology nodes continue to shrink, this metric no longer represents the most accurate overlay condition on the wafer. Continued use of the traditional metric leads to non-optimal rework decisions and potential yield loss in high-volume manufacturing (HVM). We propose an alternative overlay dispositioning metric that reduces rework without compromising accuracy or sampling. The proposed metric is called ‘number-of-dies-within-spec’. It is obtained by first evaluating the overlay model on a dense grid, followed by comparing the grid values against the spec limits. Based on that, each die can be evaluated, and the dies whose overlay stays within the spec limits are counted to obtain the wafer key performance indicator (KPI) “number-of-dies-within-spec”. This paper shows the rework gain for two layers when using our proposed metric against the traditional mean+3sigma dispositioning.
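A minimal sketch of the die-counting step, assuming a fitted overlay model callable on wafer coordinates; the names and the per-die grid density are illustrative:

```python
import numpy as np

def dies_within_spec(model_fn, die_centers, die_size, spec, n=5):
    """Count dies whose modeled overlay stays within spec on an n-by-n
    grid of points per die. model_fn maps (x, y) arrays to (dx, dy)."""
    good = 0
    offs = np.linspace(-0.5, 0.5, n) * die_size
    gx, gy = np.meshgrid(offs, offs)
    for cx, cy in die_centers:
        dx, dy = model_fn(cx + gx, cy + gy)
        # A die counts only if every grid point is within spec in x and y.
        if np.all(np.abs(dx) <= spec) and np.all(np.abs(dy) <= spec):
            good += 1
    return good
```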
In-device overlay is an important contributor to the on-product overlay budget. Well known are overlay bias effects, i.e., differences between overlay targets and in-product features; these can be corrected by a non-zero overlay correction in the run-to-run system. In this paper, we examine micro-scale effects, which occur on scales of a few micrometers, for which no exposure tool correction is possible. With a high-voltage SEM, we use a novel method to investigate both logic and memory wafers and identify several micro-scale effects that are a significant contributor to the on-product overlay budget. We characterize the behavior across the wafer as well as local variations. In the root cause analysis, we find several possible explanations, including mask writer issues, local stress, and the impact of the product pitch.
Overlay control continues to be a critical aspect of successful semiconductor lithography processing, with overlay control systems becoming more and more elaborate to meet the requirements of advanced semiconductor nodes. Sampling optimization is especially important, including the number of overlay measurements to perform on each wafer, the number of wafers to measure per lot, and where exactly to measure on each wafer. The conventional sampling optimization methodology is to collect dense data for a short period and use this data to optimize the locations to measure on the wafer. In recent years, rule-based sampling was introduced to relax this data requirement and improve the time to result. However, in both scenarios, one single sample plan is generated in offline optimization, which is then used in high volume manufacturing (HVM) without change, hence the name “static sampling”. In this paper, we introduce a “dynamic sampling” approach, where multiple rule-based sample plans are generated that complement each other by measuring different locations on the wafer while meeting spatial and population balancing criteria. These sample plans can then be used in an alternating manner on a per-wafer basis (wafer-by-wafer dynamic sampling) or a per-lot basis (lot-by-lot dynamic sampling) in HVM. In this paper, we first demonstrate the risks and the inherent trade-offs associated with static sampling by using overlay budget breakdowns and best/worst-case advanced process control (APC) simulations. We then characterize the overlay control improvement potential of dynamic sampling schemes through APC simulations using multiple metrics: on-product overlay, rework overlay, and monitoring accuracy. Finally, we perform an on-product overlay versus throughput cost-function analysis and determine which dynamic sampling scheme is most useful for which throughput conditions.
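A minimal sketch of how complementary plans could be rotated per wafer or per lot; the plan objects and the lot size are placeholders:

```python
def assign_plans(wafer_ids, plans, lot_size=None):
    """Alternate complementary sample plans wafer-by-wafer, or
    lot-by-lot when lot_size is given (e.g. 25 wafers per lot)."""
    assignment = {}
    for i, wid in enumerate(wafer_ids):
        # Wafer-by-wafer: cycle every wafer; lot-by-lot: cycle every lot.
        idx = (i // lot_size if lot_size else i) % len(plans)
        assignment[wid] = plans[idx]
    return assignment
```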
Overlay requirements are becoming more demanding as semiconductor device design dimensions continue to shrink. Thus, it is increasingly crucial to control overlay under tightened specifications to maximize product yield. To meet these requirements, selecting the best overlay model is important to manage the lithography process with confidence. A well-characterized overlay model is essential in the wafer production process, as it becomes the gauge used to optimize product yield, reduce rework costs, and shorten cycle time. In this paper, we introduce the powerful machine-learning method of cross-validation (CV), which helps to improve the prediction capability of an overlay model. This method provides a numerical value that indicates how well the overlay model predicts the misalignment of a new wafer. Our test results show that the 5th-order model exhibits an overfitting problem, while the 4th-order model shows good performance. We also discuss how we apply CV to the commonly used correction-per-exposure (CPE) models.
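A hedged sketch of the model-order comparison via k-fold cross-validation, using scikit-learn; the data layout (wafer coordinates in `xy`, overlay residuals in `dx`) is an assumption:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def cv_score(order, xy, dx, folds=5):
    """Mean cross-validated RMSE of a polynomial overlay model of the
    given order; a higher order that scores worse is overfitting."""
    model = make_pipeline(PolynomialFeatures(order), LinearRegression())
    scores = cross_val_score(model, xy, dx, cv=folds,
                             scoring="neg_root_mean_squared_error")
    return -scores.mean()

# Example: compare 4th- vs 5th-order fits of the x-overlay residuals.
# xy = np.column_stack([x, y])
# print(cv_score(4, xy, dx), cv_score(5, xy, dx))
```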
Integrated circuit (IC) fabrication requires a long sequence of complex process steps. Among them, photolithography patterning plays the most important role, defining the dimensions, doping regions, and interconnections for each device. With the advancement of lithographic processing, minimum feature sizes continue to shrink and devices become denser. At the same time, the specifications for overlay accuracy, wafer critical dimension uniformity, and acceptable focus deviations also become tighter. Hence, even nanometer-sized defects on wafer substrates can have a crucial impact on the quality of the lithographic exposures and limit the performance of such devices. Detection and elimination of such surface defects (“focus spots”) at an early stage of processing is of primary concern to prevent losses in manufacturing productivity and significant product yield degradation. In this paper, we present a focus spot monitoring framework that accurately detects focus spots and chuck spots using wafer leveling data. We discuss different strategies for detecting focus spots, classifying them, and monitoring and correcting them effectively. We evaluate existing focus spot monitoring solutions and how to improve upon them. Altogether, a stable, reliable focus spot monitoring solution is described for optimal focus corrections and rework decisions.
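As a rough illustration of leveling-data-based detection (not the specific framework described here), one could threshold the residual after removing the smooth wafer shape and exclude the edge region where roll-off dominates; all thresholds and sizes are placeholders:

```python
import numpy as np
from scipy.ndimage import uniform_filter, label

def find_focus_spots(height_map, threshold, edge_margin=3):
    """Flag focus-spot candidates in a gridded leveling height map (nm)."""
    # Crude wafer-shape removal: subtract a heavily smoothed copy.
    residual = height_map - uniform_filter(height_map, size=15)
    mask = np.abs(residual) > threshold
    # Ignore edge rows/columns where roll-off causes false positives.
    mask[:edge_margin, :] = mask[-edge_margin:, :] = False
    mask[:, :edge_margin] = mask[:, -edge_margin:] = False
    labels, n_spots = label(mask)  # group connected pixels into spots
    return labels, n_spots
```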
In advanced technology nodes, the focus window becomes tighter to achieve smaller CD features while maintaining or improving product yields. Over the past decades, focus spot monitoring (FSM) has been a critical topic in high-volume manufacturing, not only for minimizing the impact of contamination on focus performance but also because of scanner productivity concerns when wafer table cleaning needs to be executed. Although the exposure tools offer a dedicated FSM option combined with automatic wafer table cleaning, users must carefully design the thresholds and monitored areas per product and layer to prevent false positive alarms that impact scanner productivity. In some cases, a small focus spot threshold can cause more false positive alarms in the wafer edge area due to the edge roll-off effect on the wafer table and steep wafer topography, which makes it difficult to detect small contamination-induced focus spots. In our study, we compare the classic FSM provided by the exposure tools to a newly developed automated FSM mechanism. Several mathematical steps and approaches are implemented in our new type of FSM to reduce false positive focus spot alarms. For comparison, we evaluated the performance of the classic and new FSM methods on different layers exhibiting special topography, edge roll-off effects, or strong intra-field signatures. Finally, a robust and user-friendly FSM method is demonstrated, showing that even with a tight threshold, false positive alarms, especially around the wafer edge area, can be fully eliminated.
In a leading-edge high-volume manufacturing fab, lithographers focus on finding a suitable alignment layout strategy that covers process-induced overlay variation. However, minimizing the scanner cross-chuck overlay impact also draws attention, due to the wafers-per-hour (WPH) loss caused by chuck dedication. In this paper we evaluate a novel algorithm to analyze lithography scanner process/metrology data and introduce a new KPI called “model accuracy” for alignment sampling layout creation, which takes a robustness index into account to capture wafer-to-wafer and chuck-to-chuck variation. Combined with simulated overlay performance, an optimal alignment layout strategy is recommended for maximum coverage of cross-chuck overlay, which leads to maximum productivity.
With semiconductor technology approaching and exceeding 10 nm design rules, the quality requirements for photomasks are continuously tightening. One of the crucial parameters is improved control of the critical dimension (CD) across the photomask. As long as linearity and through-pitch effects are not involved, the quality measure is typically defined as CD uniformity. This parameter is normally measured on repeating structures of the same size and shape, which are not necessarily placed in identical environments. Density-dependent process effects, also called loading effects (LE), pose a challenge for the required CD control. There are several possible contributors to this kind of error within the mask manufacturing flow, such as etch-driven loading effects, fogging effects during 50 kV exposure, and develop-driven loading effects. All of these operate at different working ranges, starting at millimeters and going down to a scale of only a few hundred μm. It is comparably easy to derive models for large-scale phenomena like etch loading or fogging effects; in contrast, it is not as straightforward to find suitable models for very short-range effects. A large number of CD measurements taken by CD SEM is needed to identify such signals of low magnitude and short scale, which makes the setup very resource-intensive. Furthermore, this methodology requires artificial designs and test structures which aim to sample only the effect of interest. In this paper we present a strategy which combines CD SEM measurements from dedicated test masks with the results from regular product masks. The aim is the derivation and validation of the loading effect correction range and strength. In a first step, the data from the test masks is analyzed to set up the basic correction parameters. Following this, the approach is supplemented by product data, where we combine mask CD and design data. The clear-field distribution of the design is convolved with respect to a hierarchy of length scales. This data is the input for a support vector machine analysis. Thus, we employ a flat machine learning algorithm, but the input data has been set up to reflect multiple layers of convolution. This particular approach has been chosen because each convolution length scale is associated with mask process properties, thus alleviating the burden of interpretation which typically mars models obtained by machine learning approaches.
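A sketch of the multi-scale convolution feature construction feeding the SVM, assuming a rasterized clear-field density map; the Gaussian kernel choice and all names are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.svm import SVR

def density_features(clear_field, scales_px):
    """Convolve the rasterized clear-field density with kernels of
    increasing length scale; each scale becomes one feature column."""
    feats = [gaussian_filter(clear_field, s).ravel() for s in scales_px]
    return np.column_stack(feats)

# Fit loading-effect strength per length scale from measured CD deltas:
# X = density_features(clear_field, scales_px=[2, 8, 32, 128])
# svr = SVR(kernel="linear").fit(X, cd_delta.ravel())
```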
KEYWORDS: Machine learning, Data modeling, Critical dimension metrology, Performance modeling, Process control, Visualization, Data centers, Solids, Data processing, Optical inspection
The currently increasing demand for photomasks in the regime of the 14 nm technology drives many initiatives towards capacity and throughput increases of existing production lines. Such improvements are facilitated by improved control mechanisms for the tools and processes used within a production line. While process control of long-range parameters such as the average CD behavior is demanding yet conceptually well understood, other parameters such as the small-scale CD properties are quite often elusive to process control. These properties often require a dedicated test mask to be processed in order to be validated. In this paper we introduce a systematic approach towards a product-based monitoring of small-scale CD behavior which uses a CD characteristic extracted from the defect inspection process. This characteristic represents the influence of CD-relevant processes from 200 μm up to 4000 μm. Large variations in the scale and magnitude of the CD characteristic are induced by layout-specific design variations. However, the shape of these distinct curves is remarkably similar, which enables their use for monitoring as well as controlling the mask processes on the above-stated spatial scales. In this paper it is demonstrated that a meaningful process evaluation can be performed by using the classification capabilities of support vector machines. The small-scale CD characteristics presented in Figure 1 originate from two distinct tools. Matching of the two tools can be assessed by training a support vector machine to classify the small-scale CD characteristics according to their origin. The classification performance on the resampled training set as well as on the validation set is a robust measure for tool matching. The results of this approach are depicted in Figure 2. The left panel shows the AUC statistics of bootstrapping resamples for tool comparison “A”. In this case no noticeable difference between the two tools is found (an average AUC of 0.55 suggests no learnable difference). This is contrasted by tool comparison “B”, where the classifier has an average AUC of 0.75, indicating a learnable difference in the tool performances. This result is backed by the process understanding of both tool types.
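A hedged sketch of the bootstrap AUC tool-matching test, assuming each row of `curves` is one small-scale CD characteristic and `tool_labels` marks its tool of origin:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def matching_auc(curves, tool_labels, n_boot=200, seed=0):
    """Bootstrap AUCs of an SVM that tries to tell two tools apart from
    their CD characteristics; AUC near 0.5 means no learnable difference,
    AUC well above 0.5 indicates a real tool difference."""
    rng = np.random.default_rng(seed)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(curves), len(curves))  # bootstrap resample
        X_tr, X_te, y_tr, y_te = train_test_split(
            curves[idx], tool_labels[idx],
            test_size=0.3, stratify=tool_labels[idx])
        clf = SVC(probability=True).fit(X_tr, y_tr)
        aucs.append(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
    return np.mean(aucs), np.std(aucs)
```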
KEYWORDS: Critical dimension metrology, Machine learning, Data modeling, Process control, Visualization, Data centers, Solids, Detection and tracking algorithms, Data processing, Optical inspection
The currently increasing demand for photomasks in the regime of the 14 nm technology drives many initiatives towards capacity and throughput increases of existing production lines. Such improvements are facilitated by improved control mechanisms for the tools and processes used within a production line. While process control of long-range parameters such as the average CD behavior is demanding yet conceptually well understood, other parameters such as the small-scale CD properties are quite often elusive to process control. These properties often require a dedicated test mask to be processed in order to be validated. In this paper we introduce a systematic approach towards a product-based monitoring of small-scale CD behavior which uses a CD characteristic extracted from the defect inspection process. This characteristic represents the influence of CD-relevant processes from 200 μm up to 4000 μm. Large variations in the scale and magnitude of the CD characteristic are induced by layout-specific design variations. However, the shape of these distinct curves is remarkably similar, which enables their use for monitoring as well as controlling the mask processes on the above-stated spatial scales. In this paper it is demonstrated that a meaningful monitoring of the CD characteristic can be enabled through the use of machine learning methods. A classical monitoring scheme is typically based on measuring the deviation of each curve from the average behavior. However, monitoring a curve and deviations thereof often requires evaluating the overall shape of the curve. Thus we propose a monitoring concept which uses a support vector machine to learn the shapes of the CD characteristics. It is demonstrated that a statistical model of the CD characteristics can be trained and used to monitor single excursions (see Figure 1) as well as overall process changes.
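One plausible realization of shape-based excursion monitoring, here with a one-class SVM trained on in-control curves; the paper does not specify this exact variant, so treat it as a sketch:

```python
from sklearn.svm import OneClassSVM

def train_shape_monitor(reference_curves, nu=0.05):
    """Learn the typical shape of the small-scale CD characteristic from
    in-control curves; new curves predicted as -1 are excursions."""
    monitor = OneClassSVM(kernel="rbf", nu=nu, gamma="scale")
    return monitor.fit(reference_curves)

# flags = train_shape_monitor(ref_curves).predict(new_curves)  # -1 = excursion
```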
With the substantial surge in the demand for high-end masks, it becomes increasingly important to raise the capacity of the corresponding production lines. To this end, the efficient qualification of matching tools and processes within a production line is of utmost relevance. Matching is typically judged by processing dedicated lots on the new tool or process. On the one hand, the number of qualification lots should be very small, as the production of qualification plates is expensive and uses capacity of the production corridor. On the other hand, the strict requirements of high-end products induce very tight specification limits on the matching criteria. It is thus often very difficult to assess tool or process matching on the basis of a small number of lots. In this paper we expound a machine-learning-based strategy which assesses the mask characteristics of a qualification plate by learning the typical behavior of these characteristics within the production line variations. We show that by careful selection of reference production plates, as well as by setting specification limits based on the production behavior, we can manage the qualification tasks efficiently using a small number of masks. The specification characteristics as well as the specific limits are selected and determined using a Naïve Bayes learner. The resulting performance for the prediction of tool and process matching is assessed by considering the resulting receiver operating characteristic (ROC) curve. As a result we obtain an approach for the assessment of qualification data which enables engineers to assess tool and process matching using a small amount of matching data under the constraint of substantial measurement uncertainties. As an outlook, we discuss how this approach can be used to examine the reverse question of detecting process failures, i.e., the automated ability to raise a flag when the current production characteristics start to deviate from their typical behavior. Overall, this paper shows how the rapidly evolving field of machine learning increasingly impacts the semiconductor production process.
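A minimal sketch of Naive Bayes qualification scoring with an ROC-based threshold, using scikit-learn; the feature and label layout are assumptions:

```python
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import roc_curve, auc

def qualify(reference_feats, reference_ok, candidate_feats):
    """Train a Naive Bayes learner on mask characteristics from reference
    production plates (labelled matched / not matched) and score the
    qualification plates of a new tool or process."""
    nb = GaussianNB().fit(reference_feats, reference_ok)
    return nb.predict_proba(candidate_feats)[:, 1]

# ROC on held-out reference plates to pick the acceptance threshold:
# fpr, tpr, thr = roc_curve(y_holdout, qualify(X_train, y_train, X_holdout))
# print("AUC:", auc(fpr, tpr))
```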
While the concepts of machine learning and artificial intelligence continue to grow in importance in the context of internet-related applications, their use is still in its infancy when it comes to process control within the semiconductor industry. Especially the branch of mask manufacturing presents a challenge to the concepts of machine learning, since the business process intrinsically induces pronounced product variability against the background of small plate numbers. In this paper we present the architectural setup of a machine learning algorithm which successfully deals with the demands and pitfalls of mask manufacturing. A detailed motivation of this basic setup, followed by an analysis of its statistical properties, is given. The machine learning setup for mask manufacturing involves two learning steps: an initial step identifies and classifies the basic global CD patterns of a process; these results form the basis for the extraction of an optimized training set via balanced sampling. A second learning step uses this training set to obtain the local as well as global CD relationships induced by the manufacturing process. Using two production-motivated examples, we show how this approach is flexible and powerful enough to deal with the exacting demands of mask manufacturing. In one example we show how dedicated covariates can be used in conjunction with increased spatial resolution of the CD map model to deal with pathological CD effects at the mask boundary. The other example shows how the model setup enables strategies for dealing with tool-specific CD signature differences. In this case the balanced sampling enables a process control scheme which allows usage of the full tool park within the specified tight tolerance budget. Overall, this paper shows that the current rapid developments of machine learning algorithms can be successfully used within the context of semiconductor manufacturing.
KEYWORDS: Photomasks, Critical dimension metrology, Semiconducting wafers, Manufacturing, Scanning electron microscopy, Error analysis, Semiconductors, Process control, Lithography, Control systems
In semiconductor fabrication, the translation of the final product requirements into specific targets for each component of the manufacturing process is one of the most demanding tasks. This involves the careful assessment of the error budgets of each component as well as the sensible balancing of the costs implied by the requirements. Photolithographic masks play a pivotal role in semiconductor fabrication, which attributes a crucial role to mask error budgeting within the overall wafer production process. Masks with borderline performance with respect to the wafer fabrication requirements have a detrimental effect on the wafer process window, inducing delays and costs. However, prohibitively strict mask specifications will induce large costs and delays in the mask manufacturing process. Thus, setting smart control mechanisms for mask quality assessment is highly relevant for an efficient production flow. To this end, GLOBALFOUNDRIES and the AMTC have set up a new mask specification check to enable a smart ship-to-control process for mask manufacturing. Within this process, the mask CD distribution is checked as to whether it is commensurate with the advanced dose control capabilities of the stepper in the wafer factory. If this is the case, masks with borderline CD performance remain usable within the manufacturing process, as their signatures can be compensated. In this paper we give a detailed explanation of the smart ship-to-control approach and its implications for mask quality.
For the specific requirements of the 14 nm and 20 nm node applications, a new CD map approach was developed at the AMTC. This approach relies on a well-established machine learning technique called recursive partitioning. Recursive partitioning is a powerful technique which creates a decision tree by successively testing whether the quantity of interest can be explained by one of the supplied covariates. The test performed is generally a statistical test with a pre-supplied significance level. Once the test indicates a significant association between the variable of interest and a covariate, a split is performed at a threshold value which minimizes the variation within the newly obtained groups. This partitioning is repeated recursively until either no significant association can be detected or the resulting subgroup size falls below a pre-supplied level.
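A compact sketch of this procedure (significance test, variance-minimizing split, recursion); the choice of Pearson's correlation test and the default thresholds are illustrative assumptions:

```python
import numpy as np
from scipy import stats

def partition(y, X, alpha=0.05, min_size=20):
    """Recursive partitioning as described above: test each covariate for
    a significant association with y, split on the most significant one
    at the variance-minimizing threshold, and recurse."""
    if len(y) <= 2 * min_size:
        return {"mean": y.mean()}
    pvals = [stats.pearsonr(X[:, j], y)[1] for j in range(X.shape[1])]
    j = int(np.argmin(pvals))
    if pvals[j] >= alpha:                      # no significant association
        return {"mean": y.mean()}
    # Scan candidate thresholds, keep the one minimizing within-group variance.
    order = np.argsort(X[:, j])
    best_t, best_v = None, np.inf
    for t in np.unique(X[order[min_size:-min_size], j]):
        left = X[:, j] <= t
        v = y[left].var() * left.sum() + y[~left].var() * (~left).sum()
        if v < best_v:
            best_t, best_v = t, v
    left = X[:, j] <= best_t
    return {"var": j, "thr": best_t,
            "left": partition(y[left], X[left], alpha, min_size),
            "right": partition(y[~left], X[~left], alpha, min_size)}
```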
Traditional control of critical dimensions (CD) on photolithographic masks considers the CD average and a measure of the CD variation such as the CD range or the standard deviation. Systematic CD deviations from the mean, such as CD signatures, are also subject to control. These measures are valid for mask quality verification as long as patterns across a mask exhibit only size variations and no shape variation. The issue of shape variations becomes especially important in the context of contact holes on EUV masks. For EUV masks the CD error budget is much smaller than for standard optical masks. This means that small deviations from the contact shape can impact EUV wafer prints, in the sense that contact shape deformations induce asymmetric bridging phenomena. In this paper we present a detailed study of contact shape variations based on regular product data. Two data sets are analyzed: 1) contacts of varying target size and 2) a regularly spaced field of contacts. The methods of statistical shape analysis are used to analyze CD SEM generated contour data. We demonstrate that contacts on photolithographic masks do not only show size variations but also exhibit pronounced, nontrivial shape variations. In our data sets we find pronounced shape variations which can be interpreted as asymmetrical shape squeezing and contact rounding. Thus we demonstrate the limitations of classic CD measures for describing feature variations on masks. Furthermore, we show how the methods of statistical shape analysis can be used to quantify the contour variations, paving the way to a new understanding of mask linearity and its specification.
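A sketch of one standard statistical-shape-analysis pipeline (Procrustes alignment followed by PCA of the aligned landmarks), assuming contours sampled with matching point counts; the paper's exact method may differ:

```python
import numpy as np
from scipy.spatial import procrustes
from sklearn.decomposition import PCA

def shape_modes(contours, n_modes=3):
    """Align each contact contour (a (k, 2) landmark array, same k for
    all) to a reference with Procrustes analysis, removing position,
    size, and rotation, then run PCA on the aligned coordinates to get
    the dominant shape variations (e.g. squeezing, corner rounding)."""
    ref = contours[0]
    aligned = []
    for c in contours:
        _, aligned_c, _ = procrustes(ref, c)
        aligned.append(aligned_c.ravel())
    pca = PCA(n_components=n_modes).fit(np.array(aligned))
    return pca.components_, pca.explained_variance_ratio_
```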
Achieving the required critical dimensions (CD) with the best possible uniformity (CDU) on photomasks has always played a pivotal role in enabling chip technology. Current control strategies are based on scanning electron microscopy (SEM) measurements, implying a sparse spatial resolution on the order of 10⁻² m to 10⁻¹ m. A higher spatial resolution could be reached with an adequate measurement sampling; however, the increase in the number of measurements makes this approach unfeasible in a productive environment. With the advent of more powerful defect inspection tools, a significantly higher spatial resolution of 10⁻⁴ m can be achieved by also measuring CD during the regular defect inspection. This method is not limited to specific measurement features, thus paving the way to a CD assessment of all electrically relevant mask patterns. Enabling such a CD measurement opens new realms of CD control. Deterministic short-range CD effects which were previously interpreted as noise can be resolved and addressed by CD compensation methods. This can lead to substantial improvements of the CD uniformity. Thus the defect-inspection-mediated CD control closes a substantial gap in the mask manufacturing process by allowing the control of short-range CD effects which were until now beyond the reach of regular CD SEM based control strategies. This increase in spatial resolution also counters the decrease in measurement precision due to the usage of an optical system.

In this paper we present detailed results on a) the CD data generated during the inspection process, b) the analytical tools needed for relating this data to CD SEM measurements, and c) how the CD inspection process enables a new dimension of CD compensation within the mask manufacturing process. We find that the inspection-based CD measurement typically generates around 500,000 measurements with a homogeneous covering of the active mask area. In comparing the CD inspection results with CD SEM measurements on a single-measurement-point basis, we find that optical limitations of the inspection tool play a substantial role within the photon-based inspection process. Once these shifts are characterized and removed, a correlation coefficient of 0.9 between the two CD measurement techniques is found. This finding agrees well with a signature-based matching approach. Based on these findings we set up a dedicated pooling algorithm which performs outlier removal for all CD inspections together with a data clustering according to feature-specific tool-induced shifts. This way tool-induced shift effects can be removed and CD signature computation is enabled. A statistical model of the CD signatures relates the mask design parameters on the relevant length scales to CD effects, thus enabling the computation of CD compensation maps. The compensation maps address the CD effects on various distinct length scales, and we show that both long- and short-range contributions to the CD variation are decreased. We find that the CD uniformity is improved by 25% using this novel CD compensation strategy.
Critical dimension uniformity (CDU) is an important parameter for photomask and wafer manufacturing. In order to reduce long-range CD variation, compensation techniques for mask writers and scanners have been developed. Both techniques require mask CD measurements with high spatial sampling. Scanning electron microscopes (SEMs), which provide CD measurements at very high precision, cannot in practice provide the required spatial sampling due to their low speed. In contrast, mask inspection systems, some of which have the ability to perform optical CD measurements with very high sampling frequencies, are an interesting alternative. In this paper we compare the CDU measurement results of such an inspection system with those of a CD-SEM.
Strict reticle critical dimension (CD) control is needed to supply ≤ 20 nm wafer technology nodes. In front-end lithographic processes, for example, precise temperature control in resist baking steps is considered paramount to limiting reticle CD error sources. Additionally, current density during writing and focus are continuously tracked in 50 kV e-beam pattern generators (PG) in order to provide stable CD performance. Despite these strict controls (and many others), feedback compensation strategies are increasingly utilized in mask manufacturing to reach < 2 nm 3σ CD uniformity (CDU). Such compensations require stable reticle CD signatures, which can be problematic when alternate or backup process tools are employed. The AMTC has applied principal component analysis (PCA) to resist CD measurements of 50 kV test reticles fabricated with chemically amplified resists (CAR) in order to quantify the resist CDU capabilities of front and backup lithographic process tools. PCA results elucidate significant resist CDU differences between similar lithographic process tools that are considered well matched via CDU 3σ comparisons.

The utility of PCA relies on the statistical analysis of large data sets; however, reticle CD sampling is typically sparse, on the 10⁻² m or centimeter (cm) scale using conventional scanning electron microscopes (CD SEM). Higher CD spatial resolutions can be achieved using advanced inspection tools, which provide CD data on a substantially smaller length scale (10⁻⁴ m), thus yielding a considerably larger CD snapshot for front/backup process tool comparisons. Combining PCA with high-spatial-resolution CD data provides novel insights into tool and process CD capabilities.
Critical dimensions (CD) measured in resist are key to understanding the CD distribution on photomasks. Vital to this understanding is the separation of spatially random and systematic contributions to the CD distribution. Random contributions will not appear in post-etch (final) CD measurements, whereas systematic contributions will strongly impact final CDs. Resist CD signatures and their variations drive final CD distributions; thus an understanding of the mechanisms influencing the resist CD signature and its variation plays a pivotal role in CD distribution improvements. Current technological demands require strict control of reticle critical dimension uniformity (CDU), and the Advanced Mask Technology Center (AMTC) has found that significant reductions in reticle CDU are enabled through the statistical analysis of large data sets. To this end, we employ principal component analysis (PCA), a methodology well established at the AMTC¹, to show how different portions of the lithographic process contribute to CD variations. These portions include photomask blank preparation as well as a correction parameter in the front-end process. CD variations were markedly changed by modulating these two lithographic portions, leading to improved final CDU on test reticles in two different chemically amplified resist (CAR) processes.
Improvement of pattern placement accuracy is essential to solve upcoming challenges in mask making. Placement errors are driven by multiple effects, with electron-mediated resist surface charging being a major error source. Modeling this systematic effect thus allows the determination of the placement errors before plate processing. This opens the door to an effective charging compensation.

In this paper we study the simulated benefit of two distinct charging compensation models in the context of full-scale mask production layouts. The potential pattern placement improvements are evaluated using actual placement results obtained without charging effect corrections. An in-depth comparison of the two models is presented, demonstrating the differences in placement error prediction between a static and a dynamic charging model. We find that substantial improvements can be achieved using the dynamic charging model. Productive implementation of this functionality is the natural next step.
Critical dimension uniformity (CDU) is one of the most critical parameters for the characterization of photomasks. Lately it has been shown that advanced CD (critical dimension) SEM tools and mask processes can distinguish the random short-range CD variation from the global CD signature, which is driven by process and design characteristics. Current electron beam writers can utilize this global CD signature information and correct the CDU of photomasks accordingly. Therefore a detailed knowledge of the signature will strongly benefit photomask CDU.

Electron beam writer based signature compensation relies primarily on CD signatures derived from CD SEMs. Here, higher spatial resolutions of the signature are achieved only at the cost of high metrology cycle times. The trade-off between cycle time and resolution leads to a CD resolution of around one centimeter. Even then, the photomask has to spend a substantial percentage of the total cycle time at a non-value-added process step.

In this paper we argue that the solution to this dilemma can be found in a completely different process area: inspection. We present data showing that the novel CD map feature of the NPI inspection tools enables CD maps of unparalleled resolution in the millimeter region, exceeding CD SEMs by a factor of 100. Moreover, the utilization of a tunable spectrum of different features is not limited to selected CD measurement sites. The CD map is generated in parallel to the traditional defect inspection and works equally well for pre- and post-pellicle inspections.

To evaluate the method we used a single-die layout of a current logic design and referenced all data only to the database. Nevertheless, the data presented demonstrate the excellent repeatability of the CD map measurement and the good matching to CD SEM measurements.
Reticle critical dimension (CD) errors must be minimized in order for photomask manufacturers to meet tight CD uniformity (CDU) requirements. Determining the source of reticle CD errors and reducing or eliminating their CDU contributions are some of the most relevant tasks facing process engineers. The AMTC has applied principal component analysis (PCA) to reticle resist CD measurements in order to examine variations in the data. PCA provided the major components of resist CD variation, which were rescaled into reticle CD signatures. The dominant component of CD signature variation is very similar in shape and magnitude between two different chemically amplified resist (CAR) processes, most likely indicating that the variation source is a common process or tool. CD variational signatures from PCA were used as a basis for launching investigations into potential reticle CD error sources. PCA was further applied to resist CD measurements from alternate process tools to assist efforts in judging the effectiveness of resist CD signature matching.
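A minimal sketch of extracting CD signature components with PCA, assuming one gridded resist CD map per reticle; the rescaling convention (component times score standard deviation) is an assumption:

```python
import numpy as np
from sklearn.decomposition import PCA

def cd_signature_components(cd_maps, n_components=3):
    """Stack flattened resist CD maps (one row per reticle, mean removed)
    and extract the major components of CD variation; rescaling each
    component by its score spread gives a reticle CD signature in nm."""
    X = np.array([m.ravel() for m in cd_maps])
    X -= X.mean(axis=0)
    pca = PCA(n_components=n_components).fit(X)
    shape = cd_maps[0].shape
    signatures = [c.reshape(shape) * v**0.5
                  for c, v in zip(pca.components_, pca.explained_variance_)]
    return signatures, pca.explained_variance_ratio_
```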
KEYWORDS: Photomasks, Critical dimension metrology, Signal to noise ratio, Principal component analysis, Process control, Image registration, Electron beam melting, Error analysis, Lithography, Tolerancing
Current high-end chips require an extremely precise fabrication of lithographic masks. Some of the most critical parameters are the placement of structures on the masks as well as their dimensional tolerances. Improving these two key parameters has always been one of the central objectives of the Advanced Mask Technology Center (AMTC). To this end, the AMTC has complemented its process development with a set of enhancement schemes which are used to compensate residual process signatures. In this paper, improvements achieved in the areas of CD uniformity (CDU) and pattern placement are shown. The correction schemes take first-principles considerations as well as empirical findings into account. Based on this, a set of design and process parameters is used to determine the spatial corrections which will optimize mask quality parameters. This enables the AMTC to tailor the writing parameters to the needs of each mask design. Latest results for the 32 nm technology show that values as low as 5 nm image placement error and 3 nm CDU can be reached at the same time.
Critical dimension uniformity (CDU) is one of the most critical parameters for the characterization of photomasks. For years the understanding was that CDU describes a rather random fluctuation of the CD across the mask. With more advanced CD tools and mask processes, the local short-range CD variation (on a length scale of micrometres) can be distinguished from the global CD signature (typically on a length scale of centimetres). Recent developments in the pattern generator sector allow correcting for such global CD signatures. This triggers the current challenge of finding stable methods to characterize the global signature of photomasks.

In our work we present matching results of a technique that calculates the CD signature using exponentially weighted surrounding points. We investigated different CD SEM tools of different technology generations. We show that our method allows determination of the CD signature independently of the measurement tool, with low uncertainty and moderate measurement effort. This holds true even when the CDU value is mainly dominated by the measurement error. Thus our method provides a tool to extend the utilization of older-generation metrology tools as well as the possibility to improve the measurement capability for CD signatures of current tools.
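A minimal sketch of the exponential weighting, assuming measurement sites `points` (an N×2 array) and CD values `cds`; the kernel form and the length scale are illustrative:

```python
import numpy as np

def cd_signature(points, cds, length_scale):
    """Estimate the global CD signature at each site as the exponentially
    weighted average of all surrounding measurements; the length scale
    sets how sharply the weights fall off with distance."""
    sig = np.empty(len(cds))
    for i, p in enumerate(points):
        d = np.linalg.norm(points - p, axis=1)
        w = np.exp(-d / length_scale)
        sig[i] = np.sum(w * cds) / np.sum(w)
    return sig
```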
The production process of photomasks for memory devices is highly demanding, since the homogeneity of mask parameters plays a pivotal role for the overall mask quality. Spatially homogeneous mask designs, which are dominant on memory devices, should in the best case be transferred into a mask exhibiting the same homogeneous behavior. This means that CD deviations from the mean should ideally bear no systematic signature but at most some random noise. However, many steps in the mask production process can introduce spatial correlations, so that CD deviations are not only stochastically distributed over the mask but exhibit a pronounced signature. Thus, the determination and quantification of these deviations is crucial for a) assessing the mask quality and b) driving process improvements to remove CD signatures.

The most common data analysis method for separating signatures from noise is to average over a number of samples. Unfortunately, due to the nature of mask manufacturing, often only one sample is available. In this paper we propose the technique of thin plate spline smoothing for the determination and quantification of the CD signature of a given single mask. This analysis is complemented by two statistical tests which assess the fit quality by analyzing the residuals for normality and correlations.
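A hedged sketch of the smoothing fit plus the normality check on the residuals, using SciPy's thin plate spline radial basis function; the smoothing parameter and the specific test are assumptions:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.stats import shapiro

def tps_signature(points, cds, smoothing=1.0):
    """Fit a smoothing thin plate spline to single-mask CD data; the
    spline is the estimated signature, and the residuals should look
    like uncorrelated Gaussian noise if the fit is adequate."""
    spline = RBFInterpolator(points, cds, kernel="thin_plate_spline",
                             smoothing=smoothing)
    signature = spline(points)
    residuals = cds - signature
    _, p_normal = shapiro(residuals)  # normality check of the residuals
    return signature, residuals, p_normal
```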