The stochastic effect in contact single patterning is one of the primary challenges in extending 0.33NA EUV to sub-40nm pitch. EUV stochastic defects induced by EUV photon shot noise are known to correlate strongly with image contrast. Mitigation of Mask 3D-induced contrast fading is one of the key solutions to enable further shrink while maintaining sufficient defect-free process latitude. Wavefront and pupil co-optimization is designed to compensate for the Mask 3D phase error that leads to contrast fading. For application in HVM, the newly developed Pupil/Mask/Wavefront co-optimization gives the best imaging performance while maintaining illumination efficiency and reducing the rms of the final optimized wavefront, ensuring no negative impact on patterns not included in the optimization. In this paper, we investigate how to apply Pupil/Mask/Wavefront co-optimization to improve the image contrast of a sub-40nm pitch contact hole array, including in-resist verification. We will first explain the fundamentals of Mask 3D fading mitigation via phase injection for a 1D feature and how to extend this concept to 2D features. We will compare the effectiveness of the new Pupil/Mask/Wavefront co-optimization against the Zernike Z5- or Z6-only phase injection method. Finally, we will show the potential benefit of combining it with a low-n phase shifting mask, for which the optimum image contrast is achieved with the co-optimized wavefront, pupil, and mask.
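To make the 1D phase-injection fundamentals concrete, the sketch below is a deliberately minimal two-beam model with assumed numbers (pitch, phase error), not ASML's co-optimization: each pole of a dipole produces a fringe whose phase is offset by half the Mask 3D order-phase error, the incoherent sum of the two shifted fringes loses contrast, and injecting the opposite wavefront phase restores it.

```python
# Minimal sketch, assuming a dipole-illuminated 1D array imaged by two
# beams per pole; the Mask 3D phase error 'm3d' between diffraction
# orders is an illustrative value, as is the 38nm pitch.
import numpy as np

pitch = 38.0                                      # nm, assumed
x = np.linspace(0.0, pitch, 256)

def dipole_contrast(m3d_phase, injected_phase):
    d = m3d_phase + injected_phase                # net order-phase difference
    img_pos = 1 + np.cos(2*np.pi*x/pitch + d/2)   # fringe from +pole
    img_neg = 1 + np.cos(2*np.pi*x/pitch - d/2)   # fringe from -pole
    img = 0.5 * (img_pos + img_neg)               # incoherent sum fades contrast
    return (img.max() - img.min()) / (img.max() + img.min())

m3d = 1.8                                         # rad, assumed phase error
print("contrast, no correction :", round(dipole_contrast(m3d, 0.0), 3))
print("contrast, phase injected:", round(dipole_contrast(m3d, -m3d), 3))
```

In this toy the contrast is simply |cos(d/2)|, so a wavefront term (e.g., Z5/Z6 astigmatism for the two orthogonal 1D orientations) that cancels d recovers full contrast; coupling this to 2D features and the pupil and mask degrees of freedom is what the full co-optimization addresses.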
As the industry is developing curvilinear mask solutions, some curvilinear post-optical-proximity-correction (post-OPC) masks have been reported with file sizes in excess of 10 times the corresponding Manhattan post-OPC files, which can greatly impact mask data storage, transfer, and processing. Some file size reduction utilizing spline fitting has been reported in mask post-processing. However, from an OPC perspective, mask post-processing is undesirable. In this study, we show that maintaining an adequate density of mask control points (MCPs) is key to achieving the desired on-wafer lithographic performance, regardless of whether the MCPs are connected by spline sections or piecewise-linear segments. Our results suggest that spline-based MULTIGON records (defined by the Curvilinear Working Group convened in 2019) may not offer clear lithographic performance or file size benefits. We will also offer some guidance for controlling piecewise-linear file size without compromising lithographic performance.
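As a back-of-the-envelope illustration of the MCP-density argument (our own geometry exercise with an assumed radius, not the study's test cases): the chord-to-arc deviation of a piecewise-linear contour falls off roughly quadratically with MCP count, independent of any downstream spline representation.

```python
# Hypothetical sketch: sample a circular mask edge at several MCP
# densities and report the sagitta (max chord-to-arc deviation), a
# simple proxy for the edge placement error the connecting segments add.
import numpy as np

radius = 30.0                                  # nm, assumed edge radius
for n_mcp in (8, 16, 32, 64):
    half_angle = np.pi / n_mcp                 # half the arc per segment
    sagitta = radius * (1.0 - np.cos(half_angle))
    print(f"{n_mcp:3d} MCPs -> max deviation {sagitta:6.3f} nm")
```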
The EUV High-NA scanner brings innovative design changes to the projection optics, such as introducing a center obscuration and an anamorphic projection optical system in the projection optics box (POB) to improve the system transmission while the NA is increased [1]. These design changes need to be accounted for in computational lithography software solutions to ensure accurate modeling and optimization of High-NA system performance on wafer. In this paper, we will systematically investigate the benefits of Source Mask Optimization (SMO) and mask-only optimization to explore EUV High-NA full-chip patterning solutions, where mask 3D effects (M3D) are captured in the optical modeling. The paper will focus on assessing the performance (including process window, depth of focus, and normalized image log slope) of through-pitch 1D line/space (L/S) patterns and 2D contact hole (CH) patterns after the aforementioned optimizations, and will demonstrate the impact of the center obscuration on imaging. In addition, we will investigate the effect of sub-resolution assist features (SRAFs) on High-NA patterning by comparing the optimized lithographic performance with and without SRAFs. These findings will help determine the optimal patterning solutions for EUV High-NA as we move towards the first High-NA EUV insertion. The paper will also discuss anamorphic SMO, where MRC and the mask description need to change from the wafer plane (1x1) to the scaled reticle plane (1x2). Interfield stitching will also be briefly discussed.
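The anamorphic bookkeeping mentioned above can be pictured with a small sketch (our illustrative code and naming, not the production tooling): with 4x demagnification in X and 8x in Y, a wafer-plane (1x1) mask description maps to a 1x2 scaled reticle plane, and MRC rules become anisotropic.

```python
# Assumed convention: the scaled reticle plane keeps X at wafer scale
# and stretches Y by 2 (the 8x/4x anamorphic ratio). Names and numbers
# are hypothetical.
def wafer_to_scaled_reticle(vertices_nm):
    return [(x, 2.0 * y) for (x, y) in vertices_nm]

min_width_nm = 8.0                            # assumed wafer-plane MRC rule
min_width_scaled_y_nm = 2.0 * min_width_nm    # same rule along scaled Y
print(wafer_to_scaled_reticle([(0, 0), (20, 0), (20, 40), (0, 40)]))
```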
With the adoption of extreme ultraviolet (EUV) lithography for high-volume production of advanced nodes, stochastic variability and the resulting failures, both post-litho and post-etch, have drawn increasing attention. There is a strong need for accurate models of stochastic edge placement error (SEPE) with a direct link to the induced stochastic failure probability (FP). Additionally, to prevent stochastic failures from occurring on wafers, a holistic stochastic-aware computational lithography suite of products is needed, including stochastic-aware source mask optimization (SMO), stochastic-aware optical proximity correction (OPC), stochastic-aware lithography manufacturability check (LMC), and stochastic-aware process optimization and characterization. In this paper, we will present a framework to model both SEPE and FP, which allows us to study their correlation systematically and paves the way to linking the two directly. Additionally, this paper will demonstrate that such a stochastic model can be used to optimize source and mask to significantly reduce SEPE, minimize FP, and improve the stochastic-aware process window. The paper will also propose a flow to integrate the stochastic model in OPC to enhance the stochastic-aware process window and EUV manufacturability.
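One simple way to picture the SEPE-to-FP link (a minimal sketch under the loud assumption of Gaussian edge placement, not the calibrated model presented in the paper) is as a tail probability: a feature fails when its stochastic edge excursion exceeds the available margin.

```python
# Toy model: P(fail) = P(EPE > margin) for Gaussian SEPE.
# Both the sigmas and the 6nm margin are illustrative assumptions.
import math

def failure_probability(sepe_sigma_nm, margin_nm):
    return 0.5 * math.erfc(margin_nm / (sepe_sigma_nm * math.sqrt(2.0)))

for sigma in (1.0, 1.5, 2.0):
    print(f"SEPE sigma {sigma} nm -> FP {failure_probability(sigma, 6.0):.2e}")
```

The steep dependence of FP on sigma in this toy suggests why even a modest SEPE reduction from stochastic-aware SMO/OPC can change failure rates by orders of magnitude.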
In advanced DRAM fabrication, wafer alignment is a key enabler for meeting on-product overlay performance requirements. Due to the extreme complexity of the patterning and integration processes involved, it is becoming a challenge, during early development, to design alignment marks that can be patterned robustly through the process window, meet process integration constraints, withstand large process variations or changes, and provide accurate alignment measurements. The unique tilted pattern in DRAM fabrication technology poses special challenges during both the design and process phases. In this paper, we present a holistic computational approach to designing robust alignment marks with ASML's integrated Design for Control (D4C) and OPC solutions. With this integrated solution, we design a complex set of alignment marks for the entire full-flow process from FEOL through BEOL, tailored to each stack of different lithography layers. In the mark design stage, the marks' signal and robustness are optimized by D4C simulation, taking into account design rule and process constraints, while the patterning fidelity and process window of these marks are ensured by OPC, subject to the design rule constraints. We demonstrate that the process window (PW) of the resulting alignment marks, especially for challenging layers with extreme off-axis illumination and tight design constraints, is significantly improved, while accurate and robust alignment measurements are simultaneously obtained on full-loop wafers.
With the advent of multiple patterning techniques in the semiconductor industry, metrology has progressively become a burden. With multiple patterning techniques such as Litho-Etch-Litho-Etch and Sidewall Assisted Double Patterning, the number of processing steps has increased significantly, and so has the number of metrology steps needed for both control and yield monitoring. The amount of metrology needed increases with each node, as more layers need multiple patterning and more patterning steps are needed per layer. In addition, there is the need for guided defect inspection, which in itself requires substantially denser focus, overlay, and CD metrology than before. Metrology efficiency will therefore be crucial for the next semiconductor nodes. ASML's emulated wafer concept offers a highly efficient method of hybrid metrology for focus, CD, and overlay. In this concept, metrology is combined with the scanner's sensor data to predict on-product performance. The principle underlying the method is to isolate and estimate individual root causes, which are then combined to compute the on-product performance. The goal is to use all the information available to avoid ever-increasing amounts of metrology.
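The root-cause principle can be sketched as a small estimation problem (our assumed linear-algebra rendition, not ASML's implementation): known root-cause fingerprints are fitted to sparse measurements, and the estimated weights then predict the dense on-product map.

```python
# Minimal sketch: fit root-cause weights from a few measured sites,
# then emulate the full wafer. Fingerprints here are random stand-ins
# for known signatures (e.g. lens heating, reticle heating, etch).
import numpy as np

rng = np.random.default_rng(0)
n_sites, n_measured = 500, 40
fingerprints = rng.normal(size=(n_sites, 3))     # assumed-known signatures
true_w = np.array([1.2, -0.7, 0.3])              # synthetic ground truth
sel = rng.choice(n_sites, n_measured, replace=False)
meas = fingerprints[sel] @ true_w + 0.05 * rng.normal(size=n_measured)

w_hat, *_ = np.linalg.lstsq(fingerprints[sel], meas, rcond=None)
emulated = fingerprints @ w_hat                  # predicted on-product map
print("estimated root-cause weights:", np.round(w_hat, 2))
```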
Designing metrology targets that mimic process device cell behavior is becoming a common component of overlay process control. For an advanced DRAM process (sub-20 nm node), the extreme illumination modes needed to pattern the critical device features make it harder to control the aberration-induced overlay delta between metrology target and device patterns. To compensate for this delta, a non-zero offset, based on a manual calibration measurement using CD-SEM overlay, is applied to the metrology measurement.
In this paper, we document how this mismatch can be minimized through the right choice of metrology targets and measurement recipe.
Aberration sensitivity matching between overlay metrology targets and the device cell pattern has become a common requirement in the latest DRAM process nodes. While the extreme illumination modes used demand that the delta in aberration sensitivity be optimized, this optimization is effectively limited by the ability to print an optimal target that meets detectability and accuracy requirements. Therefore, advanced OPC techniques are required to ensure printability and optimal detectability performance while maintaining sufficient process window to avoid patterning or defectivity issues.
In this paper, we compare various mark designs with the real cell in terms of aberration sensitivity under the specific illumination condition. The specific illumination model was used for aberration sensitivity simulation while varying mask tones and target designs. Then, diffraction-based simulation was conducted to analyze the effect of aberration sensitivity on the actual overlay values. The simulation results were confirmed by comparing the overlay results obtained by diffraction-based metrology with the cell-level overlay values obtained using a critical dimension scanning electron microscope (CD-SEM).
Kaustuve Bhattacharyya, Arie den Boef, Martin Jak, Gary Zhang, Martijn Maassen, Robin Tijssen, Omer Adam, Andreas Fuchs, Youping Zhang, Jacky Huang, Vincent Couraudon, Wilson Tzeng, Eason Su, Cathy Wang, Jim Kavanagh, Christophe Fouquet
KEYWORDS: Overlay metrology, Metrology, Time metrology, Target acquisition, Semiconducting wafers, Target detection, Etching, Back end of line, Scanners, Process control
High-end semiconductor lithography requirements for CD, focus, and overlay control drive the need for diffraction-based metrology [1-4] and integrated metrology [5]. In the advanced nodes, more complex lithography techniques (such as multiple patterning), the use of multi-layer overlay measurements in process control, advanced device designs (such as advanced FinFET), and advanced materials (such as hardmasks) are introduced. These pose new challenges for litho-metrology cycle time, cost, process control, and metrology accuracy. In this publication, a holistic approach is taken to address these challenges via a novel target design, a new implementation of multi-layer overlay measurement capability in diffraction-based mode, and integrated metrology.
In order to handle the upcoming 1x DRAM overlay and yield requirements, metrology needs to evolve to more accurately represent product device patterns while being robust to process effects. One way to address this is to optimize the metrology target design. A viable solution needs to address multiple challenges. The target needs to be resistant to process damage. A single target needs to measure overlay between two or more layers. Targets need to meet design rule and depth of focus requirements under extreme illumination conditions. These must be achieved while maintaining good precision and throughput with an ultra-small target. In this publication, a holistic approach is used to address these challenges, using computationally optimized metrology targets with an advanced overlay control loop.
In order to meet current and future node overlay, CD and focus requirements, metrology and process control performance need to be continuously improved. In addition, more complex lithography techniques, such as double patterning, advanced device designs, such as FinFET, as well as advanced materials like hardmasks, pose new challenges for metrology and process control. In this publication several systematic steps are taken to face these challenges.
Negative tone development (NTD) processes have been widely explored as a way to enhance the printability of dark field features such as contact holes and trenches. A key consequence of implementing NTD processes and the subsequent tone reversal of dark field reticles is the significantly higher transmission of bright field masks and thus higher light intensity in the projection optics. This large increase in mask transmission, coupled with the higher throughput requirements of multiple patterning and the use of freeform illumination created by source mask optimization, creates a significant amount of lens-heating-induced aberrations that must be characterized and mitigated. In this paper, we examine the lens-heating-induced aberrations for high-transmission reticles common to NTD using both simulations and experiments on a 193nm immersion lithography tool. We observe a substantial amount of aberration, described by even- and odd-order Zernike drifts, during the course of a wafer exposure lot. These per-lot Zernike drifts are demonstrated to have the following lithographic effects: critical dimension shifts, pitch-dependent best focus shifts, and image placement errors between coarse and fine patterned features. Lastly, mitigation strategies are demonstrated using various controllers and lens manipulators, including FlexWave with full Zernike control up to Z64, to substantially reduce the lens heating effects observed on-wafer.
In this paper we describe the basic principle of FlexWave, a new high-resolution wavefront manipulator, and discuss experimental data on imaging, focus, and overlay. For this, we integrated the FlexWave module in a 1.35 NA immersion scanner. With FlexWave we can perform both static and dynamic wavefront corrections. Wavefront control with FlexWave minimizes lens aberrations under high-productivity usage of the scanner, hence maintaining overlay and focus performance; moreover, the high-resolution wavefront tuning can be used to compensate for litho-related effects. Especially now that mask 3D effects are becoming a major error component, additional tuning is required. An optimized wavefront can be achieved with computational lithography, by either co-optimizing source, mask, and Wavefront Target prior to tape-out, or by tuning Wavefront Targets for specific masks and scanners after the reticle is made.
As the industry drives to lower-k1 imaging, we commonly accept the use of higher-NA imaging and advanced illumination conditions. This technology shift has given rise to very exotic pupil spread functions with areas of high thermal energy density, creating new modeling and control challenges. Modern scanners are equipped with advanced lens manipulators that introduce controlled adjustments of the lens elements to counteract the lens aberrations existing in the system. However, there are some specific non-correctable aberration modes that are detrimental to important structures. In this paper, we introduce a methodology for minimizing the impact of aberrations for the specific design at hand. We employ computational lithography to analyze the design being imaged, and then devise a lens manipulator control scheme aimed at optimizing the aberration level for that specific design. The optimization scheme does not minimize the overall aberration, but directs the aberration control to optimize the imaging performance, such as CD control or process window, for the target design. Through computational lithography, we can identify the aberration modes that are most detrimental to the design, as well as correlations between the imaging responses of independent aberration modes. An optimization algorithm is then applied to determine how to use the lens manipulators to drive the aberration modes to the levels that are best for the specified imaging performance metric achievable with the tool. We show an example where this method is applied to an aggressive memory device imaged with an advanced ArF scanner. We demonstrate with both simulation and experimental data that this application-specific tool optimization successfully compensated for the thermally induced aberrations dynamically, improving the imaging performance consistently through the lot.
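A schematic of the optimization (a least-squares sketch under assumed linear sensitivities; the scanner's actual controller is more involved) shows the key point: the knob settings minimize the litho-metric response, not the raw Zernike content.

```python
# Sketch: choose manipulator settings u minimizing ||S (z0 + M u)||,
# where S maps Zernikes to litho metrics and M maps knobs to Zernikes.
# All matrices are random stand-ins for real calibrated sensitivities.
import numpy as np

rng = np.random.default_rng(1)
S = rng.normal(size=(12, 20))    # litho-metric sensitivity to Zernikes
M = rng.normal(size=(20, 6))     # manipulator-to-Zernike matrix
z0 = rng.normal(size=20)         # heating-induced aberration state

u, *_ = np.linalg.lstsq(S @ M, -(S @ z0), rcond=None)
print("metric RMS before:", np.linalg.norm(S @ z0))
print("metric RMS after :", np.linalg.norm(S @ (z0 + M @ u)))
print("raw Zernike RMS before/after:",
      np.linalg.norm(z0), np.linalg.norm(z0 + M @ u))
```

In such a scheme the post-correction Zernike rms is generally not the smallest achievable; the correction budget is spent where the design's imaging responses are most sensitive.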
Application-specific aberration resulting from localized heating of lens elements during exposure has become more significant in recent years due to the increasing number of low-k1 applications. Modern scanners are equipped with sophisticated lens manipulators that are optimized and controlled by scanner software in real time to reduce this aberration. Advanced lens control options can even optimize the lens manipulators to achieve better process window and overlay performance for a given application. This is accomplished by including litho metrics as part of the lens optimization process. Litho metrics refer to any lithographic properties of interest (e.g., CD variation, image shift) that are sensitive to lens aberrations. However, there are challenges that prevent effective use of litho metrics in practice. There are often a large number of critical device features that need monitoring, and the associated litho metrics (e.g., CD) generally show a strong nonlinear response to Zernikes. These issues greatly complicate the lens control algorithm, making real-time lens optimization difficult. We have developed a computational method to address these issues. It transforms the complex physical litho metrics into a compact set of linearized "virtual" litho metrics, ranked by their importance to the process window. These new litho metrics can be readily used by the existing scanner software for lens optimization. Both simulations and experiments showed that the litho metrics generated by this method improved aberration control.
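One plausible reading of the linearization step (our assumption; the abstract does not spell out the algebra) is a weighted Jacobian followed by an SVD, whose leading right singular vectors become the ranked "virtual" litho metrics.

```python
# Sketch: linearize metric responses around the operating point, weight
# by process-window importance, and keep the top SVD directions as the
# compact, linear, ranked virtual metrics. All data are stand-ins.
import numpy as np

rng = np.random.default_rng(2)
J = rng.normal(size=(200, 25))        # d(metric)/d(Zernike), assumed known
w = rng.uniform(0.5, 2.0, size=200)   # importance weights per feature

U, s, Vt = np.linalg.svd(w[:, None] * J, full_matrices=False)
virtual_metrics = Vt[:5]              # rows: virtual-metric directions
print("virtual-metric importances:", np.round(s[:5], 1))
```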
At the 65nm technology node and below, with the ever-smaller process window, it is no longer sufficient to apply traditional model-based verification at only the nominal condition. Full-chip, full-process-window verification has started to be integrated into the OPC flow in 65nm production as a way of preventing potentially weak post-OPC designs from reaching the mask making step. Thorough process-window analysis can be done by simulating wafer images at each of the corresponding focus and exposure dose conditions throughout the process window using an accurate and predictive FEM model. Alternatively, due to the strong correlation between the post-OPC design's sensitivity to dose variation and aerial image (AI) quality, the through-dose behavior of the post-OPC design can also be studied by carefully analyzing the AI. These types of analysis can be performed at multiple defocus conditions to assess the robustness of the post-OPC designs with respect to focus and dose variations. In this paper, we study the AI-based approach for post-OPC verification in detail.
For metal layers, the primary metrics for verification are bridging, necking, and via coverage. In this paper we are mainly interested in studying bridging and necking. The minimum AI value in the open space gives an indication of its susceptibility to bridging in an over-dosed situation; a lower minimum intensity indicates less risk of bridging. Conversely, the maximum AI between the metal lines gives an indication of potential necking issues in an under-dosed situation.
At times, however, in a complex 2D pattern area, the location where the AI reaches its maximum or minimum is not obvious. This requires a full-chip, dense image-based approach to fully explore the AI profile over the entire space of the design. We have developed such an algorithm to find the AI maxima and minima that bear true relevance to the bridging and necking analysis. In this paper, we apply the full-chip image-based analysis to 65nm metal layers. We demonstrate the capture of potential bridging or necking issues as identified by the AI analysis. Finally, we show the performance of the full-chip image-based verification.
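A dense image-based extrema search of this kind can be sketched with standard morphological filters (our choice of tooling and a synthetic image; the production algorithm is not disclosed at this level).

```python
# Sketch: flag local minima of a toy aerial image (bridging candidates
# in open space, over-dose risk) and local maxima between lines
# (necking candidates, under-dose risk).
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)
ai = ndimage.gaussian_filter(rng.normal(size=(256, 256)), sigma=8.0)

fp = np.ones((9, 9))                       # local-neighborhood footprint
local_max = ai == ndimage.maximum_filter(ai, footprint=fp)
local_min = ai == ndimage.minimum_filter(ai, footprint=fp)
print("candidate necking sites :", int(local_max.sum()))
print("candidate bridging sites:", int(local_min.sum()))
```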
Due to the low k1 factor, which leads to reduced process latitude, it is becoming increasingly important that OPC and lithography verification take process variations into account. An essential element of the successful implementation of full-chip, process-window-aware RET/OPC design and verification is a lithography model that can accurately describe the lithography process across the entire focus-exposure window. Moreover, a straightforward calibration that does not require an excessive amount of through-process-window measurements is also critical to ensure quick turnaround time.
In this paper, we introduce a new Focus Exposure Matrix (FEM) model based on Brion's Tachyon platform. The FEM model has two adjustable parameters: focus and exposure. By adjusting these parameters, new models at arbitrary process conditions within the process window can be quickly derived, with which a large number of simulation results can be obtained at different exposure and focus conditions for detailed process window analysis. The FEM model is fitted through a single calibration process using wafer measurements at a limited number of sampling locations within the process window. The resulting calibrated FEM model is shown to have superior fitting as well as prediction accuracy, without requiring massive additional focus-exposure measurements.
Accurate FEM modeling enables two important applications in the deep sub-wavelength regime: lithography manufacturability check (LMC) and optical proximity correction (OPC). FEM-enabled LMC proves to be a substantial advance in model-based verification by providing through-process-window analysis capability. Furthermore, FEM models can be employed practically in OPC to prevent catastrophic failures due to process variation while still maintaining satisfactory OPC quality in terms of matching the modeled wafer image to the design intent.
In this paper, we use real data and simulation results to demonstrate the quality of the FEM model and its effectiveness in the LMC and OPC applications.
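To illustrate the two-parameter idea (with a Bossung-style basis that is our simplifying assumption, not the Tachyon model form, and made-up calibration numbers): fit once over a sparse focus/exposure sample, then evaluate anywhere in the window.

```python
# Hedged sketch of an FEM-style model: CD ~ c0 + c1*dose + c2*f^2
# + c3*dose*f^2, calibrated by least squares on sparse FEM data.
import numpy as np

# Sparse calibration data: (focus um, dose offset %, measured CD nm)
fem = np.array([(-0.06, -2, 41.0), (-0.06, 2, 37.5), (0.0, -2, 43.2),
                (0.0, 0, 41.1), (0.0, 2, 39.0), (0.06, 2, 37.3)])
f, e, cd = fem[:, 0], fem[:, 1], fem[:, 2]

basis = np.column_stack([np.ones_like(f), e, f**2, e * f**2])
coef, *_ = np.linalg.lstsq(basis, cd, rcond=None)

def cd_model(focus, dose):
    """Predict CD at an arbitrary process condition in the window."""
    return coef @ np.array([1.0, dose, focus**2, dose * focus**2])

print("predicted CD at f=+0.03um, dose=+1%:", round(cd_model(0.03, 1.0), 2), "nm")
```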
Lithography process modeling is critical for effective model-based optical proximity correction (OPC) and verification. A physics-based full resist and etch model can provide very accurate predictions of the resist profile, but its speed forbids its use in practical production OPC and verification applications. Simplified models have therefore been developed. These models collapse some complicated but less crucial physics into "parameters" that are tuned to best fit the real measurement data. However, as the feature patterns vary, the aerial image around the patterns can exhibit a wide range of intensity distributions. It is difficult to use a single set of "parameters" to fit all these profiles; as compromises are made, accuracy suffers. The properties that contribute to such variations are primarily pattern shapes, dimensions, and, in the case of phase-shift masks, phase interaction. One way to improve model accuracy is to build multiple "local" models such that each model contains a set of parameters optimized for a given class of patterns. As we perform simulation, we identify the pattern and then pick the model that is best suited for it. In this paper, we demonstrate how difficult it is for a single model to fit a set of data with widely varying patterns. We then show how the multiple-model methodology can be applied to improve model accuracy. As we apply the models, there will be "gray" areas where the pattern does not clearly belong to a class for which a model is available. We explain how such situations should be handled, and how the simulation responds to model "switching".
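The switching-and-blending mechanics can be pictured with a toy classifier (assumed class boundary, parameter names, and values, not the paper's calibrated models): each pattern picks its locally calibrated parameter set, and the "gray" zone interpolates so the simulated response does not jump at the model boundary.

```python
# Toy multiple-model selection with a soft boundary between classes.
def classify(pitch_nm):
    """Return (class label, weight toward the dense-class model)."""
    lo, hi = 120.0, 160.0          # assumed gray zone between classes
    if pitch_nm <= lo:
        return "dense", 1.0
    if pitch_nm >= hi:
        return "iso", 0.0
    return "gray", (hi - pitch_nm) / (hi - lo)

MODELS = {"dense": {"diffusion_nm": 12.0}, "iso": {"diffusion_nm": 18.0}}

def local_diffusion(pitch_nm):
    _, w = classify(pitch_nm)      # blend parameters in the gray zone
    return w * MODELS["dense"]["diffusion_nm"] + (1 - w) * MODELS["iso"]["diffusion_nm"]

for p in (100, 140, 200):
    print(f"pitch {p} nm -> effective diffusion {local_diffusion(p):.1f} nm")
```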
In typical integrated circuit (IC) designs, the final layout generally contains many repeated patterns. Many of these repetitions are captured by the layout hierarchy: the layout contains many cells that are each placed in many locations with different transformations. Effective use of such repetition information in computation-intensive operations such as model-based optical proximity correction (OPC), verification, or contour generation can lead to significant performance improvement. However, in many other cases, such repetition information is not directly available. For example, if the layout is flattened, all the hierarchy that captures the repetition information is lost. Even in a hierarchical layout, a cell can contain repeated geometries or patterns. For an application to take advantage of this property, a mechanism to efficiently capture such repetition information is necessary. In this paper, we consider model-based applications that have a unique property which allows us to find different geometrical patterns that are equivalent in principle for simulation purposes. We introduce a proximity-based pattern identification method that aims at recognizing the maximum amount of repetition in the layout. This method not only captures repeated or symmetric geometries that are present from either the flattening of the hierarchy or within a cell itself, but also finds symmetries within the geometries themselves. The method also finds partial repetitions of geometries that are not completely identical or symmetric. Ideally, these "equivalent" patterns will carry the same processing results, with variations small enough to be ignored by the application. For this reason, it is sufficient to run the computationally expensive model-based operations on one pattern of a family and carry the result over to the rest of the patterns in that family. Doing so reduces the problem size as well as the amount of data that requires processing; the total processing time can therefore be dramatically reduced. We demonstrate the method using OPC as a test example, and show the level of problem size and job run time reduction due to the specific nature of different layouts.
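One illustrative way to bucket equivalent patterns (a hypothetical canonical-form scheme of our own, not the paper's method) is to normalize each pattern's vertices over the eight square symmetries plus translation and key on the lexicographically smallest form, so repeats, mirrors, and rotations of a pattern are simulated only once.

```python
# Sketch: canonical hashing of point sets under the 8 square symmetries
# (dihedral group) and translation. Patterns A and B below are the same
# shape up to rotation/mirroring and shift; C is genuinely different.
from collections import defaultdict

SYMS = [lambda x, y: (x, y),  lambda x, y: (-x, y),
        lambda x, y: (x, -y), lambda x, y: (-x, -y),
        lambda x, y: (y, x),  lambda x, y: (-y, x),
        lambda x, y: (y, -x), lambda x, y: (-y, -x)]

def canonical(points):
    forms = []
    for s in SYMS:
        pts = sorted(s(x, y) for x, y in points)
        x0, y0 = pts[0]                      # remove translation
        forms.append(tuple((x - x0, y - y0) for x, y in pts))
    return min(forms)                        # smallest form is the key

patterns = {"A": [(0, 0), (10, 0), (10, 4)],
            "B": [(50, 7), (50, 17), (54, 17)],   # A mirrored/rotated + shifted
            "C": [(3, 3), (9, 9), (13, 3)]}
buckets = defaultdict(list)
for name, poly in patterns.items():
    buckets[canonical(poly)].append(name)
for family in buckets.values():
    print("simulate once for family:", family)
```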