This work presents how the combination of EDA and CDSEM tools enables development and manufacturing engineers to seamlessly collect CDSEM data across a large diversity of features and contexts for OPC model calibration and validation, process development, and inline manufacturing monitoring. We will present the application and results of a solution proposed in a previously published paper [1] and then review the benefits of enabling development and manufacturing engineers to make metrology-related decisions within their own environments. Finally, new applications for automated CDSEM recipe generation and data collection will be discussed.
This work presents software tools that enable engineers to make relevant SEM measurement decisions in the EDA environment, presented in the optimal context for the engineer, and pass them seamlessly into the SEM environment. We present the tools and interfaces leveraged in this solution and explore the benefits of enabling OPC modeling engineers to make metrology-related decisions within the OPC environment. New opportunities for automation of metrology-related OPC tasks are also discussed.
Improvements in compact lithography models and compute resources have allowed EDA suppliers to keep up with the accuracy and turnaround time (TAT) requirements of each new technology node. Compact lithography models are derived from the Hopkins method for calculating the image at the wafer. They consist of a pre-calculated optical kernel set that includes the properties of the projection and source optics as well as resist effects, and the image at the wafer is formed by convolving the optical kernel set with the mask transmission. The compact model is used for optical proximity correction (OPC) and lithography rule checking (LRC) because of its excellent turnaround time in full-chip applications. Leading-edge technology nodes, however, are inherently more sensitive to process variation and typically contain more low-contrast areas, sometimes resulting in marginal hotspots. In these localized areas, it is desirable to have access to more predictive first-principles lithography simulation. The Abbe method for lithography simulation includes full 3D resist models that solve the post-exposure-bake reaction/diffusion equation from first principles to provide the highest accuracy. These rigorous models can provide added insight into the 3D developed resist profile at the wafer level to assist in the application of OPC and the disposition of hotspots found by LRC using compact models. This paper explores the benefits of tightly integrating rigorous lithography simulation into the LRC hotspot detection step of the post-OPC flow. Multiple user flows are addressed, along with methods for automating them to maximize imaging predictability where needed while keeping the impact on turnaround time to a minimum.
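As a concrete illustration of the compact-model image formation described above, the Hopkins calculation can be sketched as a sum of coherent systems (SOCS): the mask transmission is convolved with each pre-calculated kernel and the weighted coherent intensities are summed. The array names and shapes below are illustrative assumptions, not any particular tool's interface.

```python
# Minimal SOCS sketch, assuming the optical kernels and their weights are
# available as numpy arrays (illustrative only, not a specific tool's API).
import numpy as np

def aerial_image(mask: np.ndarray, kernels: list[np.ndarray], weights: list[float]) -> np.ndarray:
    """Sum of coherent systems: I(x, y) = sum_k w_k * |(K_k * m)(x, y)|^2."""
    mask_f = np.fft.fft2(mask)
    image = np.zeros(mask.shape)
    for kernel, w in zip(kernels, weights):
        # convolution of kernel k with the mask transmission, done in the frequency domain
        field = np.fft.ifft2(np.fft.fft2(kernel, s=mask.shape) * mask_f)
        image += w * np.abs(field) ** 2
    return image
```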
KEYWORDS: Data modeling, Metrology, Scanning electron microscopy, Optical proximity correction, Calibration, Statistical modeling, Process modeling, Data analysis, Data visualization, Diffractive optical elements
Modern OPC modeling relies on substantial volumes of metrology data to meet pattern coverage and precision
requirements. This data must be reviewed and cleaned prior to model calibration to prevent bad data from adversely
affecting calibration. We propose implementing specific tools in the metrology flow to improve metrology engineering
efficiency and resulting data quality. The metrology flow with and without these tools will be discussed, and the inherent tradeoffs will be identified. To demonstrate the benefit of the proposed flow, engineering efficiency and the impact of better data on model calibration will be quantified.
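To make the data-cleaning step concrete, one simple screen of the kind such a flow could automate is flagging gauges whose measured CD deviates strongly from other gauges sharing the same pattern context before they reach calibration. This is a hedged sketch only; the column names (group, cd_nm) are assumptions, not the actual tool schema.

```python
# Hedged sketch of a pre-calibration outlier screen on CD-SEM gauge data.
# Column names are assumptions for illustration, not a real tool's schema.
import pandas as pd

def flag_outliers(gauges: pd.DataFrame, n_sigma: float = 3.0) -> pd.DataFrame:
    """Mark gauges whose CD is more than n_sigma from their pattern-group mean."""
    stats = gauges.groupby("group")["cd_nm"].agg(group_mean="mean", group_std="std")
    merged = gauges.join(stats, on="group")
    deviation = (merged["cd_nm"] - merged["group_mean"]).abs()
    merged["outlier"] = deviation > n_sigma * merged["group_std"].fillna(0.0)
    return merged
```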
With each new technology node there is an increase in the number of layers requiring Optical Proximity Correction (OPC) and verification. This increases the time spent on the mask tapeout flow, which is already a lengthy portion of the production flow. New technology nodes not only have additional layers that require OPC, but most critical layers also have more complex OPC requirements relative to previous generations, slowing the tapeout flow even further. In an effort to maintain acceptable turnaround time (TAT), more hardware resources are added at each node and electronic design automation (EDA) suppliers are pushed to improve software performance. The more we can parallelize operations within the tapeout flow, the more efficiently we can use CPU resources and drive down the overall TAT. Traditional flows go through several cycles in which data is broken up into templates, the templates are distributed to compute farms for processing, pieced back together, and sometimes written to disk before the next operation in the tapeout flow begins. Each of these cycles incurs ramp-up, ramp-down, and input/output (I/O) time that reduces the efficient use of hardware resources. This paper explores the advantages of pipelining the templates from one operation to the next in order to minimize these effects.
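The pipelining idea can be sketched in a few lines: rather than synchronizing (and writing to disk) after every operation, each template is streamed through all operations as soon as it is ready, so ramp-up, ramp-down, and I/O overhead is not paid once per operation per cycle. The stage functions below are hypothetical stand-ins for tapeout operations, not a real tool's interface.

```python
# Toy sketch of pipelined template processing under the assumptions above.
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, Iterable, Iterator

def pipeline(templates: Iterable, stages: list[Callable], workers: int = 8) -> Iterator:
    """Stream each template through every stage without a global barrier
    (or intermediate disk write) between stages."""
    def run_all(template):
        for stage in stages:
            template = stage(template)
        return template

    with ThreadPoolExecutor(max_workers=workers) as pool:
        # results stream out as individual templates finish all stages
        yield from pool.map(run_all, templates)
```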
Sub-resolution assist features (SRAFs), or scatter bars (SBs), have steadily proliferated through IC manufacturers' data preparation flows as k1 is pushed lower with each technology node. The technique is quite common for the gate layer at 130 nm and below, with increasingly complex geometric rules being used to govern the placement of SBs in proximity to target-layer features. Recently, model-based approaches for SB placement have arisen. In this work, a variety of rule-based and model-based SB options are explored for the gate layer using new characterization and optimization functions available in the latest generation of correction and OPC verification tools. These include the ability to quantify across-chip CD control with statistics on a per-gate basis. The analysis includes the effects of defocus, exposure, and misalignment, and it is shown that significant improvements in CD control through the full manufacturing variability window can be realized.
In our continued pursuit of Moore's Law we are encountering lower and lower k1 factors, resulting in increased sensitivity to lithography/OPC-unfriendly designs, mask rule constraints, and OPC setup file errors such as bad fragmentation, sub-optimal site placement, and poor convergence during the OPC application process. While the process has become ever more sensitive and more vulnerable to yield loss, the costs associated with such losses continue to increase in the form of higher reticle costs, longer cycle times for learning, increased lithography tool costs, and, most importantly, lost revenue from bringing a product to market late. This has resulted in an increased need for virtual manufacturing tools that can accurately simulate the lithography process and detect failures and weak points in the layout so they can be resolved before committing a layout to silicon and/or identified for inline monitoring during wafer manufacturing. This paper outlines a verification flow employed in a high-volume manufacturing environment to identify, prevent, monitor, and resolve critical lithography failures and yield inhibitors, thereby minimizing how much we succumb to the aforementioned semiconductor manufacturing vulnerabilities.
The latest improvements in process-aware lithography modeling have resulted in improved simulation accuracy through the dose and focus process window. This, coupled with advancements in high-speed, full-chip grid-based simulation, provides a powerful combination for accurate process window simulation. At the 65nm node, gate CD control becomes ever more critical, so understanding the amount of CD variation through the full process window is crucial. This paper uses the aforementioned simulation capability to assess the impact of process variation on ACLV (Across-Chip Linewidth Variation) and critical failures at the 65nm node. The impact of focus, exposure, and misalignment errors in manufacturing is explored to quantify both CD control and catastrophic printing failure. It is shown that there is good correlation between predicted and experimental results.
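One simple way to summarize such a study, shown below as a hedged sketch, is to reduce the simulated per-gate CDs at each focus/exposure corner to a 3-sigma ACLV number and report the worst corner; the input structure is an assumption made for illustration, not this paper's actual data format.

```python
# Illustrative ACLV summary, assuming a dict mapping each process-window corner
# (e.g. "defocus+50nm/dose+2%") to an array of simulated per-gate CDs in nm.
import numpy as np

def aclv_summary(cds_by_corner: dict[str, np.ndarray]) -> dict[str, float]:
    """3-sigma across-chip linewidth variation per corner, plus the worst case."""
    per_corner = {corner: 3.0 * float(np.std(cds)) for corner, cds in cds_by_corner.items()}
    per_corner["worst_corner_3sigma"] = max(per_corner.values())
    return per_corner
```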
KEYWORDS: Optical proximity correction, Photomasks, Data modeling, Back end of line, Semiconducting wafers, Visualization, Databases, Scanning electron microscopy, Data integration, Metals
SMIC is a pure-play IC foundry, and in a foundry culture turnaround time (TAT) is the fab's foremost concern; aggressive tapeout schedules require a significant reduction in GDS-to-mask flow runtime. The objective of this work is therefore to evaluate the runtime performance of an OPC methodology and integrated mask data preparation (MDP) flow on a so-called one-I/O tapeout platform, and in doing so to achieve a fully automated OPC/MDP flow for production. BEOL layers were chosen for the evaluation because they are hit hardest by runtime: unlike FEOL, where non-critical layers fall between critical ones (between poly and contact, for example) and OPC mask making and wafer schedules are less tight, BEOL critical-layer OPC masks (M2, V2, then M3, V3, and so on) arrive one after another. The evaluated integrated flow included four metal layers with model-based OPC and six via layers with rule-based OPC. Our definition of success is a runtime improvement of at least 2x while maintaining equal or better model accuracy and OPC/mask-data output quality. For MDP, we also tested the advantages of the OASIS format compared with GDS.
With the advent of the first immersion and hyper-NA exposure tools, source polarization quality will become a hot topic. At these oblique incident angles, unintentional source polarization could result in the intensity loss of diffraction orders possibly inducing resolution or process window loss. Measuring source polarization error on a production lithographic exposure tool is very cumbersome, but it is possible to reverse engineer any source error similarly to what has been accomplished with intensity error. As noted in the intensity maps from the source illumination, it is not safe to assume an ideal or binary source map, so model fitness is improved by emulating the real error. Likewise, by varying the source polarization types (TE, TM, Linear X and Linear Y) and ratios to obtain improved model fitness, one could deduce the residual source polarization error. This paper will show the resolution and process window gain from utilizing source polarization in immersion lithography. It will include a technique demonstrating how to extract source polarization error from empirical data using the Calibre model and will document the modeling inaccuracy from this error.
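The reverse-engineering step described above can be caricatured as a one-dimensional fit: scan a candidate polarization mixture, rebuild the model error against measured CDs at each setting, and take the minimum as the inferred residual polarization error. The simulate_cds callable below is a placeholder for a calibrated model run, and the TE/TM mixing ratio is an illustrative parameterization, not the paper's actual procedure.

```python
# Hedged sketch of inferring a residual polarization mix from model fitness.
# simulate_cds(ratio) is a hypothetical callable wrapping a calibrated model run.
import numpy as np

def infer_polarization_ratio(measured_cds: np.ndarray, simulate_cds, ratios=None):
    """Return the TE/TM mixing ratio (0 = pure TM, 1 = pure TE) with the lowest RMS error."""
    ratios = np.linspace(0.0, 1.0, 21) if ratios is None else ratios
    rms = [float(np.sqrt(np.mean((simulate_cds(r) - measured_cds) ** 2))) for r in ratios]
    best = int(np.argmin(rms))
    return float(ratios[best]), rms[best]
```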
Performing a thorough source optimization during process development becomes more critical as we move to leading-edge technology nodes. With each new node the acceptable process margin continues to shrink as a result of lower k1 factors. This drives the need for thorough source optimization prior to locking down a process in order to attain the maximum common depth of focus (DOF) the process will allow. Optical proximity correction (OPC) has become a process-enabling tool in lithography by providing a common process window for structures that would otherwise not have overlapping windows. But what effect does this have on the source optimization? With the introduction of immersion lithography there is yet another parameter, namely source polarization, that may need to be included in an illumination optimization process. This paper explores the effect that polarization and OPC have on illumination optimization. The Calibre ILO (Illumination Optimization) tool was used to perform the illumination optimization and provided plots of DOF vs. various parametric illumination settings, which were used to screen the illumination settings for the one with optimum process margins. The resulting illumination conditions were then implemented and analyzed at the full-chip level. Based on these results, a conclusion was made on the impact source polarization and OPC have on the illumination optimization process.
To shorten the turnaround time and reduce the effort required for SRAF insertion and optimization on an arbitrary layout, a new model-based SRAF insertion and optimization flow was developed. It is based on the pixel-based mask optimization technique [1], which finds the optimal mask shapes that yield the best image contrast. The contrast-optimized mask is decomposed into main features and assist features. The decomposed assist features are then run through a simplification process for shot-count reduction to improve mask writing throughput. Finally, model-based optical proximity correction (OPC) is applied to achieve the pattern fidelity required for the current technology. In this flow, main features and assist features are optimized simultaneously, so the effects of SRAF optimization and OPC are achieved together. Since the objective of the mask optimization is image fidelity and no light comes through the assist features (in the dark-field case), the assist features are ensured not to print even at high dose. Results on a 65nm contact layer show that this approach greatly reduces the total time and effort required for SRAF placement optimization compared to the rule-based method, with better lithographic performance across various layout types.
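The decomposition step can be pictured as in the rough sketch below: threshold the contrast-optimized transmission map, label the connected regions, and classify the small regions as assist features and the large ones as main features. The threshold and area cutoff are illustrative assumptions, not values from the flow itself.

```python
# Rough sketch of splitting an optimized pixel mask into main and assist features.
# Threshold and area cutoff are illustrative assumptions only.
import numpy as np
from scipy import ndimage

def split_main_and_sraf(pixel_mask: np.ndarray, threshold: float = 0.5, sraf_max_area: int = 50):
    """Return boolean maps of main features and sub-resolution assist features."""
    binary = pixel_mask > threshold
    labels, n = ndimage.label(binary)
    areas = ndimage.sum(binary, labels, index=np.arange(1, n + 1))
    main_labels = [i + 1 for i, a in enumerate(areas) if a > sraf_max_area]
    main = np.isin(labels, main_labels)
    sraf = binary & ~main
    return main, sraf
```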
Illumination optimization has always been an important part of process characterization and setup for new technology nodes. As we move to the 130nm node and beyond, this phase becomes even more critical due to the limited available process window and the application of advanced model-based optical proximity correction (OPC). Illumination optimization has some obvious benefits in that it maximizes process latitude and therefore makes a process more robust to the dose and focus variations that naturally occur during manufacturing. By mitigating the effect of process excursions, there are fewer reworks, faster cycle times, and ultimately higher yield. Although these are the typical benefits associated with illumination optimization, there are also other potential benefits from an OPC modeling and mask data preparation (MDP) perspective. This paper will look into the not-so-obvious effects illumination optimization has on OPC and MDP. A fundamental process model built with suboptimal optical settings is compared against a model based on the optimal optical conditions. The optimal optical conditions will be determined based on simulations of the process window for several structures in a design, using a metric of maximum common depth of focus (DOF) for a given minimum exposure latitude (EL). The amount of OPC correction will be quantified for both models and a comparison of OPC aggressiveness will be made. OPC runtimes will also be compared, as well as output file size, amount of fragmentation, and the number of shots required in the mask making process. In conclusion, a summary is provided highlighting where OPC and MDP can benefit from proper illumination optimization.
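The screening criterion described here, maximum common DOF subject to a minimum EL, can be sketched as a simple grid search; the common_dof and exposure_latitude callables below are placeholders for simulator output, and the annular sigma parameterization is an assumption for illustration.

```python
# Hedged sketch of screening illumination settings by common DOF at a fixed EL floor.
# common_dof(si, so) and exposure_latitude(si, so) are hypothetical simulator wrappers.
def optimize_illumination(sigma_pairs, common_dof, exposure_latitude, min_el=0.05):
    """Return (sigma_in, sigma_out, dof) with the largest common DOF meeting min_el."""
    best = None
    for sigma_in, sigma_out in sigma_pairs:
        if exposure_latitude(sigma_in, sigma_out) < min_el:
            continue  # reject settings that fail the exposure-latitude floor
        dof = common_dof(sigma_in, sigma_out)
        if best is None or dof > best[2]:
            best = (sigma_in, sigma_out, dof)
    return best
```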
Model based OPC for low k1 lithography has a large impact on mask cost, and hence must be optimized with respect to mask manufacturability and mask cost without sacrificing device performance. Design IP blocks not designed with the lithography process in mind (not "litho friendly") require more complex RET/OPC solutions, which can in turn result in unnecessary increases in the mask cost and turn around time. These blocks are typically replicated many times across a design and can therefore have a compounding effect.
Design for manufacturing (DFM) techniques verify and alleviate complex interactions between design and process. DFM can be applied at various stages in the design-to-silicon flow. We will discuss how these DFM methods are applied and implemented at Cypress. We will also show how design rules are defined and present several methods for injecting OPC/RET awareness into designs prior to mask manufacture.
Lithography models calibrated from experimental data have been used to determine the optimum insertion strategy for sub-resolution assist features in a 130 nm process. This work presents results for three different illumination types: Standard, QUASAR, and Annular. The calibrated models are used to classify every edge in the design based on its optical properties (in this case, image-log-slope). This classification is used to determine the likelihood that an edge will print on target with the maximum image-log-slope. In other words, the method classifies design edges not into geometrically equivalent classes, but according to equivalent optical responses. After all the edges are classified, a rule table is generated for every process. This table describes the width and separation of the assist features based on a global cost function for each illumination type. The tables are later used to insert assist features of various widths and separations using pre-defined priority strategies. After the bars have been inserted, OPC is applied to the main structures in the presence of the newly added assist features. Critical areas are tagged for increased fragmentation, allowing certain areas to receive the maximum amount of correction and compensate for any proximity effects due to the sub-resolution assist features. The model-assisted solution is compared against a traditional rule-based solution, which was also derived experimentally. Both scenarios have model-based OPC applied using simulation and experimental data. By comparing both cases it is possible to assess the advantages and disadvantages of each method.
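A minimal sketch of the classification idea, under assumptions of my own: bin each design edge by its simulated image-log-slope and look up an assist-feature width/separation pair per bin. The bin edges and rule values are placeholders, not the experimentally derived tables from this work.

```python
# Illustrative ILS-based binning into an SRAF rule table (placeholder values only).
import numpy as np

# hypothetical (width_nm, separation_nm) per ILS bin, low ILS -> bin 0
SRAF_RULES = {0: (60, 120), 1: (50, 110), 2: (40, 100)}

def sraf_rule_for_edges(ils_values: np.ndarray, bin_edges=(1.5, 2.5)):
    """Return the (width, separation) rule assigned to each edge by its ILS bin."""
    bins = np.digitize(ils_values, bin_edges)
    return [SRAF_RULES[int(b)] for b in bins]
```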
Lithographers face many hurdles in achieving the ever-shrinking process design rules (PDRs). Proximity effects are becoming more and more of an issue, requiring model-based Optical Proximity Correction (OPC), sub-resolution assist features, and properly tuned illumination settings in order to minimize these effects while providing enough contrast to maintain a viable process window. For any type of OPC application to be successful, a fundamental illumination optimization must first be completed. Unfortunately, the once-trivial illumination optimization has evolved into a major task for ASIC houses that require a manufacturable process window for isolated logic structures as well as dense SRAM features. Since these features commonly appear on the same reticle, today’s illumination optimization must look at “common” process windows for multiple cutlines that include a variety of different feature types and pitches. This is a daunting task for current single-feature simulators and requires a considerable amount of simulation time, engineering time, and fab confirmation data in order to arrive at an optimum illumination setting for such a wide variety of features. An internal Illumination Optimization (ILO) application has greatly simplified this process by allowing the user to optimize an illumination setting by simultaneously maximizing the “combined” DOF (depth of focus) over multiple cutlines (simulation sites). Cutlines can be placed on a variety of structures in an actual design as well as on several key pitches. Any number of the cutlines can be constrained to the GDS drawn CD (critical dimension) while others can be allowed to “float” with pseudo OPC, allowing co-optimization of the illumination setting for any OPC that may be applied in the final design. The automated illumination optimization is then run using a tuned model. The output is a suggested illumination setting with the supporting data used to formulate the recommendation. This paper will present the multi-cutline ILO process and compare it with the work involved in doing the same optimization using a single-feature simulator. Examples will be shown where multi-cutline ILO was able to resolve hard annular aberrations while maintaining the DOF.
KEYWORDS: Photomasks, Optical proximity correction, Reticles, Data modeling, Process modeling, Semiconducting wafers, Scanning electron microscopy, Image processing, Etching, Deep ultraviolet
As critical dimensions (CDs) approach λ/2, the use of optical proximity correction (OPC) relies heavily on the ability of the mask vendor to resolve the OPC structures consistently. When an OPC model is generated, the reticle and wafer processing errors are merged, quantified, and fit to a theoretical model. The effectiveness of the OPC model depends greatly on model fit and therefore on consistency in the reticle and wafer processing. Variations in either process can 'break' the model, resulting in the wrong corrections being applied. Work is being done to model the reticle and wafer processes separately as a means of allowing an OPC model to be implemented in any mask process. Until this is possible, reticle factors will always be embedded in the model and need to be understood and controlled. Reticle manufacturing variables that affect OPC models are exposure tool resolution, etch process effects, and process push (pre-bias of the fractured data). Most of the errors from these reticle-manufacturing variables are seen during model generation, but some regions, such as the extremes of line ends, are not and therefore fail to be accounted for. Since these extreme regions of the mask containing OPC have a higher mask error enhancement factor (MEEF) than the rest of the mask, controlling mask-induced variables is even more important. This paper quantifies the reticle error between different write tools (g-line vs. i-line vs. DUV lasers) and shows the effects reticle processing has on OPC model generation. It also depicts, through reticle modeling and SEM images, which structures are more susceptible to reticle error than others.
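For reference, the MEEF cited above relates a wafer CD change to the mask CD change that caused it, normalized by the exposure-tool reduction ratio; the sketch below assumes a 4x reduction system and uses illustrative numbers.

```python
# Small sketch of the MEEF definition: wafer CD response per (wafer-scale) mask CD change.
def meef(delta_cd_wafer_nm: float, delta_cd_mask_nm: float, reduction: float = 4.0) -> float:
    """MEEF = dCD_wafer / (dCD_mask / M), with M the reduction ratio (assumed 4x)."""
    return delta_cd_wafer_nm / (delta_cd_mask_nm / reduction)

# Example: a 4 nm mask-scale change that shifts the wafer CD by 2 nm gives MEEF = 2.0
print(meef(2.0, 4.0))
```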
To meet the demands of ever-shrinking technologies, design engineers are embedding rule-based optical proximity correction (OPC) or hand-applied OPC into bit-cell libraries. These libraries are then used to generate other components on a chip. This creates problems for the end users, the photolithographers: should the photolithographer change the process used to generate the simulations for the embedded OPC, the process can become unstable. The temptation to optimize these shrinking cells with embedded adjustments can be overcome by other methods. Manually increasing fragmentation or manually freezing portions of bit cells can provide the same level of accuracy as a well-simulated embedded solution, so the model-based OPC generated by the end user can be applied while tolerating process or illumination changes. Manually freezing portions of a bit cell can assist in optimization by blocking larger features from receiving a model-based solution, whereas increased fragmentation augments the model-based application. Freezing contact or local-interconnect landing sites at poly, for example, would allow the model-based OPC to optimize the poly over the active regions where transistor performance is vital. This paper documents the problems seen with embedding OPC and the proper ways to resolve them. It provides insight into embedded OPC removal and replacement. Simulations and empirical data document the differences seen between embedded-OPC bit cells and fragment-optimized bit cells.