Many chip design and manufacturing applications, including design-rule development, optical proximity correction
tuning, and source optimization, can benefit from rapid estimation of relative difficulty or printability. Simultaneous
source optimization of thousands of clips has been demonstrated recently, but presents performance challenges. We
describe a fast, source-independent method to identify patterns which are likely to dominate the solution. In the context
of source optimization the estimator may be used as a filter after clustering, or to influence the selection of representative
cluster elements. A weighted heuristic formula identifies spectral signatures of several factors contributing to difficulty.
Validation methods are described showing improved process window and reduced error counts on 22 nm layout,
compared with programmable illuminator sources derived from hand-picked patterns, when the formula is used to
influence training clip selection in source optimization. We also show good correlation with fail prediction for a source
produced with hand-picked training clips and some level of optical proximity correction tuning.
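As a rough illustration of how such a spectral difficulty heuristic can be assembled (the factor definitions, weights, and the function name difficulty_score below are illustrative assumptions, not the paper's formula), one can score a clip by how much of its diffraction energy falls near or beyond the lens cutoff:

```python
import numpy as np

def difficulty_score(clip, pixel_nm=1.0, wavelength_nm=193.0, na=1.35,
                     weights=(0.6, 0.4)):
    """Toy spectral difficulty heuristic for a binary layout clip.

    Scores a clip by how much of its pattern energy lies near or beyond
    the coherent cutoff frequency NA/lambda. The two factors and their
    weights are illustrative placeholders, not the published formula.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(clip))) ** 2
    n = clip.shape[0]
    freqs = np.fft.fftshift(np.fft.fftfreq(n, d=pixel_nm))  # cycles/nm
    fx, fy = np.meshgrid(freqs, freqs)
    f = np.hypot(fx, fy)
    f_cut = na / wavelength_nm                        # coherent cutoff
    total = spectrum.sum() - spectrum[n // 2, n // 2]  # drop DC term
    if total <= 0:
        return 0.0
    near_cut = spectrum[(f > 0.7 * f_cut) & (f <= f_cut)].sum() / total
    beyond = spectrum[f > f_cut].sum() / total        # unprintable energy
    w1, w2 = weights
    return w1 * near_cut + w2 * beyond

# Example: a sub-resolution line/space clip puts most energy beyond cutoff.
clip = np.zeros((256, 256))
clip[:, ::8] = 1.0
print(difficulty_score(clip))
```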
Joint optimization (JO) of source and mask together is known to produce better SMO solutions than sequential
optimization of the source and the mask. However, large scale JO problems are very difficult to solve because the global
impact of the source variables causes an enormous number of mask variables to be coupled together. This work presents
innovations that minimize this runtime bottleneck. The proposed SMO parallelization algorithm allows separate mask
regions to be processed efficiently across multiple CPUs in a high performance computing (HPC) environment, despite
the fact that a truly joint optimization is being carried out with source variables that interact across the entire mask.
Building on this engine a progressive deletion (PD) method was developed that can directly compute "binding
constructs" for the optimization, i.e. our method can essentially determine the particular feature content which limits the
process window attainable by the optimum source. This method minimizes the uncertainty that heuristic
clustering/ranking metrics introduce into the search for an overall optimum source. An objective benchmarking of the
effectiveness of different pattern sampling methods was performed during post-optimization analysis. The PD serves as a
gold standard for developing optimum pattern clustering/ranking algorithms. This work shows that it is not necessary to
exhaustively optimize the entire mask together with the
source in order to identify these binding clips. If the number of clips to be optimized exceeds the practical limit of the
parallel SMO engine, one can start with a pattern selection step to achieve high clip-count compression before SMO.
With this LSSO capability one can address the challenging problem of layout-specific design, or improve the technology
source as cell layouts and sample layouts replace lithography test structures in the development cycle.
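A minimal sketch of the progressive-deletion loop, under the assumption that a callable optimize_source stands in for the parallel SMO engine and returns the attainable process-window metric for a clip set (both names are hypothetical):

```python
# Progressive-deletion sketch: repeatedly drop clips whose removal leaves
# the optimized process window unchanged, so the clips that survive are
# the "binding" ones. `optimize_source` is a stand-in for the parallel
# SMO engine, not part of the published code.
def progressive_deletion(clips, optimize_source, tol=1e-3):
    baseline = optimize_source(clips)          # optimum PW for full set
    binding = list(clips)
    i = 0
    while i < len(binding):
        trial = binding[:i] + binding[i + 1:]  # tentatively delete clip i
        if not trial:
            break
        pw = optimize_source(trial)
        if pw <= baseline + tol:
            # Deleting the clip did not relax the optimum, so clip i was
            # not binding; remove it for good.
            binding = trial
        else:
            i += 1                             # clip i limits the solution
    return binding
```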
Source-mask optimization (SMO) in optical lithography has in recent years been the subject of increased
exploration as an enabler of 22/20nm and beyond technology nodes [1-6]. It has been shown that intensive
optimization of the fundamental degrees of freedom in the optical system allows for the creation of non-intuitive
solutions in both the source and mask, which yields improved lithographic performance. This paper
will demonstrate the value of SMO software in resolution enhancement techniques (RETs). Major benefits
of SMO include improved through-pitch performance, the possibility of avoiding double exposure, and
superior performance on two-dimensional (2D) features. The benefits of optimizing only the source, only the mask,
and both source and mask together will be demonstrated. Furthermore, we
leverage the benefits from intensively optimized masks to solve large array problems in memory use models
(MUMs). Mask synthesis and data prep flows were developed to incorporate the usage of SMO, including
both RETs and MUMs, in several critical layers during 22/20nm technology node development.
Experimental assessment will be presented to demonstrate the benefits achieved by using SMO during
22/20nm node development.
This paper will describe the development, qualification, monitoring, and integration into a production environment of the
world's first fully programmable illuminator for optical lithography. FlexRay™, a programmable illuminator based on
a MEMS multi-mirror array developed for the ASML TWINSCAN XT:19x0i and TWINSCAN NXT series
immersion scanners, was first installed in January 2010 at Albany Nanotech, with subsequent tools installed in IBM's
East Fishkill Manufacturing facility. After a brief overview of the concept and benefits of FlexRay, this paper will
provide a comprehensive assessment of its reliability and imaging performance. A CD-based pupil qualification
(CDPQ) procedure will be introduced and shown to be an efficient and effective way to monitor pupil performance.
Various CDPQ and in-resist measurement results will be described, offering convincing evidence that FlexRay reliably
generates high-quality pupils and is well suited for high volume manufacturing at lithography's leading edge.
In recent years the potential of Source-Mask Optimization (SMO) as an enabling technology for 22nm-and-beyond lithography
has been explored and documented in the literature [1-5]. It has been shown that intensive optimization of the fundamental
degrees of freedom in the optical system allows for the creation of non-intuitive solutions in both the mask and the
source, which leads to improved lithographic performance. These efforts have driven the need for improved controllability
in illumination [5-7] and have pushed the required optimization performance of mask design [8, 9]. This paper will present recent
experimental evidence of the performance advantage gained by intensive optimization, and enabling technologies like pixelated
illumination. Controllable pixelated illumination opens up new regimes in the control of proximity effects [1, 6, 7], and we
will show corresponding examples of improved through-pitch performance in 22nm Resolution Enhancement Technique
(RET). Simulation results will back up the experimental results and detail the ability of SMO to drive exposure-count reduction,
as well as a reduction in process variation due to critical factors such as Line Edge Roughness (LER), Mask Error
Enhancement Factor (MEEF), and the Electromagnetic Field (EMF) effect. The benefits of running intensive optimization
with both source and mask variables jointly have been previously discussed [1-3]. This paper will build on these results by
demonstrating large-scale jointly-optimized source/mask solutions and their impact on design-rule enumerated designs.
Source optimization in optical lithography has been the subject of increased exploration in recent years [1-4], resulting in
the development of multiple techniques including global optimization of process window [4]. The performance
advantages of source optimization have been demonstrated through theory, simulation, and experiment. This paper will
emphasize global optimization of sources over multiple patterns, e.g. co-optimization of critical SRAM cells and the
critical pitches of random logic, and implement global source optimization into current resolution enhancement
techniques (RETs). The effect of considering multiple patterns on the optimal source is investigated. We demonstrate
that a source optimized for a limited set of patterns also works for a large layout clip. Through theoretical analysis and
simulations, we explain that only critical patterns and/or critical combinations of patterns determine the final optimal
source; for example those patterns that contain constraints which are active in the solution. Furthermore, we illustrate,
through theory and simulation, that pixelated sources have better performance than generic sources and that in general it
is impossible for generic sources to construct a truly optimal solution. Sensitivity, tool matching, and lens heating issues
for pixelated sources are also discussed in this paper. Finally, we use an RET example with wafer data to demonstrate the
benefits of global source optimization.
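The claim that only active constraints shape the optimum can be illustrated with a toy linear max-min model (synthetic data; not the paper's formulation), in which nonnegative source weights maximize the worst-case metric across patterns and the binding patterns are read off as the active constraints:

```python
import numpy as np
from scipy.optimize import linprog

# Toy model: maximize the worst-case image metric over several patterns
# with a linear model m_p = A[p] @ s, where s holds nonnegative
# source-pixel weights summing to 1. Only patterns whose constraints are
# active at the optimum determine the source. A is synthetic.
rng = np.random.default_rng(0)
n_patterns, n_pixels = 5, 12
A = rng.uniform(0.0, 1.0, size=(n_patterns, n_pixels))

# Variables x = [s (n_pixels), t]; maximize t s.t. A s >= t, sum(s) = 1.
c = np.zeros(n_pixels + 1)
c[-1] = -1.0                                       # linprog minimizes -t
A_ub = np.hstack([-A, np.ones((n_patterns, 1))])   # t - A s <= 0
b_ub = np.zeros(n_patterns)
A_eq = np.concatenate([np.ones(n_pixels), [0.0]])[None, :]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * n_pixels + [(None, None)])
s, t = res.x[:-1], res.x[-1]
active = np.isclose(A @ s, t, atol=1e-8)
print("worst-case metric:", t)
print("patterns with active constraints:", np.flatnonzero(active))
```

Typically only a few of the constraints come out active; perturbing any inactive pattern leaves the optimal source unchanged, which is the behavior the abstract describes.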
Traditional OPC is essentially an iterated feedback process, in which the position of each target edge is corrected by
adjusting a controlling mask edge. However, true optimization adjusts the mask variables collectively, and in so-called
SMO approaches (for Source Mask Optimization) the source variables are adjusted as well. Optimized masks often have
high edge density if synthesis methods are used in an effort to obtain a more global solution, and the correspondence
between individual mask edges and printed target edges becomes less clear-cut than in traditionally OPC'd masks.
Restrictions on phase shift and MEEF tend to reduce this departure from traditional solutions, but they trade off the
theoretical performance advantage in dose and focus latitude that phase shift provides for a reduced sensitivity to thick
mask topography and to manufacturing error. Mask variables couple across long distances only in the indirect sense of
stitched connection across chains of neighbor-to-neighbor interactions, but source variables interact directly across entire
masks. Source+mask optimization of large areas therefore involves long-range communication across the parts of the
calculation, though the number of source variables involved is small. Tradeoffs between source structure, pattern
diversity, and design regularity are illustrated, taking into account the limited (but unknown) number of binding features
in a large layout. SMO's exploitation of complex source designs is shown to provide superior solutions to those obtained
by mask optimization alone.
Moreover, in development work the ability to adjust the source opens up new options in process engineering, and these
will become particularly valuable when future exposure tools provide greater flexibility in programmable source control.
Such capabilities can be explored in a preliminary way by using programmed multi-scans to compose optimized
compound sources with e.g. multiple poles or annular elements.
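A structural sketch of the computation pattern this abstract describes, assuming synthetic gradient formulas and hypothetical function names (region_gradients, joint_step): mask regions are updated independently in parallel, while only the small source-gradient vector is reduced across regions:

```python
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def region_gradients(args):
    """Per-region work: a local mask gradient plus this region's
    contribution to the global source gradient. The formulas are
    synthetic stand-ins for a real image-fidelity objective."""
    mask_region, source = args
    grad_mask = 2.0 * mask_region                # region-private term
    grad_source = np.outer(source, mask_region.mean(axis=0)).sum(axis=1)
    return grad_mask, grad_source

def joint_step(mask_regions, source, lr=0.01):
    # Mask regions are independent given the source, so they can be
    # processed on separate CPUs; only the small source-gradient vector
    # needs to be summed (reduced) across all regions.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(region_gradients,
                                [(m, source) for m in mask_regions]))
    new_masks = [m - lr * gm for m, (gm, _) in zip(mask_regions, results)]
    source_grad = sum(gs for _, gs in results)   # global reduction
    return new_masks, source - lr * source_grad
```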
We demonstrate experimentally for the first time the feasibility of applying SMO technology using pixelated illumination. Wafer images of SRAM contact holes were obtained to confirm the feasibility of using SMO for 22nm node lithography. There are still challenges in other areas of SMO integration, such as mask build, mask inspection and repair, process modeling, and full-chip design issues, as well as in pixelated illumination, which is the emphasis of this paper. In this first attempt we successfully designed a manufacturable pixelated source and had it fabricated and installed in an exposure tool. The printing result is satisfactory, although some deviations of the wafer image from simulation predictions remain. Further experiments and more detailed modeling of the impact of errors in source design and manufacturing will follow. We believe that tightening all specifications and optimizing all procedures will make pixelated illumination a viable technology for the 22nm node and beyond.
Near-field interference lithography is a promising variant of multiple patterning in semiconductor device fabrication
that can potentially extend lithographic resolution beyond the current materials-based restrictions on the
Rayleigh resolution of projection systems. With H2O as the immersion medium, non-evanescent propagation
and optical design margins limit the achievable pitch to approximately 0.53λ/n_H2O ≈ 0.37λ. Non-evanescent images
are constrained only by the comparatively large resist indices (typically ≈1.7) to a pitch resolution of 0.5λ/n_resist
(typically ≈0.29λ). Near-field patterning can potentially exploit evanescent waves and thus achieve higher spatial
resolutions. Customized near-field images can be achieved through the modulation of an incoming wavefront
by what is essentially an in-situ hologram that has been formed in an upper layer during an initial patterned
exposure. Contrast Enhancement Layer (CEL) techniques and Talbot near-field interferometry can be considered
special cases of this approach.
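The quoted pitch limits follow directly from the stated coefficients; a quick check at λ = 193 nm, assuming the standard immersion index n_H2O ≈ 1.44:

```python
wavelength = 193.0            # nm, ArF
n_h2o, n_resist = 1.44, 1.70  # typical 193 nm indices (assumed values)

immersion_pitch = 0.53 * wavelength / n_h2o  # ≈ 0.37*lambda ≈ 71 nm
resist_pitch = 0.5 * wavelength / n_resist   # ≈ 0.29*lambda ≈ 57 nm
print(f"immersion-limited pitch: {immersion_pitch:.1f} nm "
      f"({immersion_pitch / wavelength:.2f} lambda)")
print(f"resist-index-limited pitch: {resist_pitch:.1f} nm "
      f"({resist_pitch / wavelength:.2f} lambda)")
```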
Since the technique relies on near-field interference effects to produce the required pattern on the resist, the
shape of the grating and the design of the film stack play a significant role in the outcome. As a result, it is
necessary to resort to full diffraction computations to properly simulate and optimize this process.
The next logical advance for this technology is to systematically design the hologram and the incident wavefront
which is generated from a reduction mask. This task is naturally posed as an optimization problem, where
the goal is to find the set of geometric and incident wavefront parameters that yields the closest fit to a desired
pattern in the resist. As the pattern becomes more complex, the number of design parameters grows, and the
computational problem becomes intractable (particularly in three-dimensions) without the use of advanced numerical
techniques. To treat this problem effectively, specialized numerical methods have been developed. First,
gradient-based optimization techniques are used to accelerate convergence to an optimal design. To compute
derivatives with respect to the parameters, an adjoint-based method was developed. Using the adjoint technique, only two
electromagnetic problems need to be solved per iteration to evaluate the cost function and all the components
of the gradient vector, independent of the number of parameters in the design.
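The two-solves-per-iteration property is generic to adjoint methods; a minimal sketch on a small linear system standing in for the electromagnetic problem (the cost function and matrices below are synthetic):

```python
import numpy as np

# Adjoint-gradient sketch: the field e solves M(p) e = b, the cost is
# F(e) = ||e - e_target||^2, and dF/dp_k = -lam^T (dM/dp_k) e, where the
# adjoint lam solves M^T lam = dF/de. One forward and one adjoint solve
# yield every gradient component, independent of the parameter count.
def adjoint_gradient(M, dM_dp, b, e_target):
    e = np.linalg.solve(M, b)                 # forward solve
    dF_de = 2.0 * (e - e_target)
    lam = np.linalg.solve(M.T, dF_de)         # adjoint solve
    grad = np.array([-lam @ (dMk @ e) for dMk in dM_dp])
    cost = float(np.sum((e - e_target) ** 2))
    return cost, grad

# Tiny example with two parameters.
rng = np.random.default_rng(1)
n = 4
M = np.eye(n) + 0.1 * rng.standard_normal((n, n))
dM_dp = [rng.standard_normal((n, n)) for _ in range(2)]
cost, grad = adjoint_gradient(M, dM_dp, rng.standard_normal(n),
                              rng.standard_normal(n))
print(cost, grad)
```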
In this paper, we will outline the approach for optimizing the illumination conditions to print three-dimensional images
in resist stacks of varying sensitivity in a single exposure. The algorithmic approach for achieving both optimal common
and weakest windows is presented. Results will be presented which demonstrate the ability of the technique to create
three-dimensional structures. The performance of the common and weakest window formulations will be explored using
this approach. Additionally, physical restrictions limit the types of patterns that can be printed with a single exposure in
this manner, so the capabilities of the technique are also explored.
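The distinction between the common and weakest window objectives can be made concrete with dose intervals (synthetic numbers): the common window is the interval all features share, and it can be much smaller than the weakest individual window when the per-feature windows are misaligned:

```python
import numpy as np

# Each feature i tolerates doses in [lo[i], hi[i]]. The common window is
# the dose range every feature shares; the weakest window is the tightest
# single-feature range. Dose limits below are synthetic.
lo = np.array([0.90, 1.00, 0.95])
hi = np.array([1.05, 1.15, 1.12])

common = max(0.0, hi.min() - lo.max())   # shared [max(lo), min(hi)]
weakest = (hi - lo).min()                # tightest single feature
print(f"common dose window:  {common:.2f}")   # 0.05: misaligned windows
print(f"weakest dose window: {weakest:.2f}")  # 0.15: each alone is wide
```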
There is a surprising lack of clarity about the exact quantity that a lithographic source map should specify. Under the
plausible interpretation that input source maps should tabulate radiance, one will find with standard imaging codes that
simulated wafer plane source intensities appear to violate the brightness theorem. The apparent deviation (a cosine
factor in the illumination pupil) represents one of many obliquity/inclination factors involved in propagation through the
imaging system whose interpretation in the literature is often somewhat obscure, but which have become numerically
significant in today's hyper-NA OPC applications. We show that the seeming brightness distortion in the illumination
pupil arises because the customary direction-cosine gridding of this aperture yields non-uniform solid-angle subtense in
the source pixels. Once the appropriate solid angle factor is included, each entry in the source map becomes
proportional to the total |E|^2 that the associated pixel produces on the mask. This quantitative definition of lithographic
source distributions is consistent with the plane-wave spectrum approach adopted by litho simulators, in that these
simulators essentially propagate |E|^2 along the interfering diffraction orders from the mask input to the resist film. It
can be shown using either the rigorous Franz formulation of vector diffraction theory, or an angular spectrum approach,
that such an |E|^2 plane-wave weighting will provide the standard inclination factor if the source elements are incoherent
and the mask model is accurate. This inclination factor is usually derived from a classical Rayleigh-Sommerfeld
diffraction integral, and we show that the nominally discrepant inclination factors used by the various diffraction
integrals of this class can all be made to yield the same result as the Franz formula when rigorous mask simulation is
employed, and further that these cosine factors have a simple geometrical interpretation. On this basis one can then
obtain for the lens as a whole the standard mask-to-wafer obliquity factor used by litho simulators. This obliquity factor
is shown to express the brightness invariance theorem, making the simulator's output consistent with the brightness
theorem if the source map tabulates the product of radiance and pixel solid angle, as our source definition specifies. We
show by experiment that dose-to-clear data can be modeled more accurately when the correct obliquity factor is used.
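A numeric sketch of the solid-angle correction defined above, assuming a uniform square direction-cosine grid (the function name and grid conventions are illustrative):

```python
import numpy as np

def source_weights_from_radiance(radiance, sigma_max=1.0):
    """Convert a radiance map tabulated on a uniform square
    direction-cosine grid (sigma_x, sigma_y) into |E|^2-proportional
    source weights by applying the per-pixel solid angle
    dOmega = dsx * dsy / cos(theta), with
    cos(theta) = sqrt(1 - sx^2 - sy^2), per the definition above."""
    n = radiance.shape[0]
    s = np.linspace(-sigma_max, sigma_max, n)
    sx, sy = np.meshgrid(s, s)
    r2 = sx ** 2 + sy ** 2
    cos_theta = np.sqrt(np.clip(1.0 - r2, 0.0, None))
    d_sigma = (s[1] - s[0]) ** 2            # uniform grid cell area
    weights = np.zeros_like(radiance)
    inside = (r2 < 1.0) & (cos_theta > 0)
    weights[inside] = radiance[inside] * d_sigma / cos_theta[inside]
    return weights
```

The 1/cos(theta) factor is exactly the apparent brightness distortion the abstract attributes to direction-cosine gridding: oblique pixels subtend more solid angle than their grid cell suggests.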
We provide an expanded description of the global algorithm for mask optimization introduced in our earlier papers, and discuss auxiliary optimizations that can be carried out in the problem constraints and film stack. Mask optimization tends inherently to be a problem with non-convex quadratic constraints, but for small problems we can mitigate this difficulty by exploiting specialized knowledge that applies in the lithography context. If exposure latitude is approximated as maximization of edge slope between image regions whose intensities must print with opposite polarity, we show that the solution space can be approximately divided into regions that contain at most one local minimum. Though the survey of parameter space to identify these regions requires an exhaustive grid search, this search can be accelerated using heuristics, and is not the rate-limiting step at SRAM scale or below. We recover a degree of generality by using a less simplified objective function when we actually assess the local minima. The quasi-binary specialization of lithographic targets is further exploited by searching only in the subspace formed by the dominant joint eigenvectors for dark region intensity and bright region intensity, typically reducing problem dimensionality to less than half that of the full set of frequency-domain variables (i.e. collected diffraction orders). Contrast in this subspace across the bright/dark edge will approximately reflect exposure latitude when we apply the standard fixed edge-placement constraints of lithography. However, during an exploratory stage of optimization we can define preliminary tolerances which more explicitly reflect constraints on devices, e.g. as is done with compactor codes for design migration. Our algorithm can handle vector imaging in a general way, but for the special case of unpolarized illumination and a lens having radial symmetry (but arbitrary source shape) we show that the bilinear function which describes vector interference within the film stack can be expressed in terms of three generic radial functions, enabling rapid numerical evaluation of the Hopkins kernel. By inspection these functions show that one can in principle recover classical scalar-like imaging even at high NA by exposing a very thin layer spaced above a reflective substack. The reflected image largely restores destructive interference in TM polarized fringes, if proper phasing is achieved. With an ideal reflector, the first-order azimuthal contrast loss term vanishes in all TCC components, and complete equivalence to scalar imaging is obtained in classical two-beam imaging.
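A sketch of the subspace reduction described above, taking the dominant eigenvectors of the summed bright/dark operators as one plausible reading of "joint eigenvectors" (the matrices below are synthetic):

```python
import numpy as np

# Bright- and dark-region intensities are quadratic forms m^H B m and
# m^H D m in the collected diffraction orders m. Searching only in the
# span of the dominant eigenvectors of B + D cuts the variable count
# well below the full frequency-domain dimension.
rng = np.random.default_rng(2)
n, k = 64, 24                     # full order count, retained subspace
X, Y = rng.standard_normal((2, n, n))
B, D = X @ X.T, Y @ Y.T           # synthetic PSD intensity operators

evals, evecs = np.linalg.eigh(B + D)     # eigenvalues ascending
V = evecs[:, -k:]                        # dominant joint eigenvectors
B_r, D_r = V.T @ B @ V, V.T @ D @ V      # reduced quadratic forms
print(B.shape, "->", B_r.shape)          # (64, 64) -> (24, 24)
```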
In three-dimensional (3D) optical elements, light interacts with the element throughout its entire volume (as opposed to a discrete set of surfaces, as in traditional optics). This allows for more degrees of freedom in shaping the optical response, in particular creating shift-variant responses. We have used this property in a number of ways to acquire 3D object information from both reflective and fluorescent objects under a variety of illumination conditions, including laser-line-scan, rainbow and uniform white light. The key benefits of using 3D optics are summarized as excellent resolution over long working distances, reduced or completely eliminated scanning, and simultaneous spectral
imaging. Our research addresses the physics of 3D optical elements, their fabrication, and computational methods for maximal information extraction. In this paper, we first overview the properties of 3D optical elements and then we describe a fabrication and assembly method. Our approach, termed Nanostructured Origami, is appropriate for manufacturing micro-scale optical components which also include sub-wavelength optical elements and non-optical components, e.g. energy storage.
The Nanostructured Origami™ 3D Fabrication and Assembly Process is a
method of manufacturing 3D nanosystems using exclusively 2D litho tools. The 3D structure is obtained by folding a nanopatterned 2D substrate. We report on the materials, actuation, and modeling aspects of the manufacturing process, and present experimental results from fabricated structures.