Semiconductor process development represents a tremendous opportunity for AI-based approaches, which excel at automating routine tasks and recognizing patterns in data. With the right toolset, process engineers can leverage AI models in their day-to-day development. Nevertheless, several key technical challenges must be tackled to successfully implement AI in a semiconductor fabrication environment. For example, model requirements and desired learnings differ vastly between process engineers performing R&D, technology ramp, and high-volume manufacturing. Moreover, preservation of data security remains a pressing issue: most AI models rely on large data sets that cannot be shared between manufacturers. In this talk, we will review SandBox’s key innovations in these areas. We will cover critical applications for patterning, etch, and deposition unit process optimization, as well as new opportunities for process co-optimization.
The march toward miniaturization of semiconductor devices places strong constraints on metrology techniques. Effective process control for current and next-generation device manufacturing demands accurate, fast, and preferably non-destructive characterization of complex three-dimensional structures. No currently existing metrology technique can meet all of these challenges in isolation. We present a hybrid metrology solution in the form of a Physics-Enabled AI system based on the commercial software tool SandBox Studio AI. By integrating information from diverse metrology sources, the system generates detailed, high-fidelity 3D reconstructions and allows measurements to be extracted from various planes within the structure, while minimizing measurement-related expenses and material waste. The method’s efficacy was demonstrated on two 3D structures, a Gate-All-Around (GAA) FET and a 3D NAND slit, achieving sub-nm accuracy even with limited metrology input data.
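As a toy illustration of combining information from diverse metrology sources, the sketch below fuses two noisy measurements of the same CD by inverse-variance weighting. This is a generic statistical stand-in for hybrid metrology, not the SandBox Studio AI method; all values are invented.

```python
import numpy as np

def fuse(values, sigmas):
    """Inverse-variance weighted fusion of independent measurements."""
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    est = np.average(values, weights=w)
    return est, (1.0 / w.sum()) ** 0.5   # fused estimate and its std. dev.

# Hypothetical CD readings (nm) from two tools, e.g. OCD and X-SEM.
cd, sigma = fuse([23.4, 24.1], [0.8, 0.5])
print(f"fused CD = {cd:.2f} ± {sigma:.2f} nm")
```

The fused uncertainty is always smaller than that of the best single tool, which is the statistical motivation for hybridizing metrology sources.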
Reducing process development time and speeding up time to market are perennial challenges in the microelectronics industry. The development of etch models that permit optimization across the wafer would enable manufacturers to optimize process design flows and predict process defects before a single wafer is run. The challenges of across-wafer uniformity optimization include the large variety of features across the wafer, etch variations that occur at multiple scales within the plasma chamber, feature metrology, and computationally expensive model development. Compounding these challenges are trade-offs between data quality and time/cost-effectiveness, the wide variety of measurement information provided by different tools, and the sparsity and inconsistency of human-collected data. We address these challenges with a feature- and wafer-level modeling approach. First, experiments are conducted for a variety of etch conditions (e.g., pressure, gas composition, flow rate, temperature, power, and bias). Second, a feature-level model is calibrated at multiple sites across the wafer based on OCD and/or cross-sectional SEM measurements. Finally, the calibrated model is used to predict an optimal set of process conditions to preserve uniformity across the wafer and to meet recipe targets. We demonstrate the methodology using SandBox Studio™ AI for a FinFET application. Specifically, we show the rapid and automated calibration of feature-level models using experimental measurements of the 3D feature etch at a variety of process conditions. Automated image segmentation of X-SEM data is also performed for a single case using Weave® to demonstrate how such data can be acquired quickly in a development environment. We then demonstrate the effectiveness of the reduced-order model at predicting optimal recipe conditions to improve overall recipe performance.
We show how, with this hybrid-metrology computational approach, a process window that yields 89.2% of the wafer can be captured.
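A minimal sketch of the calibrate-then-optimize loop described above: fit a simple per-site etch-depth model to a handful of recipes, then scan a recipe grid for the condition with the smallest across-wafer spread. The linear model form, recipes, and depth values are all invented for illustration.

```python
import numpy as np

recipes = np.array([[10.0, 300.0],          # [pressure (mTorr), power (W)]
                    [10.0, 500.0],
                    [20.0, 300.0],
                    [20.0, 500.0]])
depths = np.array([[100.0,  96.0,  91.0],   # etch depth (nm) at 3 wafer sites
                   [141.0, 138.0, 131.0],
                   [118.0, 111.0, 107.0],
                   [160.0, 154.0, 149.0]])

# Calibrate depth = a*pressure + b*power + c independently at each site.
X = np.column_stack([recipes, np.ones(len(recipes))])
coeffs, *_ = np.linalg.lstsq(X, depths, rcond=None)   # shape (3 params, 3 sites)

# Scan candidate recipes; pick the one minimizing the site-to-site range.
P, W = np.meshgrid(np.linspace(5, 25, 21), np.linspace(250, 550, 31))
cand = np.column_stack([P.ravel(), W.ravel(), np.ones(P.size)])
pred = cand @ coeffs                                  # (651 candidates, 3 sites)
spread = pred.max(axis=1) - pred.min(axis=1)
best = cand[np.argmin(spread)]
print(f"most uniform recipe: {best[0]:.1f} mTorr, {best[1]:.0f} W")
```

In practice the per-site model would be a reduced-order physics model rather than a linear fit, but the structure of the loop (calibrate at sites, predict everywhere, minimize non-uniformity) is the same.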
Current best practices for the extraction of critical dimensions (CDs) from microscopic images require semiconductor process engineers to analyze images one by one, which is tedious, prone to human bias, time-consuming, and expensive. Automated CD extraction using machine learning methodologies is an approach to accelerate and improve the accuracy of this process. Deep convolutional neural networks in particular can be used effectively for image segmentation and identification of different material regions; however, providing enough annotated data for training and testing is an ongoing challenge. Here, we demonstrate a method where only one sample image is needed for the neural net to learn how to extract the CDs of interest. The methodology is specifically demonstrated for extracting CDs from a metal-assisted chemical etching process. Each experimental SEM image is automatically measured in about 45 seconds. The extracted CD measurements are within 4 nm (<5% error) of the human-measured CDs. This automated extraction enables process engineers to improve the accuracy of their metrology workflow, reduce their total metrology costs, and accelerate their process development.
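The measurement step after segmentation reduces to finding feature edges and converting pixels to nanometers. The sketch below does this for a synthetic 1D SEM line profile with a fixed threshold; it is a simplified stand-in for the learned segmentation described above, and the scale and profile are invented.

```python
import numpy as np

nm_per_px = 2.0                            # assumed pixel scale
profile = np.zeros(200)
profile[80:140] = 1.0                      # bright line feature
profile += 0.05 * np.random.default_rng(1).standard_normal(200)  # SEM noise

mask = profile > 0.5                       # fixed threshold for the sketch
edges = np.flatnonzero(np.diff(mask.astype(int)))  # rising/falling transitions
left, right = edges[0] + 1, edges[-1] + 1
cd_nm = (right - left) * nm_per_px
print(f"extracted CD: {cd_nm:.1f} nm")
```

A real pipeline would segment a full 2D image and report many CDs per image, but each individual measurement comes down to an edge-to-edge pixel distance times a calibration factor, as shown here.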
Identification of optimal recipes for multi-step and cyclic etch processes, where the outcome of each step depends on the progression of the previous steps, is a major challenge. Selecting the order and duration of each step is typically performed by a tedious trial-and-error process in which the number of experimental trials scales exponentially with process complexity. Here we present a simulation-based methodology that significantly accelerates the process. We use limited experimental data taken at various process conditions, which may include pressure, gas type, gas flow rate, power, bias, and time, to calibrate a step-aware, reduced-order, physics-based etch and deposition model. This model is used to generate predictions with steps permuted in any desired order and duration. The calibrated model predicts the ordering, timing, and possible cycling of each step to achieve desired etch targets. The methodology is demonstrated on a multilayer stack with three possible steps, including etch and deposition. It is shown that the total number of experiments required for the proposed methodology is significantly less than that required by standard methods like full-factorial design of experiments. We also demonstrate how the etch data and the resulting calibrated model can be used to determine the optimal etch recipe for different aperture and/or mask geometries without having to perform further experiments.
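Once a step-aware model exists, searching over step orders and durations is cheap. The sketch below brute-forces all permutations and a small duration grid against an invented surrogate in which etch rate slows with depth and deposition boosts a later etch; the rates, coupling terms, and target are illustrative, not a calibrated model.

```python
import itertools

def run_sequence(order, durations):
    """Toy step-aware model: each step's effect depends on prior state."""
    depth, passiv = 0.0, 0.0
    rates = {"etch_A": 10.0, "etch_B": 6.0, "dep": -2.0}   # nm/s; dep adds film
    for step, t in zip(order, durations):
        r = rates[step]
        if step.startswith("etch"):
            r *= 1.0 / (1.0 + 0.01 * depth)    # aspect-ratio slowdown
            r *= 1.0 + 0.05 * passiv           # passivation boosts the etch
            depth += r * t
        else:
            passiv += -r * t                   # deposition thickens passivation
    return depth

target = 180.0                                  # desired etch depth (nm)
best = min(
    ((o, d) for o in itertools.permutations(["etch_A", "etch_B", "dep"])
            for d in itertools.product([5, 10, 20], repeat=3)),
    key=lambda od: abs(run_sequence(*od) - target),
)
print("best order:", best[0], "durations:", best[1],
      f"depth = {run_sequence(*best):.1f} nm")
```

The point of the exponential-scaling argument in the abstract is visible here: 3! orders times 3^3 durations is already 162 virtual runs, which are trivial in simulation but prohibitive as physical experiments.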
The development of new technologies and advanced nodes is capital-intensive due to process design strategies that involve dependent unit processes with different yields and performances. This has led to the exploration of model-based optimization to cut the cost and time of recipe creation; however, computational optimization of semiconductor processes is quite challenging due to multi-dimensional parameter spaces and limited experimental data. SandBox Studio™ AI is a computational tool that automatically builds a hybrid physics-based and machine learning model that can be used to predict optimal process recipes and explore novel process changes such as different incoming mask geometries and step durations. Herein, we show the use of SandBox Studio™ AI to build a computational representation of a cyclic etch and deposition process for a high-aspect-ratio channel etch exhibiting the following detrimental effects: bowing, resist over-etching, clogging via deposition, and twisting. The model was calibrated to a synthetic data set of thirteen experiments with five varying process parameters. An optimal recipe was then predicted that minimized the observed detrimental effects. The model was then used to explore different incoming mask geometries and step durations to improve the recipe even further. This capability is made possible by the software’s foundational physics-based model and is not possible using conventional statistics- and machine-learning-based tools.
KEYWORDS: Process modeling, Etching, Optimization (mathematics), Model-based design, 3D modeling, Calibration, Statistical modeling, Statistical analysis, Space mirrors, Process engineering
A method for automated creation and optimization of multistep etch recipes is presented. Here we demonstrate how an automated model-based process optimization approach can cut the cost and time of recipe creation by 75% or more as compared with traditional experimental design approaches. Underlying the success of the method are reduced-order physics-based models for simulating the process and performing subsequent analysis of the multi-dimensional parameter space. SandBox Studio™ AI is used to automate the model selection, model calibration and subsequent process optimization. The process engineer is only required to provide the incoming stack and experimental measurements for model calibration and updates. The method is applied to the optimization of a channel etch for 3D NAND devices. A reduced-order model that captures the physics and chemistry of the multistep reaction is automatically selected and calibrated. A mirror AI model is simultaneously and automatically created to enable nearly instantaneous predictions across the large process space. The AI model is much faster to evaluate and is used to make a Quilt™, a 2D projection of etch performance in the multidimensional process parameter space. A Quilt™ process map is then used to automatically determine the optimal process window to achieve the target CDs.
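A process-map projection like the one described above can be sketched in a few lines: evaluate a fast surrogate over a 2D grid of two process knobs and flag the region meeting the CD spec. The surrogate below is an invented linear stand-in, not the SandBox model, and the knobs, spec, and coefficients are illustrative.

```python
import numpy as np

def cd_model(pressure, time):
    """Toy surrogate: CD (nm) as a function of two process knobs."""
    return 60.0 - 0.8 * pressure + 1.5 * time

# 2D projection of the process space: pressure (mTorr) vs. step time (s).
P, T = np.meshgrid(np.linspace(5, 30, 26), np.linspace(10, 40, 31))
cd = cd_model(P, T)
window = (cd >= 70.0) & (cd <= 80.0)     # spec: 75 ± 5 nm
print(f"{window.mean():.1%} of the scanned space meets spec")
```

Because the surrogate evaluates in microseconds, the entire grid can be recomputed interactively as the spec or the fixed parameters change, which is what makes a 2D process map practical as a recipe-selection tool.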
Semiconductor process engineers currently spend almost 10% of their time extracting critical dimensions (CDs) from microscope images. Images are analyzed one by one, which is tedious, prone to human bias, time-consuming, and expensive. Accurate, automated detection of edges and of the different materials in a stack are the key technical challenges for computer-extracted CDs. Here we demonstrate the performance of a method for edge detection and material detection via segmentation methods embodied in the software tool Weave™. This approach uses optimized thresholding via a level set method to identify multiple edges and materials without the need for extensive, annotated, experimental training data. The method is evaluated based on accuracy (prediction of CDs) and materials identification (ability to identify the different materials in an image). Based on evaluation of the method with 20 test SEM images, the method’s performance is excellent. Ninety percent of the CDs measured from the automated analysis are within 2% of the actual values. The errors for the remaining 10% of measurements range from 4-9%.
As the critical dimensions (CDs) of etch profiles continue to decrease, precise control of plasma etch processing becomes increasingly important. Achieving this control requires optimizing etch recipes, which is time-consuming and expensive, as an extensive number of experiments must be performed. Here we present a method for the prediction of process windows to achieve target CDs for high-aspect-ratio trenches using model-based experimental design. A reduced-order model of the physics and chemistry of the etch is used to identify the best experiments to perform to calibrate the model. The model is then used to efficiently explore the process parameter space to identify the largest ranges of process parameters that achieve desired ranges of CDs. The methodology is practically demonstrated on a three-step trench etch through three layers of material consisting of spin-on-glass, spin-on-carbon, and silicon. It is found that this physics-model-based method requires less than half as many experiments to identify the optimal etch recipe as full-factorial design of experiments.
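The core of model-based experimental design is proposing the next experiment where the model is least certain. A minimal sketch, assuming a linear etch-rate model and a parametric bootstrap for uncertainty: refit the model under resampled measurement noise and propose the candidate where the ensemble disagrees most. The data, noise level, and model form are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
power = np.array([100.0, 200.0, 400.0])        # experiments run so far (W)
rate = np.array([12.0, 25.0, 46.0])            # measured etch rate (nm/min)

candidates = np.linspace(100, 600, 51)         # possible next experiments
preds = []
for _ in range(200):                           # parametric bootstrap ensemble
    noisy = rate + rng.normal(0, 1.5, rate.size)   # resample measurement noise
    c = np.polyfit(power, noisy, 1)
    preds.append(np.polyval(c, candidates))
spread = np.std(preds, axis=0)                 # model disagreement per candidate
next_power = candidates[np.argmax(spread)]
print(f"next experiment: power = {next_power:.0f} W")
```

For a linear model the uncertainty grows with distance from the data, so the proposal lands at the far extrapolation edge; with a nonlinear physics model the maximum-uncertainty point can fall anywhere in the parameter space, which is what makes the selection non-obvious and worth automating.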
KEYWORDS: Etching, Calibration, 3D modeling, Process modeling, Solid modeling, Silicon, Model-based design, Data modeling, 3D acquisition, Visual process modeling
We present a model-based experimental design methodology for accelerating 3D etch optimization, with a demonstration on 3D NAND structures. The design and optimization of etch recipes for such 3D structures face significant challenges, requiring costly and time-consuming experiments to achieve the required tolerances. 3D NAND memory devices additionally require accurate nanofabrication of high-aspect-ratio trenches and isolation slits, which are challenging to manufacture reliably within specifications. Our model efficiently captures the relevant physical and chemical processes, which allows it to be calibrated using a limited number of experimental samples, and it can reproduce realistic 3D etches of multilayer materials, including bowing, necking, and tapering. Since our GPU-powered simulations run in a matter of minutes, the relevant process parameter space can be explored extensively in a short amount of time. The calibrated physics-based model can be used to train adaptive machine-learning-based heuristics that enable near-instant queries, for example for data visualization and analytics. With this approach, we show a rapid methodology for locating optimal windows in the process parameter space for etching 3D structures. Optimality metrics under consideration include both conformance to specified tolerances and robustness against process parameter variations. These techniques can reduce cost and time to market for complex multilayer three-dimensional device designs and improve semiconductor device yields.
A methodology is presented to virtually predict etch profiles on flexible substrates across multi-dimensional process spaces using a minimal number of calibration experiments. Simulations and predictions of the physics and chemical kinetics of plasma etch on flexible substrates are performed using the commercial software SandBox Studio™. The evolution of a trench profile is computed using surface kinetics models and the level set method. Local etch rates include visibility effects to account for partial shielding of the etch as the pattern develops, as well as the effects of redeposition. The results of the experiments are then used to update the calibrated model parameters. If the process objectives (e.g., sidewall angle, trench critical dimensions, and across-the-web uniformity) are not achieved, then a new set of experiments is suggested by the methodology. The process is repeated until the optimal process conditions are identified. The methodology is validated by experiments on etching line-space patterns of polysilicon films on polymer substrates. Results with reactive ion etching with either CF4 or HBr are shown, and the optimal etch recipes (power, etch time, and gas flow rates) are determined. It is found that this coupled simulation-experiment approach is much more efficient than full-factorial experimental design at predicting process outcomes. The methodology presented requires 66% fewer experiments, reducing the cost of development by a factor of three.
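The effect of visibility shielding on a developing trench can be illustrated with a toy front-evolution loop: a 1D heightmap stand-in for the level-set profile evolution above, where open mask regions etch vertically and a crude visibility factor slows the etch near the mask edges and at depth. All geometry and rate constants are invented.

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 201)
height = np.zeros_like(x)                    # initial flat surface
mask_open = np.abs(x) < 0.4                  # trench opening in the mask

rate0, dt, steps = 1.0, 0.05, 40
for _ in range(steps):
    depth = -height
    edge_dist = 0.4 - np.abs(x)              # distance to nearest mask edge
    # Toy visibility: deep points and points near the mask edge see less flux.
    vis = np.clip(edge_dist / (edge_dist + 0.2 * depth + 1e-9), 0.0, 1.0)
    height[mask_open] -= rate0 * dt * vis[mask_open]

print(f"trench depth at center: {-height[100]:.2f}")
print(f"trench depth near edge: {-height[130]:.2f}")
```

The resulting profile is deepest at the trench center and tapers toward the sidewalls, qualitatively reproducing the shielding effect; a true level-set solver additionally tracks sidewall motion and redeposition, which a heightmap cannot.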
Uniformity of critical dimensions (CDs) across a wafer is an increasing challenge as both CDs and tolerances shrink. Plasma etch uniformity is achieved in part through reactor design and in part through the operating conditions, or process recipe, of the reactor. The identification of a recipe for a specific etch process is time-consuming and expensive, requiring extensive experiments and metrology. Here we present two modules in SandBox Studio™, SB-Bayesian and SB-NeuralNet, to accelerate the prediction and optimization of etch recipes for across-the-wafer uniformity. A model of etch rates across the wafer is created that accounts for injector locations, gas flow rates and distribution, and plasma powers. Synthetic experiments on etching line-space patterns on 300 mm wafers are performed, and the CDs and their variations are computed at several hundred site locations. SB-Bayesian requires many fewer experiments to be calibrated and achieves an excellent qualitative match with the experimental data. SB-NeuralNet achieves comparable levels of accuracy to SB-Bayesian at predicting average CDs and uniformity, but it does not perform as well at predicting trends across the wafer. It is shown that neural nets require a prohibitive amount of experimental data to successfully predict wafer patterns. SB-Bayesian and SB-NeuralNet were used to create detailed process maps across the parameter space of interest to identify optimal recipes that achieve the required CDs and tolerances. Both modules can predict optimal recipe conditions for achieving identified target CD and uniformity metrics. Using these tools, etch recipes for across-the-wafer uniformity are rapidly optimized at lower cost.
The design and optimization of highly nonlinear and complex processes like plasma etching is challenging and time-consuming. Significant effort has been devoted to creating plasma profile simulators to facilitate the development of etch recipes. Nevertheless, these simulators are often difficult to use in practice due to the large number of unknown parameters in the plasma discharge and in the surface kinetics of the etched material, the dependency of the etch rate on the evolving front profile, and the disparate length scales of the system. Here, we expand on the development of a previously published, data-informed Bayesian approach embodied in the platform RODEo (Recipe Optimization for Deposition and Etching). RODEo is used to predict etch rates and etch profiles over a range of powers, pressures, gas flow rates, and gas mixing ratios for a CF4/Ar gas chemistry. Three examples are shown: (1) etch rate predictions of an unknown material “X” using simulated experiments for a CF4/Ar chemistry, (2) etch rate predictions of SiO2 in a Plasma-Therm 790 RIE reactor for a CF4/Ar chemistry, and (3) profile prediction using level set methods.
A two-dimensional cellular automata model for atomic layer etching (ALE) is presented and used to predict the etch rate and the evolution of the roughness of various surfaces as a function of the efficiencies, or probabilities, of the adsorption and removal steps in the ALE process. The atoms of the material to be etched are initially placed in a two-dimensional array several layers thick. The etch follows the two-step process of ALE. First, the initial reaction step (e.g., Cl reacting with Si) is assumed to occur at 100% efficiency, activating the exposed surface atoms; that is, all exposed atoms react with the etching gas. The second reaction step (e.g., Ar ion bombardment or sputtering) occurs with efficiencies that are assumed to vary depending on the exposure of the surface atoms relative to their neighbors and on the strength of bombardment. For sufficiently high bombardment or sputtering, atoms below the activated surface atoms can also be removed, which gives etch rates greater than one layer per ALE cycle. The bounds on the efficiencies of the second removal step are extracted from experimental measurements and fully detailed molecular dynamics simulations from the literature. A trade-off is observed between etch rate and surface roughness as the Ar ion bombardment is increased.
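The two-step cycle above can be sketched as a minimal cellular automaton: step 1 activates every exposed surface atom at 100% efficiency, and step 2 removes each activated atom with a fixed probability standing in for the bombardment efficiency. For brevity this sketch ignores the neighbor-dependent exposure weighting and sub-surface removal of the full model; grid size and probabilities are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
W, H = 50, 20
solid = np.ones((H, W), dtype=bool)            # row 0 is the top layer

def top_surface(solid):
    """Row index of the first solid atom in each column (the exposed atom)."""
    return solid.argmax(axis=0)

def ale_cycle(solid, p_remove):
    surf = top_surface(solid)
    activated = np.ones(W, dtype=bool)                   # step 1: full activation
    removed = activated & (rng.random(W) < p_remove)     # step 2: sputtering
    solid[surf[removed], np.flatnonzero(removed)] = False
    return solid

for _ in range(10):
    solid = ale_cycle(solid, p_remove=0.9)

surf = top_surface(solid)
print(f"mean etch depth: {surf.mean():.2f} layers, "
      f"roughness (std): {surf.std():.2f}")
```

Even this stripped-down version shows the basic trade-off: raising `p_remove` increases the etch per cycle toward one layer, while the stochastic misses are exactly what generates surface roughness.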
Predicting the etch and deposition profiles created using plasma processes is challenging due to the complexity of plasma discharges and plasma-surface interactions. Volume-averaged global models allow for efficient prediction of important processing parameters and provide a means to quickly determine the effect of a variety of process inputs on the plasma discharge. However, global models are limited by the simplifying assumptions used to describe the chemical reaction network. Here, a database of 128 reactions is compiled, with the corresponding rate constants collected from 24 sources, for an Ar/CF4 plasma using the platform RODEo (Recipe Optimization for Deposition and Etching). Six different reaction sets, employing anywhere from 12 to all 128 reactions, were tested to evaluate the impact of the reaction database on particle species densities and electron temperature. Because many of the reactions in our database had conflicting rate constants reported in the literature, we also present a method to handle those uncertainties when constructing the model, which includes weighting each reaction rate and filtering outliers. By analyzing the link between a reaction’s rate constant and its impact on the predicted plasma densities and electron temperatures, we determine the conditions under which a reaction is deemed necessary to the plasma model. The results of this study provide a foundation for determining the minimal set of reactions that must be included in the reaction set of the plasma model.
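One way to weight conflicting rate constants and filter outliers, in the spirit of the method above, is a robust median/MAD cut in log space followed by a weighted mean of the survivors. The cutoff, weights, and reported values below are illustrative, not from the actual RODEo database.

```python
import numpy as np

def consensus_rate(ks, weights=None, z_cut=3.5):
    """Robust consensus of conflicting literature rate constants."""
    ks = np.asarray(ks, dtype=float)
    logk = np.log10(ks)                          # rate constants span decades
    med = np.median(logk)
    mad = np.median(np.abs(logk - med)) + 1e-12
    keep = np.abs(logk - med) / (1.4826 * mad) < z_cut   # robust z-score cut
    w = np.ones(ks.size) if weights is None else np.asarray(weights, float)
    return 10 ** np.average(logk[keep], weights=w[keep])

# Five hypothetical reported values (cm^3/s) for one reaction; one outlier.
reported = [2.1e-10, 1.8e-10, 2.4e-10, 2.0e-10, 7.0e-8]
print(f"consensus k = {consensus_rate(reported):.2e} cm^3/s")
```

Working in log space matters because rate constants for the same reaction can differ by orders of magnitude; an arithmetic mean would be dominated by the largest reported value even after weighting.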
Next-generation semiconductor technologies like high-density memory storage require precise 2D and 3D nanopatterns. Plasma etching processes are essential to achieving the nanoscale precision required for these structures. Current plasma process development methods rely primarily on iterative trial and error or factorial design of experiments (DOE) to define the plasma process space. Here we evaluate the efficacy of the software tool Recipe Optimization for Deposition and Etching (RODEo) against standard industry methods at determining the process parameters of a high-density O2 plasma system with three case studies. In the first case study, we demonstrate that RODEo is able to predict etch rates more accurately than a regression model based on a full-factorial design while using 40% fewer experiments. In the second case study, we demonstrate that RODEo performs significantly better than a full-factorial DOE at identifying optimal process conditions to maximize anisotropy. In the third case study, we experimentally show how RODEo maximizes etch rates while using half the experiments of a full-factorial DOE method. With enhanced process predictions and more accurate maps of the process space, RODEo reduces the number of experiments required to develop and optimize plasma processes.
A fast and inexpensive scheme for etch rate prediction using flexible continuum models and Bayesian statistics is demonstrated. Bulk etch rates of MgO are predicted using a steady-state model with volume-averaged plasma parameters and classical Langmuir surface kinetics. Plasma particle and surface kinetics are modeled within a global plasma framework using single-component Metropolis-Hastings methods and limited data. The accuracy of these predictions is evaluated with synthetic and experimental etch rate data for magnesium oxide in an ICP-RIE system. This approach is compared with, and found to be superior to, factorial models generated from JMP, a software package frequently employed for recipe creation and optimization.
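A minimal single-component Metropolis-Hastings sketch in the spirit of the calibration above: infer a sticking-coefficient-like parameter s of a toy saturating etch-rate model, rate = A·s/(1+s), from noisy synthetic data. The model form, prior, and all numbers are illustrative, not the MgO surface kinetics.

```python
import numpy as np

rng = np.random.default_rng(5)
A, s_true, sigma = 100.0, 0.25, 1.0
data = A * s_true / (1 + s_true) + rng.normal(0, sigma, 20)  # synthetic rates

def log_post(s):
    """Log posterior: Gaussian likelihood, uniform prior on (0, 1)."""
    if not 0.0 < s < 1.0:
        return -np.inf
    resid = data - A * s / (1 + s)
    return -0.5 * np.sum(resid**2) / sigma**2

samples, s = [], 0.5
lp = log_post(s)
for _ in range(5000):
    prop = s + rng.normal(0, 0.02)             # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:    # Metropolis accept/reject
        s, lp = prop, lp_prop
    samples.append(s)

post = np.array(samples[1000:])                # discard burn-in
print(f"posterior s = {post.mean():.3f} ± {post.std():.3f}")
```

The posterior width, not just the point estimate, is the payoff of the Bayesian approach: it tells the engineer how much the limited data actually constrain the kinetic parameter.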
Polymer shrinkage from curing in nanoimprint lithography (NIL) strongly affects the ultimate shapes of two- and three-dimensional structures produced following etching. We computationally study the curing step in the NIL process and predict the shape changes caused by polymer shrinkage. The shape changes are predicted for crosses, diamonds with sharp and rounded tips, and multitiered structures that are applicable for multibit memory devices and dual damascene processes. The shape changes from curing are shown to be governed by the shrinkage coefficient of the polymer resist, its Poisson’s ratio, and the geometric aspect ratios of the shapes. Finite element simulations demonstrate that shape change due to polymer densification is equal to the average volumetric contraction of the resist material, but shrinkage is not isotropic and vertical displacement dominates. The thickness of the residual layer does not impact the final profile of the imprinted shapes considered. Further analysis shows that diamonds with sharp tips stay sharp while the tips of rounded diamonds get sharper. Additionally, shape changes for multitiered structures are not uniformly distributed among the tiers. Using etch simulations, we demonstrate the significant impact of polymer shrinkage on the final feature profile.
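The vertical-dominated, anisotropic shrinkage noted above can be illustrated with a standard linear-elasticity result: for a resist film bonded to a rigid substrate (no in-plane strain), a stress-free linear shrinkage eps0 produces an amplified out-of-plane strain eps_z = eps0·(1+ν)/(1−ν). The shrinkage and Poisson's ratio values below are illustrative, not the measured resist properties.

```python
def vertical_strain(eps0, nu):
    """Out-of-plane strain of a laterally constrained film (linear elasticity)."""
    return eps0 * (1 + nu) / (1 - nu)

eps_v = 0.09                      # assumed 9% volumetric shrinkage on curing
eps0 = eps_v / 3                  # equivalent isotropic linear shrinkage
for nu in (0.3, 0.45):
    print(f"nu = {nu}: eps_z = {vertical_strain(eps0, nu):.3f}")
```

The amplification factor (1+ν)/(1−ν) grows quickly as ν approaches 0.5, which is why nearly incompressible resists concentrate almost all of the volumetric contraction into vertical displacement.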
Nanosculpting, the fabrication of two- and three-dimensional shapes at the nanoscale, enables applications in photonics, metamaterials, multi-bit magnetic memory, and bio-nanoparticles. A promising high-resolution and high-throughput method for nanosculpting is nanoimprint lithography (NIL). A key requirement to achieving manufacturing viability of nanosculptures in NIL is maintaining image fidelity through each step of the imprinting process. In particular, polymer densification during UV curing can distort the imprinted image. Here we study the shape changes introduced by polymer densification and develop a forward method for predicting changes in nanoscale geometries from UV curing. We show that shape changes by polymer densification are governed by the Poisson’s ratio, the shrinkage coefficient of the polymer resist, and the geometric aspect ratios of the nanosculpted shape. We also show that the size of the residual layer does not impact the final profile of the imprinted shape.