An effective algorithm was developed to prepare a mask layout from several chip databases. The underlying idea is to solve the so-called “strip packing” problem, which is common to many industries; a major difference between applications is the cost function used in a particular packing problem. In the present case, the task is to determine whether all component chips can fit onto a mask with limited area and, if so, how the chips should be placed. Based upon this goal, the cost function is simply the area of the minimum rectangle that can accommodate all chips: if the minimum enclosing rectangle is smaller than the available mask area, the desired mask is feasible. The algorithm also reports the placement of all chips whenever a placement is possible. The computing time is very short, in the range of seconds, and is expected to remain short for most cases, since other cases differ only in the number and sizes of the rectangles. For an operation on GDSII files, the required process time is essentially the time spent reading the component databases and writing out the result database. The algorithm is generic and does not rely on any particular tool; any tool that can perform common arithmetic operations and layout generation could be used to implement it. This work eliminates the manual trial-and-error packing process during the data preparation stage of a shuttle mask; breaking this bottleneck could significantly reduce the cycle time of mask data preparation.
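To make the idea concrete, the following is a minimal sketch in C of one common strip packing heuristic (shelf-based, first-fit by decreasing height), not necessarily the exact algorithm of this work; the names Chip and pack_chips are illustrative. Chips are sorted and placed left to right on shelves of the mask width, and feasibility reduces to comparing the enclosing rectangle against the available mask area.

#include <stdlib.h>

typedef struct { double w, h; double x, y; } Chip;   /* size and placed origin */

static int by_height_desc(const void *a, const void *b) {
    double d = ((const Chip *)b)->h - ((const Chip *)a)->h;
    return (d > 0) - (d < 0);
}

/* Returns 1 if all chips fit inside mask_w x mask_h; fills in x, y. */
int pack_chips(Chip *c, int n, double mask_w, double mask_h) {
    qsort(c, n, sizeof *c, by_height_desc);
    double x = 0.0, y = 0.0, shelf_h = 0.0;
    for (int i = 0; i < n; i++) {
        if (c[i].w > mask_w) return 0;      /* chip wider than the mask */
        if (x + c[i].w > mask_w) {          /* open a new shelf */
            y += shelf_h;
            x = 0.0;
            shelf_h = 0.0;
        }
        c[i].x = x;
        c[i].y = y;
        x += c[i].w;
        if (c[i].h > shelf_h) shelf_h = c[i].h;
    }
    return y + shelf_h <= mask_h;           /* enclosing rectangle fits? */
}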
Requests for pattern recognition are frequently raised by both mask and wafer engineers. Despite differing intentions, pattern recognition is usually the first step of many applications and hence plays a major role in accomplishing them. For the purpose of this work, pattern recognition is defined as searching a chip layout for a specific polygon or a group of particular patterns. A manual operator scan is not an efficient approach to pattern recognition, particularly for the huge design databases of advanced semiconductor integrated circuits. An automated pattern recognition system is therefore necessary and benefits the data preparation process. Two categories of pattern recognition are discussed in the present study, 'fuzzy search' and 'exact match.' Each category has its own applications, but the search algorithms can differ considerably. Details of the search algorithms are given for both categories. Because GDSII is the industry standard, the scope of the present application is limited to databases in GDSII format; hence coordinate searching is used internally by the search engine.
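The sketch below illustrates the distinction between the two categories in C, assuming polygons are arrays of integer GDSII coordinates with identical vertex counts and ordering; the function names and the translation-only matching (no rotation or reflection) are simplifying assumptions, not the paper's search engine.

typedef struct { long x, y; } Pt;

/* 'Exact match': polygons coincide after aligning their first vertices. */
int exact_match(const Pt *poly, const Pt *target, int n) {
    long dx = poly[0].x - target[0].x;
    long dy = poly[0].y - target[0].y;
    for (int i = 1; i < n; i++) {
        if (poly[i].x - target[i].x != dx) return 0;
        if (poly[i].y - target[i].y != dy) return 0;
    }
    return 1;
}

/* 'Fuzzy search': vertices may deviate by up to tol database units
 * after alignment on the first vertex. */
int fuzzy_match(const Pt *poly, const Pt *target, int n, long tol) {
    long dx = poly[0].x - target[0].x;
    long dy = poly[0].y - target[0].y;
    for (int i = 1; i < n; i++) {
        long ex = poly[i].x - target[i].x - dx;
        long ey = poly[i].y - target[i].y - dy;
        if (ex < -tol || ex > tol || ey < -tol || ey > tol) return 0;
    }
    return 1;
}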
The benefit of assist features is greatly appreciated for bringing the process windows of isolated and semi-isolated patterns together with that of dense patterns, so that a common process window is attainable. From the lithographer's viewpoint, the width of an assist feature and the distance between the assist feature and the main pattern are the two fundamental specifications. In fact, other specifications are also essential to successful assist feature implementation: for instance, the distance between two adjacent assist features and the gap between assist feature ends and main patterns are both necessary in terms of lithographic performance. From the perspective of feasibility and ease of photomask fabrication, there are further specifications and constraints that should be implemented and enforced. One illustrative example is the extent of assist feature end pullback when two slightly off-axis assist features either join or are separated by a distance smaller than a given minimum space. Recently, demand for multiple assist features has increased enormously. Implementing multiple assist features is not trivial at all; it introduces many more specifications to be contended with. Under most circumstances, assist feature implementation involves lithography engineers, mask-making engineers, and CAD or script implementation engineers, which brings out the importance of a communication mechanism that can convey the true intention of each specification. The question is whether the existing mechanism is sufficient. The goal of the present work is to develop templates of specifications for assist feature implementation. Many conditions and constraints have already been identified and collected. One example is the central-edge template, which allows prioritizing the 'central assist feature' versus the 'edge assist feature.' It is believed that with such specification templates, both CAD and script implementation engineers will have a clear and consistent guideline for achieving the true intention of each specification.
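One hypothetical way such a specification template could be expressed to CAD and script engineers is as a plain C structure; the field names below are illustrative and not taken from the paper. The central-edge priority is carried as an explicit flag so that a rule engine can decide which assist feature survives when constraints conflict.

/* Hypothetical data layout for one assist feature (AF) spec template. */
typedef struct {
    double af_width;            /* assist feature width                 */
    double af_to_main_space;    /* distance to the main pattern         */
    double af_to_af_space;      /* distance between adjacent AFs        */
    double end_to_main_gap;     /* AF end to main-pattern gap           */
    double end_pullback;        /* pullback when two AFs nearly join    */
    int    prefer_central;      /* 1: keep central AF, 0: keep edge AF  */
} AssistSpecTemplate;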
Following mask inspection, mask-defect classification is the process of reviewing and classifying each captured defect according to predefined printability rules. With the current hardware configuration in manufacturing environments, this review and classification process is a mandatory manual task. For cases with a relatively small number of captured defects, classification does not place too much burden on operators or engineers. With a moderate increase in defects, however, it becomes a time-consuming process and prolongs the total mask-making cycle time. Should too many nuisance defects be caught under a given detection sensitivity, engineers would generally loosen the sensitivity to reduce the number of nuisance defects; doing so, however, carries the potential threat of missing real defects. The present study describes a 'progressive self-learning' (PSL) algorithm for defect classification that relieves the load on operators and engineers and further accelerates the defect review/classification process. Basically, the PSL algorithm involves image extraction, digitization, alignment, and matching. One key concept of the PSL algorithm is that no pre-stored defect library exists at the start of a particular run; instead, a defect library is 'progressively' built during the initial stage of defect review and classification in each run. The merit of this design is its flexibility. An additional benefit is that all defect images are stored in a form suitable for network transfer. The algorithm is implemented in the C language to avoid porting issues, so it is not bound to a particular machine. The PSL algorithm is assessed in terms of efficiency and accuracy rate.
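A minimal sketch of the progressive self-learning loop in C is given below, assuming the defect images are already extracted, digitized, and aligned, and assuming hypothetical helpers score() (image similarity) and ask_operator() (manual review); neither is from the paper. The library starts empty; an unmatched defect is reviewed manually once and then appended, so later look-alike defects classify automatically.

#include <string.h>

#define MAX_LIB   1024
#define IMG_BYTES 4096            /* digitized, aligned defect image */

typedef struct { unsigned char img[IMG_BYTES]; int class_id; } Entry;

static Entry library[MAX_LIB];
static int   lib_size = 0;

extern double score(const unsigned char *a, const unsigned char *b);
extern int    ask_operator(const unsigned char *img);  /* manual review */

int classify(const unsigned char *img, double threshold) {
    for (int i = 0; i < lib_size; i++)
        if (score(img, library[i].img) >= threshold)
            return library[i].class_id;    /* matched: auto-classify */
    /* no match: review manually once, then grow the library */
    int cls = ask_operator(img);
    if (lib_size < MAX_LIB) {
        memcpy(library[lib_size].img, img, IMG_BYTES);
        library[lib_size++].class_id = cls;
    }
    return cls;
}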
Layouts of semiconductor integrated circuits are composed of polygons. Ideally, all polygon edges are either orthogonal or at 45 degrees with respect to the layout coordinate axes. Yet there are cases where non-ideal edges, which are neither orthogonal nor 45-degree, exist in layouts. From the perspective of data preparation, excluding these edges is beneficial, since their presence has negative impacts on both mask fracturing and the optical proximity correction process; in addition, e-beam writing time can be significantly prolonged. Currently, most design rule check tools are able to locate non-ideal edges, but no generic solution is available for repairing them. In the present study, an algorithm was developed to renovate the non-ideal edges of chip layouts. A major success criterion is that no additional data processing may alter the original device behavior; therefore, the renovation process must introduce the smallest possible change. The algorithm is implemented in the C language, which makes it generic enough to be easily incorporated into most layout tools. Several illustrative cases were used to examine the algorithm. Finding the solution with the minimal edge movement among the non-unique solutions is also addressed with a theoretical discussion.
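The core snapping step can be sketched in C as below; this is an assumption-laden illustration, not the paper's algorithm. It moves one end vertex so the edge lands on the nearest allowed orientation (0, 45, 90, or 135 degrees). Deciding which vertex to move, and propagating the change to adjacent edges so that total displacement is minimal, is the harder, non-unique part the abstract refers to.

#include <math.h>

typedef struct { double x, y; } Vtx;

/* Moves vertex b so that edge a->b becomes orthogonal or 45-degree. */
void snap_edge(const Vtx *a, Vtx *b) {
    const double pi_4 = atan(1.0);              /* 45 degrees in radians */
    double dx = b->x - a->x, dy = b->y - a->y;
    double len = hypot(dx, dy);
    double ang = atan2(dy, dx);
    double snapped = round(ang / pi_4) * pi_4;  /* nearest 45-degree multiple */
    b->x = a->x + len * cos(snapped);
    b->y = a->y + len * sin(snapped);
}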
Fragmentation, cutting polygon edges into small segments that are later allowed to move individually, has been widely accepted as the working methodology in modern model-based optical proximity correction (OPC) tools. When tuning a model-based OPC recipe, most engineers spend much time on model fitting to make simulated curves better match empirical data (CD measurements). Most failure cases, however, do not result from a poorly fitted model; instead, undesired OPC outcomes are frequently traced back to the fragmentation process. Tuning fragmentation parameters may not be sufficient to resolve some failure cases, since the problems can be intrinsic to the current fragmentation mechanism. An illustrative example is the poor correction of a hammerhead line end, where current fragmentation mechanisms fail to identify it as a line end and improper compensation (correction) is subsequently installed. Asymmetric OPC results are another frequently observed example. In the present study, several examples were used to analyze current fragmentation mechanisms in terms of effectiveness and limitations. For the coming 0.1 micron or even more advanced technology generations, the fragmentation mechanism becomes still more important; a more powerful fragmentation mechanism will therefore be one of the major factors in the success of the OPC process. The main goal of this study is to propose a new fragmentation mechanism in which edges are tagged according to their environment prior to being cut into smaller segments. Pseudo code of the new fragmentation mechanism is given with detailed descriptions.
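The tag-then-fragment idea can be sketched in C as follows; the tag taxonomy (line end, corner, normal) and the per-tag segment lengths are illustrative assumptions, not the paper's recipe. Each edge is classified by its environment first, and the tag then selects the fragmentation length applied to it, so a line end can receive finer segments than a long straight run.

#include <math.h>

typedef enum { TAG_NORMAL, TAG_CORNER, TAG_LINE_END } EdgeTag;

typedef struct { double x0, y0, x1, y1; EdgeTag tag; } Edge;

/* Fragments one tagged edge into pieces no longer than the segment
 * length assigned to its tag; returns the number of fragments. */
int fragment_edge(const Edge *e, const double *seg_len_by_tag,
                  Edge *out, int max_out) {
    double dx = e->x1 - e->x0, dy = e->y1 - e->y0;
    double len = sqrt(dx * dx + dy * dy);
    double step = seg_len_by_tag[e->tag];
    int n = (int)(len / step) + 1;       /* fragment count for this tag */
    if (n > max_out) n = max_out;
    for (int i = 0; i < n; i++) {
        out[i].x0 = e->x0 + dx * i / n;
        out[i].y0 = e->y0 + dy * i / n;
        out[i].x1 = e->x0 + dx * (i + 1) / n;
        out[i].y1 = e->y0 + dy * (i + 1) / n;
        out[i].tag = e->tag;             /* fragments inherit the tag */
    }
    return n;
}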
Data preparation of photomask layouts has become a major issue in mask making. As model-based OPC becomes a compulsory technology for advanced manufacturing processes, photolithography engineers encounter a data preparation issue: the file size after OPC treatment is much larger than the original file size. Large file sizes make database manipulation difficult, leading to longer OPC run times and larger disk space requirements that challenge computer systems and software tools. Part of the file-size expansion arises from the nature of the current methodology, namely the fragmentation of polygon edges. Part of the expansion is unnecessary, however, because unintentional layout features are fed into the OPC engine. To prevent this, a systematic 'smoothing' algorithm is developed and applied to a real chip before OPC. Any algorithm that scans through the polygons separately for each type of defect would unavoidably scan the whole layout many times. The algorithm introduced here does not fix the different kinds of polygon 'defects' one by one; the key is that the different kinds of defects are reduced to a few categories, so good performance can be expected because the polygons are scanned fewer times. After the treatment, the number of polygon vertices is reduced, and the new database is also more OPC friendly.
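A minimal single-pass smoothing step is sketched in C below to show the effect on vertex count; it collapses two defect categories (vertices closer than a minimum segment length, and collinear vertices) in one scan, ignoring polygon closure for simplicity. The paper's category-based treatment is richer; this is only an illustration under those assumptions.

#include <stdlib.h>

typedef struct { long x, y; } Vt;

/* Rewrites the vertex array in place; returns the new vertex count. */
int smooth(Vt *p, int n, long min_seg) {
    int k = 1;                           /* index past the last kept vertex */
    for (int i = 1; i < n; i++) {
        long dx = p[i].x - p[k - 1].x, dy = p[i].y - p[k - 1].y;
        /* drop vertices too close to the last kept one */
        if (labs(dx) < min_seg && labs(dy) < min_seg) continue;
        /* merge the middle vertex of three collinear points */
        if (k >= 2) {
            long ax = p[k - 1].x - p[k - 2].x, ay = p[k - 1].y - p[k - 2].y;
            if (ax * dy - ay * dx == 0) { p[k - 1] = p[i]; continue; }
        }
        p[k++] = p[i];
    }
    return k;
}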