Pitch division lithography (PDL) with a photobase generator (PBG) allows printing of grating images at half the pitch of the mask. The proof of concept was published in a previous paper and has been demonstrated by others. Patterns with 45 nm half-pitch (HP) were produced using a 90 nm HP mask, but the images exhibited line edge roughness (LER) that did not meet requirements. Efforts have been made to understand and improve the LER of this process, and the challenges to achieving low-LER, well-performing pitch division are summarized here.
Simulations and analysis showed that an optical image uniform in the z direction is necessary for pitch division to succeed. Two-stage PBGs were designed to enhance the chemical contrast of the resist. New pitch division resists with polymer-bound PAGs and PBGs, as well as a variety of PBGs, were tested. This paper focuses on analysis of the LER problems and on efforts to improve patterning performance in pitch division lithography.
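The aerial-image requirements above are easier to picture with the basic pitch-division mechanism in mind: in the commonly described dual-tone picture, a resist with two exposure thresholds prints two lines per mask period. The short Python sketch below is only an illustration of that idea; the sinusoidal aerial image, the threshold values, and the 180 nm mask pitch are assumptions, not parameters from this work.

```python
import numpy as np

# Minimal sketch of pitch division by dual-tone thresholding (illustration only).
# A sinusoidal aerial image with mask pitch P "prints" wherever the normalized
# dose falls between a lower and an upper threshold, which yields two lines per
# mask period, i.e. a printed pitch of P/2. The pitch, thresholds, and image
# shape are assumed values, not parameters from this work.

P = 180.0                                        # mask pitch in nm (90 nm HP mask)
x = np.linspace(0.0, 2 * P, 2000)
aerial = 0.5 * (1 + np.cos(2 * np.pi * x / P))   # normalized aerial image

t_low, t_high = 0.3, 0.7                         # assumed dual-tone thresholds
printed = (aerial > t_low) & (aerial < t_high)   # resist switches state here

# Count printed lines per mask period by counting rising edges of 'printed'.
rising_edges = np.flatnonzero(np.diff(printed.astype(int)) == 1)
print(f"printed lines per mask period: {len(rising_edges) / (x[-1] / P):.1f}")
```

Running the sketch reports roughly two printed lines per mask period, i.e. a 90 nm HP mask image divided to 45 nm HP, consistent with the result quoted above.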
KEYWORDS: Video, Video compression, Video processing, Multimedia, Embedded systems, Data storage, Computer programming, Video coding, Multiplexing, Computing systems
Multimedia systems are required to provide proper synchronization of various components for intelligible presentation.
However, it is challenging to accommodate the heterogeneity of different media characteristics. Audio-video
synchronization is, for instance, required when presenting video chunks together with audio frames, where video chunks are generally large and variable in size while audio frames are small and fixed in size. This audio-video synchronization problem has been widely studied in the literature. It involves the proper definition and preservation of the temporal relationship
between audio and video. Moreover, it is also important to take into account the processing complexity, since the
computational resources and processing power on embedded platforms, such as cell phones and other handheld devices,
are very limited. In this paper, we present the implementation of three audio-video synchronization methods on an
embedded system. We discuss the performance as well as the advantages and disadvantages of each of these techniques.
Based on our evaluation, we reason why one of the presented techniques is superior to the other two.
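The abstract does not detail the three synchronization methods, but the general shape of timestamp-driven audio-video synchronization can be sketched briefly. The Python sketch below slaves video presentation to the audio clock and decides per frame whether to present, wait, or drop; the frame durations, the drop threshold, and the function name are assumptions for illustration, not the paper's algorithms.

```python
# Generic sketch of audio-master A/V synchronization (not the paper's three
# methods). Video frames are presented against the audio clock: a frame far
# behind the clock is dropped, a frame well ahead waits, anything within
# tolerance is presented. All timestamps are in milliseconds; the frame
# duration and threshold below are assumed values.

VIDEO_FRAME_MS = 40       # nominal 25 fps video
DROP_THRESHOLD_MS = 80    # assumed lateness tolerance before dropping a frame

def sync_video_to_audio(video_pts_ms: int, audio_clock_ms: int) -> str:
    """Decide what to do with one video frame given the current audio clock."""
    drift = video_pts_ms - audio_clock_ms
    if drift < -DROP_THRESHOLD_MS:
        return "drop"                  # too late to be useful
    if drift > VIDEO_FRAME_MS:
        return f"wait {drift} ms"      # early: delay presentation
    return "present"                   # within tolerance: show it now

# Example decisions with the audio clock at 1000 ms:
for pts in (880, 990, 1000, 1100):
    print(pts, "->", sync_video_to_audio(pts, 1000))
```

A design like this keeps the audio stream, which is cheap to decode and perceptually sensitive to gaps, playing continuously, while the larger, variable video chunks absorb the correction, which is one reason audio-master schemes are common on resource-constrained embedded platforms.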
TCP is one of the most widely used transport protocols for video streaming. However, the rate variability of TCP makes it difficult to provide good video quality. To accommodate this variability, video streaming applications require receiver-side buffering. In current practice, however, there are no systematic guidelines for provisioning the receiver buffer, and smooth playout is ensured through over-provisioning. In this work, we are interested in memory-constrained applications, where it is important to determine the right receiver buffer size in order to ensure a prescribed video quality. To that end, we characterize video streaming over TCP in a systematic and quantitative manner. We first model a video streaming system analytically and derive an expression for the receiver buffer requirement based on the model. Our analysis shows that the receiver buffer requirement is determined by the network characteristics and the desired video quality. Experimental results validate our model and demonstrate that the derived receiver buffer requirement achieves the desired video quality.
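The paper's analytical expression for the buffer requirement is not reproduced in the abstract, but the relationship it describes, a receiver buffer absorbing the mismatch between variable TCP throughput and the constant playout rate, can be illustrated with a toy simulation. In the Python sketch below, every rate, the throughput model, and the assumption that the buffer can hold any transient surplus are illustrative choices, not the paper's model.

```python
import random

# Toy simulation (not the paper's analytical model): a receiver buffer is fed by
# a variable TCP throughput and drained at the constant video playout rate. The
# largest cumulative shortfall of delivered data behind played data is the amount
# that must be buffered ahead of playout to avoid stalls. All rates and the
# throughput model below are assumed values; TCP flow control and finite buffer
# capacity are ignored.

random.seed(0)

PLAYOUT_KBPS = 1000      # constant video bit rate (kbit/s)
MEAN_TCP_KBPS = 1200     # average TCP throughput, above the playout rate
DURATION_S = 300         # length of the streaming session in seconds

def tcp_throughput_kbps(t):
    """Crude stand-in for TCP rate variability: Gaussian jitter plus a periodic
    deep dip that mimics a congestion episode (purely illustrative)."""
    if t % 30 < 3:
        return 100.0
    return max(0.0, random.gauss(MEAN_TCP_KBPS, 300))

shortfall = 0.0          # cumulative played kbit minus delivered kbit
required = 0.0           # worst-case shortfall = buffer needed ahead of playout

for t in range(DURATION_S):
    shortfall += PLAYOUT_KBPS - tcp_throughput_kbps(t)
    required = max(required, shortfall)

print(f"buffer needed to avoid stalls: ~{required / 8:.0f} kB "
      f"(~{required / PLAYOUT_KBPS:.1f} seconds of video)")
```

Even this crude sketch shows the qualitative point of the analysis: the required buffer grows with throughput variability (the depth and duration of the dips) and with the target playout rate, which is why network characteristics and desired video quality jointly determine the provisioning.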