KEYWORDS: Servomechanisms, Holography, Objectives, Detection and tracking algorithms, Design, Actuators, Logic, Control systems, Digital video discs, Spindles
In this paper, we first analyze the challenges a collinear holographic data storage system faces when adopting a rotating-disc mechanism to reduce system complexity and improve data throughput. We then propose a technical design to address them. The implementation details the major control procedures and the optimizations required by the control algorithm. Finally, the evaluation results show the effectiveness of the design.
In the digital age, the volume of data is expanding exponentially, making data a fundamental asset. Consequently, the need for storage systems with substantial capacity, affordability, high performance, and strong reliability has become increasingly pressing. In response to this demand, storage systems have widely adopted erasure codes, particularly wide-stripe erasure codes, owing to their remarkable storage efficiency and reliability. However, this paper finds that the encoding and decoding performance of wide-stripe erasure codes deteriorates significantly compared with narrow-stripe erasure codes when mainstream erasure-code acceleration libraries are used. To understand this phenomenon, we conducted extensive experiments measuring the encoding and decoding performance of erasure codes and analyzed the hardware events raised during erasure-code computation. The results show that the root cause of the performance degradation is the growth in L3 cache misses, which increase by 240% for a fixed amount of encoded data.
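As a rough illustration of the computation such acceleration libraries perform, the sketch below expresses a systematic Reed-Solomon-style encode as a GF(2^8) matrix multiply; the function names, coefficients, and block sizes are illustrative assumptions, not the paper's benchmark. It makes visible why widening the stripe (larger k) enlarges the working set each parity byte must touch, which is consistent with the L3-cache-miss explanation above.

```python
# Sketch only: a Reed-Solomon-style encode as a GF(2^8) matrix multiply.
import os

def gf_mul(a, b):
    """Multiply in GF(2^8) with the 0x11D primitive polynomial."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1D
        b >>= 1
    return p

def encode_stripe(data_blocks, coeffs):
    """Compute parity blocks from k data blocks.
    Every parity byte reads one byte from each of the k data blocks,
    so the per-call working set grows linearly with the stripe width k."""
    block_len = len(data_blocks[0])
    parities = [bytearray(block_len) for _ in coeffs]
    for row, parity in zip(coeffs, parities):
        for c, block in zip(row, data_blocks):
            for i in range(block_len):
                parity[i] ^= gf_mul(c, block[i])
    return parities

# Toy comparison over the same total amount of data: a narrow stripe (k=4)
# vs a wide stripe (k=32); the wide stripe touches 8x more blocks per parity.
narrow = [bytearray(os.urandom(4096)) for _ in range(4)]
wide = [bytearray(os.urandom(512)) for _ in range(32)]
encode_stripe(narrow, [[1] * 4, list(range(1, 5))])
encode_stripe(wide, [[1] * 32, list(range(1, 33))])
```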
KEYWORDS: Image processing, Data storage, Data centers, Holography, Modulation, Data conversion, Holographic data storage systems, Signal processing, Spatial light modulators, Holograms
Optical holographic data storage (HDS), with its high theoretical capacity, has been researched for more than two decades. Among HDS variants, coaxial HDS receives the most attention. Amplitude-based coding in coaxial HDS systems is fundamental and generally employs a 3:16 modulation scheme that sets merely 3 non-adjacent bright pixels within a 4×4 pixel block, called the Three-Level Format (TLF). Unfortunately, the TLF data frame recorded on the disc is difficult to retrieve accurately in practice. In our previous work, we experimentally analyzed inaccurate TLF data-frame recognition and its contributing factors. The core issue is how to locate the TLF data frame in an encoded image both accurately and quickly. We therefore have to use image-processing techniques such as Gaussian blur to assist in locating the TLF data frame, but these techniques inevitably introduce long processing times. To address this, we propose an efficient two-stage decoding scheme. In the first stage, we locate the TLF data frame in the encoded image and compute the scaling multiplier (about 1.9 s). In the second stage, we compute the coordinates of all data points and read them (about 0.08 s). For the first image of a batch we execute the complete two-stage process; subsequent images skip the first stage and only fine-tune the TLF data-frame location. The experimental results show that the average latency of reading an image is reduced by 1.9 s. The average data-point error rate is 3.1% and the average data-block error rate is 7.8%, consistent with the results of executing the complete process.
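The sketch below illustrates the two-stage idea under stated assumptions (a bright rectangular frame, a nominal 256-pixel frame size, and a uniform cell grid); the function names, thresholds, and spacing are ours, not the paper's implementation.

```python
# Sketch of the two-stage reading flow: locate the frame once per batch,
# then only re-sample cached coordinates for the remaining images.
import numpy as np
from scipy.ndimage import gaussian_filter

def locate_frame(image, sigma=3, thresh_ratio=0.5):
    """Stage 1 (slow, once per batch): blur, threshold, and find the
    bounding box of the bright TLF data frame plus its scaling factor."""
    blurred = gaussian_filter(image.astype(np.float32), sigma)
    mask = blurred > thresh_ratio * blurred.max()
    rows = np.where(mask.any(axis=1))[0]
    cols = np.where(mask.any(axis=0))[0]
    top, bottom, left = rows[0], rows[-1], cols[0]
    nominal_frame_px = 256            # assumed nominal frame size on the SLM
    scale = (bottom - top) / nominal_frame_px
    return (top, left), scale

def read_points(image, origin, scale, cells=64):
    """Stage 2 (fast, every image): sample the centre of each data cell
    at coordinates derived from the cached origin and scale."""
    top, left = origin
    step = scale * 256 / cells        # assumed uniform cell pitch
    ys = top + step * (np.arange(cells) + 0.5)
    xs = left + step * (np.arange(cells) + 0.5)
    return image[np.ix_(ys.astype(int), xs.astype(int))]

# Batch loop: run locate_frame on the first image only, then call read_points
# (optionally after a small fine-tuning shift) for every subsequent image.
```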
Optical holographic data storage (HDS), with a theoretical capacity density proportional to 1/λ³, has been researched for more than two decades. Among HDS variants, coaxial HDS is the current research focus. Amplitude-based coding in coaxial holographic storage systems is fundamental and generally employs a 3:16 encoding scheme that sets merely 3 non-adjacent bright pixels within a 4×4-pixel data block, giving a coding rate of 0.5. This data format is called the three-level format (TLF). Unfortunately, the data frame recorded on the disc is difficult to recover accurately in practice because of the system's inherent complexity, including light intensity and distribution, mechanical movement, dynamic focusing, material distribution, the servo subsystem, and component quality. This paper experimentally analyzes inaccurate TLF data-frame recognition and its contributing factors. We design an experimental approach to evaluate the quality of recorded data frames and draw several observations. First, the luminance of the bright pixels within the synchronization block, which consists of 4×4 bright pixels, is in practice uneven and distorted, making the synchronization block hard to locate accurately. Second, the distance between two adjacent bright pixels strongly affects the recorded image quality. Third, a high aggregate brightness of a data frame severely reduces the frame's signal-to-noise ratio. Fourth, the recorded image suffers from distortion, but the degree of distortion is slight and within an acceptable range. Last, the distribution of holographic material on the disc also affects brightness and pixel consistency. Based on these observations, we discuss several solutions to improve the reading accuracy of hologram data frames.
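As a sanity check on the 3:16 / rate-0.5 arithmetic, the short enumeration below counts the valid patterns under our assumption that "non-adjacent" means no two bright pixels are horizontal or vertical neighbours; the count exceeds 2^8, so each 16-pixel block can carry 8 user bits.

```python
# Enumerate 3-bright-pixel patterns in a 4x4 block with no 4-connected neighbours.
from itertools import combinations

cells = [(r, c) for r in range(4) for c in range(4)]

def adjacent(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1]) == 1

patterns = [trio for trio in combinations(cells, 3)
            if not any(adjacent(a, b) for a, b in combinations(trio, 2))]

print(len(patterns))            # 276 valid patterns under this adjacency assumption
print(len(patterns) >= 2 ** 8)  # True: enough codewords for 8 user bits,
                                # i.e. 8 bits per 16 pixels = coding rate 0.5
```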
Phase-modulated collinear holographic storage promises high storage density at the cost of a high raw bit error rate. We first performed a simulation to analyze the bit-error-rate characteristics of phase-modulated collinear holographic storage under different noise intensities. To ensure high storage capacity with an acceptable user bit error rate, an LDPC (Low-Density Parity-Check) code is introduced to guarantee data reliability. We further analyze the LDPC error-correction performance under different factors and determine the appropriate hardware parameters for the LDPC decoder. Finally, we use High-Level Synthesis (HLS) to rapidly implement and optimize an FPGA-based LDPC hardware decoder named HDecoder. HDecoder achieves 204 Mbps decoding throughput, 150x and 4850x higher than a CPU-based software decoder and the HLS-based vanilla hardware decoder, respectively. Compared with the HLS-based vanilla LDPC decoder, HDecoder consumes 55x less hardware resource per Mbps.
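For readers unfamiliar with LDPC decoding, the sketch below shows a hard-decision bit-flipping decoder on a tiny parity-check matrix; it only illustrates the iterative check-and-flip idea, not HDecoder's actual decoder, which would use a much larger matrix and typically min-sum message passing.

```python
# Toy hard-decision bit-flipping decoder; the (7,4) Hamming check matrix
# stands in for a real LDPC H purely for illustration.
import numpy as np

H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]], dtype=np.uint8)

def bit_flip_decode(received, H, max_iters=20):
    r = received.copy()
    for _ in range(max_iters):
        syndrome = H.dot(r) % 2
        if not syndrome.any():
            return r, True                 # all parity checks satisfied
        # flip the bit participating in the most unsatisfied checks
        votes = H[syndrome == 1].sum(axis=0)
        r[np.argmax(votes)] ^= 1
    return r, False

codeword = np.zeros(7, dtype=np.uint8)      # all-zero codeword
noisy = codeword.copy(); noisy[2] ^= 1      # inject one bit error
decoded, ok = bit_flip_decode(noisy, H)
print(ok, decoded)                          # True [0 0 0 0 0 0 0]
```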
KEYWORDS: Data storage, Data storage servers, Distributed computing, Data modeling, Computing systems, Control systems, Reliability, Computer architecture, Prototyping, Computer science
Reflecting the access patterns of current large-scale storage systems, in which reads far outnumber writes, this paper introduces a novel metadata service strategy that separates reads from writes. It centralizes control and reduces the access workload through a multi-metadata-server architecture in master/slave mode: the original metadata is shared among read-serving servers so that the large volume of read requests is spread across them, reducing the read load and forming a viable system structure. The strategy not only gives the system better scalability and usability but also keeps metadata consistency well under control. Finally, compared with an Active/Active structure, the Read/Write strategy shows relatively good results in terms of access efficiency and system cost. As a service-management strategy, it effectively reduces the data-access load on the metadata service.
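A minimal sketch of how such read/write separation could be wired up, with round-robin reads to slave metadata servers and writes funneled through the master; the class names and in-memory tables are illustrative assumptions, not the paper's implementation.

```python
# Writes go to the single master; reads are spread across read-only slaves.
import itertools

class MetadataServer:
    """Trivial in-memory stand-in for one metadata server."""
    def __init__(self):
        self.table = {}
    def lookup(self, path):
        return self.table.get(path)
    def update(self, path, attrs):
        self.table[path] = attrs

class MetadataRouter:
    """Route writes to the master and reads to the slaves (round-robin)."""
    def __init__(self, master, slaves):
        self.master, self.slaves = master, slaves
        self._rr = itertools.cycle(slaves)
    def lookup(self, path):
        return next(self._rr).lookup(path)    # reads never hit the master
    def update(self, path, attrs):
        self.master.update(path, attrs)       # single writer
        for s in self.slaves:                 # propagate to keep slaves consistent
            s.update(path, attrs)

router = MetadataRouter(MetadataServer(), [MetadataServer() for _ in range(3)])
router.update("/a/b", {"size": 42})
print(router.lookup("/a/b"))                  # {'size': 42}
```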
As a new generation of high-performance distributed storage system, object-based storage is being developed to support high-performance computing environments. In a petabyte-scale object-based storage system, reasonable data distribution and parameter configuration can improve system performance and availability. To make performance evaluation easier, we propose an approximate parameter-analysis method for building a performance model. We first model the whole storage system's architecture with a closed Fork-Join queueing model; using this architecture model, we then derive an approximate analytical expression, covering both erasure codes and replicas, that predicts the storage system's mean response time under various workloads simulating real-world conditions. Finally, a large number of comparison experiments validate our approximate analytical expression and show that the method is appropriate for building performance models of object-based storage systems.
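To make the fork-join behavior concrete, here is a small Monte-Carlo sketch (not the paper's analytical model): a request that fans out to k devices finishes only when the slowest sub-request returns, so the mean response time grows with the stripe width. The exponential service times and parameters below are illustrative assumptions.

```python
# Monte-Carlo estimate of the mean fork-join response time for k sub-requests.
import random

def fork_join_mean(k, mean_service=1.0, trials=100_000):
    """Mean response time when a request must wait for all k sub-requests."""
    total = 0.0
    for _ in range(trials):
        total += max(random.expovariate(1 / mean_service) for _ in range(k))
    return total / trials

for k in (1, 4, 8, 16):
    print(k, round(fork_join_mean(k), 3))
# For exponential service times the mean grows like the harmonic number H_k,
# so widening the stripe lengthens the tail-dominated response time.
```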
KEYWORDS: Data storage, Data backup, System integration, Reliability, Data storage servers, Distributed computing, Data processing, Lithium, Optoelectronics, Logic
Recent advances in large-capacity, low-cost storage devices have led to active research on the design of large-scale storage systems built from commodity devices. These storage systems are composed of thousands of storage devices and require an efficient file system to provide high system bandwidth and petabyte-scale data storage. An object-based file system integrates the advantages of both NAS and SAN and can be applied in this environment. Continuous data protection (CDP) is a methodology that continuously captures or tracks data modifications and stores the changes independently of the primary data, enabling recovery from any point in the past. All changes to files and file metadata are stored and managed. A CDP method for object-based file systems is presented in this paper to improve system reliability. First, because data protection operates at the file-system level, every write request is captured at byte-level detail, which consumes less storage space. Second, every object storage server can compute its recovery-stripe data object independently, decreasing recovery time. Third, a journal-like metadata-management scheme is introduced to optimize metadata handling for CDP.
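A minimal sketch of the byte-level capture and journal-like log that such a CDP scheme relies on; the class, method names, and in-memory structures are illustrative assumptions, not the implementation described above.

```python
# Every write is captured at byte granularity in an append-only log kept apart
# from the primary data, so any past recovery point can be rebuilt by replay.
class CdpObject:
    def __init__(self, size):
        self.data = bytearray(size)
        self.log = []                 # journal-like change log
        self.version = 0              # monotonically increasing recovery point id

    def write(self, offset, payload):
        self.version += 1
        self.log.append((self.version, offset, bytes(payload)))  # capture the change
        self.data[offset:offset + len(payload)] = payload        # apply to the primary

    def restore(self, version):
        """Rebuild the object as it looked at the given recovery point."""
        image = bytearray(len(self.data))
        for v, offset, payload in self.log:
            if v > version:
                break
            image[offset:offset + len(payload)] = payload
        return image

obj = CdpObject(16)
obj.write(0, b"hello")
obj.write(0, b"HELLO world")
print(obj.restore(1)[:5])     # b'hello' -> state after the first write only
```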
With the explosive growth of digital information, storage systems are becoming larger and more complicated. Large-scale storage systems in particular must deliver satisfactory I/O performance (availability, reliability, etc.) under both expected and unexpected workloads and handle data with real-time requirements. However, typical management methods for mass storage systems cannot meet these requirements. One solution is to apply Quality of Service (QoS) approaches in object-based storage systems, comprising a Quality of Storage Service (QoSS) framework and QoSS improvements. After studying the QoS requirements and the data-access characteristics of different applications, we designed a QoSS management framework. Then, following this framework, we made several improvements to realize QoSS efficiently in object-based storage systems. To demonstrate the effectiveness of the proposed solution, we developed a prototype Attribute-managed Storage System with QoSS (AM-QoSS) by extending both the OSD and iSCSI protocols. The measurement results show that the proposed QoS-management approach improves the storage system, yielding higher performance and better-guaranteed QoS.
In the past few years, considerable demand has developed for vast data storage systems able to store petabytes of data. Volume holographic recording has the potential to offer high density, fast data readout rates, and associative content-addressable storage compared with other conventional mass-storage technologies. However, its total cost and size impede its entry into the market. If the volume holographic device could be attached to a network and shared by many customers, the average cost per customer would be reduced. Meanwhile, emerging network storage technologies shake off the limits of the traditional direct-attached storage (DAS) architecture, including its physical topologies and access mode; they provide high scalability, availability, and flexibility for storage systems and are widely applied in many fields. It is therefore valuable to build a network storage device based on volume holography. In this paper, we design the architecture and related software of a volume holographic device so that it can join a common storage network, supporting SANs based on both Fibre Channel and iSCSI. Our experiments show that the HSD (Holographic Storage Device) delivers excellent performance for network storage.
Both application requirements and technology developments have promoted research on new network protocols for network storage, and IP-based SANs have become a new focus of study. The main network protocols used in IP-based SANs are iSCSI, FCIP, iFCP, and mFCP; they all implement the transmission of block-level storage data over TCP/IP. To understand the protocols in depth, this paper surveys their latest developments and analyzes and compares them intensively with regard to protocol stack, implementation model, naming, addressing, discovery, routing, and related aspects.
KEYWORDS: Multimedia, Network architectures, Prototyping, Internet, Data storage, Local area networks, Video, Switches, Chemical elements, Computing systems
Owing to the rapid expansion of the Internet and of high-bandwidth connectivity, more and more multimedia applications are entering the digital industry. However, the capacity and real-time behavior of conventional storage architectures cannot meet the requirements of continuous media. The most common storage architectures in the past were Direct Attached Storage (DAS) and RAID cabinets; more recently, Network Attached Storage (NAS) and Storage Area Networks (SAN) have become the alternative storage network topologies. But the characteristics of multimedia demand more storage capacity and more simultaneous streams. In this paper, we introduce a novel concept, the 'Unified Storage Network' (USN), to build an efficient SAN over IP, bridge the gap between NAS and SAN, and resolve the storage scalability problem for multimedia applications.
An optical fiber pH sensor with pH-sensitive indicator dye reagents immobilized on the fiber tip has been studied. The probe is made by covalently immobilizing phenol red, bromophenol blue, or bromothymol blue on polyacrylamide microspheres held in place by a polytetrafluoroethylene (PTFE) film. A gap between the dye and the optical fiber facilitates diffusion of hydrogen ions. The parameters of the optical fiber pH sensor are characterized completely. The measurement ranges are 3.0 - 5.0 pH, 7.0 - 8.5 pH, and 8.0 - 10.0 pH for bromophenol blue, phenol red, and bromothymol blue, respectively. The sensitivity is 66.6 mV/pH. The probe has a precision better than 0.55 pH. The linear correlation coefficient is 0.999. The response time is 1 - 2 min. The hysteresis is 0.52%. The repeatability is 0.013 mV, while the stability is 0.015 pH/h.
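Assuming a simple linear response, which the 0.999 linear correlation coefficient supports, the reported sensitivity can be used to translate output-voltage changes into pH changes; the helper name and example voltage below are ours, not from the paper.

```python
# Back-of-the-envelope conversion using the reported 66.6 mV/pH sensitivity.
SENSITIVITY_MV_PER_PH = 66.6

def delta_ph(delta_mv):
    """Convert a change in sensor output (mV) to a change in pH."""
    return delta_mv / SENSITIVITY_MV_PER_PH

print(delta_ph(33.3))    # 0.5 pH for a 33.3 mV shift
print(delta_ph(0.013))   # ~0.0002 pH, the reported repeatability expressed in pH
```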