The thiol-ene polymerization between a tetrafunctional thiol and a cyclic allylic sulfide (CAS) monomer is used to generate a crosslinked structure with a high concentration of sulfur atoms, which allows for the manufacture of volume phase holographic gratings with high refractive index modulation (Δn). The writing chemistry is dispersed in a cellulose acetate butyrate (CAB) matrix together with Irgacure® 784 as free-radical photoinitiator, making the system sensitive to blue light. After the writing step, the performance of the grating can be further improved by a thermal treatment, which induces a strong enhancement of the refractive index modulation. This effect can be explained as a reorganization of the polymer chains that leads to a marked separation between the writing chemistry and the polymeric binder and to local variations of the material density. This hypothesis is supported by several independent experimental observations. Values of Δn up to 0.0346 are reached, which are among the largest reported in the literature.
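For volume phase transmission gratings of this kind, Δn is commonly extracted from the measured diffraction efficiency using Kogelnik's coupled-wave theory; a minimal form for an unslanted grating probed at the Bragg angle (stated here as the standard relation, assumed rather than taken from this paper) is

\[
\eta = \sin^{2}\!\left(\frac{\pi\,\Delta n\,d}{\lambda \cos\theta_{B}}\right),
\]

where d is the grating thickness, λ the free-space replay wavelength and θ_B the Bragg angle inside the medium, so the post-writing thermal enhancement of Δn translates directly into a higher achievable efficiency for a given thickness.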
Bayfol® HX photopolymer films have proven themselves as easy-to-process recording materials for volume holographic optical elements (vHOEs) and are available in customized grades at industrial scale. Their full-color (RGB) recording and replay capabilities are among their major advantages. Moreover, the adjustable diffraction efficiency and tunable angular and spectral selectivity of vHOEs recorded into Bayfol® HX, as well as their unmatched optical clarity, enable superior invisible "off-Bragg" optical functionality. As a film product, the replication of vHOEs in Bayfol® HX can be carried out in a highly cost-efficient and purely photonic roll-to-roll (R2R) process. Utilizing thermoplastic substrates, Bayfol® HX was demonstrated to be compatible with state-of-the-art plastic processing techniques like thermoforming, film insert molding and casting, all enabled by a variety of industry-proven integration technologies for vHOEs. Therefore, Bayfol® HX has made its way into applications in the field of augmented reality such as head-up displays (HUD) and head-mounted displays (HMD), in free-space combiners, in plastic optical waveguides, and in transparent screens. Also, vHOEs made from Bayfol® HX are utilized in highly sophisticated spectrometers in astronomy as well as in narrow-band notch filters for eyeglasses protecting against laser strikes. Based on a well-established toolbox, Bayfol® HX can be adapted to a variety of applications. To offer access to further applications in sensing and to continuously improve performance in existing applications, we recently extended our chemical toolbox to address both sensitization beyond RGB into the near-infrared region (NIR) and an increase of the achievable index modulation Δn₁ beyond 0.06. In this paper, we report on our latest developments in these fields.
A holographic wavefront sensor based on a spatial light modulator (SLM) displaying computer-generated holograms (CGHs) is a flexible and simple method for analyzing a wavefront. This article discusses an algorithm for the synthesis of holographic structures based on a blazed diffraction grating (echelette grating). When such CGHs are reconstructed, the light diffracts predominantly into a single diffraction maximum. The experiments carried out confirm the effectiveness of the proposed algorithm when measuring a wavefront described by one or several Zernike polynomials simultaneously.
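As an illustration of the underlying principle (not the authors' exact algorithm), the sketch below synthesizes an SLM phase pattern that combines a blazed (sawtooth) carrier grating with a Zernike defocus term; the grating period and the Zernike coefficient are hypothetical parameters.

```python
import numpy as np

# Hypothetical SLM grid and parameters (illustration only).
N = 512                      # SLM resolution (pixels)
period_px = 16               # blazed-grating period in pixels
defocus_coeff = 2.0          # Zernike defocus coefficient (radians)

y, x = np.mgrid[-1:1:1j * N, -1:1:1j * N]
r2 = x**2 + y**2

# Zernike defocus term Z4 ~ 2*rho^2 - 1 on the unit disk, zero outside.
zernike_phase = defocus_coeff * (2 * r2 - 1) * (r2 <= 1)

# Blazed carrier: a linear phase ramp wrapped to [0, 2*pi) gives a sawtooth
# profile that sends most of the light into the +1 diffraction order.
carrier = 2 * np.pi * (np.arange(N) / period_px)
phase = np.mod(carrier[None, :] + zernike_phase, 2 * np.pi)

# 8-bit gray levels for a phase-only SLM (0..2*pi -> 0..255).
slm_pattern = np.uint8(phase / (2 * np.pi) * 255)
```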
With the development of augmented reality displays and integrated photonic devices, researchers face the task of developing new methods for recording waveguide holograms and creating diffractive waveguides. Recording such diffraction elements is complicated by the large convergence angles that must be provided between the recording beams, and the object wave must propagate inside the substrate during recording. This manuscript demonstrates a stable and simple method for multiplexed recording of Bragg diffraction gratings for AR displays using phase masks. The recording technique requires no strong vibration isolation because the optical scheme contains no separate interferometric branches. The presented research is distinguished by conical illumination of the phase mask with a single recording laser beam to manufacture slanted volume gratings for AR waveguide displays. An important result is the experimental confirmation that a non-selective surface phase mask is more beneficial than a volume-selective one. The diffractive waveguides in this experiment are made of photo-thermo-refractive (PTR) glass, a unique material for integrating phase diffraction elements into a waveguide platform. The creation of substrate-mode multiplexed Bragg gratings in a planar waveguide made from PTR glass is the central challenge of this research.
Collinear holographic data storage (CHDS) is a promising solution for "cold data" storage in the big-data age. Studies adopting "amplitude-type" and "phase-type" orthogonal references have been reported successively to improve the performance of CHDS. Data from different users can be stored and read out separately with different orthogonal references, which is meaningful for secure data storage applications. In this paper, a new "phase-type" orthogonal reference specified by a Hadamard orthogonal matrix is proposed for identity information storage. Each Hadamard vector represents one "phase-type" reference, with the symbols "1" and "-1" in the Hadamard matrix standing for reference-pixel phases of 0 and π, respectively. Several data pages are recorded in advance using different orthogonal references, and during reconstruction only the specific data page matched to the applied orthogonal reference can be reproduced. The action mechanism of the orthogonal reference is analyzed, and the feasibility of the system is verified by numerical simulations and preliminary experiments.
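A minimal sketch of how such Hadamard-coded phase references might be generated and checked for orthogonality (the matrix order and pixel mapping here are assumptions, not the authors' exact parameters):

```python
import numpy as np
from scipy.linalg import hadamard

order = 16                           # assumed Hadamard order (power of two)
H = hadamard(order)                  # rows are mutually orthogonal +/-1 vectors

# Map +1 -> phase 0 and -1 -> phase pi, then form complex reference fields.
phases = np.where(H == 1, 0.0, np.pi)
refs = np.exp(1j * phases)           # one reference field per row

# Orthogonality check: the inner product between two distinct references is
# zero, so only the matched reference reconstructs its associated data page.
gram = np.abs(refs @ refs.conj().T) / order
print(np.allclose(gram, np.eye(order)))   # True: references are orthogonal
```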
This paper introduces a compact dynamic polarization-modulation system using a liquid crystal on silicon spatial light modulator (LCOS SLM), which can be used for data storage in glass. The new design uses two cascade-connected holograms on a single LCOS SLM in conjunction with a half-wave plate to modulate multiple linear polarization states and simultaneously generate the desired holographic images. Furthermore, a zero-order suppression method is developed based on a computed holographic lens capable of on-axis and off-axis control, allowing the compact system to improve the quality of the data storage patterns generated by laser light in glass.
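For illustration only (not the authors' exact design), a computed holographic lens for zero-order suppression can be sketched as a quadratic Fresnel-lens phase combined with a linear tilt, so that the modulated light is focused and steered away from the unmodulated zero order, which can then be blocked; the focal length, wavelength and pixel pitch below are assumed values.

```python
import numpy as np

wavelength = 532e-9      # assumed laser wavelength (m)
pitch = 8e-6             # assumed SLM pixel pitch (m)
focal = 0.3              # assumed focal length of the holographic lens (m)
tilt_period = 12         # assumed tilt-grating period (pixels) for off-axis control
N = 1024

y, x = (np.mgrid[0:N, 0:N] - N / 2) * pitch

# The quadratic lens phase focuses the diffracted beam; the linear tilt shifts
# it off-axis so the undiffracted zero order can be blocked at the focal plane.
lens_phase = -np.pi * (x**2 + y**2) / (wavelength * focal)
tilt_phase = 2 * np.pi * x / (tilt_period * pitch)

hologram = np.mod(lens_phase + tilt_phase, 2 * np.pi)
```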
In this paper, a phase retrieval method based on deep learning is proposed and applied to a phase-modulated holographic storage system. Phase-modulated holographic storage has become a research hotspot because of its higher encoding rate and higher signal-to-noise ratio (SNR). Since phase data cannot be detected directly by a detector, an intensity image is used to retrieve the phase. The traditional interferometric phase retrieval method is not well suited to the storage system because its optical setup is complex and easily affected by environmental disturbances. Non-interferometric phase-modulated storage systems use iterative methods to solve for the phase data, and the number of iterations limits the data transfer rate of the holographic data system. In this paper, a simulated non-interferometric phase retrieval system based on deep learning is established, which uses a convolutional neural network to directly establish the relationship between the phase and the intensity images captured by a CCD. The neural network is trained on a dataset of intensity images and phase data images. After training, the phase can be obtained in a single forward pass, which greatly improves the data transfer speed. During training, we introduced embedded data to improve the precision of the phase reconstruction and reduce the bit error rate. To our knowledge, this is the first application of deep learning to phase retrieval in optical holographic storage.
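A minimal sketch of the kind of convolutional intensity-to-phase mapping described here, assuming a simple encoder trained with an MSE loss between predicted and ground-truth phase pages (the architecture, sizes and data are illustrative, not the authors' network or dataset):

```python
import torch
import torch.nn as nn

class IntensityToPhase(nn.Module):
    """Toy CNN mapping a captured intensity image to a phase page."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, intensity):
        return self.net(intensity)   # predicted phase, same spatial size

model = IntensityToPhase()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One illustrative training step on random stand-in data.
intensity = torch.rand(8, 1, 64, 64)             # simulated CCD intensity images
phase_gt = torch.rand(8, 1, 64, 64) * torch.pi   # ground-truth phase pages
loss = loss_fn(model(intensity), phase_gt)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Once trained on simulated intensity/phase pairs, a single forward pass replaces the many iterations of a conventional non-interferometric retrieval loop, which is the speed advantage the abstract refers to.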
We use a holographic method to generate structured illumination whose spatial frequency and phase can be easily modulated. First, a black-and-white stripe pattern with a certain spatial frequency is generated according to the period and phase of the desired structured illumination; the black areas are 0 and the white areas are 1. The spatial frequency of the black-and-white fringes is determined from the spatial frequency of the structured illumination to be generated and the magnification of the optical system. A prism phase that causes a lateral shift is applied to the black parts of the stripes, so that the light incident on the black areas is tilted away from the optical axis into the first diffraction order, while the light incident on the other parts is not modulated, does not shift laterally, and continues traveling along the optical axis. In this way, the tilted beam carries the bright-and-dark stripe modulation; a diaphragm is used to block the zero-order diffracted light and retain the first-order diffracted light, thereby obtaining cosine structured light. Theory and experiments verify the effectiveness of the method. This method of generating cosine structured illumination is simple, fast and accurate, and the spatial frequency and phase can be conveniently adjusted, which is very helpful for research on structured illumination microscopy.
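A minimal sketch of such an SLM hologram, assuming a phase-only modulator: a binary stripe mask selects where a linear (prism) phase ramp is applied, and the remaining pixels are left unmodulated (all periods and tilt values are hypothetical).

```python
import numpy as np

N = 1024
stripe_period = 64        # period of the structured-illumination stripes (pixels)
stripe_phase = 0.0        # phase offset of the stripes (radians)
prism_period = 8          # period of the prism (tilt) phase ramp (pixels)

x = np.arange(N)
# Binary stripe mask: 1 in the "white" areas, 0 in the "black" areas.
stripes = (np.cos(2 * np.pi * x / stripe_period + stripe_phase) > 0).astype(float)
mask = np.tile(stripes, (N, 1))

# The prism phase tilts the light that hits the black areas into the first
# order; the white areas receive zero phase and stay on the optical axis.
prism = 2 * np.pi * np.tile(x, (N, 1)) / prism_period
hologram = np.mod((1 - mask) * prism, 2 * np.pi)

# Changing stripe_phase shifts the resulting cosine pattern; changing
# stripe_period changes its spatial frequency.
```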
The recording of computer-generated holographic optical elements (HOEs) via the concept of holographic wave front printing has been a topic of rising interest in many research groups in recent years. Especially for applications in augmented reality (AR), holographic wave front printing has the potential to realize HOEs with complex optical transformations and high diffraction efficiencies while maintaining excellent transmittance. Here, we present a novel immersion-based holographic wave front printer setup, which allows the recording of reflection volume holographic optical elements (vHOEs) in both on-axis and off-axis configurations. HOEs fabricated via our wave front printing process are made up of individual sub-holograms, so-called Hogels. Each sub-hologram is recorded via two phase-only reflective spatial light modulators (SLMs). Large-area vHOEs are achieved by adjacent recording of multiple Hogels in a step-wise fashion. Our immersion-based holographic printer setup ensures a high numerical aperture for the recording configuration, which is directly linked to a wide angular range in which recorded wave fronts can be replayed in air. As a possible AR application, we demonstrate the recording of a holographic combiner for retinal projection. A single eye box is projected into the user's field of view (FOV) by means of a scanned laser projector source. Each Hogel of the holographic combiner performs an individual wave front transformation from large off-axis to on-axis angles, which contributes to the global holographic transfer function of the vHOE. Haze and clarity analyses of the recorded vHOE confirm high transmittance, which is crucial for AR applications.
We present a computational technique for automatic numerical refocusing in coherent lensless Gabor microscopy (digital in-line holographic microscopy). It is based on adaptive filtering of the recorded on-axis Gabor hologram to eliminate its incoherent background term and extract the interference fringes determined by the light scattered by the sample. Numerical propagation of the filtered hologram, based on the angular spectrum method, yields computationally generated dark-field imaging realized in the amplitude channel of the propagated complex field. As the focus metric we calculate the variance of the dark-field gradient; it attains its maximum value in the focal planes for all types of objects (phase, amplitude and mixed phase-amplitude). The demonstrated autofocusing technique is positively validated using experimental data exhibiting significant variation of confluence for double-focal-plane scenarios (two closely located sample planes filled with microbeads). The described technique compares favorably with other well-established automatic numerical refocusing methods (e.g., those based on the high-pass filtered complex amplitude and edge sparsity), mainly in terms of higher axial resolution and better robustness to low hologram signal-to-noise ratio and object non-uniformity.
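A compact sketch of the two numerical building blocks named above, angular spectrum propagation and a variance-of-gradient focus metric, under assumed sampling parameters (this is a generic implementation, not the authors' adaptive-filtering pipeline):

```python
import numpy as np

def angular_spectrum(field, z, wavelength, dx):
    """Propagate a complex field by distance z with the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0))
    transfer = np.exp(1j * kz * z) * (arg > 0)   # evanescent components dropped
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

def focus_metric(field):
    """Variance of the gradient of the (dark-field) amplitude image."""
    amp = np.abs(field)
    gy, gx = np.gradient(amp)
    return np.var(np.hypot(gx, gy))

# Usage sketch: scan candidate depths and pick the one maximizing the metric.
# hologram_filtered = ...  # background-suppressed hologram (assumed available)
# depths = np.linspace(1e-3, 10e-3, 200)
# metrics = [focus_metric(angular_spectrum(hologram_filtered, z, 532e-9, 3.45e-6))
#            for z in depths]
# z_focus = depths[int(np.argmax(metrics))]
```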
One of the main limitations of optical microscopy is that, as the resolution of a microscope increases, its field of view (FOV) decreases due to the mechanical construction of the objectives (large-numerical-aperture objectives also have larger magnifications); it is therefore not possible to observe a sample with both high resolution and a large FOV. One technique that can overcome this limitation is Fourier ptychographic microscopy (FPM), which combines information from many illumination angles to increase object image resolution. FPM hardware can be implemented straightforwardly (e.g., by replacing the illuminator of a classical brightfield microscope with an LED array), but it lacks proper open-source software, which is a barrier for non-expert users. To make FPM more accessible, we present our recently proposed, simple, universal, semi-automatic and highly intuitive open-source graphical user interface (GUI) FPM application, called the FPM app. Apart from implementing FPM in an approachable GUI app, we also made several modifications in the FPM image reconstruction process itself that make FPM more automatic, noise-robust and faster.
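The core idea FPM builds on, independent of any particular GUI, can be sketched as follows: each oblique LED illumination shifts a different region of the object spectrum into the objective pupil, and the reconstruction iteratively stitches those regions together by forcing the low-resolution estimates to match the measured amplitudes. The snippet below is a simplified, assumption-laden sketch of one such sub-iteration, not the FPM app's actual reconstruction code.

```python
import numpy as np

def fpm_update(spectrum_hr, measured_intensity, pupil, kx, ky):
    """One FPM sub-iteration: update the region of the (centered) high-resolution
    spectrum selected by an LED with spatial-frequency offset (kx, ky) in pixels."""
    n = measured_intensity.shape[0]          # low-resolution image size
    cy, cx = (s // 2 + k for s, k in zip(spectrum_hr.shape, (ky, kx)))
    sl = (slice(cy - n // 2, cy + n // 2), slice(cx - n // 2, cx + n // 2))

    # Low-resolution field predicted from the current spectrum estimate.
    sub = spectrum_hr[sl] * pupil
    low = np.fft.ifft2(np.fft.ifftshift(sub))

    # Replace its amplitude with the measured one, keep the estimated phase.
    low = np.sqrt(measured_intensity) * np.exp(1j * np.angle(low))

    # Write the corrected spectrum back into the selected region (pupil support only).
    corrected = np.fft.fftshift(np.fft.fft2(low))
    spectrum_hr[sl] = np.where(pupil > 0, corrected, spectrum_hr[sl])
    return spectrum_hr
```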
Light field displays depict the real world by emitting light rays corresponding to the 3D space of the scene or object that is to be represented. Therefore, such displays are suitable for a multitude of use cases, including the cinema. Cinematography is the art encompassing motion picture photography. It describes a great number of methods for working with physical cameras: for example, a camera can be mounted on a car in a typical chase scene, attached to a spring, suspended or mounted on a colliding object, and many more. Although these camera rigs may produce exceptional visuals, achieving similar results with light field cameras is extremely challenging. To capture light fields, two families of solutions exist: capturing light fields by means of an actual light field camera, or via an array of cameras, where the latter may be planar or curved. Either way is rather demanding, as each method has its own limitations and difficulties. When using a light field camera, the captured light field should map to that of the light field display on which it shall be visualized, whereas capturing a scene with a camera array can unfortunately lead to self-capture. Moreover, the portability of camera arrays is evidently problematic, due to their sheer size and weight. In addition to these challenges, light field rendering on its own is far from trivial. While rendering to conventional 2D displays from camera arrays may use image-based rendering, where many views of the scene can be set up from pre-captured images, light fields are represented by 5D plenoptic functions that are not easy to capture with conventional camera arrays. Moreover, image-based rendering techniques often fail to produce convincing results for light field displays. For some use cases, the representation can be reduced to 4D for horizontal-parallax-only light fields, since our eyes are horizontally separated and horizontal motion is more frequent than vertical. In practice, the creation of a light field scene from a set of images requires injecting each 2D image into a 4D light field representation. In this paper, we visualize different simulations of realistic physical camera motions on a real light field display. In order to overcome the aforementioned problems, virtual cameras were used to simulate a set of different physical camera motions used in cinematography for light field displays. Physics simulation libraries include algorithms for the dynamics of soft as well as rigid bodies, and collision detection is also accounted for. Many tools have been devised to simulate physics, among which is the Bullet Physics library. In our work, we used the Bullet Physics library to generate realistic physical camera motions as well as physical environment simulations for light field displays. The limitations and challenges imposed by light field displays when simulating physical camera motions are discussed, along with the results and the produced outputs.
Many image processing applications require a quantitative estimation of the underlying correspondence (relative position in space) between two or more images. Here, we propose a novel, less numerically intensive method, which we refer to as the Subtraction Method, to measure in-plane object motion with sub-pixel accuracy. This method can usefully be employed in conjunction with correlation to provide fine motion information very quickly (about 100 times faster). In this paper we demonstrate, explain, and examine the proposed method using simulated digital image data. The sub-pixel motion of a discrete Gaussian function, four different digital images and a speckle image is determined. Performance is compared to that of correlation.
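For context, the correlation baseline against which the method is compared can be sketched as FFT-based cross-correlation followed by parabolic interpolation of the correlation peak to reach sub-pixel precision; this illustrates only the baseline, not the Subtraction Method itself, which is not reproduced here.

```python
import numpy as np

def subpixel_shift_by_correlation(img_a, img_b):
    """Estimate the (dy, dx) shift between img_a and img_b with sub-pixel
    accuracy via cross-correlation and 1D parabolic peak interpolation."""
    corr = np.fft.ifft2(np.fft.fft2(img_a) * np.conj(np.fft.fft2(img_b))).real
    corr = np.fft.fftshift(corr)
    py, px = np.unravel_index(np.argmax(corr), corr.shape)

    def parabolic(cm, c0, cp):
        # Vertex offset of a parabola fitted through three neighbouring samples.
        denom = cm - 2 * c0 + cp
        return 0.0 if denom == 0 else 0.5 * (cm - cp) / denom

    dy = py + parabolic(corr[py - 1, px], corr[py, px], corr[py + 1, px])
    dx = px + parabolic(corr[py, px - 1], corr[py, px], corr[py, px + 1])
    center = np.array(corr.shape) // 2
    return dy - center[0], dx - center[1]
```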
Here we demonstrate the fabrication processes and working parameters of tunable phase retarders based on photoaligned liquid crystal (LC) cells, combining photo-patterning and self-assembly processes. The proposed LC devices were assembled from indium tin oxide (ITO) transparent conductive layers deposited on glass and quartz substrates and spin-coated with a thin polyimide (PI) layer as the photo-alignment material. We study the voltage-transmittance and phase retardation behavior of the assembled LC cells and demonstrate polarization-sensitive spatial patterns, which open promising features for next-generation optical elements such as waveplates, lenses, phase retarders, etc.
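For reference (the standard LC-cell relation, not specific to this work), the phase retardation of such a cell at wavelength λ is governed by the voltage-dependent effective birefringence Δn_eff(V) and the cell gap d,

\[
\delta(V) = \frac{2\pi\,\Delta n_{\mathrm{eff}}(V)\,d}{\lambda},
\]

so tuning the applied voltage continuously tunes the retardation between its field-off maximum and nearly zero once the LC director reorients along the field.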
We have developed a common-path configuration of digital holography offering several advantages over two-channel configurations. The system shows very high temporal phase stability, which makes it well suited to real-time and fast transient dynamic measurement applications; the configuration is very simple, the system is very compact, and it is less sensitive to vibration. Moreover, by making full use of the reference beam, the proposed system provides a field of view (FOV) as wide as that of a two-beam interferometer. The proposed system is used for the measurement of the refractive index and temperature inside a candle flame. A complete analysis of the temperature measurement from the digital hologram recorded in the presence of the candle flame is demonstrated. The advantages and limitations of the system are also discussed.
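As a reminder of the standard route from measured refractive index to flame temperature (a common assumption-based conversion, not necessarily the exact treatment of this paper), the Gladstone-Dale relation links the index to gas density, and the ideal-gas law converts density to temperature at constant ambient pressure:

\[
n - 1 = K\rho, \qquad \rho = \frac{PM}{RT} \;\;\Rightarrow\;\; T \approx T_{0}\,\frac{n_{0}-1}{n-1},
\]

where n_0 and T_0 are the refractive index and temperature of the surrounding air, and the local n is itself obtained from the measured phase after accounting for the line-of-sight integration (e.g., by Abel inversion for an axisymmetric flame).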
We have proposed a new configuration of a single-shot, dual-wavelength, common-path, off-axis digital holographic microscopy (DHM) system. The proposed system is used to decouple the refractive index and the thickness of a biological specimen.
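One common way such decoupling is formulated (stated here as a generic two-equation model, not necessarily the exact scheme of this work) writes the measured phase at each wavelength in terms of the unknown specimen index n_c and thickness t, with the surrounding-medium index n_{m,i} differing between the two wavelengths:

\[
\varphi_i = \frac{2\pi}{\lambda_i}\bigl(n_c - n_{m,i}\bigr)\,t, \qquad i = 1,2,
\]

which, assuming the specimen dispersion between λ_1 and λ_2 is negligible, gives two equations in the two unknowns n_c and t.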
At the time of this paper, the advances in light field technology already offer 3D displays that immerse the users without the need for additional viewing devices. Despite the numerous advantages and attractive capabilities of such glasses-free 3D displays, their user interface methods are quite complicated and currently underwhelming compared to conventional 2D displays, because visual feedback can only be rendered sharply on the emission surface of light field displays. Sharp rendering of user interfaces is a necessity, as blur may hinder their fundamental functions. When it comes to 2D displays, many user interaction techniques and interfaces have been devised. Rendering a user interface on a 2D display can be done in various ways, such as rendering overlays on top of the rendered scene or by using billboards; these are extensively used in modern video games. User interaction methods have proven their importance and added efficiency to virtual environments throughout the years. Due to their overall value and usefulness, interaction techniques develop immediately as new types of displays arise. With the recent advancements in visualization technologies, user interfaces have been redesigned for use in AR, VR and MR visualization. This includes on-screen augmentation, which enables interaction with visual content on the screen. Although light field displays hold immense potential, only basic user interfaces have been devised thus far, including FOX (Focus Sliding Surface), which grants users the option to scale and rotate 3D objects. In this paper, we visualize a theater model on real light field displays and test the different interactions by means of a monitor room. The theater model is analogous to real-life theaters, where viewers may observe the theatrical presentation on the stage from various angles. The motivation for choosing the theater model was that light field visualization similarly allows multiple simultaneous viewers within its field of view, in which the content can be observed in an angle-dependent manner. Moreover, from the users' perspective, the theater model is thus familiar and provides high-quality visual feedback. Furthermore, theater stages encompass many interactions, including rigging and flying systems, pulleys, rotating stages, lights, curtains, etc. In order to test the different interaction methods on light field displays, a theater model depicting the virtual environment was implemented. Methods for rendering the monitor room and the results of the interactions are discussed in the paper, illustrated by images of the actual visualization on light field displays. It is shown that producing plausible results with no noticeable visual artifacts is challenging, yet possible. The scientific contributions of the paper also highlight various novel user interfaces for future light field systems and services.
Camera motions are essential to numerous fields of art and science, such as cinematography, video games, 3D simulations and many more. Some of these applications rely on physical cameras, while others use virtual cameras. Despite the efficiency of virtual cameras, their motion misses some of the subtle details of real camera movements. Since hand-held camera motion accounts for the noise produced by hand tremors and muscle fatigue, the resulting output is generally more realistic than what virtual cameras produce. Much research has been done to produce simulations of realistic hand-held cameras; however, implementing these techniques for light field rendering has not been investigated yet. Among the different solutions for producing realistic hand-held camera motions, databases could be set up by collecting data from real hand-held cameras, although this method requires extensive data to be recorded from different hand-held cameras in order to produce reliable results. In addition to generating databases for hand-held camera simulation, jitter can be used. Since hand tremors and muscle fatigue add slight details to camera motion paths, jitter models can be used to simulate such deviations. A camera motion path is defined by a set of curve functions, which can be taken into account when adding noise models to produce the hand-held camera effect. In addition to the defined camera motion path, camera orientation, location, speed and acceleration can also be considered when adding jitter; for example, if the camera accelerates, the jitter shall increase accordingly. Even though work on hand-held camera simulation techniques is ongoing and valid solutions are already available, applying those techniques to light field visualization has not been attempted yet. Contrary to rendering light fields from physical camera arrays, virtual scenes can be rendered for each ray of the display's light field. Rendering can be performed with both ray tracing and rasterization techniques. Both of these techniques involve camera (region of interest) placement and, therefore, allow us to perform camera motion simulations. Rendering in such a manner also eliminates all sampling and conversion artifacts, thus making it more suitable for the evaluation of visual comfort. In this paper, we introduce the scientific considerations regarding our ongoing long-term work on the simulation of hand-held camera motions for light field displays by means of jitter. The technical discussion covers every aspect of the procedure, from research goals and measurement utilities to visualization and quality assessment.
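A minimal illustration of the jitter idea described above (not the paper's actual model): smooth random noise is added to a virtual camera path, with the jitter amplitude scaled by the instantaneous acceleration so that faster camera moves shake more. All path and gain values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames = 300
t = np.linspace(0.0, 10.0, n_frames)

# Smooth nominal camera path (hypothetical dolly move along x).
path = np.stack([2.0 * t, np.zeros_like(t), 0.2 * np.sin(0.5 * t)], axis=1)

# Acceleration magnitude along the path (finite differences).
dt = t[1] - t[0]
accel = np.linalg.norm(np.gradient(np.gradient(path, dt, axis=0), dt, axis=0), axis=1)

# Band-limited jitter: white noise smoothed by a moving average, one channel per axis.
kernel = np.ones(9) / 9.0
noise = np.stack([np.convolve(rng.standard_normal(n_frames), kernel, mode="same")
                  for _ in range(3)], axis=1)

# Scale the jitter with acceleration, plus a small baseline tremor.
jitter_gain = 0.002 + 0.01 * accel[:, None]
handheld_path = path + jitter_gain * noise
```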
The proposed convolutional-neural-network-based, fast and accurate local fringe density map estimation, DeepDensity, was developed to significantly enhance full-field optical measurement techniques, e.g., interferometry, holographic microscopy, fringe projection and moiré techniques. The use of neural networks to determine the final result of an optical measurement may raise legitimate metrological concerns; therefore, for the sake of versatility and independence from the measurement technique, we still recommend fully mathematically sound solutions for both fringe pattern prefiltration and phase retrieval. It is worth acknowledging that the proposed DeepDensity network does not supersede mathematically rigorous phase extraction algorithms; it only supports them. For that reason, during network training it was assumed that the data fed to the network would be prefiltered, so background and amplitude modulation should already be successfully minimized. Nevertheless, it is still interesting how sensitive the proposed DeepDensity is to the prefiltration accuracy. In this contribution we present a thorough analysis of DeepDensity's numerical capabilities in the case of insufficient background and/or amplitude filtration of the fringe pattern (interferogram, hologram, moirégram). The analysis was performed using simulated data and then verified on experimentally recorded fringe patterns.
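For reference, the standard fringe pattern model underlying the terms used above is

\[
I(x,y) = a(x,y) + b(x,y)\cos\varphi(x,y),
\]

where a is the background, b the amplitude modulation and φ the phase; prefiltration aims to suppress a and normalize b, and the sensitivity study asks how residual variations of a and b affect the estimated local fringe density, which is essentially governed by the local gradient of φ.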
In this paper, we present a space-division technique to multiplex communication channels in a regular step-index multimode fiber using a holographic correlator. We consider a multimode fiber with a large core diameter as a highly scattering medium. Focusing the laser spot at different positions on the input plane of the fiber excites different sets of modes, which gives a different speckle pattern at the output of the fiber. Hence, each focal spot can be considered a communication channel for data transmission. By combining volume holographic techniques to form a channel multiplexer/demultiplexer in a transmission system, we conceptually demonstrate the transmission of multichannel optical information through a regular step-index multimode fiber for data transmission applications.
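The demultiplexing principle, realized optically by the volume holographic correlator, can be illustrated numerically: each input spot's output speckle acts as a channel signature, and an unknown output is assigned to the channel whose stored signature correlates with it most strongly. The snippet below is a toy model with a random transmission matrix, not the experimental system.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_channels = 4096, 8

# Toy multimode-fiber model: a random complex transmission matrix maps each
# input focal spot (channel) to a distinct output speckle field.
T = (rng.standard_normal((n_pixels, n_channels))
     + 1j * rng.standard_normal((n_pixels, n_channels)))
signatures = np.abs(T) ** 2                     # stored intensity speckle signatures
signatures /= np.linalg.norm(signatures, axis=0)

# An unknown output speckle produced by channel 5 (plus detection noise).
unknown = np.abs(T[:, 5]) ** 2 + 0.1 * rng.standard_normal(n_pixels)
unknown /= np.linalg.norm(unknown)

# Correlator: pick the channel with the highest normalized correlation.
channel = int(np.argmax(signatures.T @ unknown))
print(channel)   # expected: 5
```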
In this work we show the first characterization of holographic solar concentrators recorded in Biophotopol, one of the greenest photopolymers. Biophotopol is an acrylate-based, water-soluble photopolymer with good recycling properties. The composition and thickness of this photopolymer are easily adjustable, which is an important advantage over other commercial photopolymers. Good diffraction efficiency and wide acceptance angles are achieved in volume phase transmission holograms by using an optimized composition and thin layers. A curing stage with white incoherent light was performed to obtain high temporal stability together with good diffraction efficiency. Finally, the performance of the holographic lenses as holographic solar concentrators was evaluated with an electronic setup connected to a polycrystalline silicon photovoltaic cell and a high-intensity solar simulator emitting a standard solar spectrum (AM1.5G).
We proposed a novel, fast, sub-pixel-accuracy digital image motion estimation method in Opt. Lett. 45(24), 6611-6614 (2021). In this paper, we experimentally examine the method's performance in more detail. Images are captured using both optical and THz imaging systems. A high-resolution megapixel camera is used in the optical system, and a low-resolution camera with 16×16 pixels is used to perform the THz measurements. Motion measurement data along both the 1D x and y directions are examined and analysed. In all cases sub-pixel resolution is achieved. The results demonstrate that the method can be flexibly used in different areas.
Polymer nanocomposites are designed and engineered on the nanometer scale, with versatile applications including optics and photonics. During the last two decades, different photopolymerizable nano-compounds were introduced and developed to modify polymer properties. In this sense, inorganic and organic nanoparticles have been introduced to increase the refractive index modulation and/or to reduce the shrinkage. Liquid crystal polymer composites have been added to the category of active photopolymer materials with an electrically switchable option. Nowadays, some problems remain on the table in the design of smart glasses, such as power consumption, limited resolution, the need for a wide field of view, etc. The inclusion of holographic optical elements has provided some possible solutions; in particular, photopolymers have been reported to be a good system for bringing the photons produced in the image creation to the eye. Our group proposed an alternative scheme using transmission holographic elements. The fabrication architecture was tested with different photopolymers in order to optimize their chemical composition, and we proposed three schemes adapted to the properties of each material. In this paper we study the influence of the initiator concentration, for a holographic polymer-dispersed liquid crystal (HPDLC) photopolymer, on the refractive index modulation and on the tunable properties of these holographic optical elements.
The development of new holographic recording materials and the optimization of existing ones is a fundamental field of study, since many applications of holography depend directly on the properties of the photomaterials, as is the case for holographic mirrors, holographic lenses, HUDs and augmented reality systems. In this poster we present the results obtained in the optimization of the photochemical process used in the processing of silver halide sensitized gelatin (SHSG). Previous work has shown that excellent results can be obtained for transmission diffraction gratings in BB plates (Colour Holographic Co.); however, when the spatial frequency exceeds 3000 l/mm the process does not perform as well. Earlier work has also shown that this photochemical process yields excellent diffraction efficiency and low noise, with excellent spectral and energetic sensitivity, but the results obtained for reflection holographic gratings have not been as good, depending on the type of photographic emulsion used as a support. In this work we have made reflection gratings (Denisyuk geometry) using the Colour Holographic BB640 plate. Given that the gelatin in this plate has a high degree of hardening, we have modified this situation by means of suitable presensitization processes and by adjusting the pH of the developer. In this way we have achieved good diffraction efficiency, higher than 80%, and excellent optical quality. Holographic gratings of 5000 l/mm were made with a He-Ne laser at 633 nm, obtaining a diffraction efficiency of 80% and a bandwidth of 25 nm, and generating a spectral shift of 80 nm due to the thickness variation during processing; it has thus been possible to obtain excellent reflection filters in the desired region of the visible spectrum. The developer used was AAC, although its composition and pH were adjusted to modify the level of differential hardening of the gelatin supporting the BB640 plate. These results allow us to make excellent holographic reflection systems in different areas of the visible spectrum, with good optical quality and sizes that can exceed 10×10 cm².
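The reported spectral shift can be read through the Bragg condition for a reflection grating, λ = 2nΛcosθ: a change of emulsion thickness during processing rescales the fringe spacing Λ and hence the replay wavelength approximately in proportion (stated here as the standard relation, with the exact geometry of this work assumed),

\[
\lambda_{\text{replay}} \approx \lambda_{\text{record}}\,\frac{d_{\text{final}}}{d_{\text{record}}},
\]

where d denotes the emulsion thickness before and after processing.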