The recent LightSpace invention of multi-layer prismatic optical light-shaping devices enables the development of miniature laser-based image engines. The new optical architecture can be used to create highly efficient DLP- or LCOS-based image modulators usable with multi-focal and waveguide near-eye AR displays. The novel architecture substantially increases image-engine light efficiency and reduces dimensions.
Since 2019, LightSpace has been working on the development of multi-focal near-eye displays. A multi-layer liquid crystal optical diffusing switch was invented by the company a few years ago. This talk will present the newest design of a multilayer near-eye flat display module, which enables the development of more miniature AR glasses.
For decades it has been expected that near-eye displays such as augmented reality (AR) and virtual reality (VR) glasses and headsets will eventually take over conventional displays. Nevertheless, these technologies have barely penetrated everyday life. This hindrance can be explained by a lack of true next-generation near-eye display architectures that overcome the critical issues of stereoscopic wearables – notably the vergence-accommodation conflict (VAC). The lack of such display architectures is directly related to the slow evolution and reorientation of the image-source industry. A major issue is the light transmission efficiency from an image source towards the eyes of a viewer, directly impacted by the emission angle of light sources versus the need for collimated light. Collimation is a wasteful process; therefore, there is a limit to the image brightness achievable with currently available solid-state light sources. Inevitably, designers turn to more collimated light sources – lasers. While this approach yields improvements in size, it comes at the cost of image fidelity by introducing speckle patterns. Other alternatives (such as OLED microdisplays) are possible but are also not without issues. Thus, there needs to be a breakthrough in available image sources for AR displays to offer an image at least comparable to what their 2D display counterparts currently provide. Be it a full-color solid-state uLED microdisplay, superluminescent LEDs, or developments in photonics through the integration of RGB light sources into compact packages, the key challenge is to leverage these advancements to enable a next-generation near-eye display architecture.
In this work we present reactively sputtered SiOxNy films with a variable refractive index as a convenient solution for contrast improvement of liquid crystal diffuser multi-stacks in near-to-eye AR/VR displays. The focus is on minimizing light reflections between internal structures, in particular ITO, by optimizing internal layers through tailored properties of thin-film coatings, as well as subsequent laser patterning of the thin-film stack. Inorganic thin films have been deposited on glass by physical vapor deposition. The corresponding refractive index, thickness, uniformity, dielectric characteristics and other electro-optical properties have been measured, and their impact on the optical performance of the final integrated element stack has been compared against counterparts utilizing traditional polyimide and SiOx films.
Volumetric display implementations come in different forms. One of the most robust is an entirely solid-state volumetric architecture based on cholesteric liquid crystal optical diffuser elements. Stacking these elements enables the formation of a volumetric screen – a projection volume which is scanned time-sequentially. In this way two crucial components work in conjunction – an image projector and an electronically switchable diffuser-element stack. LightSpace Technologies has been researching this concept of solid-state volumetric display technology since 2014. Improvements to the key enabling component – the optical diffuser element – have been achieved over this period and include improved responsiveness as well as enhanced optical characteristics – haze and transparency over the visible spectrum. This work overviews and discusses key aspects of diffuser elements as well as a large-scale volumetric screen as a whole. Key characteristics of diffuser elements have been discussed and studied with regard to application in high-refresh-rate 3D display systems. Methods of improving the optical performance of diffuser stacks have been analyzed and supported by experimental results. The base of a volumetric screen element within this work was a polymer-free chiral nematic liquid crystal filled in a homeotropic cell. A typical switching time of a full image cycle was around 1.5 ms. The influence of cell gap and driving voltage on the switching characteristics has been analyzed. In terms of volumetric screen improvements, the viability and gains of a lamination approach have been investigated. It has been found that even non-ideal refractive index matching significantly improves the overall light transmittance through a stack of diffuser elements.
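The benefit of lamination reported above can be estimated from normal-incidence Fresnel reflectance. The sketch below is illustrative only (the indices, element count, and single-pass model are assumptions, not the authors' measurements): it compares an air-gapped stack against one laminated with an adhesive whose index only roughly matches the glass.

```python
# Illustrative estimate (not the authors' measurement): normal-incidence
# Fresnel reflectance per interface, and single-pass transmittance through
# a stack of interfaces, ignoring multiple reflections and absorption.
def fresnel_reflectance(n1: float, n2: float) -> float:
    """Normal-incidence Fresnel reflectance between media of index n1 and n2."""
    return ((n1 - n2) / (n1 + n2)) ** 2

def stack_transmittance(n_outer: float, n_inner: float, num_interfaces: int) -> float:
    """Single-pass transmittance through num_interfaces identical index steps."""
    r = fresnel_reflectance(n_outer, n_inner)
    return (1.0 - r) ** num_interfaces

# Assumed example: 6 glass elements (12 glass-medium interfaces), glass n = 1.52.
air = stack_transmittance(1.00, 1.52, 12)        # air gaps between elements
laminated = stack_transmittance(1.47, 1.52, 12)  # imperfectly matched adhesive
```

Even with this deliberately non-ideal adhesive index, the laminated stack transmits far more light than the air-gapped one, consistent with the abstract's conclusion that imperfect index matching still yields a significant gain.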
AR headset developers have failed to achieve the promised results. But why? Lack of accommodation is the fundamental flaw of current AR headset technology. In this presentation you will hear how the design of AR optics works together with the tools you already have – your eyes.
It is foreseen that the most convenient hardware for the depiction of augmented reality (AR) will be optical see-through head-mounted displays. Currently such systems utilize a single focal plane and inflict vergence-accommodation conflict on the human visual system – limiting wide acceptance. In this work, we analyze an optical see-through AR head-mounted display prototype which has four focal planes operating in time-sequential mode, thus mitigating the limitations of single-focal-plane devices. Nevertheless, the optical see-through nature implies a requirement of very short motion-to-photon latency so as not to cause noticeable misalignment between the digital content and the real-world scene. The utilized prototype display relies on a commercial visual-SLAM spatial tracking module (Intel RealSense T265), and within this work we analyzed factors improving motion-to-photon latency with the provided hardware setup. The performance analysis of the T265 module revealed slight translational and angular jitter – on the order of <1 mm and <15 arcseconds – and a velocity readout of a few cm/s from a completely still IMU. The experimentally determined motion-to-photon latency and render-to-photon latency were 46±6 ms and 38 ms respectively. To overcome IMU positional jitter, pose averaging with a variable width of the averaging window was implemented. The size of the averaging window was adjusted based on immediate acceleration and velocity data. To perform pose prediction, a basic rotational-axis offset model was verified. Based on prerecorded head movements, a training model reduced the error between the predicted and actual recorded pose. The optimization parameters were the corresponding offset values of the IMU's rotational axis, translational and angular velocity, as well as angular acceleration. As expected, the highest weight for the most accurate predictions was observed for velocities, followed by angular acceleration. The role of the offset values was not significant.
For an improved perceived experience and further motion-to-photon latency reduction, we consider further investigation of simple trained neural networks for more accurate real-time pose prediction, as well as investigation of content-driven adaptive image output overriding the default order of image-plane output in a time-sequential sequence.
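The adaptive pose averaging described above can be sketched as a sliding window whose width shrinks during fast motion (to keep latency low) and grows when the headset is nearly still (to suppress jitter). This is a hypothetical minimal sketch: the class name, thresholds, and window sizes are illustrative assumptions, not the prototype's actual parameters.

```python
from collections import deque

class AdaptivePoseFilter:
    """Sliding-window pose average with motion-dependent window width.
    All thresholds and window sizes are illustrative, not the prototype's."""

    def __init__(self, min_window=2, max_window=12, speed_threshold=0.05):
        self.min_window = min_window
        self.max_window = max_window
        self.speed_threshold = speed_threshold  # m/s, illustrative cutoff
        self.history = deque(maxlen=max_window)

    def update(self, pose, speed):
        """pose: tuple of floats (e.g. x, y, z); speed: velocity magnitude.
        Returns the averaged pose over the current window."""
        self.history.append(pose)
        # Fast motion -> short window (low latency);
        # near-still -> full window (jitter suppression).
        window = self.min_window if speed > self.speed_threshold else self.max_window
        recent = list(self.history)[-window:]
        n = len(recent)
        return tuple(sum(axis) / n for axis in zip(*recent))
```

Feeding the filter alternating jittery positions from a still IMU averages the noise away, while a high-speed sample immediately collapses the window so the output tracks the new pose.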
In this work we investigate design parameters of a stereoscopic head-worn augmented reality display that would facilitate a wider uptake of the technology by enterprise and professional users. The emphasis is put on mimicking the way the ambient world is naturally perceived by the human visual system. To this end, we propose a solid-state multi-focal display architecture tailored for near-work-oriented tasks. The core of the proposed technology is a solid-state multi-plane volumetric screen with four physical image depth planes, which forms the secondary image source. The volumetric screen utilizes electrically controllable liquid-crystal-based diffuser elements, which receive the image information from the primary source – a pico projection unit. The volumetric screen is coupled with a bird-bath type optical image combiner/eyepiece to yield a 40-degree horizontal field of view covering a representable depth space of 0.35 m to infinity, where no effects of vergence-accommodation conflict are experienced.
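One common way to cover a 0.35 m-to-infinity depth space with four planes is to space them uniformly in dioptric (1/distance) units, since accommodation error is roughly constant in diopters. The sketch below assumes such uniform dioptric spacing for illustration; the prototype's actual plane positions are not stated in the abstract.

```python
# Illustrative only: four focal planes spaced uniformly in diopters
# between a 0.35 m near plane (~2.86 D) and optical infinity (0 D).
def focal_plane_distances(near_m: float, num_planes: int):
    """Metric distances of planes equally spaced in diopters,
    from the near plane out to infinity (0 D)."""
    near_diopters = 1.0 / near_m
    step = near_diopters / (num_planes - 1)
    distances = []
    for i in range(num_planes):
        d = near_diopters - i * step  # plane position in diopters
        distances.append(1.0 / d if d > 0 else float("inf"))
    return distances

planes = focal_plane_distances(0.35, 4)  # ~[0.35 m, 0.525 m, 1.05 m, inf]
```

Note how the metric gaps grow toward infinity even though the dioptric gaps are equal, which is why a handful of planes can plausibly cover the whole far field.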
In the field of 3D display technologies, accommodation-based depth cues have long been dismissed. On the one hand they are treated as weak depth cues; on the other, their inclusion has been technologically challenging. Either way, accommodation depth cues are essential in ensuring natural image perception; they add realism to the 3D scene and help overcome the technologically inhibiting effects of vergence-accommodation conflict. In this work we examine the implementation and associated considerations of optical diffuser technology via a spatial volume demultiplexer chip (SVDC) within a stereoscopic augmented reality (AR) wearable display. The role of the SVDC is to demultiplex a series of two-dimensional image depth planes into a perceivably three-dimensional scene with said focus depth cues. The SVDC chip is designed to be an entirely solid-state solution, requiring only a voltage driving signal for the image demultiplexing action. When using an SVDC in a multi-plane display architecture, the image source is a rear image projection unit ensuring a high-refresh-rate stream of the required 2D image depth planes. The SVDC technology is scalable and facilitates improved light efficiency due to controlled internal reflections, which allows for diverse optical design in AR as well as VR settings. An indicative evaluation and comparison of different optical image combiner solutions with respect to the SVDC display architecture for near-eye stereoscopic AR display systems is provided. Considered designs of optical image combiners include a flat beam splitter with a refractive eyepiece, “bird-bath” optics, and a single curved (free-form) reflective image combiner.
LightSpace Technologies have developed a prototype of an integrated head-mounted stereoscopic display system based on a proprietary multi-plane optical diffuser technology. The system is entirely solid-state and has six focal planes covering ~3 diopters (from 32 cm to 8 m). No eye-tracking is utilized for operation. The new display system virtually eliminates vergence-accommodation conflict and adds monocular accommodation as an important depth cue for improved 3D realism. With regard to content rendering, the processing load is only slightly increased in contrast to conventional single-focal-plane stereoscopic displays with similar image resolution. The difference in comparative performance is largest for simple 3D scenes, while for high-complexity scenes it tends to decrease slightly. On average the processing burden for multi-plane stereoscopic displays is no more than 1.5% higher than for conventional stereoscopic displays. Furthermore, increasing the number of physical focal planes does not notably worsen the image rendering performance, allowing the display device to be efficiently driven by readily available hardware – including high-performance mobile platforms. Overall, user feedback about the developed multi-plane stereoscopic 3D display prototype confirms the previously proposed assumption that the multi-plane architecture yields a higher acceptance rate due to improved 3D realism and eradicated vergence-accommodation conflict, currently making it one of the most noteworthy advancements in the field of 3D stereoscopic displays.
KEYWORDS: Diffusers, Optical components, Image processing, 3D displays, Visualization, Data processing, 3D image processing, 3D volumetric display, Chemical elements, 3D volumetric displays
For the visualization of naturally observable 3D scenes with a continuous range of observation angles on a multi-plane volumetric 3D display, specific data processing and rendering methods have to be developed and tailored to the architecture of the display device. As one of the most important requirements is the capability of providing real-time visual feedback, the data processing pipeline has to be optimized for effective execution on general consumer-grade hardware. In this work, technological aspects and limitations of a volumetric 3D display based on a static multi-planar projection volume have been analyzed in the context of developing an effective real-time-capable volumetric data processing pipeline. A basic architecture of the data processing pipeline has been developed and tested. Initial results showed very slow performance when executed on a central processing unit. Based on these results, the data processing pipeline was optimized to utilize graphics processing unit (GPU) acceleration, which resulted in a substantial decrease of execution times, reaching the goal of real-time-capable volumetric refresh rates.
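The core data-parallel step such a pipeline must perform – assigning each pixel of an RGB-D frame to its nearest image depth plane – can be sketched in vectorized form. The snippet below is a minimal illustration, not the authors' pipeline: NumPy's array operations stand in for a GPU kernel, and the plane depths, shapes, and nearest-plane assignment rule are assumptions.

```python
import numpy as np

def slice_into_planes(rgb, depth, plane_depths):
    """rgb: (H, W, 3) image; depth: (H, W) per-pixel depth in metres;
    plane_depths: 1-D sequence of N plane distances.
    Returns an (N, H, W, 3) stack with each pixel placed on its nearest plane."""
    plane_depths = np.asarray(plane_depths, dtype=float)
    # Nearest-plane index per pixel, vectorized over the whole frame
    # (the kind of per-pixel work a GPU kernel would parallelize).
    idx = np.abs(depth[..., None] - plane_depths[None, None, :]).argmin(axis=-1)
    planes = np.zeros((len(plane_depths),) + rgb.shape, dtype=rgb.dtype)
    for i in range(len(plane_depths)):
        mask = idx == i
        planes[i][mask] = rgb[mask]
    return planes
```

Because the distance computation and arg-min run over the entire frame at once rather than per pixel in a Python loop, the same structure maps directly onto GPU execution, which is where the abstract reports the decisive speedup.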
KEYWORDS: Diffusers, Optical components, 3D volumetric display, 3D displays, 3D image processing, Projection systems, 3D volumetric displays, Chemical elements, Liquid crystals, Transparency
In this work, a detailed analysis of the technologies and methods required for the construction and operation of a passive multi-plane volumetric 3D display based on an arrangement of electrically controllable optical diffuser elements is provided. Current methods of displaying 3D images have been compared. Challenges and solutions for representing realistic-looking 3D content with the associated physical depth cues in the multi-plane approach have been highlighted. The main focus has been devoted to improving the user experience when viewing and interacting with 3D content on a multi-plane volumetric display by utilizing various task-specific computational methods in the data processing pipeline.