This paper proposes a novel display technique that reconstructs an aerial image just around the viewing eyes to realize an ultra-wide field of view, covering the entire visual field with images. Moving closer to an information screen usually makes images appear larger and fill more of the field of view, but the closest viewing distance is limited by the observer's nose. An aerial display overcomes this hardware limitation because no hardware surrounds the aerial image; it can therefore present information just around the viewing eyes, or even behind them. When an observer stands between the display hardware and the aerial image, the reconstructed aerial image is behind the observer. To clarify what can be seen in this situation, we have developed a prototype aerial display that forms an image just around the viewing eyes by using aerial imaging by retro-reflection (AIRR). Using this prototype, we estimated the image size perceived by the observer and the binocular disparity. Even when the aerial image is reconstructed behind the viewing eyes, an image inverted with respect to the aerial image can be observed, whose size and negative disparity become very large, approaching infinity. Furthermore, the proposed method can cover the entire field of view with images. This method opens new possibilities for aerial displays, such as providing an immersive sensation with an ultra-wide field of view.
Aerial display, which forms a real image in mid-air by converging light from a wide aperture, enables non-contact touch interfaces and glasses-free augmented reality. The optical performance of an aerial display depends on its real-image-forming optics. Typical optical systems for aerial display use a dihedral corner reflector array, crossed slit mirror arrays, layered micro-lens arrays, or a retro-reflector. This paper reviews optical systems for aerial display and reports line-based MTF measurement results.
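A common way to obtain the MTF from a line-based measurement is to treat the captured line profile as a line spread function (LSF) and take the normalized magnitude of its Fourier transform. The abstract does not give the authors' exact procedure, so the following is only a minimal sketch of that standard relationship, with an illustrative Gaussian LSF standing in for measured data:

```python
import numpy as np

def mtf_from_lsf(lsf: np.ndarray) -> np.ndarray:
    """Compute an MTF curve as the normalized magnitude of the
    Fourier transform of a measured line spread function (LSF).

    This is the textbook LSF-to-MTF relation, not necessarily the
    exact pipeline used in the paper.
    """
    lsf = lsf - lsf.min()                 # remove background offset
    spectrum = np.abs(np.fft.rfft(lsf))   # one-sided frequency spectrum
    return spectrum / spectrum[0]         # normalize so MTF(0) = 1

# Illustrative example: a Gaussian line profile (broader blur would
# give lower contrast at high spatial frequencies).
x = np.linspace(-1, 1, 256)
mtf = mtf_from_lsf(np.exp(-(x / 0.05) ** 2))
```

In practice the spatial sampling pitch of the camera sets the frequency axis (cycles per millimeter), and the measured curve is compared across the different real-image-forming optics.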
This paper presents an experiment toward an automatic identification system for tiger puffer (torafugu) using deep learning. To support the operation of growing and selling aquaculture fish, we applied transfer learning to reduce the cost of identifying individual torafugu. In this trial, we used three torafugu. We recorded video of them swimming in an aquaculture tank and extracted images of each fish. Furthermore, to increase the amount of data for training and testing, we obtained 150 pictures of each torafugu by rotating and flipping the extracted images. The experiment showed that 60 to 80 pictures per torafugu are sufficient for automatic identification. The trained model can identify torafugu under severe conditions such as photos blurred by waves, changes in brightness, and bending of the backbone during swimming.
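The abstract's data augmentation step, expanding the extracted frames by rotating and flipping, can be sketched as follows. The exact set of rotations and flips used in the paper is not specified, so this minimal example assumes the four 90-degree rotations plus their mirrored copies, with images represented as NumPy arrays:

```python
import numpy as np

def augment(image: np.ndarray) -> list:
    """Generate augmented copies of one fish image by rotation and flipping.

    Illustrative sketch only: the paper reports reaching 150 images per
    fish this way, but does not state which transforms were applied, so
    here each source image yields the 4 right-angle rotations and their
    horizontal mirrors (8 variants in total).
    """
    variants = []
    for k in range(4):                        # 0, 90, 180, 270 degrees
        rotated = np.rot90(image, k)
        variants.append(rotated)
        variants.append(np.fliplr(rotated))   # mirrored copy
    return variants

# One extracted frame produces 8 augmented training images.
img = np.arange(12, dtype=np.uint8).reshape(3, 4)
aug = augment(img)
```

The augmented set is then split into training and test data for the transfer-learning classifier.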