The next generation of 3D technology with improved features is already in the development pipeline. 3D Global products are state-of-the-art autostereoscopic 3D displays that provide a unique, direct experience of true 3D viewing, or mixed 2D/3D viewing, without glasses or other devices.
New York, Jan. 27, 2022 (GLOBE NEWSWIRE) -- According to our new research study on “3D Display Market Forecast to 2028 - COVID-19 Impact and Global Analysis By Type (Stereoscopic 3D Display and Autostereoscopic 3D Display), Technology (Digital Light Processing, Organic Light Emitting Diode, and Light Emitting Diode), and Application (Consumer Electronics, Automotive, Medical, Advertising, Retail, Military and Defense, and Others)”, published by The Insight Partners.
AU OPTRONICS CORP.; Innolux Corporation; LG Electronics; Mitsubishi Electric Corporation; Panasonic Corporation; Samsung Group; Sharp Corporation; Looking Glass Factory Inc.; Light Field Lab, Inc.; Leia Inc.; Sony Corporation; Toshiba Corporation; and Fujifilm Corporation are among the key players that are profiled during this market study. In addition to these players, several other essential market players were also studied and analyzed to get a holistic view of the global 3D display market and its ecosystem.
In April 2021, AUO launched a stunning series of ALED Displays at Touch Taiwan 2021 with world-leading micro LED technology and applications on showcase.
There is heavy adoption of holographic 3D displays in the media and entertainment field. A prominent early use came in 2012 at the Coachella Valley Music & Arts Festival, where a hologram of Tupac Shakur, an American rapper, was projected on stage for a 3D music performance. Later, holograms of Michael Jackson, Suga of BTS, and several other artists were recreated in musical concerts. Thus, owing to the heavy adoption of holographic displays across the world, the 3D display market is likely to accelerate in the coming years. Moreover, an increase in the demand for 3D visualization in the entertainment, gaming, defense, and medical industries drives market growth. The rapid development of smartphone models with curved displays, the proliferation of the gaming industry worldwide, and the incorporation of AR/VR in consumer electronics products are anticipated to bring commercial opportunities to 3D augmented reality (AR) head-mounted displays in the coming years. In addition, mounting investments in advanced automotive technologies that provide better efficiency and safety are likely to boost the adoption of 3D displays.
In 2020, the COVID-19 outbreak negatively impacted the growth of the global 3D display market due to business shutdowns and a decline in demand from end users, especially in the retail and advertising sectors. The cancellation of events and exhibitions and restrictions on mass gatherings are among the factors that affected the demand for 3D displays worldwide. The rise in confirmed COVID-19 cases and reported deaths in many countries affected the manufacturing and sale of materials associated with 3D displays, and factory and business shutdowns across the US, Canada, and Mexico negatively impacted the adoption of 3D displays in North America. The world is expecting market recovery and economic improvement with COVID-19 vaccination drives. However, companies remain exposed to market uncertainties stemming from a tough business environment, including unfavorable foreign exchange rates, raw material prices, and logistics costs.
The 3D display market is in the nascent stage of its growth cycle, and companies operating in this market are investing heavily in R&D to bring successful 3D display systems to the commercial market. The current key application areas of 3D displays are the marketing and advertising sectors, while medical, automotive, and defense are expected to be among the areas with the largest growth potential. Prospective application areas for 3D display technologies could expand considerably, depending on growth and technology development in the market. For instance, in 2020, Continental announced the launch of its first volume-production autostereoscopic 3D display, featured in the Genesis GV80 from Hyundai Motor Company (HMC).
Holographic display technology generates arbitrary wavefronts and can be considered the ultimate 3D experience for end users. In comparison with 2D image-based stereoscopic displays, which create a 3D perception but can cause headaches, visual discomfort, eyestrain, and fatigue in some users, holographic 3D displays are quite comfortable for users who want to experience realistic 3D. However, the large volume these displays require makes them difficult to use in many potential applications. Thus, holographic 3D displays that use a 2D surface, exploiting the wave nature of light to create 3D images, are considered a more viable option for potential 3D display applications in fields such as marketing, advertising, medical, automotive, education, entertainment, gaming, retail, hospitality, events, sports, and digital signage.
Display Technology Market Forecast to 2028 - Covid-19 Impact and Global Analysis - by Type (Cathode Ray Tube, Liquid Crystal Display, Light Emitting Diode, Plasma Display Panel, Organic LED, AMOLED); Application (Television Display, Mobile Display, Computer/Laptop Display, Head Mounted Display, Advertisement/Signage Display); Display (Conventional Display, 3D Display, Flexible Display, Transparent Display); End-User Industry (Automotive, Consumer Electronics, Media and Advertisement, BFSI, Retail, Military, Industrial, Medical) and Geography
3D Technology Market Forecast to 2028 - Covid-19 Impact and Global Analysis - by Products (Sensors, Integrated Circuits, Transistors, Printer, Gaming, Imaging, Display, Navigation, Animation and Cinema); & End-Users (Healthcare, Entertainment & Media, Education, Government, Industrial, Consumer Electronics and Others)
Head-Up Display Market Forecast to 2028 - Covid-19 Impact and Global Analysis - by Type (AR-Based HUD, Conventional HUD); Technology (Cathode Ray Tube, Light-Emitting Diode, Others); Application (Civil Aviation, Military Aviation, Passenger Cars, Commercial Vehicles) and Geography
Head Mounted Display Market Forecast to 2028 - COVID-19 Impact and Global Analysis By Type (Integrated HMD, Discrete HMD, and Slide-On HMD), Application (Training & Simulation, Sports & Leisure, Imaging, Defense & Security, and Others), Component (Display Screens, Controllers, Sensors, Cameras, and Others), Technology (Augmented Reality, Virtual Reality, and Mixed Reality), Design (Head Mounted Display and Wearable Glasses), and Connection (Wired, Wireless, and Hybrid)
Embedded Display Market Forecast to 2028 - Covid-19 Impact and Global Analysis - by Technology (LCD, LED, OLED); Application (Industrial Automation, Fitness Equipment, Scientific Test and Measurement, Wearables, Home Appliances, Others) and Geography
Commercial Display Market Forecast to 2028 - COVID-19 Impact and Global Analysis by Display Type (Video Wall, Outdoor Display, Signage, Variant Display, Interactive Whiteboard (IWB), Others); Technology (OLED, LED, LCD, Quantum Dots); Application (Retail, Automotive, Healthcare, Government, IT and Telecom, BFSI, Others) and Geography
E-paper Display Market Forecast to 2028 - COVID-19 Impact and Global Analysis By Application (E-Readers, Smart Card, Auxiliary Display, Wearable, Others); End User (Media and Entertainment, Automotive and Transportation, Retail, Healthcare, Consumer Electronics, Others); Technology (Interferometric Modulator Display (IMOD), Cholesteric Liquid Crystal Display (ChLCD), Electrophoretic Display, Others) and Geography
One of the most promising approaches to autostereoscopic presentation currently under development is based on the concept of integral imaging, first proposed in 1908 by Gabriel Lippmann. The integral imaging approach places an array of spherical microlenses (similar to a lenticular cylinder array) in front of the image, where the portion of the image seen through each lens differs depending on viewing angle. Thus, rather than displaying a 3D image that only works in the horizontal direction, it reproduces a 4D light field, creating stereo images that exhibit full parallax in any direction the viewer moves. Research on versions of this approach for electronic image display is in progress; however, much work remains to be done before it is commercialized for use in flat panel displays. (In 2013 Nvidia demonstrated some promising work on near-eye light field displays for use in lightweight HMDs. I had a chance to try this at SIGGRAPH 2013, and while it was still a research prototype, it was an “eye opening” experience.)
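To make the 4D light field idea concrete, here is a minimal sketch (paraxial thin-lens approximation, hypothetical units) that maps a sub-pixel's offset from its microlens centre to the direction of the ray it emits; full parallax arises because the offset, and hence the emitted direction, varies in both x and y.

```python
import numpy as np

def subpixel_ray_direction(pixel_xy, lens_center_xy, focal_length_mm):
    """Approximate the ray direction emitted by one display sub-pixel sitting
    one focal length behind a spherical microlens (integral imaging).
    Paraxial sketch with illustrative units; returns a unit vector."""
    offx = pixel_xy[0] - lens_center_xy[0]
    offy = pixel_xy[1] - lens_center_xy[1]
    # A pixel displaced from the lens axis is collimated into an off-axis beam:
    # the larger the offset, the steeper the emitted angle, in both x and y.
    d = np.array([-offx, -offy, focal_length_mm])
    return d / np.linalg.norm(d)

# Example: a sub-pixel 0.1 mm right of its lens centre, 3 mm focal length.
print(subpixel_ray_direction((0.1, 0.0), (0.0, 0.0), 3.0))
```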
In 2011, Japan’s NHK demonstrated research on a low-resolution version of a spherical-lens-matrix integral imaging TV. This glasses-free approach to stereoscopic display was the result of an initiative designed to explore the next technical development to follow the successful marketing of ultra-high-resolution television. Needless to say, that project has not yet resulted in the introduction of a commercial product. Nonetheless, it is a fascinating approach to creating and displaying integral 3D video.
The illustrations below, taken from Michael Halle’s paper, Autostereoscopic Displays and Computer Graphics, provide an excellent comparison of three essential autostereoscopic display technologies.
The lenticular and parallax barrier approaches to both still and moving images have been successfully commercialized. Most people interested in motion stereoscopy are familiar with the lenticular screen of devices such as the Fuji FinePix W3 camera display or the parallax barrier screen of the Nintendo 3DS handheld gaming platform. The lenticular LCD screen of the W3 employs a vertical grid composed of alternating columns sliced from the left and right images of a stereo pair. A corresponding grid of cylindrical lenses is aligned on top of the image so that the viewer’s left eye sees the columns for the left image and the right eye sees the columns for the right image. The parallax barrier screen of the 3DS works in a similar way; however, instead of the left and right images being separated by cylindrical lenses, they are separated by a mask composed of vertical slits. The diagrams below provide two forms of visual explanation of the binocular stereoscopic process (an extended approach employs more than two images in a multiview array).
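A minimal NumPy sketch of that column interleaving is shown below. It assumes an idealized two-view screen with vertical, unslanted lenses or slits aligned exactly to pixel columns; real products also account for sub-pixel layout, slant, and calibration.

```python
import numpy as np

def interleave_two_views(left, right):
    """Column-interleave a stereo pair for an idealized two-view lenticular or
    parallax-barrier screen: even pixel columns carry the left image, odd
    columns the right. Both arrays are (height, width, 3) and equally sized."""
    assert left.shape == right.shape
    out = np.empty_like(left)
    out[:, 0::2] = left[:, 0::2]   # columns directed to the left eye
    out[:, 1::2] = right[:, 1::2]  # columns directed to the right eye
    return out

# Example with small random "images"
L = np.random.rand(4, 8, 3)
R = np.random.rand(4, 8, 3)
print(interleave_two_views(L, R).shape)
```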
The image below is from Walter Funk’s comprehensive paper, History of autostereoscopic cinema, which covers the topic from the beginnings of autostereoscopy in the 1800s through the development of motion capability and its subsequent evolution. Russian autostereoscopic cinema has a long history spanning several decades, peaking with the creation of numerous public stereokino theaters. Here we see the 1941 installation and calibration of a wire radial-raster and reflective screen system. The raster lines converge with the screen’s plane and viewing plane.
Tom Peterka’s 2007 Ph.D. dissertation, Dynallax: Dynamic parallax barrier autostereoscopic display, expands upon the development of static parallax barrier displays through the use of an LCD-generated parallax barrier. The LCD barrier can be modified dynamically in real time using head tracking to optimize its position (and thus the view of the interleaved left and right image segments) so that the spectator is always in the ideal position to perceive the stereoscopic image.
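A simplified sketch of the head-tracked barrier update is given below, using similar-triangle geometry with assumed dimensions; Peterka's actual system also re-interleaves the image content and can vary the barrier pitch, which this toy calculation ignores.

```python
def barrier_shift_mm(head_dx_mm, gap_mm, viewing_distance_mm):
    """Simplified dynamic-barrier update: when the tracked head moves laterally
    by head_dx, shift the barrier pattern by dx * g / D so each slit stays on
    the line between its pixel column and the eye (similar triangles).
    Illustrative geometry only, not the Dynallax implementation."""
    return head_dx_mm * gap_mm / viewing_distance_mm

# Head moves 30 mm sideways; barrier 5 mm in front of the pixels; viewer at 600 mm.
print(barrier_shift_mm(30.0, 5.0, 600.0))  # -> 0.25 mm barrier phase shift
```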
Zecotek Display Systems is working to create high-resolution autostereoscopic multiview display systems based on “time-sequencing” rather than the more commonly applied multiview “space-sharing” technique. Space-sharing incorporates advanced lenticular design elements, such as slanting the grid of cylindrical lenses relative to the RGB pixel structure of a video display, in order to provide a uniform and regular distribution of pixels across the image (thus improving horizontal resolution relative to vertical resolution).
“Space-sharing” multiview image displays require increasing image resolution via increasing the number of pixels in a display to expand the number of views shown. Zecotek’s “time-sequencing” 3D display technology requires increases in image frame rate to expand the number of views shown. That approach is described in the following material from their website:
Unlike systems based on “space-sharing,” which must share pixels and therefore their native resolution among views, Zecotek’s system uses a proprietary “time-sequencing” technology combined with a patented dynamic system of multiple lenses. This results in a display with more than 90 views at full native (base) resolution in each perspective. It is this large number of views, and the extremely narrow angle of each view, that gives viewers complete freedom of position within large viewing zones. Zecotek’s prototype now offers a ~50° continuous viewing angle.
Zecotek’s “time-sequencing” approach also means that the HD resolution does not need to be divided between views: each view has exactly the same HD resolution as the base screen. This is because it is display time, as opposed to space (see Space-Sharing Auto-Stereoscopic Displays), that is shared. We therefore require a frame rate of approximately 2,000 Hz for about 40 views, which is readily available using existing and well-known DLP back-projection elements.
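The frame-rate requirement follows directly from the time-sequencing idea: every view must be redrawn within one per-view refresh period. A one-line sketch reproduces the ~2,000 Hz figure; the 50 Hz per-view rate is an assumption implied by the numbers quoted above.

```python
def required_panel_rate_hz(num_views, per_view_rate_hz=50):
    """Time-sequenced multiview: the panel must redraw every view within one
    per-view refresh period, so panel rate = views x per-view rate.
    The 50 Hz per-view figure is assumed for illustration."""
    return num_views * per_view_rate_hz

print(required_panel_rate_hz(40))  # 2,000 Hz, matching the figure quoted above
```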
Back-projection 2D monitors and TVs have been available for many years and deliver high-resolution, high-quality images. Their only trade-off is that projection units have more physical depth in their form factor than flat panel displays. (This depth can also be significantly reduced to nearly a flat panel form factor with the use of special optics.)
Zecotek’s 3D multiple-view auto-stereoscopic display with its “time-sequencing” approach can provide the most natural 3D experience, as it allows for a freedom of head movement similar to that required for seeing objects in the real world.
The marketing limitation of a DLP-based back-projection 3D TV for the consumer market is its form factor: DLPs are not perceived as flat panels (even though, with optical modifications, their depth can approach that of a flat panel). While Zecotek’s technology is fully adaptable to a flat panel configuration, this will require flat displays with frame rates exceeding 2,000 Hz. Such flat panel speeds are not yet available (as there has been no demand to date), but many industry players have them in development for other applications. With rapid advances in OLEDs (organic light-emitting diodes) yielding frame rates over 2,000 Hz, and as manufacturing costs of these panels fall, Zecotek’s patented technology should yield a flat panel configuration highly suitable for consumer markets well in advance of space-sharing systems, which require greater pixel density; pixel density is directly related to production yield and, therefore, panel cost.
Not surprisingly, autostereoscopic movies are an outgrowth of earlier work with still imagery. As with motion pictures, autostereoscopic still imaging began with the capturing and display of left and right binocular pairs and progressed from the two images of a stereoscopic pair to as many as sixty-four in-between images in a multiview display. A larger number of multiplexed images enhances the perception of motion parallax as the spectator moves their head and can allow them to peer around the objects represented in the image.
Since the 1960s, the idea of the hologram has held an important place in the popular imagination as the epitome of 3D imaging technology. This has led to some misunderstandings about what a hologram actually is, and to the false attribution of the term to any form of apparent 3D projection. A true hologram is a cameraless recording of wavefront interference patterns which uses wavefront reconstruction to display a 3D image. Recent developments in the technology used to create the illusion known as Pepper's Ghost have been promoted as holographic, even though the floating image effect is based on a 2D video projection rather than the live 3D actors used in the original Pepper's Ghost. The 2D illusion is very convincing from a distance, even though it lacks the actual parallax cues of a true 3D image. The following CNN interview with Uwe Maass provides a look into his inventive work developing the patented digital Pepper's Ghost process marketed as Musion Eyeliner.
Jason Geng’s paper, Three-dimensional display technologies, provides a comprehensive overview of a much wider range of approaches to stereoscopic viewing.
Since Charles Wheatstone first invented stereoscopy, research interest in three-dimensional (3D) displays has extended for more than 180 years, and its history is as long as that of photography (Wheatstone, 1838). As a more natural way to present virtual data, glasses-free 3D displays show great prospects in various fields, including education, military, medical, entertainment, and automotive applications. According to a survey, people spend an average of 5 h every day looking at display screens. The visualization of 3D images will have a huge impact on improving work efficiency. Therefore, glasses-free 3D displays are regarded as a next-generation display technology.
Generally, we assign glasses-free 3D displays into three main categories: holographic 3D displays, volumetric 3D displays, and autostereoscopic 3D displays (Geng, 2013). A holographic 3D display records both the amplitude and phase information of a real object and reproduces it through specific media (e.g., photorefractive polymers) (Tay et al., 2008; Blanche et al., 2010). Furthermore, by using a spatial light modulator that directly modulates the coherent wave, computer-generated hologram systems can be implemented via numerical simulation (Hahn et al., 2008; Sasaki et al., 2014). Currently, powerful acceleration chips and video processors have enabled the reproduction of high-quality 3D holograms at video rates (An et al., 2020; Shi et al., 2021). In the future, real-time holographic 3D displays will have wide applications in mobile and AR displays (Peng et al., 2021; Lee et al., 2022). A volumetric 3D display instead generates luminous image points (i.e., voxels) in space via special media, such as trapped particles and fluorescent screens. These image points form 3D graphics that can be observed within 360° (Kumagai et al., 2015; Kumagai et al., 2018; Smalley et al., 2018; Hirayama et al., 2019). Both holographic and volumetric 3D displays require a large amount of data to provide 3D content, which brings challenges to data processing and transmission.
In contrast, autostereoscopic 3D displays reduce computing costs by discretizing a continuously distributed light field of 3D objects into multiple “views”. The properly arranged perspective views can approximate the 3D images with motion parallax and stereo parallax. Moreover, by modulating the irradiance pattern of each view, only a small number of views are required to reconstruct the light field. A typical autostereoscopic 3D display only needs to integrate two components: an optical element and an off-the-shelf refreshable display panel (e.g., liquid crystal display, organic light-emitting diode display, light-emitting diode display) (Dodgson, 2005). With the advantages of a compact form factor, ease of integration with flat display devices, ease of modulation, and low cost, autostereoscopic 3D displays can be applied in portable electronics and redefine human-computer interfaces. The function of the optical element in an autostereoscopic 3D display is to manipulate the incident light and generate a finite number of views. To improve the display effect, the optical elements also need to modulate the views and angular separation between views, which is called the “view modulator” in this paper. View modulators represent a special class of optical elements that are used in glass-free 3D displays for view modulation, such as parallax barriers, lenticular lens arrays, and metagratings.
One of the most critical issues in autostereoscopic 3D displays is how to design view modulators. Several essential problems, directly related to 3D display performance, need to be considered (Figure 1):
1) To minimize crosstalk and ghost images, the view modulators should confine the emerging light within a well-defined region.
2) To address the vergence-accommodation conflict, the view modulators need to provide both correct vergence and accommodation cues. Vergence-accommodation conflict occurs when the depth of 3D images induced by binocular parallax lies in front of or behind the display screen, whereas the depth recognized by a single eye is fixed at the apparent location of the physical display panel because the image observed by a single eye is 2D (Zou et al., 2015; Koulieris et al., 2017).
3) To achieve a large field of view (FOV), the view modulators need to manipulate light precisely over a large steering angle.
4) For an energy-efficient system, the light efficiency of the view modulators needs to be adequate.
In addition to these four factors, which affect the optical performance of 3D displays, some additional features should be addressed in applications:
5) To maintain a thin, lightweight form factor for portable electronics, the view modulators should be designed with as few layers or components as possible.
6) To ease the tradeoff between spatial resolution, angular resolution, and FOV, the view modulators should manipulate the shape of each view for spatially variant information density.
7) In window display applications, the view modulators should be transparent so that virtual 3D images can be combined with physical objects for glasses-free augmented reality display.
Depending on the type of view modulator adopted, autostereoscopic 3D displays can be divided into geometrical optics-based and planar optics-based systems. With regard to geometrical optics-based 3D displays, the most representative architectures are parallax barrier or lenticular lens array-based, microlens array-based, and layer-based systems (Ma et al., 2019). The parallax barrier or lenticular lens array was first integrated with flat panels and applied in 3D mobile electronic devices because of the advantage of utilizing existing 2D screen fabrication infrastructure (Ives, 1902; Kim et al., 2016; Yoon et al., 2016; Lv et al., 2017; Huang et al., 2019). For improved display performance, aperture stops were inserted into the system to reduce crosstalk by decreasing the aperture ratio; however, this strategy comes at the expense of light efficiency (Wang et al., 2010; Liang et al., 2014; Lv et al., 2014). Microlens array-based 3D displays, i.e., integral imaging displays, generate stereoscopic images by recording and reproducing the rays from 3D objects (Lippmann, 1908; Martínez-Corral and Javidi, 2018; Javidi et al., 2020). Because the microlenses manipulate light in both the horizontal and vertical directions, they can present full motion parallax. Recently, a bionic compound-eye structure was proposed to enhance the performance of integral imaging 3D display systems. With proper design based on geometric optics, the 3D display prototype achieved a 28° horizontal and 22° vertical viewing angle, approximately two times that of a normal integral imaging display (Zhao et al., 2020). In another work, an integral imaging 3D display system that enhances both the pixel density and the viewing angle was proposed, using parallel projection of ultrahigh-definition elemental images (Watanabe et al., 2020). This prototype reproduced 3D images with a horizontal pixel density of 63.5 ppi and viewing angles of 32.8° and 26.5° in the horizontal and vertical directions, respectively. Furthermore, with three groups of directional backlights and a fast-switching liquid crystal display (LCD) panel, a time-multiplexed integral imaging 3D display with a 120° wide viewing angle was demonstrated (Liu et al., 2019). The layer-based 3D display invented by Lanman and Wetzstein (Lanman et al., 2010; Lanman et al., 2011; Wetzstein et al., 2011; Wetzstein et al., 2012) uses multiple LCD layers to modulate the light field of 3D objects. It can provide both vergence and accommodation cues for viewers with limited fatigue and dizziness (Maimone et al., 2013). Nevertheless, its FOV is limited by the effective size of the display panel. Moreover, layer-based 3D displays also suffer from a trade-off between the depth of field and the complexity of the system (i.e., the number of display layers). In general, geometrical optics-based autostereoscopic 3D displays have the advantages of low cost and thin form factors that are compatible with 2D flat display panels. However, there is still a fair way to go because of the tradeoffs among resolution, FOV, depth cues, depth of field, and form factor (Qiao et al., 2020). Alleviating these tradeoffs and improving image quality to provide more realistic stereoscopic vision is an intriguing avenue for developing next-generation 3D display technology.
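To make the geometry behind these parallax-barrier and lenticular architectures concrete, the following minimal sketch estimates the lateral spacing between adjacent viewing zones using a similar-triangle (paraxial) approximation; the pitch, gap, and viewing-distance values are illustrative assumptions, not figures from the works cited above.

```python
def view_spacing_mm(subpixel_pitch_mm, lens_gap_mm, viewing_distance_mm):
    """Similar-triangle estimate for a lenticular or barrier display: adjacent
    sub-pixels behind one lens map to viewing zones separated laterally by
    roughly pitch * D / g at viewing distance D, where g is the lens-to-pixel gap.
    Illustrative numbers only."""
    return subpixel_pitch_mm * viewing_distance_mm / lens_gap_mm

# 0.06 mm sub-pixel pitch, 0.55 mm lens-to-pixel gap, 600 mm viewing distance:
print(view_spacing_mm(0.06, 0.55, 600.0))  # ~65 mm, about one eye separation
```

Choosing the pitch and gap so that this spacing is about one interocular distance at the design viewing distance is the usual starting point for a two-view design; multiview and integral-imaging systems repeat the same geometry over more sub-pixels per lens.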
Fast-growing planar optics have attracted wide attention in various fields because of their outstanding capability for light control (Genevet et al., 2017; Zhang and Fang, 2019; Chen and Segev, 2021; Tabiryan et al., 2021; Xiong and Wu, 2021). In the field of glasses-free 3D displays, planar optical elements, such as diffraction gratings, diffractive lenses and metasurfaces, can be used to modulate the light field of 3D objects at the pixel level. With proper design, planar optical elements at the micro or nano scale provide superior light manipulation capability in terms of light intensity, phase, and polarization. Therefore, planar optics-based glass-free 3D displays have several merits, such as reduced crosstalk, no vergence-accommodation conflict, enhanced light efficiency, and an enlarged FOV. Figure 2 shows the developing trend for 3D display technologies with regard to the revolution of view modulators. Planar optics are becoming the “next-generation 3D display technology” because of outstanding view modulation flexibility.
FIGURE 2. Schematic of the development of glasses-free 3D displays with regard to the revolution of view modulators. LLA: Lenticular lens array; MLA: Microlens array.
In this review, the critical challenges for glasses-free 3D displays are analyzed. Planar optics-based 3D displays suggest a variety of solutions for 3D displays, which will be reviewed in the section Glasses-Free 3D Display Based on Planar Optical Elements. As a specific application and an appealing feature, augmented reality (AR) 3D displays enabled by planar optics will be comprehensively introduced in the section Glasses-Free augmented reality 3D display based on planar optical elements. In addition to the design of view modulators, the fabrication of view modulators is another challenge that hinders the development of 3D displays. Therefore, in the section Fabrication of Large-Scale Micro/Nanostructures on View Modulators for 3D Displays, we will highlight multiple micro/nanofabrication methods for view modulators in 3D displays. In the section Conclusions and Outlook, the current status for glasses-free 3D displays and glasses-free AR 3D displays will be summarized. Finally, future directions and potential applications are suggested in the section Conclusions and Outlook.
Diffraction gratings are unique components that can split incident light into many spatial directions simultaneously and have been widely used in steering devices, such as spectrometers, optical waveguides and laser resonators (Zola et al., 2019; Cao et al., 2020; Görrn et al., 2011; Zhang et al., 2019; Liu et al., 2020). Fattal et al. employed diffraction gratings in a 3D display and proposed a directional diffractive backlight to produce full parallax views within a wide FOV (Fattal et al., 2013). The key elements in the backlight were pixelated grating patterns fabricated by electron-beam lithography. Both passive and active prototypes provided 64-view images within a FOV of 90°. The diffractive wide-angle backlight is regarded as a revolutionary 3D display (https://www.technologyreview.com/innovator/david-fattal). It has opened up rich opportunities for planar optics-based glasses-free 3D displays.
On this basis, a holographic sampling 3D display was proposed by combining a phase plate with a thin-film-transistor LCD panel (Wan et al., 2017) (Figure 3A). The phase plate modulates the phase information, while the LCD panel provides refreshable amplitude information for the light field. Notably, the period and orientation of the diffraction gratings in each pixel are calculated to form converged beams instead of the (semi)parallel beams of a geometrical optics-based 3D display. As a result, the angular divergence of the target viewpoints (1.02°) is confined close to the diffraction limit (0.94°), leading to significantly reduced crosstalk and ghost images (Figures 3B,C). The researchers further presented a holographic sampling 3D display based on metagratings and demonstrated a video-rate, full-color 3D display prototype with sizes ranging from 5 to 32 inches (Figure 3D) (Wan et al., 2020). The metagratings on the view modulator were designed to operate at the R/G/B wavelengths to reconstruct the wavefront at the sampling viewpoints with the correct white balance (Figure 3E). By combining the view modulator, an LCD panel, and a color filter, virtual 3D whales were presented, as shown in Figure 3F. To address the vergence-accommodation conflict in 3D displays, a super multiview display was also proposed based on pixelated gratings. Closely packed views with an angular separation of 0.9° provide a depth cue for the accommodation process of the human eye (Wan et al., 2020).
FIGURE 3. (A) Schematic of the proposed holographic sampling 3D display. (B) Photograph of 4 views and the light intensity distribution at 4 views. (C) 3D images of a car running through trees. (D) Schematic of the full-color video rate holographic sampling 3D display. (E) The radiation pattern measured from a 16-view point view modulator. (F) 3D images of whales and logos. [(A–C) Reproduced from Wan et al. (2013). Copyright (2021) with permission from Optica Publishing Group. (D–F) Reproduced from Wan et al. (2020). Copyright (2021) with permission from Elsevier B.V.].
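As a rough illustration of how such pixelated gratings can be specified, the sketch below applies the first-order grating equation at normal incidence to compute a local grating period and in-plane orientation that would steer one pixel's light toward a single target viewpoint. This is a generic textbook construction under assumed geometry, not the design procedure used in the works cited above.

```python
import numpy as np

def pixel_grating(pixel_xy_mm, view_xyz_mm, wavelength_nm):
    """For light at normal incidence, the first-order grating equation
    sin(theta) = wavelength / period gives the local period needed to deflect a
    pixel's light toward one viewpoint; the grating orientation follows the
    in-plane direction from the pixel to the viewpoint. Assumed geometry."""
    dx = view_xyz_mm[0] - pixel_xy_mm[0]
    dy = view_xyz_mm[1] - pixel_xy_mm[1]
    dz = view_xyz_mm[2]                       # viewpoint distance from the panel
    theta = np.arctan2(np.hypot(dx, dy), dz)  # deflection angle
    period_nm = wavelength_nm / np.sin(theta)
    azimuth_deg = np.degrees(np.arctan2(dy, dx))
    return period_nm, azimuth_deg

# A pixel 100 mm off-centre steering 532 nm light to a viewpoint 500 mm away:
print(pixel_grating((100.0, 0.0), (0.0, 0.0, 500.0), 532))
```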
To summarize, diffraction grating-based 3D displays have the advantages of minimal crosstalk, reduced vergence-accommodation conflict, tailorable view arrangement, continuous motion parallax, and a wide FOV. Nevertheless, the experimental diffraction efficiency of binary gratings is only approximately 20%, leading to inevitably high power consumption. To address this, diffractive lenses and metasurfaces have been employed for 3D displays.
Light efficiency is a crucial parameter in glass-free 3D display systems. Diffractive lenses with blazed structures can be used to focus light together, thereby showing higher light efficiency in 3D displays than diffraction gratings. As shown in Figures 4A,B, pixelated blazed diffractive lenses are introduced in a 3D display to form four independent convergent views, while the amplitude plate provides the images at these views. The system has the following benefits. First, each structured pixel on the view modulator is calculated by the relative position relationship between the pixel and viewing points. These accurately calculated aperiodic structures can improve the precision of light manipulation, thereby eliminating crosstalk and ghost images. Second, the 4-level blazed diffractive lens greatly increases the diffraction efficiency of the grating-based 3D display from 20 to 60% (Zhou et al., 2020). In another work, a view modulator covered with a blazed diffractive lenticular lens was proposed in a multiview holographic 3D display (Hua et al., 2020). This system redirected the diverging rays to shape four extended views with a vertical FOV of 17.8°. In addition, the diffraction efficiency of the view modulator was increased to 46.9% using the blazed phase structures. Most recently, a vector light field display with a large depth of focus was proposed based on an intertwined flat lens, as shown in Figures 4C,D. A grayscale achromatic diffractive lens was designed to extend the depth of focus by 1.8 × 10⁴ times. By integrating the intertwined diffractive lens with a liquid crystal display, a 3D display with a crosstalk below 26% was realized over a viewing distance ranging from 24 to 90 cm (Zhou et al., 2022).
FIGURE 4. (A) Schematic of a glass-free 3D display based on a multilevel diffractive lens. (B) 3D images of letters or thoracic cages in a blazed diffractive lens-based 3D display. (C) Schematic of a vector light field display based on a grayscale achromatic diffractive lens. (D) Full color 3D images of letters and the thoracic cage produced by the intertwined diffractive lens-based 3D display. [(A,B) Reproduced from Zhou et al. (2020). Copyright (2021), with permission from IEEE. (C,D) Reproduced from Zhou et al. (2022). Copyright (2022), with permission from Optica Publishing Group.].
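The efficiency gain from multilevel blazing is consistent with the standard scalar-diffraction estimate for an N-level quantized blazed grating, sketched below; this formula is textbook material rather than a result from the cited works, and the measured figures quoted above (≈20% for binary gratings, 60% and 46.9% for blazed structures) are lower than the ideal values, as expected once fabrication and material losses are included.

```python
import numpy as np

def blazed_efficiency(levels):
    """Standard scalar-diffraction estimate for an N-level quantized blazed
    grating: first-order efficiency = sinc^2(1/N) = [sin(pi/N)/(pi/N)]^2."""
    x = np.pi / levels
    return (np.sin(x) / x) ** 2

for n in (2, 4, 8):
    print(n, f"{blazed_efficiency(n):.1%}")
# 2 -> 40.5%, 4 -> 81.1%, 8 -> 95.0% (ideal values; real devices measure lower)
```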
In summary, coupled with various design approaches, an optimized diffractive lens can enable high-quality, full-spectrum imaging (Peng et al., 2015; Heide et al., 2016; Peng et al., 2016; Peng et al., 2019). The design of diffractive lenses in 3D displays bears similarities to their design for imaging, and it solves the light-efficiency problem of diffraction grating-based 3D displays. An optimized lens features high light efficiency, a wide spectral response, and a large depth of focus, which benefits glasses-free 3D displays in terms of brightness, color fidelity, and viewing depth. However, the minimum feature size of diffractive lenses is generally larger than that of nanogratings due to fabrication limits, resulting in a reduced viewing angle.
We believe that metasurfaces can be used in 3D displays because of their unprecedented capability to manipulate light fields. In 2013, 3D computer-generated holography image reconstruction was demonstrated in the visible and near-infrared range by a plasmonic metasurface composed of pixelated gold nanorods (Huang et al., 2013) (Figures 5A,B). The pixel size of the metasurface hologram was only 500 nm, much smaller than that of hologram pixels generated by spatial light modulators or diffractive optical elements. As a result, a FOV as large as 40° was demonstrated. To correct chromatic aberration in integral imaging 3D displays, a single polarization-insensitive broadband achromatic metalens using silicon nitride was proposed (Fan et al., 2019) (Figures 5C,D). Each achromatic metalens has a diameter of 14 µm and was fabricated via electron beam lithography. The focusing efficiency was 47% on average. By arranging 60 × 60 metalenses in a rectangular lattice, a broadband achromatic integral imaging display was demonstrated under white-light illumination. To address the tradeoff between spatial resolution, angular resolution, and FOV, a general approach for foveated glasses-free 3D displays using a two-dimensional metagrating complex was proposed recently (Figure 5E) (Hua et al., 2021). The dot/linear/rectangular hybrid views, which are shaped by the two-dimensional metagrating complex, form spatially variant information density. By combining the two-dimensional metagrating complex film with an LCD panel, a video-rate, full-color foveated 3D display system with an unprecedented FOV of up to 160° was demonstrated (Figure 5F). Compared with prior work, the proposed system makes two breakthroughs: first, the irradiance pattern of each view can be tailored carefully to avoid both crosstalk and discontinuity between views; second, the tradeoffs between angular resolution, spatial resolution, and FOV in 3D displays are alleviated.
FIGURE 5. (A) Schematic of a plasmonic metasurface for 3D CGH image reconstruction. (B) Experimental hologram images for different focusing positions along the z direction. (C) Schematic of the broadband achromatic metalens array for a white-light achromatic integral imaging display. (D) Reconstructed images for the cases that “3” and “D” lie on the same depth plane or on different depth planes, respectively. Scale bar, 100 µm. (E) Schematic of a foveated glasses-free 3D display using the two-dimensional metagrating complex. (F) “Albert Einstein” images in the foveated 3D display system. [(A,B) Reproduced from Huang et al. (2013). Copyright (2021), with permission from Springer Nature. (C,D) Reproduced from Fan et al. (2019). Copyright (2021), with permission from Springer Nature. (E,F) Reproduced from Hua et al. (2021). Copyright (2021), with permission from Springer Nature.].
To summarize, metasurfaces provide a solution that maintains both a large FOV and reasonable light efficiency. Moreover, their superior light manipulation capability enables an inspiring foveated glasses-free 3D display solution to the intrinsic tradeoff between resolution and viewing angle. As with other metamaterial-based photonic devices, however, mass fabrication of metasurfaces is the major issue preventing industrial application of this technology.
As mentioned above, we have reviewed the research progress for planar optics-based glass-free 3D displays: diffraction grating-based, diffractive lens-based and metasurface-based. Compared with geometric optics-based 3D displays, these displays all have common advantages, such as high precision control at the pixel level, high degrees of freedom in design, and compact form factors. On the other hand, they have their own properties in terms of light efficiency, FOV, viewing distance, and fabrication scaling, as listed in Table 1. The diffraction grating-based method has both a large FOV with continuous motion parallax and large fabrication scaling. Although the bandwidth of the diffraction grating is limited, a full-color display can still be realized by integrating a color filter. As a result, the problem of selective bandwidth operation is trivial in 3D displays. However, the low light efficiency of binary gratings can be problematic because of the increased power consumption, especially in portable electronics. The diffractive lens-based approach greatly improves the light efficiency. Moreover, through proper design, an intertwined diffractive lens can be used to realize a large viewing distance and broadband spectrum manipulation. Nevertheless, the viewing angle of a diffractive lens-based 3D display is limited by the numerical aperture. The metasurface-based technique has the advantages of medium light efficiency, a large FOV and broadband spectrum response. Therefore, metasurfaces can provide better 3D display performance in terms of color fidelity. Furthermore, the subwavelength dimensions of metasurfaces ensure their flexibility for view manipulation. However, the complexity and difficulty in nanofabrication hinders the application of metasurfaces in large-scale displays.
Most recently, augmented reality (AR), as an interactive display technology that fuses the virtual world with reality, has become an active research field attracting broad attention from researchers and investors (Chang et al., 2020; Xiong et al., 2021). Glasses-free AR 3D displays are of special interest because of the huge demand in many applications, such as head-up displays in vehicles, education, and exhibitions. Although near-to-eye AR displays based on wearable devices can be implemented by various methods, including free-form optics, holographic optical elements, surface relief gratings, and metasurfaces, the realization of glasses-free AR 3D displays is a much harder task because of the uncertain spatial relationship between the display screen and observers. Glasses-free AR 3D displays can be assigned to either reflection-type or optical see-through displays. Li et al. adopted a mirror-based pinhole array to demonstrate a reflective AR 3D display system based on an integral imaging display (Li et al., 2019). Recently, they improved the performance of the reflection-type AR 3D system, achieving high definition and high brightness through the use of a reflective polarizer (Li et al., 2021). However, in the reflection-type AR 3D display, virtual images are fused with mirror images of the real scene rather than the real scene itself.
The optical see-through glasses-free AR 3D display permits people to perceive real scenes directly through a transparent optical combiner (Hong et al., 2016; Mu et al., 2020). It represents the mainstream among AR 3D display technologies and can be realized using geometric optical elements, holographic optical elements (HOEs), and metagratings. In 2020, a lenticular lens-based light field 3D display system with continuous depth was proposed and integrated into AR head-up display optics (Lee et al., 2020). This integrated system can generate stereoscopic virtual images with a FOV of 10° × 5°.
The HOE is an optical component that can be used to produce holographic images using principles of diffraction, which is commonly used in transparent displays, 3D imaging, and certain scanning technologies. HOEs share the same optical functions as conventional optical elements, such as mirrors, microlenses, and lenticular lenses. On the other hand, they also have unique advantages of high transparency and high diffraction efficiency. On this basis, the integral imaging display can be integrated with an AR display based on a lenticular lens or microlens-array HOE (Li et al., 2016; Wakunami et al., 2016). Moreover, the HOE can be recorded by wavelength multiplexing for full-color imaging (Hong et al., 2014; Deng et al., 2019) (Figure 6A). A high transmittance was achieved at all wavelengths (Figures 6B,C). A 2D/3D convertible AR 3D display was further proposed based on a lens-array holographic optical element, a polymer dispersed liquid crystal film, and a projector (Zhang et al., 2019). Controlled by voltage, the film can switch the display mode from a 2D display to an optical see-through 3D display.
FIGURE 6. (A) Work principles for a lens-array HOE used in the OST AR 3D display system. (B) Transmittance and reflectance of the recorded lens-array HOE. (C) 3D virtual image of the lens-array HOE-based full color AR 3D display system. (D) Schematic for spatial multiplexing metagratings for a full-color glasses-free AR 3D display. (E) Transmittance of the holographic combiner based on pixelated metagratings. (F) 3D virtual image of the metagratings-based glasses-free AR 3D display system. (G) Schematic of the pixelated multilevel blazed gratings for a glass-free AR 3D display. (H) Principles of the pixelated multilevel blazed gratings array that form viewpoints in different focal planes. (I) 3D virtual image of the blazed gratings-based glasses-free AR 3D display system. [(A–C) Reproduced from Hong et al. (2014). Copyright (2021), with permission from Optica Publishing Group. (D–F) Reproduced from Shi et al. (2020). Copyright (2021), with permission from De Gruyter. (G–I) Reproduced from Shi et al. (2021). Copyright (2021), with permission from MDPI.].
In fact, AR 3D displays based on lens arrays form self-repeating views; thus, both motion parallax and FOV are limited. Moreover, false depth cues for 3D virtual images can be generated due to the image flip effect. Correct depth cues are particularly important for AR 3D displays when virtual images fuse with natural objects. On this basis, a holographic combiner composed of spatially multiplexed metagratings was proposed to realize a 32-inch full-color glass-free AR 3D display, as shown in Figure 6D (Shi et al., 2020). The irradiance pattern for each view is shaped as a super-Gaussian function to reduce crosstalk. A FOV as large as 47° was achieved in the horizontal direction. For the sake of correct white balance, three layers of metagratings are stacked for spatial multiplexing. The whole system contains only two components: a projector and a metagrating-based holographic combiner. Moreover, the transmittance is higher than 75% over the visible spectrum (Figures 6E,F), but the light efficiency of the metagratings is relatively low (40% in theory and 12% in experiment). To improve the light efficiency, pixelated multilevel blazed gratings were introduced for glasses-free AR 3D displays in a 20-inch format (Figures 6G,H) (Shi et al., 2021). The measured diffraction efficiency was improved to ∼53%. The viewing distance for motion parallax was extended to more than 5 m, benefiting from multiorder diffraction according to harmonic diffraction theory (Figure 6I).
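For readers unfamiliar with the super-Gaussian view profile mentioned above, the short sketch below shows its flat-top, sharp-edged shape, which keeps neighbouring views from overlapping as much as ordinary Gaussian views of the same width; the order and width used here are illustrative assumptions, not values from the cited work.

```python
import numpy as np

def super_gaussian(x_deg, center_deg, width_deg, order=6):
    """Super-Gaussian irradiance profile exp(-(|x - c| / w)^(2n)): nearly flat
    across the view and falling off sharply at its edges, which limits the
    overlap (crosstalk) between adjacent views. Order n is illustrative."""
    return np.exp(-np.abs((x_deg - center_deg) / width_deg) ** (2 * order))

angles = np.linspace(-3, 3, 7)          # sample angles across one view, in degrees
print(np.round(super_gaussian(angles, 0.0, 1.5), 3))
# -> [0. 0. 0.992 1. 0.992 0. 0.]: flat inside the view, essentially zero outside
```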
Here we summarize the various methods for realizing glasses-free AR 3D displays. As shown in Table 2, the optical see-through combiner outperforms the reflection-type method by providing a more natural fusion with the physical world. Among optical see-through combiners, holographic optical element-based combiners have the advantages of high diffraction efficiency and high transparency; however, they suffer from limited FOV and motion parallax. The metagrating-based combiner offers accurate depth cues over a large FOV. The multilevel blazed grating-based method further improves the light efficiency and viewing depth thanks to multiorder diffraction.
The development of high-throughput micro/nanofabrication methods is essential for large view modulators. To fabricate the diffraction gratings or metagratings at a high throughput, a flexible lithography system was proposed (Figure 7A) (Wan et al., 2016). The nanogratings in this system were fabricated pixel by pixel. Through one exposure, a nanograting pixel with a size on the scale of tens of microns was formed. Therefore, the throughput can be much faster than that obtained by an electron beam lithography system that works via a sequential writing process. In addition, the periodic tuning accuracy of the fabricated gratings can be less than 1 nm. Using the proposed lithography system, a 32-inch view modulator with a minimum feature size of 300 nm was successfully prepared for a glass-free 3D display (Figures 7B,C). This view modulator has a total of 24,883,200 pixelated metagratings.
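As a quick sanity check on the metagrating count quoted above, assuming (this is our assumption, not stated explicitly in the text) that the modulator pairs one pixelated metagrating with each RGB sub-pixel of a 3840 × 2160 panel, the figure works out exactly:

```python
# One metagrating per RGB sub-pixel of a 3840 x 2160 (4K UHD) panel.
cols, rows, subpixels = 3840, 2160, 3
print(cols * rows * subpixels)  # 24,883,200
```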
To efficiently fabricate multilevel microstructures, a grayscale laser direct writing system can be employed, as shown in Figure 7D. The system mainly contains a laser, an electronically programmable spatial light modulator device and an objective lens. The spatial light modulator device loads the hologram patterns that are refreshed synchronously with the movement of the 2D sample stage. The objective lens reduces the pixel size of the spatial light modulator device by 20 times or 50 times. Furthermore, the proposed laser direct writing system has a high throughput of 25 mm²/min, which supports the fabrication of a large-scale view modulator for display purposes. It took only 30 min to fabricate a 40 mm² view modulator fully covered with a four-level blazed diffractive lens (Figures 7E,F).
In this paper, we have mainly focused on the exciting achievements of planar optics-based glass-free 3D displays and glass-free AR 3D displays (summarized in Figure 8). Planar optics opens up the possibility of steering light pixel by pixel, rather than steering an entire image of many pixels as in a microlens array-based architecture. Modulating individual pixels has several benefits. First, the views can be arranged freely: in a line for horizontal parallax, on a curve for table-top 3D displays, or in a matrix for full parallax. As a result, the views can be arranged according to the application. Second, when an entire image is steered through one lens, many pixels are wasted, especially at large viewing angles, so severe resolution degradation is a frequent criticism; in the pixel-by-pixel steering strategy, however, every pixel contributes to the virtual 3D image. Third, planar optics offers superior light steering capability for a large FOV. Fourth, the light distribution of each view can be tuned from a Gaussian to a super-Gaussian distribution to minimize crosstalk and ghost images. Fifth, the view shape can be tuned to dot, linear, or rectangular shapes for information density-variant 3D displays, alleviating the tradeoff between resolution and viewing angle. Sixth, a super multiview display can be realized with closely packed views to address the vergence-accommodation conflict. Seventh, multilevel structures, such as blazed gratings, diffractive lenses, and metasurfaces, offer solutions for high light efficiency and reduced chromatic aberration. Eighth, planar optics has a thin form factor and light weight, which are compatible with portable electronics. Finally, a glass-free AR 3D display can be achieved with a large FOV, enhanced light efficiency, and reduced crosstalk for window displays.
FIGURE 8. Schematic of the emerging planar optical elements applied in glasses-free 3D displays and glasses-free AR 3D displays. There are various merits for planar optical elements compared with geometric optical elements. DG: Diffraction gratings.
To summarize, planar optics-based 3D displays have the advantages of a thin form factor, light weight, flexible design, and precise light manipulation. They hold great promise to tackle the critical challenges for geometric-based 3D displays, especially for the applications of portable electronics and transparent displays.
Future research in planar optics-based 3D displays should focus on improving display performance and enhancing practicality. At the system level, several strategies can be used to further improve display performance. First, a time-multiplexed strategy enabled by a high-refresh-rate panel can be used to increase the resolution by exploiting redundant time information (Hwang et al., 2014; Ting et al., 2016; Liu et al., 2019). For example, a projector array and a liquid crystal-based steering screen have been used to implement a time-multiplexed multiview 3D display; the angular steering screen controls the light direction to generate more continuous viewpoints, thereby increasing the angular resolution (Xia et al., 2018). In another work, a time-sequential directional beam splitter array was introduced in a multiview 3D display to increase the spatial resolution (Feng et al., 2017). When equipped with eye-tracking systems, a time-multiplexed 3D display can provide both high spatial resolution and high angular resolution for single-user applications. Second, a foveated vision strategy can be utilized to compress the image processing load and improve the optical performance of imaging systems and near-eye displays (Phillips et al., 2017; Chang et al., 2020). For instance, a multiresolution foveated display using two display panels and an optical combiner was proposed for virtual reality applications (Tan et al., 2018). The first display panel provides a wide FOV, and the second display panel improves spatial resolution for the central fovea region. This system effectively reduces the screen-door effect in near-eye displays. Moreover, a foveated glasses-free 3D display was also demonstrated with spatially variant information density. This strategy offers potential solutions to the trade-off between resolution and FOV (Hua et al., 2021). For foveated display systems, liquid crystal lens technology is also significant (Chen et al., 2015; Lin et al., 2017; Yuan et al., 2021). Under polarization control, liquid crystal lenses with tunable focal lengths are able to provide active switching of the FOV. This technology was demonstrated in a foveated near-eye display to create multiresolution images with a single display module (Yoo et al., 2020). The system maintains both a wide FOV and high resolution with compressed data. Third, the development of artificial intelligence algorithms can improve the optical performance of planar optical elements (Chang et al., 2018; Sitzmann et al., 2018; Tseng et al., 2021; Zeng et al., 2021). For example, an end-to-end optimization algorithm was introduced to design a diffractive achromatic lens; by jointly learning the lens and an image recovery neural network, this method can realize superior high-fidelity imaging (Dun et al., 2020). Therefore, in planar optics-based 3D displays, algorithms such as deep learning can be incorporated with the hardware for aberration reduction and image precalibration.
In addition to the aforementioned improvements in display performance, several techniques need to be implemented to promote the practical application of 3D displays. First, a directional backlight system with low divergence and high uniformity should be integrated into planar optics-based glass-free 3D displays (Yoon et al., 2011; Fan et al., 2015; Teng and Tseng, 2015; Zhan et al., 2016; Krebs et al., 2017). The angular divergence of the illumination greatly affects the display performance in terms of crosstalk and ghost images. An edge-lit directional backlight based on a waveguide with pixelated nanogratings was proposed (Zhang et al., 2020); this directional backlight module provides an angular divergence of ±6.17° and uniformities of 95.7% and 86.8% in the x- and y-directions, respectively, at a wavelength of 532 nm. In another work, a steering backlight was introduced into a slim-panel holographic video display with an overall system thickness of less than 10 cm (An et al., 2020). Nevertheless, the design and fabrication of a directional backlight is still a difficult task. Second, several challenges in nanofabrication should be overcome for planar optics-based 3D displays (Manfrinato et al., 2013; Manfrinato et al., 2014; Chen et al., 2015; Qiao et al., 2016; Wu et al., 2021). For example, the patterning of nanostructures over large areas, the fabrication of multilevel micro/nanostructures with high aspect ratios, and the realization of high-fidelity batch copies of micro/nanostructures remain challenging. We believe that numerous micro/nanomanufacturing techniques and instruments will be developed to meet the specific needs of 3D displays. Last but not least, planar optics-based 3D displays will benefit from the rapid development of advanced display panels. To enhance brightness while ensuring low system power consumption, a spontaneous emission source can be introduced into planar optics-based 3D displays (Fang et al., 2006; Hoang et al., 2015; Pelton, 2015). By constructing plasmonic nanoantennas, large spontaneous emission enhancements have been realized with increased spontaneous emission rates (Tsakmakidis et al., 2016). As a result, light-emitting diodes can achieve faster modulation speeds than typical semiconductor lasers, providing a solution with high brightness and a high refresh rate. Ideally, the space-bandwidth product needs to be larger than 50 K for glasses-free 3D displays. MicroLED and nanoLED displays can effectively expand the space-bandwidth product and fundamentally solve the problem of resolution degradation in the future (Huang et al., 2020; Liu et al., 2020). We believe that advances in directional backlights, nanofabrication, spontaneous emission sources, and microLED displays will lead to innovative and ecological development of the 3D display industry.
An, J., Won, K., Kim, Y., Hong, J.-Y., Kim, H., Kim, Y., et al. (2020). Slim-panel Holographic Video Display. Nat. Commun. 11 (1), 1–7. doi:10.1038/s41467-020-19298-4
Chang, C., Bang, K., Wetzstein, G., Lee, B., and Gao, L. (2020). Toward the Next-Generation VR/AR Optics: a Review of Holographic Near-Eye Displays from a Human-Centric Perspective. Optica 7 (11), 1563–1578. doi:10.1364/OPTICA.406004
Deng, H., Chen, C., He, M.-Y., Li, J.-J., Zhang, H.-L., and Wang, Q.-H. (2019). High-resolution Augmented Reality 3D Display with Use of a Lenticular Lens Array Holographic Optical Element. J. Opt. Soc. Am. A. 36 (4), 588–593. doi:10.1364/JOSAA.36.000588
Fan, H., Zhou, Y., Wang, J., Liang, H., Krebs, P., Su, J., et al. (2015). Full Resolution, Low Crosstalk, and Wide Viewing Angle Auto-Stereoscopic Display with a Hybrid Spatial-Temporal Control Using Free-form Surface Backlight Unit. J. Display Technol. 11 (7), 620–624. doi:10.1109/JDT.2015.2425432
Fattal, D., Peng, Z., Tran, T., Vo, S., Fiorentino, M., Brug, J., et al. (2013). A Multi-Directional Backlight for a Wide-Angle, Glasses-free Three-Dimensional Display. Nature 495 (7441), 348–351. doi:10.1038/nature11972
Feng, J.-L., Wang, Y.-J., Liu, S.-Y., Hu, D.-C., and Lu, J.-G. (2017). Three-dimensional Display with Directional Beam Splitter Array. Opt. Express 25 (2), 1564. doi:10.1364/OE.25.001564
Hirayama, R., Martinez Plasencia, D., Masuda, N., and Subramanian, S. (2019). A Volumetric Display for Visual, Tactile and Audio Presentation Using Acoustic Trapping. Nature 575 (7782), 320–323. doi:10.1038/s41586-019-1739-5
Hong, J.-Y., Park, S.-G., Lee, C.-K., Moon, S., Kim, S.-J., Hong, J., et al. (2016). See-through Multi-Projection Three-Dimensional Display Using Transparent Anisotropic Diffuser. Opt. Express 24 (13), 14138–14151. doi:10.1364/OE.24.014138
Hua, J., Hua, E., Zhou, F., Shi, J., Wang, C., Duan, H., et al. (2021). Foveated Glasses-free 3D Display with Ultrawide Field of View via a Large-Scale 2D-Metagrating Complex. Light Sci. Appl. 10 (1), 1–9. doi:10.1038/s41377-021-00651-1
Hua, J., Yi, D., Qiao, W., and Chen, L. (2020). Multiview Holographic 3D Display Based on Blazed Fresnel DOE. Opt. Commun. 472, 125829. doi:10.1016/j.optcom.2020.125829
Huang, T., Han, B., Zhang, X., and Liao, H. (2019). High-performance Autostereoscopic Display Based on the Lenticular Tracking Method. Opt. Express 27 (15), 20421–20434. doi:10.1364/OE.27.020421
Huang, Y., Hsiang, E.-L., Deng, M.-Y., and Wu, S.-T. (2020). Mini-LED, Micro-LED and OLED Displays: Present Status and Future Perspectives. Light Sci. Appl. 9 (1), 1–16. doi:10.1038/s41377-020-0341-9
Hwang, Y. S., Bruder, F.-K., Fäcke, T., Kim, S.-C., Walze, G., Hagen, R., et al. (2014). Time-sequential Autostereoscopic 3-D Display with a Novel Directional Backlight System Based on Volume-Holographic Optical Elements. Opt. Express 22 (8), 9820–9838. doi:10.1364/OE.22.009820
Javidi, B., Carnicer, A., Arai, J., Fujii, T., Hua, H., Liao, H., et al. (2020). Roadmap on 3D Integral Imaging: Sensing, Processing, and Display. Opt. Express 28 (22), 32266–32293. doi:10.1364/OE.402193
Kim, S.-U., Kim, J., Suh, J.-H., Na, J.-H., and Lee, S.-D. (2016). Concept of Active Parallax Barrier on Polarizing Interlayer for Near-Viewing Autostereoscopic Displays. Opt. Express 24 (22), 25010–25018. doi:10.1364/OE.24.025010
Koulieris, G.-A., Bui, B., Banks, M. S., and Drettakis, G. (2017). Accommodation and comfort in Head-Mounted Displays. ACM Trans. Graph. 36 (4), 1–11. doi:10.1145/3072959.3073622
Krebs, P., Liang, H., Fan, H., Zhang, A., Zhou, Y., Chen, J., et al. (2017). Homogeneous Free-form Directional Backlight for 3D Display. Opt. Commun. 397, 112–117. doi:10.1016/j.optcom.2017.04.002
Kumagai, K., Suzuki, D., Hasegawa, S., and Hayasaki, Y. (2015). Volumetric Display with Holographic Parallel Optical Access and Multilayer Fluorescent Screen. Opt. Lett. 40 (14), 3356–3359. doi:10.1364/OL.40.003356
Kumagai, K., Yamaguchi, I., and Hayasaki, Y. (2018). Three-dimensionally Structured Voxels for Volumetric Display. Opt. Lett. 43 (14), 3341–3344. doi:10.1364/OL.43.003341
Lee, J.-h., Yanusik, I., Choi, Y., Kang, B., Hwang, C., Park, J., et al. (2020). Automotive Augmented Reality 3D Head-Up Display Based on Light-Field Rendering with Eye-Tracking. Opt. Express 28 (20), 29788–29804. doi:10.1364/OE.404318
Li, G., Lee, D., Jeong, Y., Cho, J., and Lee, B. (2016). Holographic Display for See-Through Augmented Reality Using Mirror-Lens Holographic Optical Element. Opt. Lett. 41 (11), 2486–2489. doi:10.1364/OL.41.002486
Li, Q., Deng, H., Pang, S., Jiang, W., a