We covered the history and technical foundations of Augmented Reality in the first two parts of this series, gaining a clear understanding of what augmented reality is and which components participate in the overall phenomenon. To take the next step in that journey, the current post is devoted to describing how augmented reality works.
Objective of AR
To understand how anything works, we must keep its objectives in mind. The objective of Augmented Reality is to bring computer-generated virtual objects into the real world through accurate simulation and to allow interaction with them in real time.
Real World Experiences
In the real world, natural or human-generated light rays bounce off real objects and enter the human eye, where the retina senses them and we perceive an image of the object.
Digital World Experiences
In the digital world, we need to create a source of artificial light and redirect its rays to illuminate the virtual object. In the real world, this happens with real 3D objects in a real 3D environment; for the virtual world, it happens on a flat 2D screen or an optical device.
When we wear such an optical device in front of our eyes and try to see both worlds simultaneously, the virtual as well as the real, the two experiences need to be combined. Of course, this combination does not take place in the real world itself; it takes place on the retina of the human eye.
Augmented Reality Simulation Process
Therefore, the optical device that combines both experiences is called a ‘combiner’ and acts as the platform for Augmented Reality experiences. The entire AR process takes place in three main steps:
- Recognition of an image or object.
- Tracking of an image or object in space.
- Mixing (Combining) of virtual media with real-world image or object by superimposition.
The first step, recognition, requires sensors and cameras to recognize an object or space in the real world. Sensors gather data about the real-world user's interactions, such as head movements, direction, and coordinates/location.
Tracking software runs algorithms to determine where the virtual object/media should be placed in real-world 3D space. Depth sensors help with depth perception, while 3D modeling technologies handle the rendering.
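As a minimal sketch of what "placing a virtual object in real-world 3D space" means mathematically, the snippet below projects a tracked 3D anchor point into 2D screen coordinates using a plain pinhole camera model. The camera intrinsics (`fx`, `fy`, `cx`, `cy`) are assumed illustrative values, not those of any specific AR device or SDK:

```python
# Hypothetical intrinsics for a 640x480 camera (illustrative values).
fx, fy = 500.0, 500.0   # focal lengths in pixels
cx, cy = 320.0, 240.0   # principal point (image center)

def project_point(point_cam):
    """Project a 3D point in camera coordinates to 2D pixel coordinates
    using the pinhole model: u = fx*x/z + cx, v = fy*y/z + cy."""
    x, y, z = point_cam
    return (fx * x / z + cx, fy * y / z + cy)

# A virtual anchor tracked 2 m in front of the camera, slightly right and up.
u, v = project_point((0.2, -0.1, 2.0))
print(round(u), round(v))  # prints 370 215: the pixel where the virtual object is drawn
```

In a real AR pipeline the tracker continuously updates the camera pose, so this projection is recomputed every frame to keep the virtual object "pinned" to its real-world location.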
The mixing of the virtual world with the real world takes place in optical devices or AR-simulating devices, using the combiners, the processing power of the AR hardware, and the capabilities of the AR software running AR algorithms.
Traditional Combiners for AR Process
The traditional combiners cover two implementations:
- The polarized beam combiners or flat combiners
- The off-axis combiners or curved combiners
Polarized Beam Combiners:
The pros of polarized beam combiners are that they are lightweight, small, and relatively affordable. Moreover, they benefit from decades of supply-chain development, so they are readily available for mass production of AR applications.
The cons of polarized beam combiners are their very limited FOV (Field of View) and display capabilities, both of which are hard to improve. Examples of polarized beam combiners include Google Glass and smart glasses from Epson, Rockchip, and ITRI.
To overcome the constraints of polarized beam combiners, off-axis or semi-spherical combiners have been developed. They offer higher FOV and resolution at relatively affordable prices.
The cons of off-axis combiners include bulky or heavy hardware to wear, low angular resolution, and somewhat lower-quality materials used to cut costs. The best-known example of an off-axis combiner is the Meta 2; an older example is the Advanced Helmet Mounted Display by Link.
Non-Conventional Combiners for AR Process
New technologies have been implemented to address the hard trade-off problems of traditional combiners for AR, using non-conventional techniques such as holographic and diffractive optics.
These techniques rely on waveguide gratings or holograms, respectively. The idea is to progressively extract a collimated image that is guided by TIR (Total Internal Reflection) inside a waveguide.
Therefore, much like a pipe, a waveguide transmits the image to your eyes and acts as highly sophisticated see-through optics. The pros of waveguide AR devices include potentially better FOV and resolution in lightweight, mid-sized devices.
Unfortunately, these technologies are still under development and remain too expensive for the mass-production market.
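The TIR principle that keeps light guided inside a waveguide follows directly from Snell's law: light hitting the glass/air boundary beyond the critical angle is fully reflected back into the glass. The sketch below computes that critical angle for assumed, typical refractive indices:

```python
import math

# Typical refractive indices (assumed illustrative values):
# glass ~1.5, air ~1.0.
n_glass, n_air = 1.5, 1.0

# Critical angle from Snell's law: sin(theta_c) = n_air / n_glass.
# Beyond this angle, light undergoes total internal reflection and
# stays guided inside the waveguide until a grating extracts it.
critical_deg = math.degrees(math.asin(n_air / n_glass))
print(round(critical_deg, 1))  # prints 41.8 (degrees)
```

Gratings or holographic elements along the waveguide then break this condition locally, extracting the image toward the eye at the desired positions.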
How Do User Interactions Take Place in AR?
Today, most AR devices use touchpads and voice commands for user interaction. Fortunately, smartphones and tablets are excellent candidates for interacting with AR applications.
Therefore, most AR applications on the market are based on handheld devices, whether they use traditional or non-traditional AR techniques and technologies. Moreover, smartphones keep advancing in processing power and storage. Thus, just like wearable technologies, AR technologies also depend on modern handheld or mobile devices and their applications.
With this and the previous posts, we have acquired a basic understanding of how augmented reality works and what is involved in the process, including the hardware and software.
In the next part, we will explore the usefulness of Augmented Reality and some applications of AR.
Hemant Parmar is a veteran mobile app consultant and co-founder of the company. Thanks to his long exposure to mobile application development projects across myriad niches and industries, he provides high-end mobile app development consultancy. He is devoted to honest and transparent consultancy services for clients looking for sound guidance on augmenting their niche services/products with the latest mobile technologies.