- Hardware for AR
- Software for AR
- Algorithms for AR
Hardware for Augmented Reality
Augmented Reality has become possible through the evolution of its hardware components and advances in the software and algorithms that have shaped it into its present form. There are numerous hardware components to list, but we will explore only the chief components used today:
- Display devices
- Sensors
- Input devices
- Processors, i.e. computing devices ranging from desktops to mobiles
Displays for AR
Rendering AR demands various technologies, such as optical projection systems, several types of monitors, modern handheld devices like smartphones and tablets, and systems attached to or worn on the human body. We can group the main display hardware as follows:
Head-Mounted Display (HMD):
- Harnesses or helmets fitted with sensors that track six degrees of freedom
- Eyeglasses fitted with cameras that intercept the view of the natural world and re-display it as AR through the eyepieces
- Head-Up Display (HUD): a transparent display that presents data without forcing users to look away from their usual viewpoints
- Contact lenses containing an IC (Integrated Circuit), an LED, and an antenna
- Virtual Retinal Display (VRD): a display scanned directly onto the retina of the human eye, so the image appears to float in space
- EyeTap: captures the rays of light heading toward the centre of the eye's lens and substitutes computer-controlled synthetic light rays
Spatial AR:
As its name suggests, Spatial AR (SAR) augments real-world objects in their own space, without relying on display devices such as HMDs, monitors, or handheld devices. Examples include:
- Shader lamps
- Mobile projectors
- Virtual tablets
- Smart projectors
Sensors
Tracking sensors and networking hardware must work in combination to give an AR system mobility. Modern smartphones and tablets, for example, have both, typically including:
- Digital cameras and other optical sensors
- Accelerometer
- Gyroscopes
- GPS hardware
- Solid-state compasses
- RFID
- Wi-Fi sensors
- Native mobile connectivity hardware and sensors
- Wired and wireless networking sensors and hardware
Input Devices
To make an AR system interactive, various kinds of user input are required, and different types of input devices are used, such as:
- Keyboards for textual input.
- Speech recognition systems like Siri, Cortana, Google Voice, and so on.
- Gloves, styluses, pointers, and other body-worn devices with sensors that provide body-gesture input.
- Eye movement detection sensors and hardware.
Software & Algorithms for Augmented Reality
Augmented Reality software carries out an image registration process in which real-world coordinates are derived independently of the camera and its images.
How AR Software Works
AR software achieves augmentation using a two-step method (a minimal pipeline sketch follows this list):
- First, it detects interest points, fiduciary markers, or optical flow in camera images or video.
- Then, it restores a real-world coordinate system from the data collected in the first step.
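As an illustration of this two-step loop, here is a minimal sketch in Python. The use of OpenCV (`cv2`), the function names, and every parameter value are assumptions made for illustration, not parts of any particular AR product.

```python
import cv2

def detect_features(frame_gray):
    """Step 1: find interest points in the camera image.

    Shi-Tomasi corners stand in here for any interest-point,
    marker, or optical-flow detector.
    """
    return cv2.goodFeaturesToTrack(frame_gray, maxCorners=200,
                                   qualityLevel=0.01, minDistance=7)

def estimate_pose(features):
    """Step 2: restore real-world coordinates from the detected features.

    A real system would run SLAM, Structure from Motion, or marker-based
    pose estimation here; this stub only reports how many features it got.
    """
    return {"num_features": 0 if features is None else len(features)}

def ar_loop(camera_index=0, max_frames=100):
    cap = cv2.VideoCapture(camera_index)
    for _ in range(max_frames):
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        features = detect_features(gray)   # step 1
        pose = estimate_pose(features)     # step 2
        print(pose)                        # a real app would render overlays here
    cap.release()

if __name__ == "__main__":
    ar_loop()
```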
The First Step
#1 – Interest Point: An interest point has a well-defined position in image space. The image structure around an interest point is rich in information content, which simplifies the work of the vision system, and the point can be computed with a high degree of reproducibility. An interest point also carries an attribute of scale, so interest points can be computed from real-life images even as the scale changes.
#2 – Fiduciary Marker: A fiduciary marker is an object placed in the field of view of an imaging system. It is used as a point of reference or a measure, and it can be placed on or into the imaged subject.
#3 – Optical Flow: Optical flow is the pattern of apparent motion of objects, edges, and surfaces in a visual scene.
Moreover, the first step can use various feature detection methods, such as the following (a short code sketch follows the list):
1: Corner Detection: A corner is the intersection of two edges and is used to extract specific features and to infer the content of an image. The Moravec algorithm is used in corner detection; it tests each pixel in the image for the presence of a corner and defines the strength of a corner by the SSD (Sum of Squared Differences) between a patch and its neighbours.
2: Blob Detection: A blob is a region of a digital image that differs from the rest in terms of brightness, colour, or material.
3: Edge Detection: Edge detection identifies points in a digital image where the brightness changes sharply and shows discontinuities. Edges correspond to discontinuities in depth, surface orientation, material, and scene illumination, so they let us capture significant events and changes in the properties of the world.
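To make these three detection methods concrete, here is a minimal sketch that runs each of them with OpenCV, used here as an assumed library; the image path and all parameter values are illustrative placeholders.

```python
import cv2
import numpy as np

img = cv2.imread("scene.jpg")                  # placeholder path: any camera frame will do
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# 1. Corner detection: the Harris response measures how much the intensity changes
#    in every direction around a pixel (the same idea as Moravec's SSD test).
corners = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)

# 2. Blob detection: regions that differ from their surroundings in brightness or colour.
blob_detector = cv2.SimpleBlobDetector_create()
blobs = blob_detector.detect(gray)

# 3. Edge detection: points where the image brightness changes sharply.
edges = cv2.Canny(gray, threshold1=100, threshold2=200)

strong_corners = int((corners > 0.01 * corners.max()).sum())
print(f"{strong_corners} corner pixels, {len(blobs)} blobs, {int(edges.sum() / 255)} edge pixels")
```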
The Second Step
This is the process of restoring real-world coordinates from the data obtained in the first step (see the pose-recovery sketch after this list). For this, we can employ methods such as:
- SLAM (Simultaneous Localization and Mapping)
- Structure from Motion methods, including Bundle Adjustment
- Mathematical methods, such as:
  - Projective (epipolar) geometry
  - Geometric algebra
  - Rotation representation with the exponential map
  - Kalman and particle filters
  - Non-linear optimization
  - Robust statistics
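As a concrete illustration of this second step, the sketch below recovers the camera pose (its real-world rotation and translation) from the four image corners of a known fiduciary marker, using OpenCV's solvePnP as an assumed tool. The marker size, camera matrix, and point coordinates are placeholder values; a real system would obtain the image points from the detection step above.

```python
import cv2
import numpy as np

# 3-D corners of a 10 cm square marker in its own (real-world) coordinate frame.
object_points = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0],
                          [0.1, 0.1, 0.0], [0.0, 0.1, 0.0]], dtype=np.float32)

# Corresponding 2-D corner positions detected in the camera image (step one).
image_points = np.array([[320, 240], [420, 238],
                         [425, 335], [318, 338]], dtype=np.float32)

# Intrinsic camera matrix (focal length and principal point), assumed known
# from a prior calibration; lens distortion is assumed to be zero.
camera_matrix = np.array([[800, 0, 320],
                          [0, 800, 240],
                          [0,   0,   1]], dtype=np.float32)
dist_coeffs = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, dist_coeffs)
if ok:
    # rvec is an axis-angle rotation (the exponential map mentioned above);
    # Rodrigues converts it to a 3x3 rotation matrix.
    rotation_matrix, _ = cv2.Rodrigues(rvec)
    print("camera rotation:\n", rotation_matrix)
    print("camera translation:", tvec.ravel())
```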
AR Programming Technologies
Augmented Reality Markup Language (ARML) was developed to describe and interact with AR scenes. ARML consists of both XML syntax and ECMAScript bindings. The XML syntax describes the location and appearance of virtual objects in the AR scene, while the ECMAScript bindings permit dynamic access to the properties of those virtual objects.
ARML Object Model:
It is built on three main concepts:
- Features: Represent the physical objects in the AR scene.
- Virtual Assets: Represent the virtual objects in the AR scene.
- Anchors: Define the spatial relationship between the physical and virtual objects in an AR scene.
Anchors come in four different types (a sketch of the object model follows this list):
- Geometries
- Trackables
- Relative to
- Screen anchor
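To make the object model concrete, the sketch below assembles an ARML-like scene description with Python's xml.etree.ElementTree. The element names follow the concepts listed above (a Feature, a Geometry anchor, and a virtual asset), but they are illustrative and are not guaranteed to match the exact ARML 2.0 schema.

```python
import xml.etree.ElementTree as ET

arml = ET.Element("arml")
elements = ET.SubElement(arml, "ARElements")

# Feature: the physical object in the scene (a hypothetical landmark).
feature = ET.SubElement(elements, "Feature", id="cityHall")
ET.SubElement(feature, "name").text = "City Hall"

# Anchor: ties the feature to a place in the real world (here, a geometry anchor).
anchors = ET.SubElement(feature, "anchors")
geometry = ET.SubElement(anchors, "Geometry")
ET.SubElement(geometry, "point").text = "48.2082 16.3738"  # illustrative lat/lon

# Virtual asset: the virtual object drawn at the anchor.
assets = ET.SubElement(geometry, "assets")
label = ET.SubElement(assets, "Label")
ET.SubElement(label, "text").text = "Welcome to City Hall"

print(ET.tostring(arml, encoding="unicode"))
```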
Augmented Reality SDKs
As with other technologies, AR application development kits are available to speed up the development process in the form of SDKs, including:
- CloudRidAR
- Vuforia
- ARToolKit
- Catchoom CraftAR
- Mobinett AR
- Wikitude
- Blippar
- Layar
- Meta
- ARLab