July 12, 2022

Mapping Out the Next Wave of Augmented Reality

Be honest: did you predict back in 2009 that mobile video would be such a big thing today? I still remember having conversations with people who said things like, “nobody is going to want to watch a whole movie on their cell phone!”

But people do it every day now. Was this because we underestimated the influence of video in our lives? Or, perhaps, did we overestimate the centrality of bigger screens? These are debatable questions, but today I want to look ahead to what I believe is the next big thing in our personal technology experience: augmented reality (AR).

It’s no secret that Facebook is making a huge push to deliver the Metaverse. I’ve written about that in this blog before. The Metaverse promises to be an immersive experience, accessible to all.

But what will exist between the video experiences of today and the fully immersive VR (virtual reality) experiences of the more distant future? There are a lot of exciting developments taking place in augmented reality.

Compute Will Need to Move Off-Device

Facebook has teased new VR and AR devices in the past year, and I’m particularly interested in the implications on the AR side. We don’t know many details yet about Facebook’s AR project (dubbed Project Nazaré), which aims to build true augmented reality glasses. But it has been announced, and a few early demonstrations have leaked.

We know they plan to fit sensors, cameras, batteries, a 5G modem and some computing power directly into a pair of normal-looking glasses that will be comfortable to wear for extended periods of time. The biggest challenges in the past with AR glasses have been the clunky appearance and lack of performance. Nobody wanted to wear them for very long, and they didn’t do very much.

It’s obvious that Facebook has a clear idea of how AR technology will fit into their overall platform. They’ve shown that their technology will be capable of creating detailed spatial maps and outputting sophisticated, real-time 3D renderings of those spaces.

What we don’t know yet is exactly how they will handle the required computing. The ultra-small form factor of a pair of glasses doesn’t leave much space for powerful processors. But there are some possible ways around this.

There’s a feature in the 3GPP standards called sidelink, a device-to-device communications protocol originally developed for LTE that carries forward into 5G. Sidelink enables extremely high bandwidth and low latency communications directly between devices. It will essentially be a replacement for Bluetooth in certain situations, and AR glasses are one such potential example.

In such a scenario, a sidelink connection would be made between the glasses on your face and the computing horsepower available elsewhere in your home (on a smartphone or tablet, or even a PC). This would allow the AR application data to be processed and rendered in near real time, without the processing occurring on the glasses themselves.

Quite simply, for AR glasses to have a battery that lasts any significant amount of time, there must be minimal computing on the headset itself. The glasses should contain a power-efficient modem, sensors and cameras – and not much else.
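To make that division of labor concrete, here is a minimal sketch of split compute in Python, with a plain TCP connection on localhost standing in for a sidelink-style device-to-device link. The message framing, pose format and render stub are all illustrative assumptions on my part, not anything from a published spec:

```python
import json
import socket
import struct
import threading

def send_msg(sock: socket.socket, payload: bytes) -> None:
    # Length-prefix each message so it survives TCP stream boundaries.
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_msg(sock: socket.socket) -> bytes:
    (length,) = struct.unpack("!I", sock.recv(4))
    data = b""
    while len(data) < length:
        data += sock.recv(length - len(data))
    return data

def companion_device(server: socket.socket) -> None:
    # The companion (phone, tablet or PC) does the heavy lifting:
    # receive a pose, render the AR overlay, ship the frame back.
    conn, _ = server.accept()
    with conn:
        pose = json.loads(recv_msg(conn))
        frame = f"overlay rendered for yaw={pose['yaw_deg']}".encode()
        send_msg(conn, frame)

# Glasses side: transmit lightweight pose/sensor data, receive pixels.
server = socket.socket()
server.bind(("127.0.0.1", 0))          # localhost stands in for sidelink
server.listen(1)
threading.Thread(target=companion_device, args=(server,)).start()

glasses = socket.create_connection(server.getsockname())
send_msg(glasses, json.dumps({"yaw_deg": 12.5}).encode())
print(recv_msg(glasses).decode())      # -> overlay rendered for yaw=12.5
glasses.close()
```

The glasses send a few bytes of pose data each frame and get finished pixels back; the power-hungry rendering never touches the headset.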

The Future of AR Comes Down to Performance

The adoption of the technology is going to depend largely on usability and performance. We’ve already seen that people don’t want to wear AR glasses that aren’t fashionable. Beyond aesthetics, if I can only use my AR glasses while connected to an ultra-fast network, they become far less useful, because nobody is connected to such a network 100 percent of the time.

What are some use cases for AR technology, beyond social interactions? I can think of several, but let’s focus on one: online retail.

Say you are interested in buying a new sofa. A furniture retailer could develop an AR-enabled e-commerce application that would allow you to virtually “place” a variety of sofas in your living room and see how they look in different colors, upholstery patterns, and locations in the room.

Using AR, you’d be able to walk around each sofa and view it from any angle you choose. This experience could be paired with interactions with a design consultant who could make color and upholstery recommendations based on your decor and style preferences.

The Tech That’s Required

Now that we’ve painted the picture, what will it take for us to get there?

First of all, AR is going to require extremely high bandwidth. Rendering virtual and augmented spaces accurately in real time involves a tremendous amount of data transfer. Providers can offer a vastly better experience to the user if they can leverage the greater processing power that lives at the edge of the network.
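To put rough numbers on that claim, here is a back-of-envelope estimate for streaming remotely rendered frames to a pair of glasses. Every figure below is an illustrative assumption of mine, not a published device spec:

```python
# Back-of-envelope bandwidth for streaming remotely rendered AR frames
# to the glasses. Every number here is an illustrative assumption.
width, height = 1920, 1080     # per-eye resolution
fps = 72                       # common XR refresh rate
bits_per_pixel = 24            # uncompressed RGB
compression_ratio = 100        # aggressive real-time video codec

raw_bps = width * height * bits_per_pixel * fps * 2   # two eyes
compressed_bps = raw_bps / compression_ratio

print(f"raw: {raw_bps / 1e9:.1f} Gbps")               # raw: 7.2 Gbps
print(f"compressed: {compressed_bps / 1e6:.0f} Mbps") # compressed: 72 Mbps
```

Even with aggressive compression, that is tens of megabits per second, sustained, for a single user.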

Edge computing is also what enables another requirement of this technology: ultra-low latency. Components of AR like object detection and tracking require extremely low latency. For example, once an object is detected (and a bounding marker is placed on it by the system), if that object moves in space, all the other virtual objects in that space will have to interact with it.

Without extremely low latency, the experience will feel wrong to the user: the object will be at one point in the field of view while its marker is at another. If the user is turning their head at the same time, it becomes disorienting unless the latency is very close to zero.
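A little arithmetic shows why near-zero latency matters so much here: the angular gap between the real object and its stale marker is simply head speed multiplied by latency. The numbers below are illustrative:

```python
# Registration error from latency: while the head keeps turning, the
# marker drawn for the old pose trails the real object.
head_turn_deg_per_s = 100.0          # a brisk but ordinary head turn
for latency_ms in (5, 20, 60):
    drift_deg = head_turn_deg_per_s * latency_ms / 1000.0
    print(f"{latency_ms:>2} ms latency -> marker trails by {drift_deg:.1f} deg")
# 5 ms stays sub-degree; at 60 ms the marker is visibly detached.
```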

Using our online furniture retail example, the retailer will need all of their inventory fully rendered in 3D in order to power the AR aspect of that transaction. Your glasses will take a full spatial map (a 3D mesh, essentially) of your living room, and the retailer will be able to serve up their inventory so you can see how that sofa will look in your space.

The dimensions of the sofa will be scaled appropriately to the dimensions of your room, and you’ll be able to interact with the virtual object and move it around in your space before you buy. Retailers will likely also communicate with you through AI-powered avatars acting as salespeople and consultants, to help you make your decision.
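Under the hood, the heart of that placement step is a simple geometric test: express the sofa’s catalog dimensions in the room’s coordinate frame and check the fit against the spatial map. Here is a minimal sketch, with hypothetical data types and dimensions of my own:

```python
from dataclasses import dataclass

@dataclass
class Box:
    # Axis-aligned floor footprint in meters: x across, z into the room.
    min_x: float
    min_z: float
    max_x: float
    max_z: float

    def contains(self, other: "Box") -> bool:
        return (self.min_x <= other.min_x and self.min_z <= other.min_z
                and self.max_x >= other.max_x and self.max_z >= other.max_z)

def sofa_fits(room: Box, width: float, depth: float, x: float, z: float) -> bool:
    # (x, z) is the corner where the user drops the sofa on the floor plan.
    return room.contains(Box(x, z, x + width, z + depth))

room = Box(0.0, 0.0, 4.5, 3.2)               # floor bounds from the spatial map
print(sofa_fits(room, 2.2, 0.9, 0.3, 0.2))   # True: fits along the wall
print(sofa_fits(room, 2.2, 0.9, 3.0, 0.2))   # False: pokes through the wall
```

A real system would work with the full 3D mesh rather than a rectangle, but the principle is the same: the catalog model and the room share one coordinate frame, so scale and fit can be checked before you buy.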

All of these components, individually and collectively, require tremendous bandwidth and computing resources, in addition to ultra-low latency.

One of the last considerations is the privacy aspect of this AR-based retail model. We’ve effectively shared a detailed spatial map of our home with a retailer. This raises several questions: Who owns that data? What policies need to be in place to ensure that information isn’t used inappropriately? Does the retailer get to keep the 3D model of the inside of my home after the transaction is over?

There are obvious legal and technological aspects to answering these questions, but there are ethical concerns as well as we move into an era where we begin to invite companies into our homes using AR technology. Privacy concerns will need to be an important part of the conversation as this technology evolves.


Christopher Phillips

Senior Director Advanced R&D, Media IP at Adeia

Christopher Phillips is responsible for supporting the Adeia CTO in defining the future technology roadmap and research strategy, as well as leading advanced R&D projects. Mr. Phillips’ current focus is on eXtended Reality, the metaverse and cloud gaming research topics. Prior to Adeia, Mr. Phillips was a Master Researcher at Ericsson Research, where he led Ericsson’s eXtended Reality research, focusing on device/network-edge split compute for environment understanding, media processing, remote/split rendering, and transport optimization over 5G and future 6G mobile networks. Before Ericsson, Phillips held research positions at AT&T Laboratories and the former AT&T Bell Laboratories, focused on network load balancing and routing. Mr. Phillips is an inventor on more than 300 worldwide patent applications and more than 100 granted patents, and a coauthor of numerous publications. He was a three-time winner in Ericsson’s Media Innovation Competitions and won Ericsson’s Media Innovator of the Year award in 2014. Phillips has been active in the 3GPP, VRIF, MPEG, DASH-IF, Streaming Video Alliance and OpenXR organizations. He holds degrees in Computer Science from the University of Georgia.