February 23, 2022

Is Metaverse the Future of VR?

From Science Fiction to Commercial Reality

Ever since Facebook rebranded as Meta last year, there’s been a lot of buzz about The Metaverse, and a renewed interest in virtual worlds. It’s looking like a hot topic for the next several years, because despite the long history of the concept of virtual reality (VR), the technology is only now beginning to really deliver on its promise.

Defining the Metaverse & Virtual Reality

First, there’s The Metaverse (with a capital “M”), which is Facebook/Meta’s term and likely a forthcoming product. Then there are metaverses (small “m”), a generic term for virtual worlds, a concept that has been a regular feature of science fiction since at least 1984, when William Gibson published his novel, Neuromancer.

VR has been a fixture in pop culture for decades, showing up in films like The Matrix, Tron and Ready Player One. Ever since the launch (and occasional failure) of consumer devices like Oculus Rift, the PlayStation VR, Valve Index, HoloLens and even Google Glass, the technology has been moving steadily out of the realm of science fiction into a commercial reality. But there’s still a long way to go.

For the purposes of this discussion, I’ll talk about virtual and augmented reality technologies as a spectrum that consists of three main categories: fully immersive virtual reality at one end, augmented reality (AR) at the other, and, in the middle, a variety of approaches to what I call “merged reality” (MR).

This spectrum can be thought of as a broader category called “extended reality”, which encompasses the three categories defined above plus supporting technologies like haptics and spatial audio.

In the future, extended reality may include things like brain-computer interfaces, smell and temperature feedback, and potentially even taste. These future concepts aren’t here yet for a variety of reasons, but primarily because there’s still a lot of R&D that needs to be done on the devices. It’s also unclear what the data will look like for sensory interfaces. But we do have devices and data for AR/VR, haptics and spatial audio, so those are moving ahead.

The big questions we are often asked are, “Why hasn’t extended reality taken off? Why isn’t it everywhere?” To answer these, and to talk about how we work toward a future that includes metaverse experiences, we need to look at some of the limitations that exist today.

The Limitations of Extended Realities

For AR, the glasses have been bulky and awkward, and there’s been basically one style. Remember Google Glass or Snapchat Spectacles? If you liked the style, great. If not, you probably wouldn’t wear them, no matter how cool the technology was. People want style choices, so to create real adoption, the technology will need to accommodate a range of styles.

As for VR headsets, the simple fact is that most people don’t want to wear a headset for extended periods of time. They’re heavy and get warm, so you get hot and sweaty, and it just becomes uncomfortable.

They’re great for short durations, like simulating jumping out of a plane or free diving with great white sharks. But they’re not the sort of devices most people will use to watch a feature film or play a video game for three hours. And when you’re talking about AR or mixed reality devices, they can get even bulkier. For example, most people will never wear a HoloLens in public. But that may change as devices get smaller and more comfortable.

Mixed reality/merged reality devices of the future will also need more capability and a wider field of view to enable more advanced see-through displays for AR applications. Achieving that will require more (and better) cameras, infrared (IR) cameras or other sensors to create accurate spatial maps that improve the overall quality of the experience. These challenges are well known to device manufacturers, and solutions are already being researched.

Building Virtual Worlds Requires Moving Processing off Devices

Regardless of the device you’re using, what does a virtual/augmented/merged reality world actually look like? Is it AR, overlaying different skins onto a real-world environment, making a modern city look medieval or altering people’s clothes? Or are we talking about a truly virtual representation of the real world, like a digital twin of your city?

There’s also the more fantastical: a fully immersive virtual environment that doesn’t exist in the real world at all. Whichever we’re talking about, there is a lot of computing that needs to be done, and the devices themselves will be far too small to hold all the processing capabilities needed to render those experiences.

For glasses and headsets to get smaller, lighter and less bulky while still handling the required functionality, mobile networks must improve. To make devices smaller, with longer battery life and less heat buildup, we need to offload processing to the edge of the network. And this will have to be done in a way that keeps latency at or below the twenty-millisecond threshold, because above twenty milliseconds of latency in VR, people begin to experience nausea. Some advanced AR applications, where the device tracks and identifies fast-moving objects, will require even lower latency, down to the five-millisecond range.
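
To make that budget concrete, here is a minimal sketch of a motion-to-photon latency check against those thresholds. Every component timing in it is an illustrative assumption, not a measurement from any real device or network.

```python
# Hypothetical motion-to-photon latency budget for a remote-rendered XR frame.
# All component timings are illustrative assumptions, not measurements.

VR_THRESHOLD_MS = 20.0           # nausea threshold discussed above
AR_TRACKING_THRESHOLD_MS = 5.0   # fast-object tracking target discussed above

def total_latency_ms(components: dict[str, float]) -> float:
    """Sum the per-stage latencies of a pose-to-display round trip."""
    return sum(components.values())

budget = {
    "pose_capture": 1.0,       # sample head/controller pose on the device
    "uplink_to_edge": 3.0,     # radio + transport to the edge site
    "edge_render": 7.0,        # remote rendering of the frame
    "encode_downlink": 5.0,    # encode, return trip, decode
    "display_scanout": 3.0,    # warp and present on the headset
}

total = total_latency_ms(budget)
print(f"motion-to-photon: {total:.1f} ms")
print("within VR comfort budget:", total <= VR_THRESHOLD_MS)
print("within AR tracking budget:", total <= AR_TRACKING_THRESHOLD_MS)
```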

Over time, we’ll see less computing done on the head-worn devices themselves. To make devices mobile, our 5G (and 6G) networks will need to deliver the necessary throughput, edge computing and latency; we’ll need transport networks that are very low latency, very low jitter, high bandwidth and ultra-reliable, with no packet loss. We’re getting there, but networks can’t do all this just yet.

Technologies to Offload Processing

We need networks that are more capable not just because of the edge computing requirements driven by the need to shrink devices, but also because virtual worlds require a tremendous amount of graphics processing and rendering. This rendering needs to be done at the edge, with the rendered worlds served right back to the device and wearer in near-real time.

Shifting processing to the edge opens the door for devices to get smaller and lighter, but it also sets the stage for new innovations in complex rendering to happen remotely and be served back to the device. It’s one thing for the rendering of a relatively linear virtual world like a video game to be done remotely, but it’s quite another to do it in real time for a live experience.
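
As a rough sketch of what such a split-rendering loop might look like, here is a simplified pose-up/frame-down cycle. The function names and values are hypothetical placeholders, not any vendor’s actual SDK or protocol.

```python
import time

# Hypothetical split-rendering loop: the headset uploads its latest pose,
# the edge renders the frame, and the device displays what comes back.
# All functions below are illustrative placeholders, not a real SDK.

def read_head_pose() -> dict:
    # In practice this would come from the headset's tracking system.
    return {"position": (0.0, 1.6, 0.0), "orientation": (0.0, 0.0, 0.0, 1.0)}

def render_remote_frame(pose: dict) -> bytes:
    # Stand-in for a request to an edge rendering service over the network.
    return f"encoded frame for pose {pose}".encode()

def display_frame(frame: bytes) -> None:
    # Stand-in for decoding and presenting the frame on the headset.
    pass

TARGET_FRAME_TIME = 1.0 / 90.0   # 90 Hz refresh, a common headset rate

for _ in range(3):               # a few iterations for illustration
    start = time.perf_counter()
    pose = read_head_pose()              # 1. sample pose on the device
    frame = render_remote_frame(pose)    # 2. render remotely at the edge
    display_frame(frame)                 # 3. present the result locally
    elapsed = time.perf_counter() - start
    time.sleep(max(0.0, TARGET_FRAME_TIME - elapsed))
```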

Some device makers have experimented with different models of offloading computing horsepower: the Valve Index is a VR headset that connects over a wired link to a high-performance computer and is mainly used for gaming.

Then there’s a Chinese company called Nreal, which offers AR glasses that use a wired connection to leverage the processing capabilities of your smartphone. While these two examples use wired connections, both are moving us toward applications, devices and metaverses that can be accessed, processed and rendered over wireless networks.

There’s also a technology called sidelink, being standardized in 3GPP, that allows certain cellular devices to communicate with each other without going through the core network at all. This has the potential to be very useful for VR and AR rendering, because short-range wireless technologies like Bluetooth are too slow to handle the high bandwidth requirements of those applications. These innovations raise the possibility that devices like glasses could one day replace phones.
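
Some back-of-the-envelope arithmetic shows why Bluetooth falls short: even a heavily compressed per-eye video stream needs far more throughput than Bluetooth provides. The resolution, frame rate and compression figures below are rough assumptions chosen only to illustrate the gap.

```python
# Back-of-the-envelope bitrate check (all figures are rough assumptions).

width, height = 1920, 1832          # assumed per-eye resolution of a headset
fps = 90                            # common refresh rate
bits_per_pixel_compressed = 0.1     # assumed aggressive video compression
eyes = 2

required_mbps = width * height * fps * bits_per_pixel_compressed * eyes / 1e6
bluetooth_le_mbps = 2.0             # Bluetooth LE 2M PHY raw rate
bluetooth_classic_mbps = 3.0        # Bluetooth Classic EDR raw rate

print(f"required: ~{required_mbps:.0f} Mbit/s")          # roughly 63 Mbit/s
print(f"Bluetooth LE:      {bluetooth_le_mbps} Mbit/s")
print(f"Bluetooth Classic: {bluetooth_classic_mbps} Mbit/s")
```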

Interoperability Is Key

Will Facebook/Meta “own” the metaverse? I think they will own a virtual world that they may call The Metaverse, but they won’t own all metaverses any more than they own the Internet today. The metaverse (small m) will be a collection of virtual worlds we can access, very much like the Internet, with a myriad of sites available for every imaginable purpose. Some parts of the metaverse may be digital twins of the real world, some may be merged versions of the real world with a virtual world and others still may be entirely virtual.

The metaverse will eventually become decentralized and device independent. And, just as with the Internet, it will need a set of standards, protocols and common APIs to make sure everything works together with a high degree of interoperability. Once this happens, you’ll be able to use your Apple device to access the Facebook Metaverse over the Verizon 5G (or 6G) network just as easily as you’ll be able to access Google’s virtual world using a Sony device over AT&T’s network.
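
To make the idea of common APIs a little more concrete, here is a minimal sketch of what a vendor-neutral interface for joining a virtual world might look like. No such standard exists today; every name below is a hypothetical placeholder.

```python
from abc import ABC, abstractmethod

# Hypothetical vendor-neutral interface for joining a virtual world.
# The names below are illustrative only; no such standard exists today.

class Session(ABC):
    @abstractmethod
    def send_pose(self, pose: dict) -> None:
        """Stream the user's avatar pose to the world."""

    @abstractmethod
    def receive_scene_update(self) -> dict:
        """Pull the next scene update to render locally."""

class MetaverseWorld(ABC):
    @abstractmethod
    def authenticate(self, identity_token: str) -> None:
        """Verify the user with any compliant identity provider."""

    @abstractmethod
    def join(self, region: str) -> Session:
        """Enter a region of the world and return an active session."""

# Any provider (and any device or network) could supply its own implementation
# of these interfaces, and a client written against them would work with all of them.
```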

If the devices and worlds stay largely proprietary, as they are today, growth potential will be limited. Interoperability standards for metaverses will be essential, just as MPEG has been for video compression and 3GPP has been for cellular communications. In a metaverse, regardless of the provider you use for access, you will be able to enter different areas where each business has its own brand-specific experiences in the virtual world, just as it does in the real world.

To deliver the highest quality experience for the greatest number of users, interoperability of devices and networks is critical and must be standardized. Once such a standard is developed, no one company will own it, just as no single company owns the 3GPP or MPEG standards.

What Will the Metaverse Look Like?

So how will extended reality be used, once we get there? We expect that gaming will still be a big driver, just as it is today. But there are many other ways we can see this technology taking shape.

What if we could design a virtual sports bar, where you could watch any number of games through a VR device and change the channel by moving your head to look in a different direction? Or if, while watching a car race, you could change the view of your immersive experience from the driver’s seat to the pit lane, or to the grandstand? What if you could simulate diving with sharks, skydiving, or touring a world-class museum? The possibilities seem nearly endless.
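
As a toy illustration of the sports-bar idea, here is a minimal sketch that maps the viewer’s head yaw to whichever virtual screen they are facing. The screen layout, angles and channel names are made up purely for illustration.

```python
# Toy example: pick a "channel" based on which virtual screen the viewer faces.
# Screen layout and angles are made-up values for illustration.

SCREENS = [
    (-60.0, "football"),    # screen centered 60 degrees to the left
    (0.0,   "motorsport"),  # screen straight ahead
    (60.0,  "basketball"),  # screen 60 degrees to the right
]

def channel_for_yaw(yaw_degrees: float) -> str:
    """Return the feed on the screen closest to where the viewer is looking."""
    _, channel = min(SCREENS, key=lambda s: abs(s[0] - yaw_degrees))
    return channel

print(channel_for_yaw(-55.0))  # football
print(channel_for_yaw(10.0))   # motorsport
```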

We are probably fifteen to twenty years or more from a truly standardized, open metaverse. In the meantime, I think we’ll see companies experimenting with their own metaverses, the way Facebook is proposing with their big-M Metaverse. But will Facebook/Meta eventually own it all? Certainly not. There may be a “branded” Metaverse owned by Facebook, but there will be many metaverses to explore and enjoy.


Christopher Phillips

Senior Director Advanced R&D, Media IP at Adeia

Christopher Phillips is responsible for supporting the Adeia CTO in defining the future technology roadmap and research strategy, as well as leading advanced R&D projects. Mr. Phillips’ current focus is on eXtended reality, the metaverse and cloud gaming research topics. Prior to Adeia, Mr. Phillips was a Master Researcher at Ericsson Research, where he led Ericsson’s eXtended Reality research, focusing on device/network-edge split compute for environment understanding, media processing, remote/split rendering, and transport optimization over 5G and future 6G mobile networks. Before Ericsson, Phillips held research positions at AT&T Laboratories and the former AT&T Bell Laboratories, focused on network load balancing and routing. Mr. Phillips is an inventor on more than 300 patent applications and more than 100 granted patents worldwide, and a coauthor of numerous publications. He was a three-time winner in Ericsson’s Media Innovation Competitions and was named Ericsson’s Media Innovator of the Year in 2014. Phillips has been active in the 3GPP, VRIF, MPEG, DASH-IF, Streaming Video Alliance and OpenXR organizations. He holds degrees in Computer Science from the University of Georgia.