March 9, 2022

The Future of Interactive Video: From Zoom to the Metaverse

We probably don’t think of it this way, but interactive video is already a big part of our lives. Most of us can’t go more than a couple of days without joining a Zoom call or Teams meeting. And while you may think of that as just video conferencing, these technologies, now fixtures of daily life, are forms of interactive video.

The experiences we have on Zoom and Teams every day can accommodate one-on-one conversations, or they can scale up dramatically with multiple video streams, all synchronized and capable of facilitating real-time conversation. But the future of interactive video is much more than video conferencing, of course, and we’ll discuss some of that here.

What Makes Interactive Video More Engaging?

By deploying data collection, machine learning and AI technologies, interactive video providers can make video experiences even more engaging. In the simplest terms, once you interact with something, it becomes more memorable, because you were an active participant. Interactive video providers look for appropriate ways to increase the interactivity of video experiences, making them more engaging and memorable for the user.

YouTube does this already. We’ve all been asked to take a short survey or watch an ad before viewing a video. Whether we complete the survey or not, whether we view the ad or not, YouTube gains some insight into our preferences, and is able to use machine learning and AI to make our experience a little better each time.

Netflix has interactivity built in as well. When Netflix asks, “Are you still watching?”, how and when you respond gives them useful information. Both providers know when you pause, rewind, fast-forward, skip, and re-watch. Interactive video is what powers the analytics behind these platforms. Without that data, providers cannot personalize their recommendations, and personalization is what delivers a better viewing experience.
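To make that concrete, here is a minimal sketch of what playback telemetry like this might look like. The event names, fields and scoring weights are illustrative assumptions, not any provider’s actual schema.

```typescript
// Hypothetical playback telemetry: the kinds of events described above.
type PlaybackEvent = {
  userId: string;
  videoId: string;
  action: "play" | "pause" | "rewind" | "fast_forward" | "skip" | "rewatch";
  positionSec: number; // where in the video the action happened
  timestamp: number;   // client clock, epoch milliseconds
};

// Fold raw events into a crude per-video engagement score. Rewinding and
// re-watching suggest interest; fast-forwarding and skipping suggest the opposite.
function engagementScore(events: PlaybackEvent[]): Map<string, number> {
  const weights: Record<PlaybackEvent["action"], number> = {
    play: 1, pause: 0, rewind: 2, fast_forward: -1, skip: -2, rewatch: 3,
  };
  const scores = new Map<string, number>();
  for (const e of events) {
    scores.set(e.videoId, (scores.get(e.videoId) ?? 0) + weights[e.action]);
  }
  return scores;
}
```

A real pipeline would feed scores like these into a recommendation model rather than using them directly, but the principle is the same: every interaction becomes a signal.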

Beyond YouTube and Netflix, there are other examples. In the case of Massively Multiplayer Online (MMO) games, many players are competing in the same virtual environment, where real-time interactivity is enabled. Online gaming environments are interactive by nature, but they create different challenges for providers, especially regarding latency.

To some degree, latency is an issue when you’re watching Netflix too, but much less so than in a game. A streaming player can buffer a few seconds of video to hide network delay, but a real-time game cannot: if your inputs and their results lag, the experience suffers immediately.

Even in our Zoom example, latency matters a lot for the quality of the experience. A high-latency experience in Zoom can disrupt the conversation. Innovations in edge computing are helping reduce latency and make all of these experiences better.

Newer Models for Interactive Video

On-demand video streaming and online games are comparatively well known and well understood. But what are some newer types of interactive video? In the realm of 2D video, the prominent type today, one rather exciting trend is “shoppable video,” which exists at the intersection of video and e-commerce.

Shoppable video can essentially be looked at as an extension of the home shopping channels we’ve all seen on cable TV for years. It sprinkles in details about a product at relevant points in the video to help viewers decide whether they want to make a transaction right there, while watching.

The evolution from linear video broadcasting to OTT (over-the-top) and internet-based streaming has enabled more sophisticated video experiences like this. Providers can embed tremendous amounts of data into video streams (down to the pixel level) about a product or service, adding a much greater degree of interactivity to the experience. The provider can add hot spots, clickable areas, buttons or even pricing information for viewers.
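As a rough illustration, a shoppable video player might carry a metadata track alongside the stream, with each entry describing a clickable region that is active during a time window. The structure below is a hypothetical sketch, not a standard format.

```typescript
// Hypothetical hotspot metadata for a shoppable video.
type Hotspot = {
  startSec: number; // when the hotspot becomes active
  endSec: number;   // when it disappears
  region: { x: number; y: number; width: number; height: number }; // normalized 0..1 screen coordinates
  productId: string;
  label: string;
  priceUsd?: number; // optional price shown on the overlay
};

// Which hotspots should the player render at the current playback position?
function activeHotspots(track: Hotspot[], positionSec: number): Hotspot[] {
  return track.filter((h) => positionSec >= h.startSec && positionSec <= h.endSec);
}
```

Keeping regions in normalized coordinates lets the same track work across screen sizes; the player scales them to pixels at render time.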

All of these experiences can be personalized as well. If a provider knows (based on viewing behavior) that you watch a lot of videos on a certain topic, they can weight that topic and related topics more heavily in their personalization algorithms, making it easier for you to find and purchase the things you’re looking for.

The more interactive a video becomes, the more information providers can capture to make your experience better. More interactivity means more data collection, more machine learning and more AI, all of which can be used to improve the experience for entertainment as well as for applications like education and corporate training.

Interactive video technology has also helped to bring forth some other new models for online commerce, one of which is known as real-time bidding. Most people are familiar with online auctions from sites like eBay, and real-time bidding is similar to that. In this model, there are interactive enhancements that make the auction more like a live auction you might attend in person, only with much more information available to you.

Let’s say you were auctioning off a rare musical instrument using a real-time bidding method. You’d be able to embed information into your video stream about the instrument, its history, what makes it valuable, how much it’s worth, and so on. Prospective buyers would be able to access this data in the same view as the real-time auction stream, and place their bids there, too. All of this can help speed up the auction process for the seller while giving buyers greater confidence in what they are buying, a real enhancement to the online buying process.
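A sketch of how the bidding side might work is below. The message shapes, the soft-close window and the acceptance logic are all assumptions for illustration, not a description of any actual auction platform.

```typescript
// Hypothetical state and message types for a live video auction.
type AuctionState = {
  lotId: string;
  highestBidUsd: number;
  highestBidder: string | null;
  closesAt: number; // epoch ms; extended when late bids arrive
};

type BidMessage = {
  lotId: string;
  bidderId: string;
  amountUsd: number;
  sentAt: number; // epoch ms
};

// Accept a bid only if it beats the current high bid and arrives in time.
// The 15-second soft-close mimics a live auctioneer's "going once, going twice".
function acceptBid(state: AuctionState, bid: BidMessage): AuctionState {
  if (bid.sentAt > state.closesAt || bid.amountUsd <= state.highestBidUsd) {
    return state; // rejected: too late or too low
  }
  return {
    ...state,
    highestBidUsd: bid.amountUsd,
    highestBidder: bid.bidderId,
    closesAt: Math.max(state.closesAt, bid.sentAt + 15_000),
  };
}
```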

There are also some exciting examples of interactive video technology in the world of sports. For example, one company has developed an interactive video analysis tool specifically for sports viewing. Their solution for tennis, for instance, features a video feed in which a player’s movements can be analyzed to determine the placement of the ball or the trajectory of various shots in different situations.

This allows the player and coach to work together after a match or in a training session to review the video and understand angles and strike points, which can be very helpful for player improvement. The company has developed solutions for a variety of other sports as well, including soccer, basketball, hockey, volleyball and other court and field sports.
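The core of trajectory analysis can be surprisingly simple. The toy sketch below estimates a ball’s velocity from its position in two consecutive frames and projects where it will land; real systems use calibrated multi-camera tracking and account for spin and drag, so treat this purely as an illustration.

```typescript
// Toy projectile model: estimate the landing point from two observed positions.
type Point3D = { x: number; y: number; z: number }; // meters; z is height above the court

const GRAVITY = 9.81; // m/s^2

function projectLanding(p0: Point3D, p1: Point3D, frameDt: number): { x: number; y: number } {
  // Finite-difference velocity between the two observed frames.
  const vx = (p1.x - p0.x) / frameDt;
  const vy = (p1.y - p0.y) / frameDt;
  const vz = (p1.z - p0.z) / frameDt;
  // Solve z(t) = z1 + vz*t - 0.5*g*t^2 = 0 for the time until the ball lands.
  const t = (vz + Math.sqrt(vz * vz + 2 * GRAVITY * p1.z)) / GRAVITY;
  return { x: p1.x + vx * t, y: p1.y + vy * t };
}
```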

On the spectator side, interactive content will make it possible for viewers to decide the angle from which they want to watch, and for those experiences to be tailored to each user. One viewer might prefer to watch basketball courtside, while another prefers to view baseball from up in the stands where they can see the plays at a wider angle. Yet another wants to study their favorite hockey goalie, so they watch from a camera above the net. In the future, we will be able to customize the ways we watch sports on TV.

There Are Some Technological Challenges to Solve

First, let’s look at synchronized experiences. Apple recently launched SharePlay, which allows iOS users to watch a video stream together over FaceTime on their iPads or iPhones. For this kind of experience to reach users on other devices and platforms, we’ll need adaptive bitrate streaming protocols that can keep playback synchronized, helping create shared interactive video experiences at scale.

Beyond devices, synchronized video experiences also need greater optimization at the transport level, regardless of the network. This would mean that if one user is on cellular data and another is on Wi-Fi, everyone would see the same thing at the same time. That isn’t always the case today.
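One way to approach the synchronization problem is to have every client report its playback position against a shared reference clock, and nudge drifting clients back toward the group. The sketch below is a minimal illustration of that idea; the drift tolerance and the use of the median are assumptions, not part of any shipping protocol.

```typescript
// Each client periodically reports where it is in the stream.
type ClientReport = {
  clientId: string;
  positionSec: number; // playback position when the report was made
  reportedAt: number;  // shared reference clock, epoch ms
};

const MAX_DRIFT_SEC = 0.25; // force a resync beyond this drift

// Project every client's position forward to "now" and take the median
// as the group target; the median is robust to one badly lagging client.
function resyncTarget(reports: ClientReport[], now: number): number {
  const projected = reports
    .map((r) => r.positionSec + (now - r.reportedAt) / 1000)
    .sort((a, b) => a - b);
  return projected[Math.floor(projected.length / 2)];
}

function needsSeek(clientPositionSec: number, targetSec: number): boolean {
  return Math.abs(clientPositionSec - targetSec) > MAX_DRIFT_SEC;
}
```

In practice a client would catch up by briefly speeding up or slowing down playback rather than hard-seeking, which is less jarring for the viewer.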

The next big step in video gaming is cloud gaming, which will allow any user to play any game on any platform at any time. Especially with MMO gaming, latency is a critical factor. If one player is experiencing ten millisecond latency and their opponent experiences twenty millisecond latency, the game won’t feel fair, and interactivity will suffer. We need to be able to make latency consistent across a wider range of platforms and networks.
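One common technique for this is input-delay equalization: the server applies every player’s input after the same fixed total delay, padding fast connections instead of penalizing slow ones. The sketch below illustrates the idea; the target delay value is an assumption.

```typescript
// Hypothetical per-input record with a measured one-way network latency.
type PlayerInput = {
  playerId: string;
  action: string;
  networkLatencyMs: number; // measured one-way latency for this player
};

const TARGET_DELAY_MS = 50; // every input takes effect this long after it was sent

// How long the server should hold an input before applying it, so that a
// 10 ms player and a 20 ms player are treated identically.
function holdTimeMs(input: PlayerInput): number {
  return Math.max(0, TARGET_DELAY_MS - input.networkLatencyMs);
}
```

The trade-off is that everyone plays at the target delay, so it is usually set just above the worst latency the system is willing to admit.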

Edge computing is a key component in solving these challenges. An edge-based cloud gaming system would help create a level playing field, regardless of user device or platform, and regardless of the complexity of the game. In addition to its significant benefits for overall latency, edge computing will also help in the delivery of complex video rendering.

As VR/AR devices get smaller and online cloud gaming begins to give way to more immersive virtual experiences like the metaverse, edge-based systems will be a vital component of delivering a high-quality immersive video experience.


Serhad Doken

Chief Technology Officer

Serhad Doken is responsible for the technology roadmap, research strategy and advanced R&D projects. Mr. Doken was previously the Executive Director of Innovation & Product Realization at Verizon, where he drove new 5G- and Mobile Edge Computing-powered services for consumer and enterprise businesses. Prior to Verizon, Mr. Doken was VP, Innovation Partners at InterDigital, focused on technology strategy and external R&D projects and partnerships. Prior to InterDigital, Mr. Doken worked on emerging mobile technology incubation at Qualcomm. Before that, he held positions at Cisco Systems, Nortel Networks and PSI AG. Mr. Doken is an inventor on 30 issued worldwide patents and over 90 worldwide patent applications. He holds a Computer Engineering degree from Bosphorus University and has completed the M&A Executive Education Program at The Wharton School and the New Ventures Executive Education Program at Harvard Business School.