Intel is about to debut one of its biggest new technologies at the XXIII Olympic Winter Games in PyeongChang, South Korea. It's called immersive media, and it's about to revolutionize how we film sports using volumetric video.

The announcement came early last week at the Consumer Electronics Show (CES) in Las Vegas, along with talks on quantum computing, driverless cars and artificial intelligence.

“Data is going to redefine how we create experiences for all audiences in the future, not just sports.” — Brian Krzanich, Intel CEO

The US television viewing audience is massive, and according to Statista, the online statistics, market research, and business intelligence firm, the numbers don't lie:

  • 90 percent of US households watch television
  • Average viewing time is four hours per person, per day
  • 29 percent of households watch on a smart TV

For those of us previously frustrated by the lack of ultra-high-definition viewing options, immersive media is about to make that 4K or 5K television you bought recently make a lot more sense.

Olympic Games Coverage

Intel, the International Olympic Committee, and Olympic Broadcasting Services (OBS) are producing 30 events with over 50 hours of live virtual reality coverage. Viewers can watch live or on demand. NBC is partnering with OBS to broadcast live VR events this year.

How to View the VR Olympics

It will be the largest-ever VR experience.

Viewers in the US can also watch through the NBC Sports VR app. To view the events broadcast in VR, you will need one of the following:

  • A Windows Mixed Reality headset
  • Samsung Gear VR
  • Google Cardboard or Google Daydream, compatible with iOS or Android

What Is Immersive Media?

Immersive media is “the opportunity to use data to deliver the most immersive, realistic content possible,” says Brian Krzanich, Intel CEO.
Intel has brought massive data sets to both the small and big screens by solving an enormous computing problem. By combining artificial intelligence with advanced camera technology, it has seamlessly blended billions of data points and millions of HD images into a technology that essentially blurs the line between reality and television using volumetric video.


Intel’s True VR

To capture enough data to produce footage of this quality, production companies and venues must install dozens of 5K cameras around the perimeter. These cameras simultaneously capture billions and billions of data points, which are divided into volumetric pixels, or voxels.

A voxel takes us a step beyond standard 3D by adding both depth and volume. It lets viewers see the “inside of a 3D screen” from any angle they choose.
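
To make the voxel idea concrete, here's a toy sketch in Python using a simple dense grid. This is our illustration, not Intel's actual data structure: each cell of a 3D array records whether it's occupied and what color it holds, which is what makes any-angle viewing possible.

```python
# Toy voxel grid (illustrative only): occupancy and color per 3D cell.
import numpy as np

GRID = 64  # a 64 x 64 x 64 grid; real captures are far denser

occupancy = np.zeros((GRID, GRID, GRID), dtype=bool)     # is the cell filled?
color = np.zeros((GRID, GRID, GRID, 3), dtype=np.uint8)  # RGB per voxel

# Fill a small sphere of voxels, as if reconstructed from camera footage.
x, y, z = np.indices((GRID, GRID, GRID))
sphere = (x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2 < 10 ** 2
occupancy[sphere] = True
color[sphere] = (200, 30, 30)  # a red ball

print(f"{occupancy.sum():,} of {GRID**3:,} voxels occupied")
```

Because the grid stores the scene itself rather than any one camera's view, a renderer can walk through it from any direction after the fact.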

How Volumetric Technology Works

“The viewer is the director. You choose your own shot,” says 8i cofounder Linc Gasking. “The director chooses the environment and the people and their actions, but doesn’t choose the shot, because that’s left to the viewer.”

Volumetric video captures every shot in an environment mapped with special high-resolution cameras that process billions of data points and, in turn, output that data as video. This allows viewers to manipulate the viewing angle and watch the action from exactly the vantage point they choose.
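
As a rough sketch of how “choose your own shot” can work, the code below projects reconstructed 3D points through a pinhole camera placed wherever the viewer likes. The function names and parameters are illustrative assumptions, not Intel's software.

```python
# Free-viewpoint sketch (illustrative only): render any camera angle by
# projecting captured 3D points through a viewer-positioned camera.
import numpy as np

def look_at(eye, target, up=np.array([0.0, 1.0, 0.0])):
    """Build a world-to-camera rotation from a viewer-chosen eye point."""
    f = target - eye
    f = f / np.linalg.norm(f)            # forward
    r = np.cross(f, up)
    r = r / np.linalg.norm(r)            # right
    u = np.cross(r, f)                   # true up
    return np.stack([r, u, -f])          # rows: right, up, backward

def project(points, eye, target, focal=500.0):
    """Project Nx3 world points into 2D pixel coordinates for this view."""
    R = look_at(eye, target)
    cam = (points - eye) @ R.T           # world space -> camera space
    cam = cam[cam[:, 2] < 0]             # keep points in front of the camera
    return focal * cam[:, :2] / -cam[:, 2:3]  # perspective divide

# The viewer "directs" by moving the eye; the captured scene never changes.
scene = np.random.rand(1000, 3) * 10.0   # stand-in for voxel centers
pixels = project(scene,
                 eye=np.array([15.0, 5.0, 15.0]),
                 target=np.array([5.0, 5.0, 5.0]))
print(pixels.shape)                      # one 2D point per visible 3D point
```

The key design point: the heavy lifting happens once at capture time, so changing the shot is just a cheap re-projection.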

To understand the volumetric technology better, let’s delve a little further into each part:

  1. Artificial Intelligence
  2. Advanced 5K Cameras

Here’s a quick look at each.

1. Artificial intelligence

Manipulating, much less interpreting, billions of data sets requires AI. Humans don't have the time or the brainpower required to remember that much detail.

Because the data sets are so large, nuances of increasing granularity can emerge, making generated images appear much more realistic.
With VR, the potential for viewer interaction may also increase, so along with the production of these images, the viewing experience will become more personal.

News programming itself may change with these new, personalized capabilities for viewer participation.

Facebook recently published a study on using AI for new visual storytelling techniques in VR. One fundamental building block used to construct realistic images is the generative adversarial network (GAN).

Adversarial networks train machines to predict real-world activity through observation. They employ two competing algorithms:

  • A generator, which produces an image from a random input
  • A discriminator, which takes input either from the generator or from real data and learns to distinguish the real from the fake

These two neural networks optimize against each other, and the result is highly realistic, plausible renderings of the real world.
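
For the curious, here's a minimal GAN training loop in PyTorch. It's a toy sketch using synthetic 2D points as a stand-in for images, not the models Facebook or Intel actually use.

```python
# Toy GAN sketch (illustrative only): a generator learns to mimic "real"
# data while a discriminator learns to tell real samples from fakes.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 2  # tiny sizes for illustration

generator = nn.Sequential(
    nn.Linear(latent_dim, 64), nn.ReLU(),
    nn.Linear(64, data_dim),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 64), nn.ReLU(),
    nn.Linear(64, 1),  # raw "realness" score (logit)
)

loss = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    # "Real" data: a shifted Gaussian stands in for real images.
    real = torch.randn(128, data_dim) + 3.0
    fake = generator(torch.randn(128, latent_dim))

    # Discriminator step: score real samples as 1, generated ones as 0.
    d_loss = (loss(discriminator(real), torch.ones(128, 1)) +
              loss(discriminator(fake.detach()), torch.zeros(128, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to fool the discriminator into scoring fakes as 1.
    g_loss = loss(discriminator(fake), torch.ones(128, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

Even at this scale, the two networks push each other: as the discriminator gets better at spotting fakes, the generator's output drifts toward the real distribution.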


2. Advanced 5K cameras

You might recall that last year's Super Bowl was shot using 5K-resolution pylon cameras and a ring of robotic cameras encircling the stadium to provide EyeVision 360.

RED is a widely known manufacturer of ultra-high-resolution cameras. These days, the camera industry is already well past 5K and focused on 8K resolution development.

Volumetric video requires state-of-the-art image processing to capture the data needed to render 3D imagery and recreate the “virtual” experience of the Olympics.

Final Thoughts

Intel's announcement about the release of its volumetric video at the XXIII Olympics is exciting news. It's fun to watch technology bridge the gap between 2D and 3D/VR entertainment.

We’d like to acknowledge all of our customers, especially those within the television sports production community. PLANET and our sister company Versatek thank you for your business throughout the years.