An acted emergency scenario in the 4DR volumetric capturing studios

Wonderland is a zyntroPICS/Spatial Cowboy Studios Production

 

Volumetric storytelling is fundamentally different from what we are used to in 2D productions. We are in the early days of the technology, and talking with creators who are learning these differences in practice is important for all of us exploring this new, realistic 3D medium. We interviewed Eric Weymueller from zyntroPICS and Jason Yang from DGene, who partnered to create one of the first volumetric stories in the world, called Wonderland.

 

Volumetric Storytelling – The Birth Of Wonderland

 

Wonderland was born from experimentation. Weymueller started to test with footage from old captures, trying to compose with multiple holograms, creating transitions, and exploring how editing could actually become the means for volumetric storytelling. 

 

Editing is essential because most hologram experiences are limited by what happens inside the shooting space, an area roughly six feet in diameter. Because of this limitation, we see a lot of hip-hop holograms today, for example, as they don’t demand much space, action, or multiple characters.

 

“So with the Sense XR editing tool, I started to think about how we could use it to tell a story, whether it’s for advertising or music videos, even for enterprise and training — storytelling is important. To keep a user hooked into a visual element in a film, you have cuts every two to three seconds, while all these AR pieces that are three minutes long with no cuts remind you of watching a play. People’s attention spans don’t really hold for that much anymore. Yes, you can move around and look, but editing really opened up storytelling for volumetric. That was exciting.”

 

When Weymueller met Yang from DGene, a volumetric capture company with a stage in Baton Rouge, he learned that Sense XR supported static objects in a scene, rendering them only once and therefore keeping them very lightweight. That combination of quality capture and lightweight set dressing with 3D objects opened up the potential to create something flexible and fun, such as a Wonderland story remake.
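Why rendering a static object only once matters is easy to see with back-of-envelope arithmetic. The numbers below are purely illustrative assumptions, not Sense XR measurements: a captured performer produces new geometry and texture data every frame, while a static prop is stored once and reused for the whole clip.

```python
# Illustrative payload comparison (assumed numbers, not Sense XR specifics):
# a volumetric performer is re-encoded every frame, while a static
# set-dressing prop is stored once and reused across all frames.
FPS = 30
CLIP_SECONDS = 60
PER_FRAME_KB = 150      # assumed encoded size of one hologram frame
STATIC_PROP_KB = 2000   # assumed size of one static 3D prop

performer_mb = FPS * CLIP_SECONDS * PER_FRAME_KB / 1024
prop_mb = STATIC_PROP_KB / 1024

print(f"animated performer, 60 s clip: {performer_mb:.0f} MB")
print(f"static prop, entire clip:      {prop_mb:.1f} MB")
```

Under these assumptions, a one-minute animated performer weighs hundreds of megabytes while a static prop stays around two, which is why run-once static objects make set dressing practical for web delivery.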

 

 

Shooting Volumetrically

 

Eric Weymueller sees a lot of differences when shooting 2D or volumetrically. He has worked with feature films, television, visual effects, and 360° video, but he is enjoying volumetric storytelling more than other formats. “By having a crew of 300 people when you do film and television, you are dealing with years of development work and finance, because you gotta pay those 300 people. With volumetric, we are shooting with a crew of five. Post-production with tools like Sense XR can be done with one or two people.”

 

Jason Yang explained that by investing in AI to improve capture results, DGene wants creators to focus on their creativity and not on the technical differences of capturing with this new technology. “Nowadays, an army of people is required to make a movie, and a lot of this process is actually about managing a production. There is a lot more about managing this machine instead of being creative. DGene wants to go back to the time when small teams were doing creative things.”

 

Weymueller also explains that there is a difference between capturing and shooting. Capturing is passive, while shooting is directed, with a lot of thinking about what you are going to get and how you will use it. Directing can be as simple as making sure the eye lines work: if a scene has multiple characters, they will most likely be shot individually, so the director should put up markers and paint the scene for the actors, giving them a sense of where they are so they can react to it.

 

DGene can help in this process because their stages don’t require green screens for post-processing. Green screens can be an issue when directing scenes because actors have nothing to play off of. The problem exists when filming 2D video, and it becomes even more pronounced when recording volumetrically, as actors are surrounded by green screens on all sides. There isn’t even a director or camera operator on the stage with them, so the actors are completely alone in an entirely green environment.

 

To avoid these issues, DGene uses AI to identify where the target is, be it an object or a person, which allows them to eliminate the green screen and also avoid the common ambient-reflection problems. This use of AI also opens up the possibility of creators building a simple set around their actors. “They still should worry about occlusions, but they could put objects and even people in the peripheries, as the AI system would be able to know where the target is, ignoring the other people or props,” says Yang.

 

Cross-Platform Volumetric Storytelling

 

Shooting volumetrically has advantages and challenges. On the advantage side, cross-platform content creation is very much at the top. “I very much think about cross-platform, and with volumetric, the ability to go out to framed video is one, holographic displays another one, AR and VR are others. So you shoot once but you have assets for really reaching people across platforms. You have to shape the content differently, but with volumetric you can do that since you got the whole 3D piece and you can decide what you want to do with it,” says Weymueller.

 

An interesting challenge for volumetric storytellers is thinking about AR, because every viewer will have a different experience. Since you don’t control the location, the experience could happen on someone’s desk in an office, or outdoors, for example.

 

 


The Power Of WebAR

 

Distribution is something that should be a point of focus for creators as well. Wonderland is being shared in WebAR, and the advantages are many. “With webAR, the first response you get from people is ‘Thank God I don’t have to download an app,’ across the board. You don’t have to build this in iOS or Android, you don’t have to worry about that. So you’ve got this reach. There are certain limitations like file size and compression, all these technical parameters that you still have to play within, but you can reach a very large percentage of the public with it, give them a good experience, and it’s just frictionless,” says Weymueller.

 

So today, for volumetric video, WebAR is what gives you the largest reach. Sense XR makes publishing about as easy as possible: it happens with the click of a button, and creators immediately receive a link and a QR code, which can be distributed to more than 3 billion devices around the world.

 

Time For Experimentation

 

Volumetric content creation is just starting. Rules have not been set yet, so creators who decide to experiment with holograms have a blank canvas in front of them, and they will learn by doing. “What I really learned out of Wonderland is that I need to shoot differently, but I have to go back and shoot something new to actually figure out what that is. These are exciting times that remind us of the early days of film editing, when someone decided to cut some 35mm film and glue it together. Volumetric is its own peculiar little beast, and it’s very early days. But the potential is enormous,” summarizes Weymueller.