To talk about virtual production is to talk about Unreal


How virtual production will change broadcasting

Virtual production is at an implementation stage. Studios, integrators and broadcasters are experimenting with solutions built on green-screen techniques, video game graphics engines and large-format LED screens with a fine pixel pitch. In this early stage of expensive, hard-to-reach solutions, a path is beginning to clear, because virtual production is starting to mean Unreal. Epic Games' proprietary graphics engine, born in the video game arena, has arrived to revolutionize our industry. The big brands in the broadcast market, such as Vizrt, Aximmetry, Brainstorm, Zero Density and Pixotope, are bowing to it. And so are the professionals. How does this solution fit into broadcasting? How will virtual production techniques grow? Why is the creation of multimedia content with virtual techniques so important? Rafael Alarcón and Asier Anitua answer these questions.
Rafael Alarcón is the Broadcast Design Coordinator at Movistar Plus, where he is also in charge of generating and supervising the network's virtual sets. He is currently one of the few experts in Unreal Engine for broadcast. Asier Anitua is a Business Development Manager with deep knowledge of the virtual production market.

 

What technical infrastructure are you working on at Movistar Plus?
Rafael: At Movistar Plus we work with Vizrt. It sits within our graphics ecosystem, both for developing virtual sets and for all the graphics that have to run in real time: sports broadcasts, programs, channel branding.

Currently, we have a virtual set with three cameras, two of them robotized with Vinten mechanical tracking, while the third is a six-axis robotic arm that works as a crane. In addition, we have another studio with physical sets, equipped with a robotized, sensorized telescopic crane with Mo-Sys technology for displaying augmented reality elements.

 

How are virtual production infrastructure, content design, and implementation teams made up?
Rafael: There is a scenography team responsible for designing the sets. Then there is my team, which implements them in Viz Artist, version 4, and prepares the features each virtual set needs: screens, AR elements, scene controls; all of it so the set can be operated easily from Viz ARC. We also configure post-processing, mask adjustments and depth-of-field adjustments so they respond to the robotic cameras.

Once all the adjustments are finished, it is time to exploit the virtual set, which requires only four operators: camera and lighting control, vision mixer, sound technician and a Viz ARC operator. Beyond that, imagination can run wild when it comes to tweaking the virtual set.

 

Nowadays, many manufacturers are strongly committed to developing virtual production software, and each solution has its own features and characteristics. Which projects is each of these solutions best suited to?
Rafael: This is something I have been studying for a while now. All virtual production software has the same purpose: to display scenarios in real time at the best possible quality, and to integrate the real and virtual worlds so closely that viewers perceive the result as real.

Now there is going to be a big change in virtual production. Unreal is developing a tool for real-time motion graphics, known as Project Avalanche, which is poised to become a substitute for After Effects.

We live in an age of immediacy, so rendering times have become obsolete. North American network ESPN, working with Epic Games, has already integrated it into a production: the broadcast of the 2023 edition of the Rose Parade. In that broadcast for ABC, ESPN used Project Avalanche to build a graphics solution covering every graphic production task, from design to final broadcast.

The key has been to take Unreal's ability to create video game interface graphics, commonly referred to as the HUD, and move that system, suitably adapted, to broadcast graphics tools: motion graphics for openers, wipes, lower thirds and so on. This makes it possible to create, deploy and modify graphics in real time, with no rendering times at all.

 

What is the origin of Unreal’s expansion in broadcast?
Rafael: The basis of this capability lies in how a video game operates: once the information has been loaded, it is ready to run in real time.
Epic Games put its graphics engine, a system that had reached high levels of photorealism, at the service of other industries. That was when interest from broadcasters and post-production companies started to grow.

Today, Unreal's photorealism has been carried over to these broadcast tools, thanks in part to those same industry players, who have fostered its growth by helping Epic Games develop it.

 

Regarding the competition Unreal faces: there are other engines on the market, so why has it triumphed to this extent?
Rafael: For me, the reason behind Unreal's success lies in the optimization it offers when creating environments and custom designs within the software. On top of that, Unreal 5 has two great features. Lumen enables dynamic lighting that does not need to be baked, that is, it requires no pre-processing time. Then there is Nanite, on the modeling side, which provides the ability to deploy a virtually unlimited number of polygons, until the GPU's processing capacity overflows, of course. But Unreal also handles all these polygons differently. The key is how they are calculated and then interpreted: it works through pixels rather than geometry, so it only computes the part that can be seen. Until now, in every real-time technology, whenever there was a chair, the whole chair would be calculated, not just the portion visible on screen. Unreal still loads the chair, but it only spends detail on the visible portion.
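To make the idea concrete, here is a minimal sketch, in generic C++, of detail selection driven by on-screen pixels rather than by scene geometry. It illustrates the principle Rafael describes; it is not Nanite's actual algorithm, and all names and constants in it are assumptions.

    // Illustrative only: pick triangle detail from projected pixel coverage,
    // so cost scales with what is visible, not with what exists in the scene.
    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    struct MeshCluster {
        float boundingRadius;    // world-space radius of the cluster, in meters
        float distanceToCamera;  // meters from the camera
    };

    // Approximate on-screen size of the cluster, in pixels of screen height.
    float ProjectedPixels(const MeshCluster& c, float fovY, int screenHeightPx) {
        float angular = 2.0f * std::atan(c.boundingRadius / c.distanceToCamera);
        return (angular / fovY) * static_cast<float>(screenHeightPx);
    }

    // Budget roughly one triangle per covered pixel, capped at the full mesh.
    int SelectLodTriangles(const MeshCluster& c, float fovY, int screenHeightPx,
                           int fullDetailTriangles) {
        float px = ProjectedPixels(c, fovY, screenHeightPx);
        int budget = static_cast<int>(px * px);
        return std::min(fullDetailTriangles, std::max(budget, 1));
    }

    int main() {
        MeshCluster chair{0.3f, 8.0f};  // the back of a chair seen from 8 m away
        int tris = SelectLodTriangles(chair, 1.0f, 1080, 2000000);
        std::printf("Render %d of 2000000 triangles\n", tris);
    }

Under this kind of budget, a distant chair costs a few thousand triangles while the same chair filling the frame gets its full mesh; the scene's total polygon count stops being the limiting factor.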

 

So, if we think about broadcast production environments, the key that Unreal provides here is the ability to optimize workflows and save time, right?
Rafael: Definitely. Time is of the essence. But it is also true that Unreal gives you a lot in a very short time, and the initial learning curve is not steep. Of course, you then have to keep working so the result gets better, and that is where the process becomes harder. It is precisely there that many of the proprietary engines on the market falter. Unreal gives you the tools to become more creative, which ultimately is the goal for everyone: that imagination prevails over the limits of the underlying technological infrastructure.

 

As you have already pointed out, most proprietary graphics systems are adding integration capabilities with Unreal. Is this how they are adapting to the trend?
Rafael: That’s right. Because they have to recognize the potential that Unreal has. They really don’t have a choice.

Asier: Certainly, but there is another factor that is also making all manufacturers position themselves in support of this graphics engine: there is already a huge library of templates and assets available. In fact, it is getting bigger every day, because proprietary developments can also be added to that catalog and, in addition, monetized. Let's say that for 20 dollars you can have a Formula 1 car, while developing it from scratch would mean many hours of human labor. Imagine that I am a client who wants to develop a virtual set.

As we said, all manufacturers have to get on this bandwagon, but in return it is these manufacturers who will guarantee integration into broadcast environments. Epic Games is not going to get into integrating Unreal with specific broadcast technologies; it is the traditional providers who will be responsible for offering you the best tool for each production.

From a business point of view, it also widens the fields of specialization considerably. For example, an artist on a proprietary broadcast graphics engine is only going to be specialized in that area, whereas Unreal specialists can easily move into many other sectors, from architecture to video game design.

 

That is one of the biggest challenges at present. Is there already a professional offering of Unreal specialist profiles?
Rafael: It is still hard to find people who specialize in Unreal. The most obvious reason is that it is a very new tool and there is very little broadcast-specific training. Television networks are only now starting to trust Unreal.

Asier: There is a lack of broadcast-specific training now, but this will change. In fact, universities are already bringing these tools into their programs. TSA has built the virtual production studio of Universidad Rey Juan Carlos at the Vicálvaro campus. We are even pushing in this direction in corporate training programs, such as the Telefónica Booth space with Brainstorm's Edison virtual production solution. In this way we want to generate a pool of talent.

 

We have followed your latest integrations and we dare to say that you are betting strongly on the Aximmetry system. What is the reason for this preference?
Rafael: Mainly because of its accessibility. There was a time when all the other systems were closed. Aximmetry opened its system immediately, at no greater cost than a watermark. That was in late 2019.

That was precisely when I wanted to try tools for working with Unreal, but the truth is that there was nothing available. When I discovered Aximmetry, a world opened up for me. It gave me a playout to check whether everything I did in Unreal worked, without having to program it into the system itself; you can do that too, but then you get into a much bigger mess. This was an easier way to do it.

On the other hand, there is the price. Aximmetry is the cheapest of them all, with a very stable and accessible pricing policy.

Support also proved very approachable. In my case, I contacted them with questions and had an answer in less than 24 hours.

Asier: They are really accessible, both in terms of cost and support, and the quality they offer matches the others. Their way of standing out from the rest is to be more affordable. They are moving towards a business model that gives everyone the opportunity to test their solutions; a long-term model that seeks loyalty.

However, if we look at other ranges, we find highly professional solutions that offer guaranteed results, such as Brainstorm or Vizrt.

Rafael: Of course, each of these applications is designed for a specific purpose. If you are going to work at a television network with a workflow that involves many people, you need a hub everyone can work on without getting in each other's way. In solutions such as Vizrt, this is guaranteed. The same is true of Zero Density.

At Movistar Plus, for example, we are interested in Vizrt because we keep it as the basis for day-to-day work. On top of that layer of routine work, we also want to build a layer of innovation on Unreal. If there are 15 of us in the graphics area, I do not want all of them to learn Unreal, because the cost of learning is quite hefty. But letting those people keep working the same way, with Unreal running underneath, turns out to be a very good scheme. It is a hybrid approach being implemented in tools like Edison, and it is also in Vizrt. Many others, like Zero Density, Aximmetry or Pixotope, work natively with Unreal instead; that means that at a new TV channel with a team of 15 graphic designers, if you adopt the Unreal-native solution, all 15 of them must learn it. So it ends up depending on what your specific workflows are.

 


LED displays or Green Screens?
Rafael: One thing people count against XR screens is that they spill their light onto the filmed subjects, and that is actually fine. But, at present, the screens still generate a moiré effect that has to be fixed with a blur, a technique that often produces an unrealistic result.

Asier: That will eventually be solved, because it is a matter of syncing the image capture with the screen refresh. A technical solution already exists.
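As a back-of-the-envelope illustration of that sync, with assumed, generic numbers rather than any product's specification: the banding-style artifacts fade when the camera exposure integrates a whole number of LED refresh cycles, which is what genlocking camera and wall achieves.

    // Illustrative check: does an exposure cover a whole number of LED
    // refresh cycles? If not, the sensor sees a partially drawn panel.
    #include <cmath>
    #include <cstdio>

    bool ExposureMatchesRefresh(double shutterSeconds, double refreshHz) {
        double cycles = shutterSeconds * refreshHz;
        return cycles >= 1.0 && std::fabs(cycles - std::round(cycles)) < 0.01;
    }

    int main() {
        double wallHz = 3840.0;  // assumed refresh rate of a high-end LED wall
        std::printf("1/50 s: %s\n",  // 76.8 cycles per exposure
                    ExposureMatchesRefresh(1.0 / 50.0, wallHz) ? "clean" : "artifacts");
        std::printf("1/48 s: %s\n",  // exactly 80 cycles per exposure
                    ExposureMatchesRefresh(1.0 / 48.0, wallHz) ? "clean" : "artifacts");
    }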

The big difference between an XR screen and a green screen is that on the former the actor or presenter feels genuinely embedded in the scene. The drawback of the green screen is that you have to look at the return feed to see where you are and what you are doing.

Rafael: Another thing you cannot do on LED screens, and it is a really powerful feature, is a virtual camera, unless it is done through a direct cutout. They are trying to introduce an iPhone technology that can key out a subject based on brightness or contrast. Today, virtual cameras on green screen can key out a character, insert them into the scene and let you perform a camera flyover. You cannot do that on an XR screen.
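For reference, the cutout (keying) Rafael mentions reduces, at its simplest, to deriving a per-pixel alpha from how strongly green dominates the pixel. A minimal sketch with illustrative constants, not any vendor's keyer:

    // Illustrative chroma key: alpha from green dominance per pixel.
    #include <algorithm>

    struct RGB { float r, g, b; };  // components in [0, 1]

    // 0 for pure key green (fully transparent), 1 for opaque foreground.
    float KeyAlpha(const RGB& px, float threshold = 0.05f, float softness = 0.25f) {
        float dominance = px.g - std::max(px.r, px.b);        // how green it is
        float a = 1.0f - (dominance - threshold) / softness;  // soft falloff
        return std::clamp(a, 0.0f, 1.0f);
    }

On an LED wall there is no such uniform key color behind the subject, which is why a cutout there has to rely on depth, brightness or contrast instead.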

Asier: Of course, but that is only for the time being. The need to save computing power by loading only the parts of a virtual set that will appear on camera exists only today.

When GPUs catch up with that challenge, it will no longer be necessary to limit things so much. Camera movements will be freer and will not be constrained by blank spaces beyond the XR screens. That is why I say LED displays will end up becoming the main technology for virtual production. Most television networks will have an LED-screen set driven by Unreal technology, because the screens will get cheaper and cheaper and Unreal assets will also gradually become more affordable. Building a physical set, on the contrary, will stop making sense as raw materials get more and more expensive.

Rafael: Over time, tracking systems will also disappear. Viz Arena was used in LaLiga during the pandemic to create a whole virtual audience in the stadiums. It is very expensive, but the technology is heading in that direction: the camera pans across the scene and the system calculates where the audience has to be placed, without the need for dedicated reference systems.

Asier: Indeed. But some solutions already come with that. Take Edison, from Brainstorm: with this program you can do the production by means of an iPhone. With that smartphone, which already has a built-in LiDAR (Light Detection and Ranging) sensor, you can move through a virtual environment as if it were a Steadicam, for the 1,500 euros the phone costs. Everything is going to change over time as the technology evolves and becomes democratized.
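Conceptually, what such a phone-based rig does is simple: the device's tracking supplies a six-degrees-of-freedom pose every frame, and that pose, lightly smoothed, drives the virtual camera. A minimal sketch of the idea with hypothetical types, not Brainstorm's actual API:

    // Illustrative only: apply a tracked handheld pose to a virtual camera,
    // low-pass filtering position so hand jitter does not read as shake.
    #include <array>

    struct Pose {
        std::array<float, 3> position;  // meters, in the tracking space
        std::array<float, 4> rotation;  // quaternion (x, y, z, w)
    };

    struct VirtualCamera {
        Pose pose{};
        void Update(const Pose& tracked, float alpha = 0.2f) {
            for (int i = 0; i < 3; ++i)
                pose.position[i] += alpha * (tracked.position[i] - pose.position[i]);
            pose.rotation = tracked.rotation;  // rotation kept raw in this sketch
        }
    };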

 

In addition to the democratization you mentioned, where will virtual production technology evolve to?
Rafael: The broadcast world is geared towards virtual production. The future lies in cloud hosting, and everything will be offered as software as a service.
Asier: At Telefónica we are already working on offering different hyperscale cloud platforms that support GPU workloads. That processing capacity is still being built out, and these virtual production environments will always need a lot of GPU power.

In addition, the cloud will eventually become something much more powerful than it is today. In the Telefónica testing lab we have a rig of 4K cameras that record a subject through 360 degrees and then generate a real-time render of that person. We have not taken these capabilities into Unreal yet, but that will be the next step. With automation, actors could be scanned in real time and streamed live into any Unreal environment. That is the future ahead of us.

Rafael: Another future of virtual production is the metaverse. In 2017, The Future Group did a show called “Lost In Time.” The idea behind it was that viewers would go beyond their usual role and become participants in the story. It was a video game that came very close to what our notion of the metaverse might be today. The aim is to bring experiences of reality into homes while also providing interaction. This could attract new audiences to content consumption.

 

And is there really demand from viewers to enjoy content, which is already perfectly accessible and consumable, through technologies that run parallel to the narrative they are watching?
Asier: Think about 80 years ago, when you were a regular radio listener and then television came along. It is the same situation: in the end, content creators can serve their different audiences in different formats. Those who want to listen, through radio or podcasts; those who want to watch, through television or cinema; and those who want to experience it, through video games or the metaverse.

Rafael: A great example of this need for closeness with content is the success of Twitch. People are on that platform because viewers can interact with the presenter.

Asier: Of course, but that is what television becomes on an OTT platform: the linear broadcast carried over, with presenter-viewer interaction added, inside a digital platform. The thing is, that is the only context where you can really do something like this; on traditional television it is highly complex, technically speaking.

In fact, there is an audience that only wants to consume content passively. But there is another type of audience, growing in numbers, that prefers a different kind of content and is already used to consuming it.

 

Do you think that both types of audiences will always coexist or will the scales tilt in favor of one or the other?
Asier: Just as the radio didn’t disappear, the TV won’t either, but the revenue pie will become much smaller and much more fragmented.

Rafael: That is actually what happened with radio. All the advertising was there, and when television arrived it took the advertising revenue away from radio.

Asier: Indeed, history is repeating itself. Content consumption averages 6 hours per day per person, and traditional broadcast only gets one of those hours. So traditional television, which really has all the potential to exploit those 6 hours of viewing time, is in many cases losing them because it cannot innovate at the speed at which everything else is evolving. Its share of the pie is therefore shrinking. To overcome this, networks have to focus on content. But it is no longer worth making one piece of content for 20 million people; you are going to have to make 300 pieces of content for 100,000 people each. That leaves three options: drive costs down and democratize the technical solutions behind that segmentation of content; produce much more at a lower cost; or go out of business.

Rafael: This is where virtual production comes in. People who build physical sets have to worry about handling, maintenance and renovation, all of which can involve occupational risks. None of that exists in virtual environments.

Asier: However, we must be honest: physical sets still beat virtual sets when it comes to realism. But we will reach a point where there is no longer much difference. At that point we will ask: what is the point of building a set? I can just deploy some XR screens and be ready to go. That is how 90% of the world's sets will end up.

 
