SMPTE RIS On-Set Virtual Production

SMPTE is a global organization that seeks to make the technological solutions involved in the content creation industry more accessible to their users. The goal is clear and, so far, so is the method to achieve it: the creation of standards. But the industry is changing and moving faster than ever. Virtual production and the technology that has clustered around it are proof of this. It comes from worlds outside the industry as we know it and, moreover, it is in a rapid and constant process of evolution. In such a situation, standards would slow down that evolution. What SMPTE is determined to do instead is create a toolbox that facilitates the use of these technologies. We spoke to Kari Grubin, Project Leader of Rapid Industry Solution (RIS) On-Set Virtual Production at SMPTE, to find out how they do it.

Kari Grubin, Project Leader at Rapid Industry Solution (RIS) On-Set Virtual Production



What is the origin of Rapid Industry Solution (RIS) On-Set Virtual Production and why is it important?

With the idea of getting feedback from the general community, the SMPTE executive board launched this program last year, in 2021. The objective was to learn how SMPTE could be of better service to the community at large. The board looked at several different potential areas as topics. At that time, virtual production had exploded during the pandemic and hadn't really been discussed. They decided to trust me to focus on that topic.

The first thing we did was interview over 30 different participants across the whole ecosystem. I found as many representatives of broadcast, education, and creative storytelling as I could. Globally, I spoke to universities, broadcasters, streamers, motion picture studios, and professional organizations like the EBU in Europe and others around the world.

I also reached out to a lot of companies that maybe hadn't really been involved in the SMPTE universe before but are critical to this pipeline. So we got feedback from game engine companies, compute and camera tracking companies, specific manufacturers, and LED wall manufacturers. It was very important to get a broad picture.

And we asked them questions like: what is difficult in virtual production right now? Where do you need help? Overall, what is SMPTE doing right, and what does SMPTE need to look at to actually be of service to your needs? We got really good feedback and started to develop our work.


Where did you find the most challenges at the beginning?

We encountered challenges on two fronts: communication between people who come from very different worlds, and interoperability.

Around the world, during our research, we encountered the same issue: there are not enough well-educated, knowledgeable professionals to perform these tasks. Many end users and developers were concerned that when they went to universities looking for specific profiles, the educators often did not know what they were talking about. That education is necessary before communication can be established.

However, we play a crucial role in the communication between traditionally distant players. It is normal that video game developers do not share the same language as people who work in film. Finding that common ground was also very complex.

On the other hand, interoperability issues, as usual, are not exclusive to virtual production. But in the world of content creation especially, many innovative technologies are emerging, and the tools that allow us to integrate them into other systems emerge later.

It has happened with everything: with high dynamic range, with the transitions from standard definition to high definition and from 2K to UHD to 4K, and so on. But in this case the process is software-based, which makes it different, because in the past things didn't evolve so fast. Because software tends to evolve faster, the tools needed to facilitate interoperability are needed sooner.

That interoperability comes from data. It has to do with all those systems that have to interact and work together. For example, how do you get enough computing power to sustain a refresh rate that avoids problems on the LED wall, like flashing or jitter? To get a more photorealistic look you will also need a much smaller pixel pitch. However, if you are doing a close-up and the background is out of focus, you will not need that pixel density.

Making all these pieces fit together while respecting each company's actual technical roadmap was really important.


In addition to facilitating communication and supporting the development of interoperability, how has SMPTE inserted itself into this process?

In short, we wanted to develop a different approach, one where the industry could count on us whenever it encountered a problem.

The request was for SMPTE to be more proactive: to get involved and ask how it can help, but at the same time not redo the work that other groups are doing. The industry needs SMPTE to really listen to what is needed and provide the glue and support to bring things together.

This is our effort: to make SMPTE, through action, more involved in the industry's evolution. No one doubts that SMPTE is of service, but this is our response to what the industry is asking for.


What difficulties have you encountered in bringing together so many companies from such distant worlds?

At first, there was some hesitation. The biggest reaction I got from manufacturers and companies was concern that we were telling them they could no longer own their own systems. We can understand that, because content creators have always asked, especially when they have enough leverage, for everything to be open source, easy, and interoperable. In a perfect world you could do that, but technical innovation happens because these companies take on research and development, and they seek to generate profits from it.

The position SMPTE has taken is to assure companies that they do not have to stop being proprietary. We want to understand how users are going to use their technology. We want to create a hub, a means of transport for that knowledge. The goal is to give users the knowledge that if they use a certain system from a certain manufacturer, the process will be easier because interoperability has been worked out.

We're going through a process like this right now. We're developing a camera interoperability program where the conversation includes lens manufacturers, camera manufacturers, MovieLabs, the American Society of Cinematographers, and others. The goal is to put ownership aside and work on sharing the data that needs to be maintained from image capture through to the end.



Some of the people we interviewed for this feature told us that some of the problems had to do directly with this ownership issue. How can you intervene in something like this without harming the companies' interests?

Let's use an example to illustrate this. Imagine you are an author and you create a work of art. That work of art is your property because you created it. Now suppose you create an NFT that has the same quality of originality as that work of art and can be considered as such; some of the software you used to create it is intrinsically linked to that NFT as well. Say you created the NFT with a game engine, and someone buys it to use it in another ecosystem, in a different game engine from the one it was created with. That is where those interoperability problems linked to ownership occur.

If you want to help, you can say which software you created it with, but beyond that it is no longer up to you; it is up to the program itself. And its makers may not want, or be able, to give much information. This is what happens with many of the technologies involved in virtual production.

Our role cannot be to step in and develop standards. Many people in this industry have told us not to do that, because it would choke innovation. And they are right. We have to find the middle ground that gets everyone on the same page without constraining them.

We all know that robust and rigorous standards are necessary. But we want to help the industry in another way: by providing information so that people who use virtual production can do so more efficiently and accessibly. Not everything needs standards; technology can evolve so fast that setting standards would be counterproductive. I don't mean that we won't do it at some point; standards will always be necessary. But in this case, what is most needed is a toolbox. That is why RIS was born.


Therefore, what are the objectives of this program and the means to achieve them?

The main objective is to create space for collaboration in the industry. We also aim to make information available to anyone who needs it: young people who want to be trained to access this technology, and existing professionals who need to understand and use it.

To achieve this education-related goal, we are developing a grant program that links manufacturers and suppliers with educational institutions to lay the groundwork for specialized education in virtual production.

Related to facilitating access to information, last October we launched the creation of a large wall chart whose purpose is to identify every role and technology involved in virtual production. It is a flexible document that will be updated over time. It can be consulted not only to see which roles are needed, but also to learn the set of skills and knowledge required to perform specific tasks. The goal is to make it available through an interactive platform accessible to all.

We are now in the process of gathering information and cataloging it to facilitate that profiling. This will help many creatives and technicians know where to start and avoid mistakes that inflate budgets. How do I choose the right LED volume? What difference does pixel pitch make, and when should I use a higher or lower pitch? What is the minimum I need? What is overkill? This information will facilitate decision making, help productions stay on budget, and prevent mistakes.


After this development, what is the next objective of this program?

The truth is that we have planned a three-year program, and we want to complete the interactive wall chart mentioned above before the end of this year. Within the next six months we will also be able to confirm the foundations on which our educational program will be based.

On the interoperability side, we are now very focused on developing and clarifying that layer of metadata we were talking about earlier. We've started from the camera perspective, from capture all the way through. We are also reviewing the documents developed by the SMPTE community to see whether already developed standards can be applied to these processes, even if only in a minimal way that helps start building guidance.

We also have an SMPTE subgroup that promotes the education and interoperability efforts we are working on at trade fairs.


How can the industry help?

Everyone who is interested in this area can help, whether by simply being part of it, developing it, or supporting it financially. But the truth is that we have to go little by little, because there is a lot to cover.

As an example, every day we talk to more LED wall manufacturers, and at the moment we are not talking specifically about LEDs; we engage in conversations about the importance of metadata and how it works when it comes to their walls. It is important to know how it can affect the frame rate of captured material when it is played back on an LED display. That is why we go back to the beginning of the process: it is necessary to make the end result look good.

Today we are in the middle of getting groups and companies that are not part of this ecosystem to start internal conversations that will allow them to develop research so that, when their turn comes, they will be ready. In fact, it's fascinating to see how all these technology experts outside the content creation industry are already preparing for the future.

This is necessary because at any moment a creative will come along and say, "Hey, I created this on TikTok, and I want to put it up on a giant screen. But I also want to create this interactive world where I'm going to make something that's going to be on YouTube, but it's also going to be available for Oculus. And with all that I want to create an NFT." Everything is going to be just one thing, all together. And the whole industry has to be prepared.


Is this technology at an early stage? How will it grow?

I will respond in two ways. The first is that virtual production itself is nothing new. We've been doing it since silent films: a rotating backdrop behind a car, a painted canvas. Then it became matte paintings and rear-screen projection. Then, granting more flexibility, it became green screen. And after all that, this is what we're talking about today.

Nevertheless, virtual production will not replace green screens, because not everything is appropriate for this technology. And, considering how it may evolve, I would say the technology will reach a degree of refinement that only comes through a learning process. The people who manipulate and develop it are the people who will learn to get the most out of it.

On the other hand, this technology will change the paradigm. Creators will come to understand that nothing is destined for a single use anymore. At a given location you can use LIDAR to scan everything and store all the data, in case a sequel, or an immersive virtual reality experience, for example, is developed at some point.

In its growth, the sky is the limit, really. The possibilities offered by this technology are amazing. We can have the Colosseum for a scene without having to travel to Rome. We can recreate a romantic scene at sunset as many times as we want, because we have an element that puts you in that natural moment without any time limit. This also means everything is going to change from an environmental perspective: we are not going to destroy natural environments, move large crews, or build film sets only to tear them down.
