Nant Studios

Under the visionary leadership of Vice President of Virtual Production Gary Marshall, Nant Studios exemplifies the rapid evolution of the virtual production landscape. Founded in 2014 in Culver City, Los Angeles, as a traditional studio rental business, Nant Studios has since moved to the vanguard of virtual production, integrating cutting-edge technologies such as LED volumes and motion capture systems. This transformation was catalyzed by strategic collaborations with industry pioneers such as Epic Games and Animatrix, propelling Nant Studios into high-profile projects including “Avengers: Endgame” and “Gears of War.”

The studio’s foray into virtual production was significantly influenced by the innovative work on “The Mandalorian” at Manhattan Beach Studios, prompting Nant Studios to delve into LED technology and Unreal Engine capabilities. This led to the establishment of a state-of-the-art facility in El Segundo, equipped with a large LED volume, setting new standards for immersive content creation.

Nant Studios’ journey from its inception to becoming a hub for virtual production excellence is marked by continuous adaptation and embracing new technological frontiers. This ethos is reflected in their recent projects and ongoing advancements in virtual production techniques, promising a future where the boundaries of storytelling and content creation are endlessly expanded.

 

Gary Marshall, Vice President of Virtual Production at NantStudios

Can you tell us how and when Nant Studios was born, and what the ride has been like so far?

Nant Studios was conceived in 2014 in Culver City, Los Angeles, initially operating as a conventional studio rental facility. Our first venue offered a black box studio space along with production offices, catering primarily to local productions seeking high-quality, well-appointed facilities.

Our evolution began two years later when we formed a partnership with Animatrix, a Los Angeles-based performance motion capture company known for using the same advanced motion capture systems as seen in major productions like “Avatar” and “Planet of the Apes”. This collaboration transformed our Culver City location into a hub for cutting-edge motion capture projects, contributing to high-profile works such as “Avengers: Endgame” and the “Gears of War” video game series.

The pivotal moment for Nant Studios came in 2019, following our exposure to the virtual production techniques being tested for “The Mandalorian” at Manhattan Beach Studios. Recognizing the transformative potential of virtual production, we were eager to explore this technology further. Our ambition led to a collaboration with Epic Games, aimed at creating a space in Los Angeles to demonstrate and develop Unreal Engine capabilities.

Thanks to the support of Dr. Patrick Soon-Shiong and Michelle Soon-Shiong, who are deeply invested in healthcare and media respectively, we identified a former shoe factory in El Segundo, near Los Angeles International Airport, as the ideal site for our expansion. This new location was envisioned not just as a studio but as a pioneering facility equipped with a large LED wall and space for Epic Games to establish their Los Angeles lab.

By the summer of 2020, we formalized our plans and I joined Nant Studios, becoming one of the initial team members tasked with constructing our state-of-the-art LED volume amidst the challenges of the COVID-19 pandemic. Our El Segundo volume, comparable in size to the original “Mandalorian” set, features a dynamic, 360-degree environment with an LED ceiling, setting a new standard for virtual production.

Despite the uncertainties brought by the pandemic, our venture proved successful. In our inaugural year we hosted projects ranging from commercials and music videos to episodic content, culminating in the production of the “Westworld” season finale. That project, notably shot on film, added a layer of complexity and showcased the versatility and appeal of LED virtual production across budget levels and formats.

In essence, Nant Studios emerged from a traditional studio rental business to become a forefront of virtual production innovation, driven by a vision to redefine content creation and a commitment to embracing and developing new technologies.

 

Nant Studios’ collaborations with Epic Games and Animatrix have played pivotal roles in shaping the studio’s direction and capabilities. How have these partnerships influenced the evolution and technological advancements at Nant Studios?

The strategic collaborations of Nant Studios with Epic Games and Animatrix have been instrumental in shaping the studio’s evolution. These alliances have facilitated the development and real-world testing of virtual production features, enhancing the capabilities of the Unreal Engine used in professional production environments. The proximity of Epic’s lab to our studio enables a symbiotic relationship, allowing for the iteration of new features on their smaller LED wall before scaling tests on our larger production stage. This collaboration not only refines the software for practical application but also fosters industry education, with workshops for guilds and associations, demystifying LED and in-camera visual effects for industry professionals. This knowledge exchange is pivotal in cultivating talent within the niche virtual production field, a challenging but vital endeavour in the current competitive landscape.

 

Can you elaborate about a recent achievement for Nant Studios?

Reflecting on NantStudios’ journey up to the present day, particularly from 2020 to 2022, we reached a significant milestone around mid-April or May when NBC Universal approached us. Impressed by our work on stages in California and our strong partnership with Epic Games, they entrusted us with the ambitious project of constructing two massive LED volumes in Australia for an upcoming episodic show set to be filmed in Melbourne.

Stage one in Melbourne has since become the world’s largest LED volume, a colossal structure standing 40 feet tall, 100 feet wide, and 160 feet deep. This venture posed a complex technical challenge, requiring a sophisticated design to power the system and handle the immense computational and logistical demands. We completed construction in early 2023, followed by extensive testing.

The initial production slated to inaugurate these volumes was “Metropolis”, the Sam Esmail show for Apple TV and NBCUniversal. However, due to unforeseen strikes, the production went on hiatus, and the studio ultimately cancelled the project. Despite this setback, by late 2023 the Australian stages began attracting a variety of projects, bolstered by a skilled local team trained on our LA stages.

Looking ahead to 2024, we’re excited about constructing two new LED volumes in Los Angeles. One will be dedicated to automotive projects, featuring an innovative design with configurable modular wall pieces, allowing us to tailor the volume to the specific needs of each production. This flexibility represents the future of virtual production, adapting creatively to the demands of diverse projects.

 

What technologies are currently implemented in your studios for virtual production? Can you describe what services Nant Studios offers, and whether there is any difference among the El Segundo, Culver City (CA) and Melbourne facilities?

Every installation is similar, but El Segundo, our pioneering stage, features a horseshoe-shaped LED wall with a pixel pitch of 2.8mm and a static ceiling whose arrangement of LED diodes can produce a color-shift effect. In contrast, our Melbourne stage boasts advanced ceiling tiles with a revised LED array that minimizes this issue. Additionally, the Melbourne ceiling is modular and motorized, allowing for dynamic movement and ease of maintenance, significantly enhancing the flexibility and functionality of the space for various production needs.
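To give a rough sense of the data these walls represent, the back-of-the-envelope sketch below combines the Melbourne Stage One dimensions quoted earlier with pixel pitches of 2.8mm (as here) and 2.3mm (the automotive stage discussed later in the interview). It treats the wall as a flat rectangle and ignores panel bezels, so the figures are indicative only and not NantStudios’ specifications.

```python
# Back-of-the-envelope LED wall pixel count.
# Assumptions: the wall is treated as a flat rectangle (the real volumes
# are curved), and panel bezels and ceiling tiles are ignored.

FT_TO_MM = 304.8

def wall_pixels(width_ft: float, height_ft: float, pitch_mm: float) -> int:
    """Approximate pixel count for a wall of the given size and pixel pitch."""
    cols = (width_ft * FT_TO_MM) / pitch_mm
    rows = (height_ft * FT_TO_MM) / pitch_mm
    return int(cols * rows)

# Melbourne Stage One front footprint quoted in the interview:
# roughly 100 ft wide by 40 ft tall.
for pitch in (2.8, 2.3):
    px = wall_pixels(width_ft=100, height_ft=40, pitch_mm=pitch)
    print(f"{pitch} mm pitch: ~{px / 1e6:.0f} million pixels")
```

Even under these simplifying assumptions the counts land in the tens of millions of pixels, which hints at why powering and feeding such a volume is a serious computational and logistical exercise.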

 

For your productions, do you designate specific studios for certain types of projects, or are all your studios equipped to handle a variety of productions?

Our versatility across global studios allows us to tailor each space for specialized productions. In Melbourne, car commercials are often allocated to Stage 3, a U-shaped venue with 2.3mm pixel pitch panels that enhance the fine details on reflective surfaces like vehicles. Meanwhile, motion capture projects are centralized in Culver City, Los Angeles, benefiting from our dedicated mocap facilities. In El Segundo, we handle a variety of virtual production projects, utilizing our disguise system for 2D media playback when a full 3D environment isn’t necessary. As we progress, our new El Segundo stage is being custom-built to focus on vehicle processing, ensuring we meet the specific demands of each production with precision.

 

Could you provide a detailed overview of the Real-Time Art Department’s functions and its significance within the context of virtual production?

Our Real-Time Art Department, composed of six multi-skilled artists, focuses on real-time interactive content creation, post-visual effects, and traditional offline rendering. Originally, the team was established with a single specialist responsible for validating 3D content’s compatibility with our LED walls, ensuring frame rate consistency, color space accuracy, and animation sequencing. Recognizing the value in this, we expanded the department to offer content creation as a full-service solution, streamlining the process for clients.

As we evolve, we’re developing a post-visual effects team to initiate asset creation, leveraging USD and open-source standards for seamless integration across all stages of production. This collaborative approach allows for an efficient pipeline where assets are created, shared, and refined by our Real-Time Art Department, then potentially re-integrated with post VFX, culminating in a versatile and streamlined content development process that addresses both virtual production and post-production needs.

That kind of end-to-end integration is precisely our objective. We’re on the brink of initiating our first commercial project that will be produced entirely through this workflow. It will combine virtual production with fully CG-rendered shots, using V-Ray for offline rendering. It’s a comprehensive approach that blends various techniques into a cohesive hybrid workflow.
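As an illustration of what a USD-centred hand-off can look like in practice, here is a minimal pre-flight check that could be run on an asset before it moves between the real-time and offline sides of such a pipeline. It assumes the open-source USD Python bindings (the pxr module from usd-core) and the centimeter, Z-up conventions typical of Unreal Engine; the specific checks and thresholds are illustrative, not NantStudios’ actual validation rules.

```python
# Minimal USD pre-flight check before hand-off between real-time and
# offline pipelines. Requires the open-source USD Python bindings
# (pip install usd-core). The checks below are illustrative only.
from pxr import Usd, UsdGeom

def preflight(usd_path: str, expected_fps: float = 24.0) -> list[str]:
    issues = []
    stage = Usd.Stage.Open(usd_path)
    if stage is None:
        return [f"could not open {usd_path}"]

    # Frame-rate consistency: the stage's timeCodesPerSecond should match
    # the volume's playback rate.
    if stage.GetTimeCodesPerSecond() != expected_fps:
        issues.append(
            f"timeCodesPerSecond is {stage.GetTimeCodesPerSecond()}, "
            f"expected {expected_fps}"
        )

    # Unit and orientation conventions shared by DCC tools and the engine.
    if UsdGeom.GetStageMetersPerUnit(stage) != 0.01:  # centimeters
        issues.append("stage is not authored in centimeters")
    if UsdGeom.GetStageUpAxis(stage) != UsdGeom.Tokens.z:
        issues.append("up axis is not Z")

    # Animation range sanity check.
    if stage.GetStartTimeCode() > stage.GetEndTimeCode():
        issues.append("start timeCode is after end timeCode")

    return issues

if __name__ == "__main__":
    for problem in preflight("showroom.usd"):  # hypothetical asset path
        print("WARNING:", problem)
```

A shared layer of checks like this is one way a single asset can travel from the Real-Time Art Department to post VFX and back without each department re-validating conventions by hand.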

 

Could you elaborate on the key distinctions, from your perspective, between virtual production’s applications in advertising, narrative films, TV series, and possibly video games?

Certainly, the distinctions between virtual production in advertising, narrative films, TV series, and even video games primarily revolve around timelines and budgets. In advertising, there’s a noticeable agility in adopting new technologies. Creatives, agencies, and directors here have been among the early adopters, likely due to a mix of factors.

Advertising spans a broad spectrum of budget sizes, from high-budget commercials to more constrained projects like music videos. This diversity has facilitated a rapid embrace of virtual production, somewhat akin to the earlier shift from chemical film to digital video. There was significant resistance back then, especially within the traditional realms of feature films and episodic TV, rooted in a reluctance to deviate from established practices and the perceived threat to conventional roles and techniques.

Virtual production, in my view, mirrors this scenario. A portion of the narrative film industry views it as an additional layer of complexity. In advertising, however, the response is markedly different. Here, virtual production is seen as a revolutionary tool that offers unprecedented versatility: imagine shooting a car commercial in multiple global locations in a single day without leaving the studio. This level of efficiency and creative freedom is particularly appealing in advertising, where turnaround times are much shorter than for films or TV series.

The mindset in advertising is inherently more experimental and forward-looking, compared to the cautious and tradition-bound approach often seen in film and TV production. Advertisements are typically produced over a span of six to eight weeks, demanding a fast-paced and flexible workflow that virtual production can adeptly support.

In essence, the adoption of virtual production technologies has been warmly welcomed in the advertising sector, driven by the need for efficiency, innovation, and the ability to rapidly iterate creative concepts. This contrasts with the more measured and hesitant reception in narrative film and television, where the weight of tradition and concerns over the implications of new technologies on established practices and employment loom larger.

 

How do you incorporate In-Camera VFX (ICVFX) technology into your production pipeline, and what benefits does this integration offer for high-profile projects such as “Avengers,” “Game of Thrones,” or “Star Wars Jedi”?

Integrating In-Camera VFX (ICVFX) into our production pipeline fundamentally revolves around transforming our approach to asset and content creation, emphasizing extensive preplanning and preparation. The essence of employing LED technology and virtual production techniques lies in having all necessary digital assets prepared and optimized for this environment well in advance.

Our engagement with clients starts from the ground up, guiding them meticulously through each phase, from initial conceptual discussions to pinpointing precisely what elements of their project are suited for virtual production and which might not benefit as much. This discernment is crucial, as it’s as important to recognize what might not work as it is to identify what will.

For content creation, we offer our expertise to either take the helm or, if the client already has preferred content creators, we ensure they’re quickly assimilated into our specialized workflow. This involves a comprehensive set of guidelines and best practices developed by our real-time art department, tailored to ensure seamless integration of ICVFX and preparation for any post-production needs.

The transformative aspect of adopting this approach is not just in the immediate benefits to the production process itself, such as increased efficiency and flexibility, but also in the broader implications for asset utilization. Once developed, these assets can be repurposed across a variety of platforms, from print media to immersive AR/VR experiences, enhancing brand engagement and extending the lifecycle of the content far beyond its initial use.

A prime example of this is the digital showroom we developed for Toyota. Traditionally, each new commercial required constructing or re-dressing a physical showroom, a process both costly and time-consuming. By creating a digitally reconstructed version of their showroom, complete with interchangeable ‘skins’ for different campaigns, we demonstrated a significant shift towards more sustainable, efficient production practices. This not only streamlined their commercial production process but also opened their eyes to the potential for asset reuse in creative and cost-effective ways.

Our presentation to Toyota and their agency, Saatchi, was a pivotal moment, showcasing the tangible benefits of virtual production. By leveraging a pre-existing asset – in this case, a photogrammetric scan of their showroom – and adapting it for virtual production, we illustrated how to achieve greater efficiency and cost savings in commercial production. This approach, we believe, is a testament to the transformative power of ICVFX technology in not just enhancing production workflows but in redefining the potential for creative and efficient content creation across the board.

 

When working on “Avatar,” what were some of the challenges you encountered, and how did you address them?

Working on the original “Avatar” in 2009 was a pioneering experience in virtual production for me. At that time, the concept of virtual production was in its nascent stages, and “Avatar” served as a groundbreaking project that leveraged technologies such as virtual cameras, Simulcam, and an extensive use of performance capture.

My primary focus was on the motion capture aspect, ensuring the seamless transition of captured data into the animation pipeline. This involved developing methodologies to manage and sanitize the influx of scene files from the motion capture stage. It was crucial to maintain order amidst the hectic pace of production, where file naming and scene management could easily become chaotic.

The challenge lay in untangling the complex web of virtual production scene files and ensuring they were properly formatted for Weta Digital’s animation pipeline. This task required a blend of technical acumen and creative problem-solving to ensure the integrity of the data being funneled into the subsequent stages of production.

Following my work on “Avatar,” I returned to London and contributed to “Gravity” at Framestore. This project was another significant milestone in my career, particularly in the realm of LED virtual production. For “Gravity,” we constructed a light box that utilized LED panels to project real-time lighting and reflections onto the actors. This early adoption of LED technology was primarily for lighting purposes, as the panels at the time weren’t advanced enough to be used as direct backdrops for in-camera capture, a technique that has become a staple in today’s ICVFX practices.

These experiences laid the groundwork for the evolution of ICVFX technology. The journey from the pioneering days on “Avatar” to the sophisticated use of LED in “Gravity” and beyond reflects a decade-long evolution of virtual production techniques. It was a gradual but inevitable progression towards the immersive, versatile ICVFX capabilities we utilize today. Each project posed its unique challenges, but overcoming them contributed to the rich tapestry of innovation that defines our industry’s current state.

 

Could you share insights on any advancements in CGI, technology, control systems, or robotics that have significantly influenced your work?

Of course, the integration of game engines into virtual production and the blurring lines between real-time and offline rendering have been pivotal in shaping our current workflows. A decade ago, tools like MotionBuilder were at the forefront due to their ability to offer real-time playback, which was revolutionary for visualizing performances captured in motion suits. However, the visual quality, particularly in terms of lighting, shading, and texturing, was rather rudimentary compared to the detailed output achieved through offline rendering, as exemplified by the original “Avatar” film.

Fast forward to today, the evolution in real-time rendering technologies, notably with Unreal Engine 5, has significantly narrowed the gap between what we see in real-time on set and the final rendered output. Innovations like Nanite and Lumen within Unreal Engine have pushed the boundaries of visual fidelity, making real-time rendered frames nearly indistinguishable from their offline rendered counterparts. This leap in technology enables us to produce photorealistic visuals in real-time, a feat that was unimaginable just a few years ago.

Moreover, the advancements in virtual reality, spearheaded by platforms like Oculus Rift, have further propelled the capabilities of real-time graphics, enhancing the immersive experience and overall quality of virtual production. Another critical component in this evolution is motion capture technology, not only for tracking human performances but also for the precise tracking of cameras within LED volumes. The accuracy and low latency of these tracking systems are crucial for maintaining the illusion of reality within the virtual environment.

These technological advancements, each significant in its own right, have converged to create a synergistic effect that has transformed the landscape of virtual production. It’s a testament to how far we’ve come in the field, where the tools and techniques at our disposal now allow for an unprecedented level of realism and efficiency in content creation.

 

Could you discuss any current limitations in performance capture or related technologies that you wish could be overcome, and how would you address them if given the opportunity?

Yes, definitely; one aspect I’d highlight as a current challenge within the realm of performance capture and virtual production is the considerable time and effort required to craft high-quality content for LED walls. It’s not so much a limitation as it is an area ripe for innovation. We’re actively exploring the potential of generative AI and machine learning to streamline this process. Interestingly, the healthcare arm of our company is making significant strides in AI for medical imaging, which presents a unique opportunity for cross-disciplinary collaboration to enhance virtual production.

The ultimate vision, or “holy grail,” if you will, is to enable creatives to interact with LED stages in a more intuitive, real-time manner, akin to the concept of a holodeck. Imagine being able to articulate a scene—say, a grassy field with a river, or a snow-covered landscape with mountains—and having it rendered in high fidelity on demand. While real-time rendering has advanced significantly, content creation remains a premeditated process, and that’s the gap we’re looking to bridge.

However, I must emphasize the irreplaceable value of human creativity in this equation. The integration of AI and procedural generation tools like Houdini aims not to supplant artists but to augment their capabilities, allowing them to achieve a substantial portion of the work efficiently while reserving their expertise for the crucial final touches that imbue scenes with life and authenticity.

On the technical side, reducing system latency is another priority. Despite the strides in GPU performance and motion capture technology, we still face a latency of about seven or eight frames in an ICVFX LED volume. Optimizing this to achieve a latency of merely one or two frames would significantly enhance the immediacy and responsiveness of virtual environments, making the virtual production process even more seamless and intuitive for all involved.
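For context, the small conversion below turns those frame counts into wall-clock delay at common acquisition frame rates. It is straightforward arithmetic rather than a measurement of any specific system.

```python
# Convert end-to-end latency expressed in frames into milliseconds.
# Purely illustrative arithmetic; not measurements of any particular volume.

def latency_ms(frames: int, fps: float) -> float:
    return frames / fps * 1000.0

for fps in (24.0, 30.0, 60.0):
    current = latency_ms(8, fps)  # roughly today's 7-8 frame pipeline
    target = latency_ms(2, fps)   # the 1-2 frame goal mentioned above
    print(f"{fps:>4} fps: ~{current:.0f} ms today vs ~{target:.0f} ms target")
```

At 24 fps, eight frames is roughly a third of a second between a camera move and the wall's response, which is why trimming the pipeline to one or two frames matters so much for operators working inside the volume.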

 

What can you tell us about “Viva Las Vengeance”?

“Viva Las Vengeance” was indeed one of our earlier ventures into utilizing LED volume technology in a creative project. Zack Snyder had shot “Army of the Dead” through a traditional filmmaking process; his move toward LED volumes began when Epic Games, in collaboration with us at Nant Studios, proposed an innovative experiment. They suggested taking a CGI asset from “Army of the Dead”, rendered by Framestore, and adapting it for real-time use in Unreal Engine on our LED stage in El Segundo.

Snyder, intrigued by the potential, embraced the opportunity, which led to a day of creative exploration on our stage. This experiment sparked the idea to incorporate a taco truck from the movie into a novel context. The concept evolved into creating content for “Viva Las Vengeance,” using the LED volume to craft a unique commercial that tied into the broader “Army of the Dead” universe, including its VR experience component.

Collaboration with Framestore was key in this process, as we worked closely to adapt their assets for real-time rendering. My personal history with Framestore, having been a part of their team for seven years, facilitated this collaboration, reinforcing the project’s creative synergy.

This venture into using LED volumes for “Viva Las Vengeance” not only delivered engaging content but also served as a precursor to our subsequent project for the Resorts World Hotel in Las Vegas. This commercial featured A-list celebrities like Katy Perry and Celine Dion and showcased the efficiency and flexibility of virtual production. By integrating practical set pieces with digital backdrops, we managed to accommodate the tight schedules of multiple celebrities in a single day, highlighting the importance of thorough pre-production and the seamless integration of virtual production techniques with traditional filmmaking practices.

These experiences underscored the transformative potential of LED volume technology in the film and advertising industries, offering a glimpse into the future of content creation and production efficiency.

 

As we conclude this interview, could you share insights into any upcoming projects or groundbreaking technologies you are currently developing or planning to introduce?

I’m excited to share some of the forward-thinking developments we’re undertaking at Nant Studios. One of the groundbreaking shifts we’re embracing involves reimagining the construction of LED walls. We’re moving towards a modular and flexible design philosophy, which stands in stark contrast to the traditional approach of installing static, immovable LED volumes within soundstages. This innovation allows us to tailor the LED setup to the specific needs of each project, offering unparalleled versatility and efficiency in our use of studio space.

Parallel to this, we’re pioneering a new vehicle motion base technology, an advancement from the conventional gimbal systems. This development is geared towards accommodating a wide range of vehicles by adjusting the wheelbase to suit the specific requirements of any given production. The integration of this hardware with real-time game engines like Unreal Engine will enable a seamless interaction between the motion base and the virtual environments, significantly enhancing the realism and dynamic possibilities for automotive commercials and action-packed narratives with intricate car chase sequences.

This initiative is supported by our collaboration with General Lift, a company with a four-decade legacy in producing motion control equipment. Since acquiring General Lift, we’ve been leveraging their expertise to advance our motion base technology, ensuring it meets the high standards of today’s film and commercial productions.
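As a purely hypothetical illustration of the kind of data hand-off such an integration involves, the sketch below streams per-frame motion-base pose samples over UDP to a listener inside a real-time engine. The packet layout, port, and pose source are invented for the example; they are not NantStudios’ or General Lift’s actual protocol, and production systems would typically rely on dedicated tracking and motion-control interfaces.

```python
# Hypothetical sketch: streaming 6-DoF motion-base pose samples over UDP
# to a real-time engine listener. Packet layout, port, and pose values are
# illustrative only.
import math
import socket
import struct
import time

ENGINE_ADDR = ("127.0.0.1", 54321)   # hypothetical listener in the engine
PACKET_FMT = "<d6f"                  # timestamp + x, y, z, roll, pitch, yaw

def sample_pose(t: float) -> tuple[float, ...]:
    """Fake pose generator standing in for the motion-base encoders."""
    return (0.0, 0.0, 0.1 * math.sin(t),        # x, y, z in metres
            0.0, 2.0 * math.sin(0.5 * t), 0.0)  # roll, pitch, yaw in degrees

def stream(rate_hz: float = 60.0, duration_s: float = 2.0) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    start = time.time()
    while (now := time.time()) - start < duration_s:
        packet = struct.pack(PACKET_FMT, now, *sample_pose(now - start))
        sock.sendto(packet, ENGINE_ADDR)
        time.sleep(1.0 / rate_hz)

if __name__ == "__main__":
    stream()
```

The point of the sketch is simply that once the base reports its pose every frame, the virtual environment can react in lockstep, which is what makes car chases and other dynamic sequences feel coherent inside the volume.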

Moreover, we’re dedicated to refining our content creation pipelines, focusing on establishing a next-generation hybrid workflow with Universal Scene Description (USD) at its core. This endeavor is part of our broader commitment to continuous research and development in AI, machine learning, and generative AI technologies.

At Nant Studios, innovation is the cornerstone of our philosophy. We’re deeply invested in research, development, and experimentation, constantly exploring new frontiers to push the boundaries of what’s possible in virtual production and beyond.

 

About Nant Studios

NantStudios is a state-of-the-art, full-service production ecosystem comprising traditional, broadcast and virtual production stages. The company is based in Los Angeles with two campuses, in Culver City and El Segundo. Its virtual production stages are serviced by an expert team with decades of virtual production, visual effects, and engineering experience. NantStudios’ goal is to democratize the virtual production workflow and make it accessible to projects of any scale, while innovating with R&D in technologies that streamline the process.
