Game engines, AI and VR in 3D Animation
Dada! Animation is a studio dedicated to producing bold animation, unafraid to explore unknown terrain. The French company has focused on bringing the promise of new technologies such as virtual reality, video game graphics engines and AI into traditional animation workflows. Quentin Auger, Head of Innovation at Dada! Animation, tells us that the way international animation works has stayed the same for 20 years, but is starting to change now. The process is exciting, and there is still a lot to explore, experiment with and invent.
Dada! Animation recently changed its identity. What is the reason for this change?
Two years ago, seven people, including myself, re-founded an existing company. It was called Hue Dada! Productions. It’s a name we loved, very French and powerful, and the identity was already formed. So we decided to keep it.
Now we have decided to change our identity to get closer to our goal. We have kept Dada! but dropped Hue, a French word that is difficult to pronounce. Our new identity is closer to what we really are: a mix between a technology lab and an animation studio specialized in CGI that produces content for all audiences.
Has this change in identity been matched by a change in workflows?
Our DNA is to be versatile in workflows and to be able to accommodate any capacity. But the truth is that the animation industry hasn’t changed much in 20 years. The co-founders of this company have that experience of the industry: we started in the 1990s and, although the technology has evolved and become more powerful, the ways of working have not changed over time.
However, it is only now that they are beginning to change. We are experiencing a technological acceleration that is blowing up traditional ways of doing things. It’s really exciting. So many industries are colliding at once (video games, architecture, design, manufacturing and, of course, VFX and animation) that the landscape is really changing. From now on, workflows are going to change.
Two interesting questions arise from your answer. The first concerns the long immobility of workflows, the second their current evolution. Let’s start at the beginning: why haven’t workflows changed in such a long time?
This work is artistic, but it has a large craft component. You have an idea you want to express, but you depend entirely on the tools available to realize it. So in our particular case, the craft is constrained by the technology.
In our industry everything depends on time and money. Clearly, big budgets are used to buy time. But the interesting thing is that the more time they have, the closer the big animation studios get to artisanal work. It’s like haute couture: the finest designer garments are handmade, not mass-produced.
Therefore, the bigger the studio and the more ambitious the project, the more it is developed in this artisanal way or, as we say, by pushing the pixel by hand. Smaller production companies with smaller budgets can’t afford this way of working. We try to find ways to automate processes, prepare for the future and rationalize resources.
This is the main reason why the technology hasn’t changed in all this time: by tending to do everything by hand, we don’t need to change the tools. The tools were established 20 years ago, coming out of the architecture industry, and they remain today. Curiously, in those days there were more tools and more different ways of doing things than there are now. For example, between “Toy Story 1” and “Toy Story 2”, Pixar had to change its pipeline completely, because the tools that settled in and became the standard were not the ones used to make the first movie. Those came from the mathematical models used by engineers, and in the end polygons won out. I understand this change very well: I come from product engineering, where polygons are forbidden because they are too imprecise. In animation, however, the technique works because the result looks good and is easier to apply. This is the model that ended up being established, and why Pixar had to change its pipeline.
It became an optimized industry and everyone was doing the same thing. Schools also emerged to help sediment this knowledge and these practices.
So what is happening now? Why is everything changing?
The main thing, I think, is that many different industries are sharing more and more common ground every day. In particular, another way of doing things is coming out of the video game world. One essential tool, the game graphics engine, is changing things. The one we hear most about in our field is Epic’s Unreal Engine, but we also work a lot with Unity, especially for virtual reality.
Other tools have also been developed that, as I say, come from other industries. For example, we rely on Adobe tools that allow us to animate 2D projects in real time. And virtual reality solutions are also giving us the possibility of changing work models, thanks to technological evolution and the collision of these worlds.
My job as head of innovation at Dada! Animation is to detect, test and propose new ways of doing things. Also, part of my job is to team up with research centers to work on projects that make us improve or experiment with different tools. It’s part of our company’s R&D work.
We will get into all these new tools shortly, but first, for context: what are the established tools in the industry?
First of all, we all use Autodesk Maya; it is used in 95% of VFX and animation companies. Blender is now appearing, but it is still the same kind of tool as Maya, only with a different user experience and business model: it is free and open source. About fifteen years ago Houdini came out, which specializes in VFX.
All our workflows have always been based on these tools. You model and rig in the same way. I myself have been rigging for a long time and I can assure you that 20 years ago a standard was developed that is still valid today.
To give you an example, 17 years ago I worked on an American show whose workflow was based on the tools I have mentioned. Maya fell short for the tedious, meticulous tasks of enveloping and rigging, so I developed my own tool to work around that shortcoming and shared it with a few colleagues. Well, it turns out that a few weeks ago, during the last edition of the Annecy Festival, a professional who was developing a new rigging tool told me that rigging supervisors from bigger companies like Mikros Image and Superprod had asked him to integrate the same features into his software. Many tools like that have naturally emerged to fill the program’s gaps, but I never thought mine would be so widespread, and sadly still needed.
How has the world of video games interacted with the world of animation?
It happens the other way around too: they use some of the tools we use in animation, because they also have to build beautiful characters and environments. But they don’t use the whole workflow, because their constraints are different. We want to make it beautiful and control the art direction whatever the cost per frame. In games, beauty matters too, but the main concerns are speed and frame rate, and that comes at the expense of artistic control.
In our world, the real-time game engines I mentioned earlier, Unity and Unreal, have been adopted. These graphics engines are optimized to deliver many images per second at the cost of limiting polygon counts, richness and artistic control. But now, thanks to technological evolution, they can offer both speed and artistic richness. That’s why we are bringing them into our workflows: we get the same results much faster, and we can iterate a lot more.
How do you use virtual reality in your work?
We use virtual reality techniques as a tool to create content. VR is great for manipulating data in 3D. I always give the same example: if you wanted to model a piglet’s tail with traditional techniques, even with the best tablet in the world, you would have to work through 2D interfaces, and you couldn’t do it easily. With virtual reality tools, you can model it in a single gesture, and the process goes from taking hours to a few minutes. Very accessible tools built on Unity or Unreal Engine work very well here: Oculus Quill, Medium, Tilt Brush or Gravity Sketch. They let you “draw” in 3D and export a file compatible with the traditional 3D workflow.
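That last export step is simpler than it sounds, because common interchange formats such as Wavefront OBJ are plain text. As a purely illustrative sketch (not any specific tool’s exporter), this is how geometry sketched in VR, reduced to vertices and triangles, could be written out for a package like Maya or Blender:

```python
# Minimal, illustrative OBJ writer: turns a list of 3D points and
# triangle indices (as a VR sketching tool might produce) into
# Wavefront .obj text that Maya or Blender can import.

def write_obj(vertices, triangles):
    """vertices: list of (x, y, z); triangles: list of (i, j, k), 0-based."""
    lines = ["# exported from a VR sketch (illustrative)"]
    for x, y, z in vertices:
        lines.append(f"v {x:.6f} {y:.6f} {z:.6f}")
    for i, j, k in triangles:
        # OBJ face indices are 1-based
        lines.append(f"f {i + 1} {j + 1} {k + 1}")
    return "\n".join(lines) + "\n"

# A single triangle as a smoke test
obj_text = write_obj([(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])
print(obj_text)
```

Real VR sketching tools export far richer data (strokes, colors, normals), but the principle is the same: flatten the gesture into a standard mesh format the rest of the pipeline already understands.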
We use this technology on top of the graphics engines. Apart from being faster and having more processing power, these engines give us one capability that is very important in these environments: they are good for collaborative work. When you bring animators into virtual reality for scouting, the technology has to accommodate multiple people in the same environment. Video game engines are natively multiplayer, so they allow for collaborative work.
The truth is that, as we have seen, the boundaries between these technologies are blurring, and tools from different industries —such as video games, animation, VFX or industrial design, but also web development— are increasingly coming together. But we have to be careful, because the terrain is slippery and the devil is in the details. What remains now is to test these tools in different workflows. That is how we will know whether we can build them into our pipelines or whether they will only help us with specific tasks.
What is your workflow like, and how did you introduce these tools into it?
The traditional way of working starts with a script, then a drawn storyboard and a 2D animatic, an animated version of the storyboard with placeholder sound. In parallel, we design the environments and the characters; sometimes we use the environments already created to finish the storyboard. Then all the 3D assets are built, and the next step is what we call the layout: the translation of the 2D storyboard or animatic into a 3D scene. The result is a rough, unfinished 3D animation. Then we develop all the stages of the animation while, simultaneously, the assets are textured and shaded and, once the animation is finished, we add the lighting, the shadows and all the makeup. We render and composite it. In a separate but parallel process, all the sound work is done.
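The dependencies in that pipeline, where some stages are sequential and others (like asset work and sound) run in parallel, can be sketched as a small dependency graph. The stage names below paraphrase the description above and are illustrative, not studio software:

```python
# Illustrative sketch of the traditional pipeline described above as a
# dependency graph, topologically sorted to yield a valid production order.
from graphlib import TopologicalSorter

stages = {
    "script": [],
    "storyboard": ["script"],
    "animatic": ["storyboard"],
    "design_assets": ["script"],           # parallel with the storyboard
    "build_3d_assets": ["design_assets"],
    "layout_3d": ["animatic", "build_3d_assets"],  # 2D animatic translated to 3D
    "animation": ["layout_3d"],
    "texture_shade": ["build_3d_assets"],  # simultaneous with animation
    "lighting": ["animation", "texture_shade"],
    "render_composite": ["lighting"],
    "sound": ["animatic"],                 # separate, parallel track
    "final_mix": ["render_composite", "sound"],
}

order = list(TopologicalSorter(stages).static_order())
print(order)
```

Seen this way, the point of virtual production tools is that they collapse some of these edges: layout, camera work and proportion checks happen in the same real-time scene instead of waiting on upstream stages.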
With the virtual production tools provided by the video game engines, what we do is create the environments for the storyboards and the “cameras”. These “cameras” are virtualized viewing positions that capture the artist’s vision. With these capabilities, the animator explores as they would on a movie set, and the storyboarder just has to use their skills to compose whatever comes to mind.
This way of working is much better, because the immersive nature of the process lets the artist perceive and feel the 3D environment, its distances and volumes, far better. We have tested these techniques on the same show with the same team: the first season was done without VR and the second with it. It was crazy, it was incredible what we gained. In fact, we found far fewer errors in the animation than with the traditional method.
When you perceive the space immersively you can check everything at real size. You can see if assets are out of proportion, which lets us react to these faults much earlier than we normally would. Having this kind of technology, which allows real-time reaction, has made a huge impact on our industry.
What projects are you working on with these techniques?
We are developing “Captain Tone-up”. We are also commercializing a preschool TV series called “The Nebulons”. Another one I would like to highlight is “Mekka Nikki”, an adaptation of the sci-fi comic book for teenagers. Finally, a project that we love very much and that I would also associate with this topic is “French Patisserie”, inspired by Gaël Clavière, the Prime Minister’s chef; well, actually the chef who has served several prime ministers.
They are all 3D, sometimes with a very 2D-style look. The projects I mentioned have mostly been rendered in Unity, although we are doing some tests with Unreal. In addition to fast rendering and the collaboration it allows, the engine can also output formats for platforms outside traditional broadcast, such as YouTube or TikTok.
On the other hand, we also provide services for others. For example, right now we are developing a VR experience called Lady Liberty, about the construction of the Statue of Liberty in Paris. The episode recounts the visit the writer Victor Hugo paid to Bartholdi’s workshop. We created the design, modeling and animation of the characters, the lighting and the construction of the scenery.
Finally, I would like to highlight a project that lets us explore volumetric capture of moving actors in virtual reality environments. It is a documentary about the Western Europeans of the 18th and 19th centuries who emigrated to different parts of the world, called “They Were Millions”. We have used Unreal Engine to quickly create large environments such as cities, and also to capture the characters’ movement. The interesting thing about this project is that the real-time Unreal Engine render is then heavily modified by AI. We used AI in two ways: first to transfer the style of a specific artist onto the images, and second to animate actual portraits of that time with deepfake-like tools.
What AI tool do you rely on to perform these tasks?
For the deepfake-type shots, we collaborate with a company that specializes in them; they do the first layer of work and then we modify what we need to bring it closer to our own style. We transfer the style with tools that were developed for this task. AI or machine learning tools are like algorithms dealing with blurry statistics, and you don’t want to confuse them too much when you interact with them. We are learning how to handle the right data.
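The specific style-transfer tools are not named, but the “blurry statistics” intuition can be shown with a deliberately tiny example. The simplest statistical “style transfer” (assumed here for illustration, not the studio’s actual pipeline) just shifts an image’s per-channel color statistics toward a reference artist’s frame:

```python
# Toy illustration of statistics-based style transfer (Reinhard-style
# color transfer): move the content image's per-channel mean and spread
# toward the style image's. Real ML style transfer is far more
# sophisticated; this only demonstrates the statistical intuition.
import numpy as np

def color_transfer(content, style):
    """content, style: float arrays of shape (H, W, 3)."""
    out = np.empty_like(content)
    for c in range(3):
        c_mean, c_std = content[..., c].mean(), content[..., c].std() + 1e-8
        s_mean, s_std = style[..., c].mean(), style[..., c].std() + 1e-8
        # normalize the content channel, then rescale to the style's statistics
        out[..., c] = (content[..., c] - c_mean) / c_std * s_std + s_mean
    return out

rng = np.random.default_rng(0)
content = rng.random((4, 4, 3))          # stand-in for a rendered frame
style = rng.random((4, 4, 3)) * 0.5 + 0.25  # stand-in for an artist's reference
result = color_transfer(content, style)
# each output channel now matches the style's mean almost exactly
print(np.allclose(result.mean(axis=(0, 1)), style.mean(axis=(0, 1))))
```

Neural style transfer does something analogous but on learned feature statistics rather than raw pixels, which is why feeding such systems clean, consistent data matters so much.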
How much, and in what ways, do the tools we have been discussing need to evolve to better fit the animation industry?
The short answer is: a lot. Everything has yet to be invented. So a big part of my job is to get the message out that we can experiment together on methods that already exist. Industry also has a part to play in the evolution of these techniques.
We, for example, try to take an active part in the whole process. We talk to tool developers to share information and impressions; something we insist on a lot is the incorporation of animation and VFX standards into video game engines.
The process is not only industrial; it is also cultural and educational. For example, French animation schools offer five-year training programs, but in one year all the technology has changed. We are part of a task force of the French producers’ union (CPNEF/AV) that has conducted an audit on how to deal with technological evolution and its impact on schools. We are trying to help create programs that are more flexible and better able to absorb change.
We are also part of an institution —we French love to create institutions, especially cultural ones— that tries to collect best practices in cinema, the CST. It was created after WWII and just a few months ago they created a department on immersive technology and real-time technologies. We are part of that department.
It is only by exploring, experimenting and inventing that we will evolve the technology to the point of opening up new, unexplored markets.