Riot Games: Project Stryker. Riot’s grand project to bring esports to the whole world

League of Legends is a game that has created a huge community around the world. Its players number in the millions and its fans in the billions. This community is hungry for content, and Riot Games knows it. How can the company serve and offer content 24 hours a day, 7 days a week, to such a huge number of users? And, adding to the challenge, how do you do it when that community multiplies exponentially with the addition of games like Valorant or League of Legends: Wild Rift?

Scott Adametz, Director of Infrastructure Engineering, Esports at Riot Games. Copyright Riot Games.

This is how Project Stryker was born: an idea conceived to bring scalability, simultaneity and professional-grade features to events taking place in any corner of the world, remotely and without the need for on-site hardware.

We spoke with Scott Adametz, Director of Infrastructure Engineering, Esports at Riot Games, about each and every capability of this incredible project. Below, all the answers.

 

We have seen that the esports industry is here to stay. In fact, it has only grown. Has esports broadcasting grown in parallel with the growth of esports itself?

Funnily enough, I came from traditional sports, so I’m rather new to the esports industry. I’ve been here at Riot almost five years, and when I first joined I realized how many parallels there are between traditional sports and esports. While the content is completely different, the backend production techniques and the goals of telling the stories are the same.

With that being said, I think esports has come a long way on its own, figuring things out as it went. This is something that’s pretty unique. Some of the most amazing innovations have actually come out of esports. That is why I’m still part of this industry: this is where the innovation is happening.

 

Why does the esports industry have this innovative power?

Just think for a moment about the purpose of this whole entity. Project Stryker is here to delight fans, to bring joy to billions of fans around the world. Everything we do is about delivering an incredible experience to our fans, viewers and players. That’s Riot’s mantra. When you have that as your north star, you find new and innovative ways to deliver new content experiences, new ways to produce content that are more efficient and allow us to create a lot more of this content. It’s not about innovation for innovation’s sake, and it’s not just about the broadcast and the technology behind the scenes. It’s about why that technology needs to exist for incredible experiences to be produced.

 

What is the origin of Project Stryker?

This was years ago, before the pandemic, if we can all remember what that was like. I remember being at our Los Angeles campus [Riot Games Los Angeles Campus], watching two engineers on the team that I supported playing a game that had not been released. It was a pre-production version of what would become Valorant.

They were completely invested in this game. They had been playing for five or six hours. I was curious about what they were playing and why they were so invested in spending so much time on it. I watched them play for a bit and I could see how emotionally invested they were, how excited they were and, frankly, how good the game was. At that stage, it got me thinking: “This is probably going to be a success.”

Riot had grown organically around a single game: League of Legends. And after that, I kept thinking: “What happens if this game becomes even half as big as League? How would we create esports content around that? How would we delight fans with this additional game?” As an analogy, it’s as if FIFA suddenly added golf to its repertoire. It was groundbreaking. I went deep and came up with an idea of how we might service an additional title with all of the esports components behind it, including the broadcast production needs. I put together a pitch and went through the internal channels to say: “We need all this capacity to produce content around this new game.” The rest is history, because it was approved very quickly and we began working on what would become Project Stryker.

What we have here is the first of three production facilities around the world whose purpose is to allow us to remotely and centrally produce content for any number of titles, any number of sports, from any number of regions, in an efficient way.

 

So these facilities are there to produce content related to several Riot games, aren’t they? Project Stryker won’t host any events, right?

Exactly, these facilities won’t host the actual tournaments. This facility in Dublin [the first one built] is a content factory that allows us to have amazing events around the world. What we’re trying to do is become the backend service that allows all of our competitions to happen more efficiently and, to be honest, more cost-effectively.

 

What is this facility for?

This facility behind me is essentially a place to do as many simultaneous events as possible. It isn’t a big sound stage with merch booths and hotdogs and popcorn. That’s not what this is. This is the video control and audio control rooms that would be behind the scenes, producing any number of content versions in every language possible. For example, a Brazilian team in São Paulo can use infrastructure in Dublin to produce events there. Their equipment remains the same. They haven’t moved; they’re still in their control room. All we have done is give them access to very powerful equipment behind the scenes, through a network, so they can produce the same program with more features and higher production quality, without having to buy, build and maintain that equipment in São Paulo.

There are six production control rooms and six audio control rooms in this place just to start. And there is room to expand. That’s the purpose of this facility.

 

Stryker Dublin – Technical Operations Center. Copyright Riot Games.

 

How do you achieve this capacity? I mean remote access to powerful technology, and simultaneity.

It all starts with the network. Riot has what’s called Riot Direct. This has been one of the things that has set Riot apart from other game studios, and it was birthed out of necessity. It is a global ISP on the level of any of the big network providers. We developed it in order to support the number of players playing League of Legends and give them all the best experience: we needed to take that traffic onto our backbone as early and as close to the player as possible.

That network is one of the most undervalued assets in Riot’s repertoire. When we wanted to add Valorant we didn’t have to start from scratch; we were able to leverage what we had learned from running a massive game like League of Legends. The funny thing is that we then approached that team about adding video. This is something not many people know, but Riot has been doing remote productions for all of its big worldwide events for seven years. By putting that traffic onto Riot Direct as early as possible, we can make sure that the video and audio get to wherever we’re producing the event from, to the production control room, and then send the final signals back to the site, or to YouTube or Twitch, or to any of our distribution partners.

 

What are the technical characteristics of Riot Direct?

It’s undersea fiber all over the world connecting continents. We have points of presence at all the major carrier hubs around the world, where we pick up connections from local ISPs and say, “Hey, if you need access to any of our games, we can give you access on our backbone; throw that traffic to us as fast as possible.” That does two things. It gets the traffic off that operator’s network, so they can spend their time serving their own customers, and it allows us to make sure that the traffic is guaranteed end-to-end.

Now the fun part is that the type of traffic generated by the games is very similar to the type of traffic needed for video. So, naturally, we were already using a network designed for this.

 

With the network, everything works remotely, right? What is the workflow? Do you receive the signals from the point where they originate and process them at this facility?

Here’s where it gets a little different. Let’s pick an example that’s coming up: MSI, the Mid-Season Invitational, League’s middle-of-the-year event. We’ll be picking those signals up in Busan, in Korea, and bringing them to the nearest Riot Direct point of presence to get them onto the network. From there, they don’t actually come to this building; they go to a data center. What we did differently about building this facility, to force ourselves to think differently and push the boundaries of what was technically possible, is that there is no equipment room in this building. We do not have video signals in the building. The idea is not for this facility to be where all of the equipment is; let’s put that in a data center, where it makes sense.

We do that for a couple of reasons. One, it forced us not to fall back on the old SDI models, because the data center happens to be far enough away from us. That makes us think about how we could give these control rooms to the regions. The other reason is that this gives us the opportunity to expand production without having to build highly technical and complex data centers or equipment rooms in each of our regions around the world. They still get the benefits, but without any of the added complexity.
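To make the workflow concrete, here is a toy sketch of the contribution path just described; apart from Busan, Riot Direct and the Dublin-area data center, every name is an illustrative placeholder.

```python
# Illustrative contribution path, as described in the interview: the venue
# signals enter Riot Direct as early as possible, processing gear lives in a
# data center, and the Dublin control rooms only operate it remotely.

SIGNAL_PATH = [
    "venue (Busan)",                         # cameras and game feeds on site
    "nearest Riot Direct point of presence", # traffic joins the backbone early
    "data center serving Dublin",            # all equipment lives here, not in the building
    "Stryker Dublin control room",           # operators drive remote gear
    "distribution (YouTube / Twitch / partners)",
]

for hop_a, hop_b in zip(SIGNAL_PATH, SIGNAL_PATH[1:]):
    print(f"{hop_a} -> {hop_b}")
```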

 

Stryker Dublin – Production Room. Copyright Riot Games.

 

How did you scale up all this infrastructure without depending on physical facilities?

This is the first of three facilities. The idea here is not to build one giant facility to service the world. We’d actually tried that in the past: Los Angeles is the hub of Riot Games. We realized that one facility anywhere wasn’t going to solve the problem, because we would overburden it, we would create a single point of failure, and it would need to be massive.

What we did instead was to follow the sun: we carved the world into three swaths of time, and for each part we could build one facility. Each would be a third of the size, but together they would be enough to satisfy the largest show. Our number was 18 simultaneous productions around the world, which is what Riot has traditionally done: that is, 18 languages plus English.

We said, “How could we do this?” We did not want to subject the facilities to the intensity of working around the clock, i.e. 24/7 in three eight-hour shifts per day. What we decided was that each facility would be operational during its daylight hours and then pass the baton to the next one. That way we are able to offer service 24 hours a day, seven days a week.
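As a minimal sketch of that hand-off, the snippet below maps a UTC time to whichever facility holds the baton. Only Dublin is named in the interview, so the other two site names and the exact shift boundaries are placeholder assumptions.

```python
from datetime import datetime, timezone

# Illustrative follow-the-sun roster: three facilities, each covering an
# eight-hour window of the UTC day. Only Dublin is confirmed in the article;
# the other sites and the hand-off hours are invented for the example.
SHIFTS = [
    (6, 14, "Stryker Dublin"),        # covers 06:00-13:59 UTC
    (14, 22, "Stryker Americas"),     # hypothetical second facility
    (22, 6, "Stryker Asia-Pacific"),  # hypothetical third, wraps midnight
]

def facility_on_shift(now: datetime) -> str:
    """Return the facility holding the baton at the given UTC time."""
    hour = now.astimezone(timezone.utc).hour
    for start, end, name in SHIFTS:
        if start < end:
            if start <= hour < end:
                return name
        elif hour >= start or hour < end:  # window wraps past midnight
            return name
    raise RuntimeError("shift table does not cover 24 hours")

print(facility_on_shift(datetime.now(timezone.utc)))
```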

 

What technology can we find in your facilities?

Let’s start at the data center, because that’s probably the coolest part. Everything in here is network: routers, firewalls, and so on. One of the biggest network players, obviously, is Cisco. I approached them and said we were looking at a very large production facility and had other needs in the space. We worked out a partnership, and that birthed the idea that they would be partners with us, not just a vendor or a manufacturer providing us gear. True to form, they have a dedicated, massively brilliant engineer who has helped us solve a lot of problems and avoid pitfalls common to massive 2110 networks, like PTP. It is an entirely Cisco or Cisco Meraki network, depending on whether it’s the video fabric or the infrastructure.

In the production areas you will find JPEG XS on Nevion. We’ve been testing JPEG XS with them since 2019. They were very early in having that codec available to test, and they have performed admirably. We’re using them today for all of our contribution feeds.

Within the facility, it’s actually a mix. We’re not beholden to any one broadcast vendor. We think we should be able to support any of them, but they do need to align to standards: they need to be 2110 compliant and have a pathway to 2110-22, since our hope is to keep compressed essence as our primary format.
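To see what is at stake in keeping compressed essence as the primary format, here is a rough back-of-the-envelope comparison of uncompressed ST 2110-20 video against a JPEG XS (ST 2110-22) flow; the figures are generic estimates, not measurements from this plant.

```python
# Rough bitrate arithmetic behind the preference for compressed essence.
# All numbers are illustrative estimates, not Stryker measurements.

width, height, fps = 1920, 1080, 60
bits_per_pixel = 20                      # 4:2:2 sampling at 10 bits
uncompressed_bps = width * height * fps * bits_per_pixel
xs_ratio = 10                            # JPEG XS is often run near 10:1

print(f"uncompressed ~{uncompressed_bps / 1e9:.1f} Gb/s active video")
print(f"JPEG XS      ~{uncompressed_bps / xs_ratio / 1e6:.0f} Mb/s")
# Roughly ten compressed feeds fit where one uncompressed feed did, which
# matters when contribution arrives over long-haul links from Busan or São Paulo.
```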

 

What challenges did you find developing the network?

We set out not to make it a Layer 2 network. We have some brilliant network engineers at Riot. We contacted them and said, “You may not understand video, but if you were to build a network with these requirements, and it had to operate at this level of performance, what would you do?” They replied that they would make it fully routed, with every area and every port on a Layer 3 subnet. The idea was to have total control.

We communicated that to our transmission providers and they said, “Wow, well, it’s possible. It’s very complex, but it’s possible and it has a lot of advantages. If you are able to do this, we are able to work in that environment.”

Everything is deterministic. Every flow is where it is because it has been designed to be there, not because it just finds its way freely or relies on a single meeting point where everything converges; that’s not how it works. We let the network decide, and that’s where Cisco comes in: its IP Fabric for Media and its non-blocking multicast maintain the state of the entire network and make a decision every time a route is requested.
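The sketch below is a toy model of that kind of bandwidth-aware, deterministic admission: a flow is only granted if every link on its path has headroom, and the reservation is then booked end to end. The topology, numbers and function names are illustrative, not Cisco’s actual algorithm.

```python
# Toy model of bandwidth-aware flow admission, in the spirit of what the
# interview describes: the fabric tracks every link's reserved bandwidth
# and only grants a route when the whole path has headroom.

LINK_CAPACITY_GBPS = {("leaf1", "spine1"): 100.0, ("spine1", "leaf2"): 100.0}
reserved = {link: 0.0 for link in LINK_CAPACITY_GBPS}

def request_flow(path: list, gbps: float) -> bool:
    """Grant the flow only if every hop can carry it; otherwise reject."""
    if any(reserved[link] + gbps > LINK_CAPACITY_GBPS[link] for link in path):
        return False          # deterministic rejection, no best-effort placement
    for link in path:
        reserved[link] += gbps  # book the bandwidth end to end
    return True

# An uncompressed 1080p ST 2110-20 flow is roughly 3 Gb/s; JPEG XS far less.
path = [("leaf1", "spine1"), ("spine1", "leaf2")]
print(request_flow(path, 3.0))  # True: capacity is booked, the flow is placed
```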

We didn’t realize that that hadn’t been done very often. I think now that we’ve built it, we’ve realized its value. It’s not without its challenges, but I think we’ve encountered new challenges, not the same ones that others have had to fight before. We’ve created our own nuance, which is great.

 

Stryker Dublin – Production Room. Copyright Riot Games.

 

What is the broadcast control layer?

Right now we are using Grass Valley Orbit as our broadcast control layer. This would be the equivalent of a broadcast router controller. Below that, Orbit talks directly to the devices on the network, and that’s where the actual routing tables are updated. That’s the current functionality. It’s kind of hard to take a legacy broadcast application and bring it into something new, so we don’t know if that’s where we’re going to be in a couple of years.
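As a highly simplified stand-in for what a control layer does underneath a route change in a 2110 plant, the sketch below records which multicast group each receiver should join. The class and method names are invented for illustration; a real controller such as Orbit would push the subscription to the devices rather than just record it.

```python
# In an ST 2110 plant, "routing" a source to a destination mostly means
# pointing the receiver at the sender's multicast group. This toy control
# layer only tracks the intended state; names here are hypothetical.

class ControlLayer:
    def __init__(self) -> None:
        self.senders = {}   # source name -> multicast group
        self.routes = {}    # receiver name -> multicast group

    def register_sender(self, name: str, group: str) -> None:
        self.senders[name] = group

    def take(self, source: str, receiver: str) -> None:
        """Route `source` to `receiver` by re-subscribing the receiver."""
        group = self.senders[source]
        self.routes[receiver] = group
        # A real controller would now push this subscription to the device
        # and the fabric would admit the flow; here we just record intent.

ctl = ControlLayer()
ctl.register_sender("busan_cam_1", "239.1.1.10")
ctl.take("busan_cam_1", "pcr3_multiviewer_in_7")
print(ctl.routes)
```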

However, Riot has a very strong software development team, and they have ideas. There’s a world where we strike out on our own and build something custom. For the initial release, though, it’s Grass Valley Orbit. Our video switchers are all “native 2110”, which, as everybody knows, really just means they connect to the network right at the edge of the switcher while still being SDI-based inside.

We’ve left two full production control rooms in a state where we’re going to try new things. That’s all I will say. They are physically built: they have the same intercom, the same multiviewers, the same number of seats, the same basic network infrastructure and positions, and we have populated them with a contingent of legacy broadcast equipment. That’s where we believe there is a potential that, as Riot, we will discover over time.

 

How do you manage stored media?

The perennial problem when you develop facilities like this is that, of course, they will produce content, and then what do we do with that content? We have developed a global content operation to acquire it, enhance it, enrich it with metadata and then make it available to all our partners around the world. That same content can be versioned to find new outlets or open up new markets.

The content is enriched to be searchable. It is indexed through machine learning and artificial intelligence to extract data from it, including team names, plays, game-server data sources and timed text. We can go back and use it to create even more content in the future.
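As a loose sketch of the kind of enriched, searchable record such a pipeline might produce, here is a minimal example; the schema, field names and matching logic are assumptions for illustration, not Riot’s actual system.

```python
# Hypothetical shape of an enriched archive record plus a naive search.
# A real repository would use a proper search index, not a full scan.

from dataclasses import dataclass, field

@dataclass
class Clip:
    clip_id: str
    title: str
    language: str
    teams: list = field(default_factory=list)   # e.g. extracted by ML
    tags: list = field(default_factory=list)    # plays, events, etc.

def search(archive: list, term: str) -> list:
    """Return clips whose title, teams or tags mention the term."""
    term = term.lower()
    return [c for c in archive
            if term in c.title.lower()
            or any(term in t.lower() for t in c.teams + c.tags)]

archive = [Clip("msi-001", "MSI semifinal highlights", "en",
                teams=["T1"], tags=["pentakill"])]
print([c.clip_id for c in search(archive, "pentakill")])
```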

It’s been a challenge for Riot, and that is because we wanted to do it centrally. If you don’t do it this way, you’re leaving it to every region to figure it out on their own. We wanted to solve it for everybody, and what we’ve achieved is that all the content that goes through Stryker ends up in one big repository, which right now runs in the public cloud, available to everybody who needs it.

 

Stryker Dublin – Production Room. Copyright Riot Games.

 

The objective of this project is to serve content to a global audience. What are you developing to achieve this capability, and what is your vision for the future?

One of the objectives of these facilities is to offer content to fans in any language: wherever they live, whatever language they speak, they should be able to enjoy the content. A traditional switcher doesn’t have the capacity we need to do those types of shows, nor is it smart to build 19 different rooms to do the same thing in a traditional way, just 19 times in parallel. What we’re going to do revolves around automation and the cloud.

We believe there is the possibility of building our shows in an ephemeral way that allows us to have 19 different graphics engines connected to the same show, all working from the same rundown, but with 19 different outputs, and potentially 19 different casters broadcasting the same show, but from all over the world and aggregating that content in the public cloud. In our view, this is the future of broadcasting.
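As a toy illustration of that ephemeral, rundown-driven model, the sketch below fans a single rundown cue out to several language-specific graphics outputs. The languages, captions and function names are invented for the example; the article’s real figure is 19 parallel outputs.

```python
# One shared rundown drives N language-specific graphics engines, each
# producing its own output. Everything here is illustrative.

LANGUAGES = ["en", "es", "pt-BR", "ko"]  # the real number would be 19

CAPTIONS = {
    "match_start": {"en": "Match start", "es": "Comienza la partida",
                    "pt-BR": "Início da partida", "ko": "경기 시작"},
}

def fan_out(event: str) -> dict:
    """Render the same rundown event once per language output."""
    return {lang: CAPTIONS[event].get(lang, CAPTIONS[event]["en"])
            for lang in LANGUAGES}

# One cue on the shared rundown becomes 4 (or 19) parallel graphics renders.
print(fan_out("match_start"))
```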

 

What are your next steps?

The Dublin facility is in its first shadow. Riot has a model of crawl, walk, and run. The crawl is happening during Masters One, which is going on as we speak. This facility is in shadow mode, basically following along with the production from Iceland and creating our own versions. It’s a mix of an engineering shakeout, to make sure the facility is technically sound, and a training opportunity, building up our operators’ confidence in the facility and in how they work, and working out any kinks or bugs that we discover along the way.

Our next step will be a walk. That’s where we get to add to existing productions and create something that was not possible before. It won’t be producing a complete show, because that would be running, but it will be something that Stryker is capable of.

After that, the next step will be to run, and for that we will have to wait for the event being held at the end of this summer in Madrid. That event will be the first to be produced entirely with Stryker.
