The evolution of newsroom automation technologies
By Craig Wilson, Product Evangelist, Avid
Television news has a long history of evolution, from reporters on manual typewriters, through newsroom computer systems, to laptops in the field connected via virtual private networks (VPNs). These workflow transitions were gradual, but they took a sharp upward turn once the Covid-19 pandemic hit. Most broadcast plans now actively involve embedding journalists more deeply in their communities, reducing their presence in the newsroom, or virtualizing common news workflows in the cloud.
However, what were once buzzwords, like artificial intelligence (AI), are now being put to the test, with automation given more room across the newsroom floor to keep processes running smoothly. Capabilities such as automatic transcription, automatic summarization, and automatic translation let a single media asset be reused across languages, and they make that process easier and more efficient. This is what AI has been providing news crews for the past couple of years.
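To make the "one asset, many languages" idea concrete, here is a minimal sketch of how such a pipeline might chain those steps. The function names and behavior are illustrative placeholders, not a real speech-to-text or machine-translation API:

```python
# Sketch of an AI-assisted media pipeline: transcribe one asset, summarize it,
# then translate the summary for several language audiences. Each step is a
# placeholder standing in for a real AI service.

def transcribe(audio_clip: str) -> str:
    # Placeholder: a real system would call a speech-to-text service here.
    return f"transcript of {audio_clip}"

def summarize(transcript: str, max_words: int = 10) -> str:
    # Placeholder: naive truncation standing in for an AI summarizer.
    return " ".join(transcript.split()[:max_words])

def translate(text: str, target_lang: str) -> str:
    # Placeholder: a real system would call a machine-translation service.
    return f"[{target_lang}] {text}"

def localize_asset(audio_clip: str, languages: list[str]) -> dict[str, str]:
    """Turn one media asset into summaries in several languages."""
    summary = summarize(transcribe(audio_clip))
    return {lang: translate(summary, lang) for lang in languages}

versions = localize_asset("interview_0142.wav", ["de", "fr", "es"])
for lang, text in versions.items():
    print(lang, text)
```

The point of the shape, rather than the stubs, is that the expensive human work (recording, transcribing) happens once, and reuse across languages becomes a cheap fan-out.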
How it has changed the newsroom production landscape…
Some of our broadcast news customers are looking for more efficient ways of doing things and for ways to remove mundane day-to-day tasks. On Avid's Making the Media podcast, I recently spoke to Nic Newman of the Reuters Institute for the Study of Journalism about the key trends facing media companies in the year ahead. He described how the BBC used AI to take constituency results from elections and, essentially, used AI models to write narrative stories, which were then checked by journalists. It's a form of "robo-journalism" in which the AI does half the job and a human then checks that the copy reads well and approves it.
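The pattern described above, machine-written drafts gated by a human approval step, can be sketched in a few lines. The data fields and template here are illustrative assumptions, not the BBC's actual system:

```python
# Sketch of the "robo-journalism" pattern: structured constituency results
# are turned into a draft narrative automatically, and every draft must be
# approved by a journalist before it can be published.

from dataclasses import dataclass

@dataclass
class ConstituencyResult:
    name: str
    winner: str
    party: str
    majority: int

@dataclass
class Draft:
    text: str
    approved: bool = False  # a human must flip this before publication

def write_draft(result: ConstituencyResult) -> Draft:
    # The "AI" half of the job: generate narrative copy from structured data.
    text = (f"{result.winner} ({result.party}) has won {result.name} "
            f"with a majority of {result.majority:,} votes.")
    return Draft(text=text)

def approve(draft: Draft) -> Draft:
    # Stands in for the human editorial check on the machine-written copy.
    draft.approved = True
    return draft

result = ConstituencyResult("Exampleshire", "A. Candidate", "Example Party", 4521)
draft = approve(write_draft(result))
print(draft.text)
```

The design choice worth noticing is that `approved` defaults to `False`: nothing machine-written is publishable until a person has signed it off.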
Also, with people out of the office, tools that let teams communicate, track progress, and share content were key to keeping processes running smoothly. Avid's MediaCentral Collaborate is one such tool: it allows dispersed teams to collaborate virtually, create and manage topics, and then add information, link ideas, assign people and equipment to cover different aspects of a story, and add tasks. Team members can then view topics, assignments, and the people they'll be working with, gather the resources they need, and execute their tasks, all in one easy-to-use interface.
The trends and future to watch out for…
We'll see more general adoption of AI as more companies embrace the cutting-edge features it can provide, particularly in distribution. One of those aspects is the analytics behind the content: what people like, when they like it, and how they like it. That informs future editorial decisions and allows you to tailor material to the right social media channels or online audiences. It will be about receiving quick feedback from AI-powered analytics services and using it to make the appropriate decisions, in a virtuous circle of content creation.
More and more broadcasters will look to replace manual processes with automated ones. Automated editing is a strong example. Editors can now search across the audio of media and retrieve the right clips, provided the right metadata is in place. However, this process is complex because there's a lot of data to sift through. In years to come, AI will get better at identifying the key interesting moments based on the available metadata and will even be able to produce a rough cut that an editor then completes. From an archive perspective this is even more crucial: many hours of preserved footage have to be combed through, and you can never employ enough people to do that job. An AI system, though, can identify the right parameters for a reel, make metadata easily discoverable, and even monetize it if needed. That said, AI is not perfect, and it will take time to enhance metadata discovery and make content easier to find. There are, of course, also concerns about AI models, especially around this concept of "robo-journalism", but those should dissipate with time, especially where reporters sit at the end point of the model, checking and approving all final content.
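The metadata-driven search and rough-cut idea above can be sketched simply: if earlier AI transcription has already attached text metadata to each clip, finding candidate material and ordering it into a first-pass edit is a query plus a sort. The clip fields and matching logic here are illustrative assumptions, not any specific product's API:

```python
# Sketch of metadata-driven automated editing: search clips by their
# transcript metadata, then assemble the matches into a rough cut,
# ordered by timecode, for an editor to finish.

from dataclasses import dataclass

@dataclass
class Clip:
    clip_id: str
    start_sec: float
    transcript: str  # metadata produced by earlier AI transcription

def search_clips(clips: list[Clip], query: str) -> list[Clip]:
    """Return clips whose transcript metadata mentions the query."""
    q = query.lower()
    return [c for c in clips if q in c.transcript.lower()]

def rough_cut(clips: list[Clip], query: str) -> list[str]:
    """Order the matching clips by timecode to form a first-pass edit."""
    hits = sorted(search_clips(clips, query), key=lambda c: c.start_sec)
    return [c.clip_id for c in hits]

archive = [
    Clip("A12", 34.0, "the minister announced the new budget"),
    Clip("A07", 12.5, "protesters gathered outside parliament"),
    Clip("B03", 58.2, "reaction to the budget from local businesses"),
]
print(rough_cut(archive, "budget"))  # clips mentioning "budget", in timecode order
```

A real system would rank by relevance and confidence rather than plain substring matching, but the division of labor is the same: the machine proposes a cut, the editor finishes it.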
We're also seeing newsrooms evolve across the audio sphere. Audio today isn't just podcasts: short-form audio and a growing range of voice devices are opening new ways of telling stories and interacting. The same goes for digital video. A few years ago broadcast hubs pivoted towards digital video online; now we're seeing a second wave of that shift, towards TikTok and Instagram, with newer, younger audiences that expect content delivered differently to suit their needs.