Digital Nirvana upgrades MetadataIQ and integrates it with the Avid ecosystem

Digital Nirvana has announced an upgrade to MetadataIQ, a SaaS-based tool that automatically generates speech-to-text and video metadata for Avid PAM/MAM systems. The new version, which will be showcased for the first time at NAB, makes beta-tested video intelligence capabilities commercially available and integrates directly with Avid MediaCentral.

MetadataIQ 4.0 relies on machine learning and AI capabilities in the cloud (speech to text, facial recognition, object identification, content classification, etc.) to create metadata more quickly and less expensively than traditional methods. The tool not only generates speech-to-text transcripts of incoming feeds automatically and in real time but also integrates those transcripts into the media within the Avid environment.

In addition, instead of sending metadata only to on-premises Avid Interplay implementations, MetadataIQ 4.0 will integrate with Avid's cloud-based MediaCentral hub. Thanks to this cloud integration, editors will be able to combine searches in MediaCentral across multiple forms of metadata. For example, if MetadataIQ generates metadata using OCR, facial recognition, and speech to text, then when an editor enters search terms, MediaCentral will search all three of those metadata types simultaneously.
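To make the idea of a combined metadata search concrete, the following minimal Python sketch shows one way such a lookup could work in principle. The record layout, field names, and search function are hypothetical illustrations only; they do not represent Avid's or Digital Nirvana's actual APIs.

    # Conceptual sketch only: searching several metadata types at once.
    # All names here are hypothetical, not part of any Avid or Digital Nirvana API.
    from dataclasses import dataclass, field

    @dataclass
    class ClipMetadata:
        clip_id: str
        transcript: str = ""                        # speech-to-text output
        faces: list = field(default_factory=list)   # names from facial recognition
        ocr_text: str = ""                          # on-screen text captured via OCR

    def search(clips, term):
        """Return IDs of clips whose transcript, recognized faces, or OCR text match the term."""
        term = term.lower()
        return [
            clip.clip_id
            for clip in clips
            if term in clip.transcript.lower()
            or any(term in face.lower() for face in clip.faces)
            or term in clip.ocr_text.lower()
        ]

    clips = [
        ClipMetadata("clip-001", transcript="...the election results are in...",
                     faces=["Jane Doe"], ocr_text="BREAKING NEWS"),
        ClipMetadata("clip-002", transcript="...weekend weather update..."),
    ]
    print(search(clips, "breaking"))   # -> ['clip-001']

In this toy version a single query term is checked against every metadata type for every clip; the point is simply that one search surfaces matches regardless of which analysis engine produced the metadata.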

Digital Nirvana introduced the software a year ago, and it has since been tested by two news organizations, one in the United States and one in the Middle East. Both have seen their production processes improve and have benefited from having the relevant information indexed and available in text form.

“These new developments will allow producers and editors to pinpoint the right clips and create content even faster, which is especially crucial when it comes to news, sports, and other time-sensitive broadcast applications,” said Russell Wise, senior vice president of sales and marketing at Digital Nirvana.
