ENCO upgrades enCaption4 with a feature that enables lip-sync-grade caption synchronization

ENCO enCaption4 user interface on a monitor

ENCO is heading to NAB Show New York with a new capability in its enCaption4 automated captioning system. Exhibiting in booth N355 from October 16 to 17, the company will unveil a new video delay feature that enables lip-sync-grade caption synchronization even when transcribing live feeds. Recent enhancements that debuted at IBC last month will also be showcased.

enCaption4’s newest capability synchronizes live captions with the spoken words. The solution can now delay the associated video and audio by a user-configurable duration to provide lip-sync-like alignment. By setting a longer delay, customers can also expand the audio analysis window to enhance enCaption4’s speech-to-text accuracy.
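The underlying idea of a configurable media delay can be illustrated with a simple fixed-length frame buffer. The sketch below is a hypothetical illustration only, not ENCO’s implementation: frames are held back for a configurable number of frame periods so that caption text, which takes time to generate, can be aligned with the delayed output.

```python
from collections import deque

class DelayBuffer:
    """Fixed-delay FIFO for media frames (illustrative sketch only).

    Each frame pushed in is released only after `delay_frames` further
    frames have arrived, giving a downstream captioning stage time to
    produce text that lines up with the delayed picture and sound.
    """

    def __init__(self, delay_seconds: float, fps: float):
        # A 2-second delay at 30 fps holds back 60 frames.
        self.delay_frames = round(delay_seconds * fps)
        self._queue = deque()

    def push(self, frame):
        """Buffer a frame; return the frame now due for output,
        or None while the buffer is still filling."""
        self._queue.append(frame)
        if len(self._queue) > self.delay_frames:
            return self._queue.popleft()
        return None

# Usage: with a 2 s delay at 30 fps, the first 60 pushes return None;
# frame 0 emerges on the 61st push.
buf = DelayBuffer(delay_seconds=2.0, fps=30)
released = [buf.push(i) for i in range(61)]
```

A longer `delay_seconds` value widens the window between hearing the audio and emitting the frame, which is why a larger delay can also give the speech-to-text engine more context to work with.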

“Broadcasters have long considered lip-sync-like caption synchronization for live content as the ‘holy grail’ of closed captioning, particularly for programming such as newscasts and sports,” said Ken Frommert, President of ENCO. “We’re excited to make this a reality with the seamless integration of video delay functionality within enCaption4. Now, customers can bring in a live feed and get an exceptionally well-synchronized, captioned version back out, all through a single system.”

The integrated video delay functionality is a key element of ENCO’s automated captioning patent, and can be applied to a wide array of enCaption4 output options. enCaption4 systems incorporating the optional DoCaption-powered internal closed caption encoder card automatically output SDI signals with synchronized captions embedded, while other enCaption4 units equipped with SDI outputs can deliver delayed video to external caption encoders. The video delay can also be used to align open captions that are overlaid atop web-destined and NDI® output streams.

NAB Show New York attendees will also see the North American debut of enCaption4’s advanced punctuation capabilities, which now support characters including full stops, commas, exclamation marks and question marks. The software detects the context surrounding pauses and trigger words, inserting appropriate punctuation and capitalization on the fly and helping viewers better understand not only which words are being spoken, but also how they are being said.

Other recent enCaption4 features on display will include an enhanced scheduling interface; a Web API for third-party integration; the ability to detect changes between multiple speakers even within a single mixed feed; and further improvements to the system’s outstanding accuracy and speed.
