Automatic Content Segmentation: Transformational technology to reduce effort & cycle time
PFT Blog Team | 04 Jul 2019


By Adrish Bera, Senior Vice President, AI and Machine Learning, OVP and Analytics & Atul Saxena, SVP, Chief Solution Architect and Product Evangelist

 

Typically, television stations require content producers to follow specific guidelines and mark content with certain elements before submitting their programs for playout. These non-program segments, which include blacks, color bars and slates, are usually technical and physical in nature. The duration of these segments is specified by the broadcaster.

 

Color bars, or color television test patterns, are digitally generated electronic signals produced by a Test Signal Generator (TSG), camera or editing software. Color bars are accompanied by an audio tone and are used to calibrate the signal. A slate is a title description of the program content, displaying text information such as Client Name, Show Title, Episode, Total Run Time (TRT) and Contact Name. A black screen is a marker that identifies where the program actually starts. These are called physical segments, and are used by the broadcaster’s Traffic and Master Control departments to schedule and process content through the various stages of the workflow.

 

For broadcast on linear TV, finished content may also have other non-program segments like opening and closing montages, pre-caps, recaps, credits, disclaimers, promos and commercials. These segments are critical to structure the content, retain the interest of viewers from one episode to the next, and most importantly, to earn advertisement revenue from the content.

 


 

The Need for Content Segmentation

 

Physical segments do not appear at standard time intervals across all programming. Often, the producer’s or broadcaster’s workflow requires accurate identification of their time codes: Start-of-Message (SOM) and End-of-Message (EOM) locations need to be defined so that these segments can be altered, updated or removed. For example, in the case of linear playout, the blacks, slates and color bars are usually removed just before playout. For OTT distribution, none of the physical segments are necessary, and commercials are typically replaced by ads from internet-based ad exchanges.
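The relationship between SOM/EOM markers and the delivered program can be sketched in a few lines of code. The sketch below is purely illustrative (the `Segment` structure, segment names and frame-based time codes are assumptions, not any vendor's actual format): it shows how, once each physical segment has a known SOM and EOM, stripping them for OTT delivery reduces to simple interval arithmetic.

```python
from dataclasses import dataclass

# Hypothetical segment marker: each non-program segment is described by its
# type and its SOM/EOM time codes (expressed in whole frames for simplicity).
@dataclass
class Segment:
    kind: str   # e.g. "color_bars", "slate", "black"
    som: int    # Start-of-Message, in frames
    eom: int    # End-of-Message, in frames

PHYSICAL = {"color_bars", "slate", "black"}

def program_ranges(segments, total_frames):
    """Return the frame ranges that remain after removing physical segments,
    e.g. when preparing a master for OTT distribution."""
    cuts = sorted((s.som, s.eom) for s in segments if s.kind in PHYSICAL)
    keep, pos = [], 0
    for som, eom in cuts:
        if som > pos:
            keep.append((pos, som))   # content before this cut survives
        pos = max(pos, eom)
    if pos < total_frames:
        keep.append((pos, total_frames))
    return keep

marks = [
    Segment("color_bars", 0, 750),   # 30 s of bars and tone at 25 fps
    Segment("slate", 750, 1000),
    Segment("black", 1000, 1050),
]
print(program_ranges(marks, 45000))  # [(1050, 45000)]
```

The same marker list drives the linear-playout case as well; only the set of segment types to strip changes.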

 

Currently, this is a manual process, where an operator sifts through the content and marks these segments by identifying time code in and time code out. This entails a lot of time and manual effort for broadcasters and content owners.

 

Changing the Game with Automation

 

Today, media recognition Artificial Intelligence (AI) engines, powered by sophisticated media ERP software, use AI and Machine Learning (ML) to automate the process of segmenting content. This technology builds content recognition models that identify physical segments within content with frame accuracy. It deploys Computer Vision and Convolutional Neural Network (CNN) techniques to identify standard segments like color bars, blacks and slates, and is trained to recognize other segments like opening and closing montages, pre-caps, recaps, credits, disclaimers, promos and commercials. The engine detects segments by identifying the start and end points for slates, blacks, color bars and the like, while automatically segregating the remaining content as the main video segment. It achieves extremely high recall and precision in segment identification (95% and above).

 

What goes into the making of this innovative technology? Definitely a lot more than standard AI and ML techniques! To segment content effectively, engines need not just data-driven ML but also a fair amount of instructional learning, delivered on the back of deep media domain expertise. For example, without specific domain logic and business rules supplied as instructions, a machine may not be able to tell a “creative black” section of a video from a “black marker” within it. And for downstream use cases to work, the engine must detect segment boundaries at an exact frame level; without this capability, there is always a risk of a few leftover frames at a segment boundary completely ruining the viewing experience.
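A business rule of the kind described above might look like the following. Everything here (the duration limits, the head/tail window, the rule itself) is a hypothetical illustration of "instructional learning": a detector flags every dark run, and domain rules then decide whether a run is a technical marker that may be stripped or an artistic fade that belongs to the program.

```python
# Hypothetical domain rule: a pure-ML detector flags every long dark run;
# these instructions then separate a technical "black marker" from a
# "creative black" inside the program. All constants are illustrative.
MARKER_MIN_S = 1.0      # technical markers are usually at least this long
MARKER_MAX_S = 10.0     # ...and rarely much longer
HEAD_TAIL_S = 120.0     # markers cluster near the head/tail of the file

def classify_black(start_s, end_s, total_s):
    duration = end_s - start_s
    near_edge = start_s < HEAD_TAIL_S or (total_s - end_s) < HEAD_TAIL_S
    if MARKER_MIN_S <= duration <= MARKER_MAX_S and near_edge:
        return "black_marker"    # safe to strip before playout
    return "creative_black"     # part of the program, must be kept

print(classify_black(30.0, 32.0, 1800.0))    # black_marker: short, near head
print(classify_black(900.0, 900.5, 1800.0))  # creative_black: mid-program fade
```

Real engines encode many such rules per broadcaster, which is why domain expertise matters as much as model accuracy.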

 

With the increase in multi-platform content, the Media & Entertainment (M&E) industry can use physical segment detection technology for multiple use cases, such as:

1) Automatic commercial identification
2) Linear TV playout monitoring for commercials and actual program segments
3) Finding the differences between two video masters
4) Reconformance of content from the pre-HD era (by detecting the clips used in a final edit from original footage)

 

To summarize, automatic content segmentation enables broadcasters, studios and streaming service providers to drastically cut down the amount of time and manual effort involved in multiple use cases. It helps users enhance operational efficiencies, reduce the chance of human error, shorten turnaround time and lower Total Cost of Operations (TCOP) on the back of automation.
