
Introduction

The global media and entertainment (M&E) industry has undergone a digital transformation over the last decade. In the past twelve to fifteen months, triggered by pandemic restrictions, content consumption has surged as consumers stayed home, glued to their TVs, desktops, laptops, and mobile phones. In addition, the ubiquity of the internet and the proliferation of platforms and channels have given consumers unlimited content options. This rapid shift in consumer behavior is accelerating the adoption of technology and digital transformation within entertainment companies.

Among the many responses, M&E players are experimenting with AI technologies to transform the content business and enhance the overall customer experience. Unfortunately, this remains a journey, and success is not yet commonplace.

Is AI Working for You?

Our research shows that many M&E organizations that have run AI projects with different vendors often rue that "the demo was impressive, but the project hit a wall at the Proof of Concept (POC) stage itself because the AI solution did not work for my content!" After repeating the cycle with multiple vendors, they may have even concluded that AI solutions are not available or mature enough to solve specific M&E business challenges.

We learned that an M&E enterprise's business problems, in their totality, could not be solved effectively by any one off-the-shelf AI solution in the market. Experiments constrained by limited resources, budgets, and the choice of available AI offerings have hit a wall in the search for a complete solution with adequate accuracy to make AI work for the business use cases.

Shortcomings of current AI approaches

[Infographic: Usual excuses for not adopting AI]

USE CASE

CONFORM

VALUE PROPOSITION

Automatically conform regional edits, subtitles, and dub tracks to global masters across multiple frame-rate baselines.

CHALLENGES

Currently, film and television studios create content at 29.97 or 23.976 FPS. They can also have it mastered at 25 FPS with local dubbed audio, or the other way around. However, to distribute the content, the audio of the dubbed master has to be conformed to the source video master.

The subtitles in one FPS version have to be conformed to the source video master too. Forced narrations have to be identified and exported to a sidecar file in sync with the source video.

Conforming audio tracks and subtitles to the source video is a highly time- and labor-intensive task that is also highly error-prone. In addition, forced narrations in the source have to be identified and translated by professionals before conformance.
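As a simple illustration of the frame-rate problem, the sketch below retimes subtitle timestamps between a 25 FPS dub master and a 23.976 FPS source master using a plain linear speed-change model. This is a hypothetical simplification, not PFT's conform algorithm; real conformance also has to handle cuts, inserts, and drop-frame timecode.

```python
# Minimal sketch, assuming a pure linear speed change (no cuts or inserts):
# map a timestamp from a 25 FPS dub master back to a 23.976 FPS source master.

def retime_seconds(t: float, source_fps: float = 25.0, target_fps: float = 23.976) -> float:
    """Map a timestamp (in seconds) from the source frame-rate version to the target version."""
    return t * source_fps / target_fps

# A subtitle appearing 10 minutes into the 25 FPS dub master lands roughly
# 25.6 seconds later on the 23.976 FPS source master.
print(round(retime_seconds(600.0), 2))  # 625.63
```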

SOLUTION

An AI-powered video comparator can quickly conform the audio in the regional master to the source video master. It can automatically identify the audio and forced narration gaps in the regional master compared to the source video master.

With CLEAR Vision Cloud, flagged conformance issues are exported for a quick human QC/edit before finalizing and publishing. In addition, the audio gaps are addressed in dubbing.

The forced narration issues are exported to a sidecar file and lined up in the subtitle tool within CLEAR Vision Cloud, where a linguist does a quick QC.

OUTCOMES

CLEAR Vision Cloud delivers very high accuracy across frame rates, reducing the time, effort, and cost involved in conformance by applying a high level of ML-led automation tuned to specific M&E requirements.

As the machines pick up the conform logic over time, the rising level of ML-led automation helps scale conformance across large volumes of content.


USE CASE

LOCALIZATION

VALUE PROPOSITION

Automatically generate, transcribe, and trans-create subtitles in over 60 languages.

CHALLENGES

Subtitle script creation is time-consuming and involves transcription/translation, syncing it with video and audio, and re-timing.

At times, due to the poor quality of audio, the subtitle script may get some words or phrases wrong. In addition, spelling and grammar errors while typing captions are also common.

Building an AI-enabled language transcription/translation tool that can deliver substantial business benefits is an arduous task. Even if the solution offers 95% accuracy, the captioning specialist must still review the entire output, making the automation redundant as the time and cost benefits are nullified.

Clients specify parameters such as characters per line, the number of lines, and reading speed, and subtitles must always stay in sync with the video.

For AI to drive subtitling in 60+ languages, it has to get quite a few things right most of the time to work for studios, broadcasters, OTT players, and station groups.
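To make the client-specified constraints concrete, the sketch below checks a single subtitle cue against example limits for characters per line, line count, and reading speed. The field names and limit values are assumptions for illustration, not CLEAR Vision Cloud's data model.

```python
# Illustrative sketch with assumed limits (42 characters per line, 2 lines,
# 17 characters per second); real client specs vary by language and platform.
from dataclasses import dataclass

@dataclass
class Cue:
    start: float        # seconds
    end: float          # seconds
    lines: list[str]

def violations(cue: Cue, max_cpl: int = 42, max_lines: int = 2, max_cps: float = 17.0) -> list[str]:
    issues = []
    if len(cue.lines) > max_lines:
        issues.append(f"too many lines ({len(cue.lines)} > {max_lines})")
    for i, line in enumerate(cue.lines, 1):
        if len(line) > max_cpl:
            issues.append(f"line {i} exceeds {max_cpl} characters")
    cps = sum(len(l) for l in cue.lines) / max(cue.end - cue.start, 0.001)
    if cps > max_cps:
        issues.append(f"reading speed {cps:.1f} cps exceeds {max_cps}")
    return issues

print(violations(Cue(10.0, 11.0, ["This subtitle is far too long to read in a single second on screen."])))
```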

SOLUTION

A language transcription/translation tool powered by AI can deliver highly accurate transcripts over time through integrations with AWS, Google, and Microsoft engines, combined with machine wisdom.

The content processed by CLEAR Vision Cloud is preloaded for QC, drawing attention to text with low confidence or missing captions, so a caption specialist does not have to review the entire output manually.

Apart from auto-generating captions with a very high level of accuracy, CLEAR Vision Cloud offers a side-by-side comparison that helps the caption specialist fill in the gaps quickly.

OUTCOMES

CLEAR Vision Cloud delivers far greater accuracy (80-90% for English; 70-85% for other languages) while reducing the time and effort involved in localization by tweaking the AI models to suit specific M&E requirements.

Automatic subtitle generation plus time-coded subtitles against each shot means the user will no longer have to type the subtitles manually.

The subtitles generated are automatically synced with the video.

Wherever the system cannot auto-generate the subtitles, it gives the user a blank (whose length varies with the size of the missing word), so the user can listen to that particular piece of audio and manually type in the content.

A dictionary (roadmap feature) helps users spell-check and context-check subtitles that were detected or keyed in, making corrections easy.

The SRT file generated can be exported to CLEAR for downstream activities.

A trans-creation option is available, if required.


USE CASE

LOCALIZATION

VALUE PROPOSITION

Automatic subtitle re-timing with 100% accuracy.

CHALLENGES

Subtitle re-timing is a manual process and takes a lot of time to ensure accuracy. As a result, it severely slows down the global syndication workflows.

Building an AI-enabled comparator tool that can accurately compare video frames from the source and edited versions is challenging. If the comparator fails to detect the time codes of cuts and inserts accurately, it leads to erroneous re-timing of the subtitles.

SOLUTION

An AI-enabled comparator tool for subtitle re-timing can deliver a high level of frame accuracy over time through machine learning.

The comparator can compare pre- and post-edit masters and frame-accurately identify cuts, edits, and inserts, leading to much faster automatic subtitle re-timing.

CLEAR Vision Cloud compares the source and edited versions of the videos and identifies matched and unmatched segments.

It provides frame-accurate time codes of cuts and inserts in an edited video and automatically re-times the subtitles on the edited video.
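The sketch below shows the underlying idea in simplified form: given an edit map of matched segments between the source and edited timelines, each subtitle timestamp is shifted into the edited timeline, and cues that fall inside removed material are flagged. The edit-map format and values are hypothetical, not the comparator's actual output.

```python
# Simplified model: (source_start, source_end, edited_start) in seconds for each
# matched segment; anything outside these ranges was cut from the edited version.
EDIT_MAP = [
    (0.0,   600.0,  0.0),    # first 10 minutes kept as-is
    (660.0, 1800.0, 600.0),  # a 60-second segment was removed at the 10-minute mark
]

def retime(t: float) -> float | None:
    """Map a source timestamp to the edited timeline; None means the cue was cut."""
    for src_start, src_end, dst_start in EDIT_MAP:
        if src_start <= t < src_end:
            return dst_start + (t - src_start)
    return None  # cue falls inside removed material and must be dropped or flagged

print(retime(700.0))  # 640.0 -> shifted left by the 60-second cut
print(retime(630.0))  # None  -> inside the removed segment
```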

OUTCOMES

CLEAR Vision Cloud's AI-led comparator dramatically enhances frame accuracy to near 100% and considerably reduces subtitle re-timing cycle time, leading to better efficiencies and economics.

The user only has to do a quick round of QC to make sure the subtitles are perfectly synced in the edited version of the video.

There is no need to traverse frame-by-frame to check its accuracy; errors, if any, are automatically flagged.


USE CASE

SEGMENTATION

VALUE PROPOSITION

Drastically reduce playout errors and increase monetization by auto-generating frame-accurate segmentation metadata.

CHALLENGES

Identifying and marking content segments before submitting the programs and ads for playout to a broadcaster is a time-consuming, mundane process involving error-prone manual labor.

Developing an AI-enabled segmentation tool that can expedite the process is not easy. The content segments come with noise and variations, and their accurate and fast detection requires deeper cognition and interpretation. The tool also has to learn to identify the custom segments defined by a content enterprise.

Even if only a few segments or frames are missed, the whole output requires QC.

SOLUTION

An AI-led segmentation tool can deliver speed and 100% accuracy in identifying segments (blacks, color bar, title slate, opening, and closing montages, pre-caps, recaps, credits, disclaimers, promos and commercials, text and textless segments, and custom segments as defined by YOU) in your content over time through machine wisdom.

CLEAR Vision Cloud automates accurate segment identification guided by visual, audio, and business rules (creative blacks/cutting out stills where audio ends/rolling credit v/s credit on content) and involves limited manual QC.

It identifies markers to help build the "Skip Intro" feature in Streaming platforms.
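As an illustration, segmentation metadata of this kind can be represented as a simple list of labeled, timecoded markers; the schema and values below are assumptions for the example, not CLEAR Vision Cloud's actual output format.

```python
# Hypothetical, simplified segment list for one episode (HH:MM:SS:FF timecodes).
segments = [
    {"label": "color_bar",       "in": "00:00:00:00", "out": "00:00:10:00"},
    {"label": "title_slate",     "in": "00:00:10:00", "out": "00:00:20:00"},
    {"label": "recap",           "in": "00:00:20:00", "out": "00:01:05:00"},
    {"label": "opening_montage", "in": "00:01:05:00", "out": "00:02:30:00"},
    {"label": "program",         "in": "00:02:30:00", "out": "00:44:10:00"},
    {"label": "credits",         "in": "00:44:10:00", "out": "00:45:00:00"},
]

# A streaming platform can derive the "Skip Intro" target by jumping to the
# first frame of the program segment.
skip_intro_target = next(s["in"] for s in segments if s["label"] == "program")
print(skip_intro_target)  # 00:02:30:00
```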

OUTCOMES

For Hearst Television, the time taken by CLEAR Vision Cloud is just 0.35x-0.45x, 58% less than the time taken earlier. PFT also helped Hearst save more than 50% in costs by eliminating manual intervention.

For short-form workflows, 100% accuracy has been achieved, with automation in the range of 95-100%. For long-form workflows, accuracy and automation have improved over time, enabling an 80-90% reduction in cycle time and over 50% in costs.

AI-led automation ensures zero errors during playout and offers broadcasters the ability to insert local ads on barter segments leading to increased monetization.

No scope for error in playout.


USE CASE

MARKETING

VALUE PROPOSITION

Automatically locate the clips you need to build a variety of cross-platform promo material through high-quality data discovery.

CHALLENGES

Promo editors have to spend copious amounts of time searching through hours of footage for suitable clips to make a line-up, and can still miss a few critical shots while merely scrubbing forward.

Building an AI-powered promo assist tool that will speed up the promo creation process while still offering the promo editor professional freedom is challenging. Promos are highly creative and engaging expressions of the main story's exciting facets.

SOLUTION

An AI-powered promo-assist tool can auto-generate and rank key highlights, moments, and dialogues. Then, based on input parameters such as duration, scene selections, and transition effects, it lines up a set of shots for the promo editor to re-arrange and lock.

The editor can also search in natural language for more scenes, as the ingested content is cataloged. Since the project can be exported to Adobe Premiere Pro, downstream post activities are not hindered either.

CLEAR Vision Cloud has built one of the best search solutions into the content archive for the global M&E industry.

  • Key highlights, moments, and dialogues are auto-generated and ranked.
  • Detailed metadata is auto-identified.
  • The tool surfaces parameters for a promo (duration, scene selections, transition effects, to name a few).
  • The editor can re-arrange the chosen shots into the order they want.
  • The editor can also search for the scenes they want in English.
  • The project is exported to editing software like Adobe Premiere Pro as an EDL (Edit Decision List) file (see the sketch after this list).
  • The cuts and final touches to the promo (rough cut) are applied within Premiere Pro.
  • Background sounds, transitions, color correction, graphics, etc., are applied to the rough cut to shape the final version of the promo.
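To show what such an export can look like, the rough sketch below writes two hypothetical locked shots as a simplified CMX3600-style EDL. The clip timecodes, reel name, and 25 FPS assumption are placeholders for illustration; a production exporter would also handle drop-frame timecode and audio tracks.

```python
# Rough sketch with hypothetical clips: emit a minimal CMX3600-style EDL
# that an NLE such as Adobe Premiere Pro can import.

clips = [
    ("01:02:10:00", "01:02:14:12"),  # (source in, source out) in HH:MM:SS:FF
    ("01:15:03:05", "01:15:08:00"),
]

def tc_to_frames(tc: str, fps: int = 25) -> int:
    h, m, s, f = (int(x) for x in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f

def frames_to_tc(frames: int, fps: int = 25) -> str:
    f = frames % fps
    s = (frames // fps) % 60
    m = (frames // (fps * 60)) % 60
    h = frames // (fps * 3600)
    return f"{h:02d}:{m:02d}:{s:02d}:{f:02d}"

record = 0  # running position on the promo timeline, in frames
print("TITLE: PROMO_ROUGH_CUT")
for i, (src_in, src_out) in enumerate(clips, start=1):
    length = tc_to_frames(src_out) - tc_to_frames(src_in)
    rec_in, rec_out = frames_to_tc(record), frames_to_tc(record + length)
    print(f"{i:03d}  AX       V     C        {src_in} {src_out} {rec_in} {rec_out}")
    record += length
```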

OUTCOMES

CLEAR Vision Cloud automatically identifies key shots, helping save 60-80% of the search time, and with the tool, there is no question of missing any key moments.

The editor now needs just 15 minutes to quickly search and review the auto-generated compilations and shortlist the clips.

Working manually, the user might miss a few critical shots while scrubbing forward, but the tool never misses them.


USE CASE

CONTENT QC

VALUE PROPOSITION

Automatically quality-control the bag & tag versions of several promos daily to deliver 100% quality and prevent leakage in your advertising revenue.

CHALLENGES

If a channel plays out 200 promos in a week as part of different campaigns, creating these promos means manually managing about 5,000 AV elements a week (roughly 25 elements per promo) and about 20,000 a month!

Promo version QC operators have to manually check for parameters like duration, version, channel, sponsor, title info, etc., and identify those that do not meet the specifications, both creative and technical, which editors have to work on to correct.

With each promo taking 5-7 minutes, this is time-consuming and highly prone to errors of omission.

Failure to meet some of the parameters like duration and version has a direct impact on monetization.

An AI-powered promo version QC appliance that can auto-detect with 100% accuracy all parameters in a promo and flag deviations is easier said than achieved. The parameters are many - duration, version, channel, sponsor, title info, tune info, blacks, audio tracks, video tracks, the start of media, and house number.

SOLUTION

An AI-powered promo version QC appliance can be tuned with machine learning towards 100% precision, ensuring that only a tiny percentage of promos are flagged for deviation and sent to a QC editor, thereby enabling zero-error playout.

The parameters are auto-detected in a video and presented to the QC editor.

A dashboard offers complete visibility into promo status - rejected or approved.

CLEAR Vision Cloud allows the editor to check the rejected promos, identify whether it's a creative or technical reason, override it if okay, or reject the promo.
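A minimal sketch of the flagging step is shown below: auto-detected parameters are compared against the campaign specification, and only promos with deviations are routed to the QC editor. The field names and values are assumed for illustration, not the CLEAR Vision Cloud data model.

```python
# Assumed example fields; a real spec covers duration, version, channel, sponsor,
# title/tune info, blacks, audio/video tracks, start of media, and house number.
expected = {"duration_sec": 30, "version": "B", "channel": "HD-East", "audio_tracks": 2}
detected = {"duration_sec": 29, "version": "B", "channel": "HD-East", "audio_tracks": 2}

deviations = {
    field: {"expected": expected[field], "detected": detected.get(field)}
    for field in expected
    if detected.get(field) != expected[field]
}

# Promos with no deviations auto-approve; the rest go to a QC editor with reasons.
status = "REJECTED" if deviations else "APPROVED"
print(status, deviations)  # REJECTED {'duration_sec': {'expected': 30, 'detected': 29}}
```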

OUTCOMES

CLEAR Vision Cloud drastically reduces manual promo version QC time to seconds as the parameters are auto-detected. In addition, the dashboard gives complete visibility into promo status. If the promo asset is rejected for a creative or technical reason, the Promo version QC editor can work on it within CLEAR to make the asset good.

Automatic detection of the above parameters means the user will no longer have to maintain a checklist and review these manually, which would take about 5-7 minutes per promo. They are served automatically.

The tool also allows the editor to work on the cut. For example, if the editor has to reduce the promo length by 15 seconds, the tool enables them to do so.

A user might sometimes miss a few parameters, but the tool never misses them.

The duration feature ensures there is no ad loss for a content enterprise because of human error. Furthermore, since the duration is accurate, the broadcaster can plan for the ad accordingly.

The version feature ensures no ad revenue loss due to human error.

No more spreadsheets!

CLEAR Vision Cloud

CLEAR Vision Cloud, PFT's native media recognition AI platform, helps solve the real-world business problems of TV networks, studios, and OTT platforms through its combination of technology and consulting.

CLEAR Vision Cloud produces accurate and actionable data. Click here to learn more, download our brochure, and schedule time to talk to an AI expert.