Automated Qualitative Assessment of Closed Captioning Quality

Purpose of the Project

Existing methods for quality assessment of Closed Captioning (CC) require human effort and are limited to quantitative measures such as counting word errors. This project aims to design and develop an automated assessment system that reflects the perspective of Deaf or Hard-of-Hearing (D/HoH) viewers. The system would use machine learning to replicate human perception of quality and predict a quality rating from representative values extracted from the captions.

How It Works

The proposed design of the system involves a multilayer perceptron artificial neural network (MLP-ANN), a standard architecture for modelling nonlinear data. The system will learn from a set of training data consisting of pairs of CC quality-factor values and the qualitative ratings assessed by D/HoH viewers. Once trained, and given a set of caption and transcript files, the system will extract a representative value for each quality factor of the CC. It will then predict the quality rating at the sentence level (e.g., if an English sentence runs over multiple caption blocks, the system merges those blocks and treats them as a single unit of analysis).
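As a rough illustration of this pipeline, the sketch below merges caption blocks into sentence-level units, extracts a few representative values, and fits scikit-learn's MLPRegressor as a stand-in for the MLP-ANN. The specific quality factors shown (presentation rate, caption delay, and word overlap with the transcript), the merging rule, the network size, and the toy training data are all illustrative assumptions, not the project's actual feature set or model.

```python
# Minimal sketch: sentence-level merging of caption blocks, hypothetical
# quality-factor extraction, and an MLP regressor predicting a quality rating.
from dataclasses import dataclass

import numpy as np
from sklearn.neural_network import MLPRegressor


@dataclass
class CaptionBlock:
    text: str
    start: float  # seconds
    end: float    # seconds


def merge_into_sentences(blocks):
    """Merge consecutive caption blocks until a sentence boundary,
    so each unit of analysis is one sentence (as described above)."""
    units, current = [], []
    for block in blocks:
        current.append(block)
        if block.text.rstrip().endswith((".", "!", "?")):
            units.append(current)
            current = []
    if current:  # trailing blocks without sentence-final punctuation
        units.append(current)
    return units


def extract_features(unit, transcript_sentence):
    """Hypothetical representative values for one sentence-level unit."""
    text = " ".join(b.text for b in unit)
    duration = max(unit[-1].end - unit[0].start, 1e-6)
    wpm = len(text.split()) / duration * 60.0  # presentation rate (words/min)
    delay = unit[0].start  # crude stand-in for caption delay
    cap_words = set(text.lower().split())
    ref_words = set(transcript_sentence.lower().split())
    overlap = len(cap_words & ref_words) / max(len(ref_words), 1)  # verbatim-ness
    return [wpm, delay, overlap]


# Toy training data: feature vectors paired with D/HoH quality ratings (1-5).
X_train = np.array([[140, 0.2, 0.95], [210, 1.5, 0.60], [160, 0.5, 0.85]])
y_train = np.array([4.5, 2.0, 3.8])

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X_train, y_train)

blocks = [CaptionBlock("The quick brown fox", 0.0, 1.5),
          CaptionBlock("jumps over the lazy dog.", 1.5, 3.0)]
unit = merge_into_sentences(blocks)[0]
features = extract_features(unit, "The quick brown fox jumps over the lazy dog.")
print("predicted quality rating:", model.predict([features])[0])
```

In this sketch the sentence, rather than the caption block, is the unit that receives a rating, which mirrors the merging behaviour described above; everything else is a placeholder under the stated assumptions.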

Impact of the Project

The system is expected to reduce the human effort required in the Closed Captioning quality-assessment process. It would also better serve the primary consumers of the service, D/HoH viewers, by reflecting their perspective through training on their qualitative assessment data.

The research is at an early stage: simulations have been completed ahead of data collection. We are currently preparing to launch the first data collection to build the D/HoH user model.

Publication

Nam, S., & Fels, D. (2018, September). Assessing closed captioning quality using a multilayer perceptron. In 2018 IEEE First International Conference on Artificial Intelligence and Knowledge Engineering (AIKE) (pp. 9-16). IEEE.