MAE-VQA: an efficient and accurate end-to-end video quality assessment method for user generated content videos

In the digital age, the proliferation of user-generated content (UGC) videos presents unique challenges in maintaining video quality across diverse platforms. In this project, we propose a Masked Auto-Encoder (MAE) model for the no-reference video quality assessment (NR-VQA) problem. To the best of our knowledge, we are the first to apply the MAE to NR-VQA, and we propose the MAE-VQA model. Specifically, MAE-VQA is designed to evaluate the quality of UGC videos without the need for reference footage, which is often unavailable in real-world scenarios. It is composed of three modules: a patch masking module, an auto-encoder module, and a quality regression module, which respectively handle the sampling strategy, capture spatiotemporal representations, and map those representations to a video quality score. This design targets the complex spatiotemporal features and diverse distortions typical of UGC. The Vision Transformer's (ViT) self-attention mechanism allows detailed observation of different parts of a video and captures how they correlate, so the Transformer can extract features and texture information from a distorted video. Because video content is highly redundant, an appropriate sampling of features can speed up the model without reducing accuracy. By masking the majority of the input video, MAE-VQA uses the ViT to learn robust spatiotemporal representations. We conduct thorough evaluations on benchmark datasets to compare our method with state-of-the-art techniques. Our approach achieves state-of-the-art performance on the majority of VQA datasets and a close second on the remainder, while significantly reducing computational overhead.
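To make the three-module design concrete, the sketch below shows one plausible way such a pipeline could be wired up in PyTorch. It is a minimal illustration under stated assumptions, not the project's implementation: the module names, the 90% mask ratio, the tube-embedding sizes, and the mean-pooled regression head are all assumptions (the masking follows the random tube-masking strategy commonly paired with video MAEs), and the MAE decoder used for reconstruction pretraining is omitted.

```python
import torch
import torch.nn as nn

class PatchMasking(nn.Module):
    """Patch masking module: embeds a clip into spacetime patch tokens and
    keeps only a small visible subset (high mask ratio, as in MAE).
    Positional embeddings are omitted for brevity; a real model adds them
    before masking so kept tokens remember where they came from."""
    def __init__(self, in_ch=3, embed_dim=384, patch=16, tubelet=2, mask_ratio=0.9):
        super().__init__()
        self.mask_ratio = mask_ratio
        # A 3D conv turns each (tubelet x patch x patch) cube into one token.
        self.proj = nn.Conv3d(in_ch, embed_dim,
                              kernel_size=(tubelet, patch, patch),
                              stride=(tubelet, patch, patch))

    def forward(self, video):                     # video: (B, C, T, H, W)
        tokens = self.proj(video).flatten(2).transpose(1, 2)  # (B, N, D)
        B, N, D = tokens.shape
        n_keep = max(1, int(N * (1.0 - self.mask_ratio)))
        # Random per-sample masking: keep the first n_keep of a random shuffle.
        keep = torch.rand(B, N, device=tokens.device).argsort(dim=1)[:, :n_keep]
        return tokens.gather(1, keep.unsqueeze(-1).expand(-1, -1, D))

class Encoder(nn.Module):
    """Auto-encoder module (encoder side): a ViT over visible tokens only."""
    def __init__(self, embed_dim=384, depth=6, heads=6):
        super().__init__()
        layer = nn.TransformerEncoderLayer(embed_dim, heads,
                                           dim_feedforward=4 * embed_dim,
                                           batch_first=True, norm_first=True)
        self.blocks = nn.TransformerEncoder(layer, depth)

    def forward(self, x):                         # x: (B, n_keep, D)
        return self.blocks(x)

class QualityRegressor(nn.Module):
    """Quality regression module: pool tokens and map to a scalar score."""
    def __init__(self, embed_dim=384):
        super().__init__()
        self.head = nn.Sequential(nn.LayerNorm(embed_dim),
                                  nn.Linear(embed_dim, 1))

    def forward(self, x):
        return self.head(x.mean(dim=1)).squeeze(-1)   # (B,)

class MAEVQA(nn.Module):
    """End-to-end pipeline: mask -> encode -> regress a quality score."""
    def __init__(self):
        super().__init__()
        self.masking = PatchMasking()
        self.encoder = Encoder()
        self.regressor = QualityRegressor()

    def forward(self, video):
        return self.regressor(self.encoder(self.masking(video)))

if __name__ == "__main__":
    clip = torch.randn(2, 3, 16, 224, 224)   # two 16-frame 224x224 clips
    print(MAEVQA()(clip).shape)               # torch.Size([2])
```

At a 90% mask ratio the encoder attends over roughly one token in ten, which is the kind of reduction in computational overhead the abstract describes: self-attention cost scales with the square of the token count, so discarding redundant video patches pays off quickly.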


Bibliographic Details
Main Author: Wang, Chuhan
Other Authors: Lin Weisi
Format: Final Year Project (FYP)
Language: English
Published: Nanyang Technological University, 2024
Subjects: Computer and Information Science
Online Access:https://hdl.handle.net/10356/178566
School: School of Computer Science and Engineering
Supervisor Contact: WSLin@ntu.edu.sg
Degree: Bachelor's degree
Project Code: SCSE23-0760
Citation: Wang, C. (2024). MAE-VQA: an efficient and accurate end-to-end video quality assessment method for user generated content videos. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/178566