FAST-VQA: efficient end-to-end video quality assessment with fragment sampling


Bibliographic Details
Main Authors: Wu, Haoning, Chen, Chaofeng, Hou, Jingwen, Liao, Liang, Wang, Annan, Sun, Wenxiu, Yan, Qiong, Lin, Weisi
Other Authors: College of Computing and Data Science
Format: Conference Paper
Language: English
Published: 2024
Subjects: Computer and Information Science; Video quality assessment; Fragments
Online Access:https://hdl.handle.net/10356/178453
https://link.springer.com/chapter/10.1007/978-3-031-20068-7_31
author Wu, Haoning
Chen, Chaofeng
Hou, Jingwen
Liao, Liang
Wang, Annan
Sun, Wenxiu
Yan, Qiong
Lin, Weisi
author2 College of Computing and Data Science
collection NTU
description Current deep video quality assessment (VQA) methods usually incur high computational costs when evaluating high-resolution videos. This cost hinders them from learning better video-quality-related representations via end-to-end training. Existing approaches typically rely on naive sampling, such as resizing and cropping, to reduce the computational cost. However, these operations corrupt quality-related information in videos and are thus not optimal for learning good representations for VQA. There is therefore an urgent need for a new quality-retaining sampling scheme for VQA. In this paper, we propose Grid Mini-patch Sampling (GMS), which accounts for local quality by sampling patches at their raw resolution and covers global quality with contextual relations via mini-patches sampled in uniform grids. These mini-patches are spliced and aligned temporally, and termed fragments. We further build the Fragment Attention Network (FANet), specially designed to accommodate fragments as inputs. Combining fragments and FANet, the proposed FrAgment Sample Transformer for VQA (FAST-VQA) enables efficient end-to-end deep VQA and learns effective video-quality-related representations. It improves state-of-the-art accuracy by around 10% while reducing FLOPs by 99.5% on 1080P high-resolution videos. The newly learned video-quality-related representations can also be transferred to smaller VQA datasets, boosting performance in those scenarios. Extensive experiments show that FAST-VQA performs well on inputs of various resolutions while retaining high efficiency. We publish our code at https://github.com/timothyhtimothy/FAST-VQA.
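The Grid Mini-patch Sampling described in the abstract can be sketched as follows. This is a minimal illustrative implementation for a single frame, not the authors' official code (see the linked GitHub repository for that); the 7×7 grid and 32×32 patch sizes are assumed defaults, and the temporal alignment of patch positions across frames is omitted for brevity.

```python
import numpy as np

def grid_minipatch_sample(frame, grid=7, patch=32, rng=None):
    """Illustrative Grid Mini-patch Sampling (GMS) for one frame.

    Partitions the frame into a uniform grid x grid layout, samples one
    patch x patch mini-patch at raw resolution (no resizing) from a
    random position inside each grid cell, and splices the mini-patches
    into a (grid*patch) x (grid*patch) "fragment" frame.
    """
    rng = rng or np.random.default_rng()
    h, w = frame.shape[:2]
    ch, cw = h // grid, w // grid  # grid-cell height and width
    out = np.zeros((grid * patch, grid * patch) + frame.shape[2:],
                   dtype=frame.dtype)
    for i in range(grid):
        for j in range(grid):
            # Random top-left corner inside cell (i, j); the mini-patch
            # keeps the original pixel resolution, preserving local
            # quality cues that resizing would destroy.
            y = i * ch + rng.integers(0, max(ch - patch, 0) + 1)
            x = j * cw + rng.integers(0, max(cw - patch, 0) + 1)
            out[i*patch:(i+1)*patch, j*patch:(j+1)*patch] = \
                frame[y:y+patch, x:x+patch]
    return out

# Example: a 1080p frame is reduced to a 224x224 fragment (7 * 32 = 224)
frame = np.random.rand(1080, 1920, 3).astype(np.float32)
fragment = grid_minipatch_sample(frame)
print(fragment.shape)  # (224, 224, 3)
```

In the full method, the same grid positions would be reused across the frames of a clip so that fragments stay temporally aligned, which is what lets the network reason about temporal quality distortions.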
first_indexed 2024-10-01T07:15:33Z
format Conference Paper
id ntu-10356/178453
institution Nanyang Technological University
language English
last_indexed 2024-10-01T07:15:33Z
publishDate 2024
record_format dspace
spelling ntu-10356/178453
title FAST-VQA: efficient end-to-end video quality assessment with fragment sampling
conference 17th European Conference on Computer Vision (ECCV 2022)
affiliations College of Computing and Data Science; School of Computer Science and Engineering; S-Lab
subjects Computer and Information Science; Video quality assessment; Fragments
funding This study is supported under the RIE2020 Industry Alignment Fund - Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contribution from the industry partner(s).
deposited 2024-06-20
citation Wu, H., Chen, C., Hou, J., Liao, L., Wang, A., Sun, W., Yan, Q. & Lin, W. (2022). FAST-VQA: efficient end-to-end video quality assessment with fragment sampling. 17th European Conference on Computer Vision (ECCV 2022), LNCS 13666, 538-554. https://dx.doi.org/10.1007/978-3-031-20068-7_31
isbn 9783031200670
handle https://hdl.handle.net/10356/178453
doi 10.1007/978-3-031-20068-7_31
scopus 2-s2.0-85144507207
url https://link.springer.com/chapter/10.1007/978-3-031-20068-7_31
rights © 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG. All rights reserved.
title FAST-VQA: efficient end-to-end video quality assessment with fragment sampling
topic Computer and Information Science
Video quality assessment
Fragments
url https://hdl.handle.net/10356/178453
https://link.springer.com/chapter/10.1007/978-3-031-20068-7_31