A self‐distillation object segmentation method via frequency domain knowledge augmentation
Abstract: Most self‐distillation methods need complex auxiliary teacher structures and require large numbers of training samples for the object segmentation task. To address this challenge, a self‐distillation object segmentation method via frequency domain knowledge augmentation is proposed. Firstly, an object segmentation network that efficiently integrates multi‐level features is constructed. Secondly, a pixel‐wise virtual teacher generation model is proposed to drive the transfer of pixel‐wise knowledge to the object segmentation network through self‐distillation learning, so as to improve its generalisation ability. Finally, a frequency domain knowledge adaptive generation method is proposed to augment the data; it uses a differentiable quantisation operator to dynamically adjust a learnable pixel‐wise quantisation table. Moreover, we reveal that convolutional neural networks are more inclined to learn low‐frequency information during training. Experiments on five object segmentation datasets show that the proposed method effectively enhances the performance of the object segmentation network. Its performance gain exceeds that of recent self‐distillation methods, and the average Fβ and mIoU are increased by about 1.5% and 3.6% compared with a typical feature‐refinement self‐distillation method.
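To make the pixel‐wise self‐distillation idea concrete, the following is a minimal PyTorch‐style sketch of how a virtual‐teacher term could be combined with the ordinary segmentation loss. The teacher construction (label‐smoothed soft targets built from the ground‐truth mask), the function names and the weighting parameters are illustrative assumptions, not the paper's actual virtual teacher generation model.

```python
# Minimal sketch of pixel-wise self-distillation with a "virtual teacher".
# Assumption: the teacher is approximated by label-smoothed soft targets
# derived from the ground-truth mask; the paper's model may differ.
import torch
import torch.nn.functional as F

def virtual_teacher_targets(gt_mask: torch.Tensor, eps: float = 0.1) -> torch.Tensor:
    """Turn a binary ground-truth mask (B, 1, H, W) into soft pixel-wise targets."""
    return gt_mask * (1.0 - eps) + (1.0 - gt_mask) * eps

def self_distillation_loss(student_logits: torch.Tensor,
                           gt_mask: torch.Tensor,
                           temperature: float = 2.0,
                           alpha: float = 0.5) -> torch.Tensor:
    """Ordinary segmentation loss plus a pixel-wise distillation term."""
    # Standard supervised segmentation loss.
    seg_loss = F.binary_cross_entropy_with_logits(student_logits, gt_mask)

    # Pixel-wise distillation: match the student's softened prediction to the
    # soft targets produced by the virtual teacher.
    teacher_soft = virtual_teacher_targets(gt_mask)
    student_soft = torch.sigmoid(student_logits / temperature)
    kd_loss = F.binary_cross_entropy(student_soft, teacher_soft)

    return (1.0 - alpha) * seg_loss + alpha * kd_loss
```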
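The frequency‐domain augmentation can likewise be sketched as a learnable quantisation table applied to transform coefficients, with a straight‐through estimator making the rounding step differentiable. The full‐image DCT, the `FrequencyAugment` module name and the per‐coefficient table shape below are assumptions for illustration; the paper's operator and table parameterisation may differ.

```python
# Sketch of frequency-domain augmentation with a learnable quantisation table
# and straight-through (differentiable) rounding. The DCT implementation and
# module interface are illustrative assumptions.
import math
import torch
import torch.nn as nn

def dct_matrix(n: int) -> torch.Tensor:
    """Orthonormal DCT-II transform matrix of size n x n."""
    k = torch.arange(n).float()
    basis = torch.cos(math.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    basis[0] *= 1.0 / math.sqrt(2)
    return basis * math.sqrt(2.0 / n)

class FrequencyAugment(nn.Module):
    def __init__(self, height: int, width: int):
        super().__init__()
        # One learnable quantisation step per frequency coefficient.
        self.q_table = nn.Parameter(torch.ones(1, 1, height, width))
        self.register_buffer("dct_h", dct_matrix(height))
        self.register_buffer("dct_w", dct_matrix(width))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W). Forward 2-D DCT via matrix multiplication.
        coeffs = self.dct_h @ x @ self.dct_w.t()

        # Quantise with the learnable table; round with a straight-through
        # estimator so gradients still reach q_table.
        q = self.q_table.clamp(min=1e-3)
        scaled = coeffs / q
        rounded = scaled + (scaled.round() - scaled).detach()
        coeffs_hat = rounded * q

        # Inverse DCT (orthonormal matrices, so the transpose inverts).
        return self.dct_h.t() @ coeffs_hat @ self.dct_w
```

The clamp keeps every quantisation step positive, and the detached residual passes gradients through the rounding, so the table can be updated jointly with the segmentation network during training.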
| Main Authors: | Lei Chen, Tieyong Cao, Yunfei Zheng, Zheng Fang |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Wiley, 2023-04-01 |
| Series: | IET Computer Vision |
| Subjects: | computer vision; convolutional neural nets; image segmentation |
| Online Access: | https://doi.org/10.1049/cvi2.12170 |
| ISSN: | 1751-9632, 1751-9640 |
| Author Affiliation: | The Army Engineering University of PLA, Nanjing, China |
| Volume / Issue / Pages: | Vol. 17, Iss. 3, pp. 341–351 |