A self‐distillation object segmentation method via frequency domain knowledge augmentation
Main Authors:
Format: Article
Language: English
Published: Wiley, 2023-04-01
Series: IET Computer Vision
Subjects:
Online Access: https://doi.org/10.1049/cvi2.12170
Summary: Most self‐distillation methods need complex auxiliary teacher structures and require a large number of training samples for the object segmentation task. To address this challenge, a self‐distillation object segmentation method via frequency domain knowledge augmentation is proposed. Firstly, an object segmentation network that efficiently integrates multi‐level features is constructed. Secondly, a pixel‐wise virtual teacher generation model is proposed to drive the transfer of pixel‐wise knowledge to the object segmentation network through self‐distillation learning, so as to improve its generalisation ability. Finally, a frequency domain knowledge adaptive generation method is proposed to augment data, which utilises a differentiable quantisation operator to dynamically adjust a learnable pixel‐wise quantisation table. Moreover, we reveal that convolutional neural networks are more inclined to learn low‐frequency information during training. Experiments on five object segmentation datasets show that the proposed method effectively enhances the performance of the object segmentation network. The performance gain of our method is larger than that of recent self‐distillation methods, and the average Fβ and mIoU are improved by about 1.5% and 3.6%, respectively, compared with a typical feature‐refinement self‐distillation method.
ISSN: 1751-9632, 1751-9640
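
The summary describes two components that are easier to grasp with a small amount of code. The first is pixel‐wise self‐distillation from a virtual teacher. The PyTorch sketch below assumes the virtual teacher is obtained by softening the ground‐truth mask (a label‐smoothing stand‐in for the paper's generation model) and that knowledge transfer is a pixel‐wise KL term added to the usual segmentation loss; every name, weight, and temperature here is illustrative rather than the authors' exact formulation.

```python
# Minimal sketch of pixel-wise self-distillation with a "virtual teacher".
# Assumption: the teacher's soft targets come from smoothing the ground-truth
# mask, so no auxiliary teacher network is required.

import torch
import torch.nn.functional as F

def virtual_teacher_targets(gt_mask: torch.Tensor, epsilon: float = 0.1) -> torch.Tensor:
    """Turn a binary ground-truth mask (B, 1, H, W) into soft pixel-wise targets."""
    return gt_mask * (1.0 - epsilon) + (1.0 - gt_mask) * epsilon

def self_distillation_loss(student_logits: torch.Tensor,
                           gt_mask: torch.Tensor,
                           alpha: float = 0.5,
                           temperature: float = 2.0) -> torch.Tensor:
    """Combine the usual segmentation loss with a pixel-wise distillation term."""
    # Hard-label segmentation loss.
    seg_loss = F.binary_cross_entropy_with_logits(student_logits, gt_mask)

    # Pixel-wise KL divergence between the virtual teacher and the student.
    teacher_prob = virtual_teacher_targets(gt_mask)
    student_prob = torch.sigmoid(student_logits / temperature)
    kl = teacher_prob * torch.log((teacher_prob + 1e-7) / (student_prob + 1e-7)) + \
         (1 - teacher_prob) * torch.log((1 - teacher_prob + 1e-7) / (1 - student_prob + 1e-7))
    distill_loss = kl.mean()

    return (1 - alpha) * seg_loss + alpha * (temperature ** 2) * distill_loss

# Usage: logits from any segmentation network with a single-channel output.
if __name__ == "__main__":
    logits = torch.randn(2, 1, 64, 64)                # student predictions
    mask = (torch.rand(2, 1, 64, 64) > 0.5).float()   # binary ground truth
    print(self_distillation_loss(logits, mask).item())
```

The point of the sketch is that the soft targets come from the labels themselves, which is how self‐distillation can avoid the complex auxiliary teacher structures the summary mentions.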
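
The second component is frequency domain knowledge augmentation through a differentiable quantisation operator with a learnable pixel‐wise quantisation table. The following sketch makes loose assumptions: an 8x8 block DCT, a JPEG‐style table, and straight‐through rounding so gradients can reach the table; the paper's exact operator may differ.

```python
# Minimal sketch of frequency-domain augmentation via differentiable quantisation.
# Assumptions: 8x8 block DCT, a single learnable quantisation table, and a
# straight-through estimator for the rounding step. Illustrative only.

import math
import torch
import torch.nn as nn

def dct_matrix(n: int = 8) -> torch.Tensor:
    """Orthonormal DCT-II basis so that X = C @ x @ C.T and x = C.T @ X @ C."""
    k = torch.arange(n).float().unsqueeze(1)
    i = torch.arange(n).float().unsqueeze(0)
    c = torch.cos(math.pi * (2 * i + 1) * k / (2 * n)) * (2.0 / n) ** 0.5
    c[0] = (1.0 / n) ** 0.5
    return c

class DifferentiableQuantisation(nn.Module):
    """Quantise 8x8 DCT blocks with a learnable table; rounding is straight-through."""

    def __init__(self, block: int = 8):
        super().__init__()
        self.block = block
        self.register_buffer("C", dct_matrix(block))
        # Learnable quantisation table, initialised to a mild uniform step size.
        self.q_table = nn.Parameter(torch.full((block, block), 8.0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Split the image into non-overlapping 8x8 blocks: (B, C, H/8, W/8, 8, 8).
        blocks = x.unfold(2, self.block, self.block).unfold(3, self.block, self.block)
        coeff = self.C @ blocks @ self.C.T                           # forward block DCT
        q = torch.clamp(self.q_table, min=1.0)
        scaled = coeff / q
        rounded = scaled + (torch.round(scaled) - scaled).detach()   # straight-through round
        recon = self.C.T @ (rounded * q) @ self.C                    # dequantise + inverse DCT
        # Reassemble the blocks back into a full image.
        return recon.permute(0, 1, 2, 4, 3, 5).reshape(b, c, h, w)

# Usage: augment a batch whose height and width are multiples of 8.
if __name__ == "__main__":
    aug = DifferentiableQuantisation()
    images = torch.rand(2, 3, 64, 64)
    out = aug(images)
    print(out.shape, (out - images).abs().mean().item())
```

The straight‐through estimator keeps hard rounding in the forward pass while letting gradients flow into the quantisation table, which is what makes the table learnable and lets the augmentation emphasise or suppress particular frequency bands during training.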