Sequential Normalization: Embracing Smaller Sample Sizes for Normalization
Main Authors: | Neofytos Dimitriou, Ognjen Arandjelović |
---|---|
Author Affiliation: | School of Computer Science, University of St Andrews, St Andrews KY16 9SX, UK |
Format: | Article |
Language: | English |
Published: | MDPI AG, 2022-07-01 |
Series: | Information (ISSN 2078-2489) |
DOI: | 10.3390/info13070337 |
Subjects: | batch normalization; ghost normalization; loss landscape; computer vision; neural networks; ImageNet |
Online Access: | https://www.mdpi.com/2078-2489/13/7/337 |
Normalization layers have proven effective for neural network optimization across a wide range of tasks, with batch normalization (BatchNorm) among the most successful approaches. The consensus is that better estimates of the BatchNorm normalization statistics (μ and σ²) in each mini-batch result in better optimization. In this work, we challenge this belief and experiment with a variant of BatchNorm known as GhostNorm which, despite normalizing smaller groups of samples within each mini-batch independently, i.e., μ and σ² are computed and applied per group rather than over the whole mini-batch, consistently outperforms BatchNorm. Next, we introduce sequential normalization (SeqNorm), the sequential application of this type of group-wise normalization across two dimensions of the input, and find that models trained with SeqNorm consistently outperform models trained with BatchNorm or GhostNorm on multiple image classification data sets.

Our contributions are as follows: (i) we uncover a source of regularization that is unique to GhostNorm, rather than simply inherited from BatchNorm, and illustrate its effects on the loss landscape; (ii) we introduce sequential normalization (SeqNorm), a new normalization layer that improves the regularization effects of GhostNorm; (iii) we compare both GhostNorm and SeqNorm against BatchNorm alone as well as in combination with other regularization techniques; and (iv) with both GhostNorm and SeqNorm we train models whose performance is consistently better than our baselines, including BatchNorm ones, on the standard image classification data sets CIFAR-10, CIFAR-100, and ImageNet (+0.2%, +0.7%, +0.4% for GhostNorm and +0.3%, +1.7%, +1.1% for SeqNorm, respectively).
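To make the mechanism concrete, below is a minimal sketch (not the authors' code) of how GhostNorm and SeqNorm could be implemented in PyTorch. The class names, the `num_splits` and `channel_groups` defaults, and the reading of SeqNorm's "two dimensions" as channel groups (GroupNorm) followed by batch groups (GhostNorm) are assumptions made for illustration; running statistics for inference are omitted.

```python
# Minimal sketch (assumed implementation, not the authors' code) of GhostNorm and
# SeqNorm for 4-D inputs of shape (N, C, H, W). num_splits and channel_groups are
# illustrative defaults; running statistics for inference are omitted.
import torch
import torch.nn as nn


class GhostNorm2d(nn.Module):
    """Splits each mini-batch into `num_splits` ghost batches and normalizes
    each one independently, with a single shared affine transform."""

    def __init__(self, num_features, num_splits=4, eps=1e-5):
        super().__init__()
        self.num_splits = num_splits
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(num_features))
        self.bias = nn.Parameter(torch.zeros(num_features))

    def forward(self, x):
        n, c, h, w = x.shape
        assert n % self.num_splits == 0, "batch size must divide into ghost batches"
        # View the mini-batch as (groups, samples-per-group, C, H, W).
        xg = x.view(self.num_splits, n // self.num_splits, c, h, w)
        # mu and sigma^2 are computed per ghost batch and per channel.
        mean = xg.mean(dim=(1, 3, 4), keepdim=True)
        var = xg.var(dim=(1, 3, 4), unbiased=False, keepdim=True)
        xg = (xg - mean) / torch.sqrt(var + self.eps)
        x = xg.view(n, c, h, w)
        return x * self.weight.view(1, c, 1, 1) + self.bias.view(1, c, 1, 1)


class SeqNorm2d(nn.Module):
    """Sequential normalization, assuming the two input dimensions are the
    channel dimension (GroupNorm) followed by the batch dimension (GhostNorm)."""

    def __init__(self, num_features, channel_groups=8, num_splits=4):
        super().__init__()
        self.channel_norm = nn.GroupNorm(channel_groups, num_features)
        self.ghost_norm = GhostNorm2d(num_features, num_splits=num_splits)

    def forward(self, x):
        return self.ghost_norm(self.channel_norm(x))
```

As a usage illustration, `SeqNorm2d(64)(torch.randn(32, 64, 8, 8))` would normalize each sample over 8 channel groups and then normalize the batch of 32 as 4 independent ghost batches of 8 samples, assuming the channel count and batch size divide evenly as above.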