JAIT 2024 Vol.15(5): 565-571
doi: 10.12720/jait.15.5.565-571

A Speed-up Channel Attention Technique for Accelerating the Learning Curve of a Binarized Squeeze-and-Excitation (SE) Based ResNet Model

Wu Shaoqing 1,* and Hiroyuki Yamauchi 2,*
1. Graduate School, Fukuoka Institute of Technology, Fukuoka, Japan
2. Department of Computer Science and Engineering, Fukuoka Institute of Technology, Fukuoka, Japan
Email: mfm22202@bene.fit.ac.jp (W.S.); yamauchi@fit.ac.jp (H.Y.)
*Corresponding author

Manuscript received December 7, 2023; revised December 26, 2023; accepted January 23, 2024; published May 10, 2024.

Abstract—The use of a 1-bit representation for network weights, as opposed to the conventional 32-bit representation, has been investigated to reduce the required power and memory footprint. Squeeze-and-Excitation (SE) based channel attention techniques aim to further reduce the number of parameters by eliminating redundant channels. However, this approach suffers from a significant drawback: an unstable and slow learning curve, especially compared to fitting the parameters in conventional SE networks. To address this issue, this paper presents the first attempt to accelerate the learning curve even with a 1-bit representation for the weights across the entire Squeeze-and-Excitation Residual Network (SEResNet14). The proposed technique within the SE module significantly speeds up channel attention, yielding a steeper learning curve for the network. We also extensively investigate the impact of activation functions within the SE module to understand their performance-enhancing attributes when applied with the proposed technique. Experimental results demonstrate that, even under stringent compression, an appropriate choice of activation function can still ensure the efficacy of our technique in the SE module. We found that the proposed technique results in: (1) a 60% reduction in the number of epochs required to achieve an error rate of 0.3; and (2) a decrease in the error rate of approximately 44% at the 10th epoch, compared to a baseline method that does not use the proposed scheme.
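For readers unfamiliar with the setting described in the abstract, the following is a minimal, illustrative sketch of an SE channel-attention block whose excitation weights are binarized to 1 bit via sign binarization with a straight-through estimator. The class names, the reduction ratio, and the choice of ReLU/sigmoid activations are assumptions for illustration only; they are not the authors' exact implementation or their proposed speed-up technique.

# Illustrative sketch (PyTorch): SE channel attention with 1-bit excitation weights.
# Names and details are assumptions, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BinarizeSTE(torch.autograd.Function):
    """Sign binarization with a clipped straight-through gradient estimator."""

    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)  # note: sign(0) = 0, tolerated here for brevity

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        # Pass gradients through only where |w| <= 1 (clipped STE).
        return grad_out * (w.abs() <= 1).float()


class BinarySELayer(nn.Module):
    """SE channel attention whose excitation (FC) weights are binarized to 1 bit."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // reduction, bias=False)
        self.fc2 = nn.Linear(channels // reduction, channels, bias=False)

    def forward(self, x):
        b, c, _, _ = x.shape
        s = F.adaptive_avg_pool2d(x, 1).view(b, c)   # squeeze: per-channel statistics
        w1 = BinarizeSTE.apply(self.fc1.weight)       # 1-bit excitation weights
        w2 = BinarizeSTE.apply(self.fc2.weight)
        z = F.relu(F.linear(s, w1))                   # activation choice affects learning (per the abstract)
        a = torch.sigmoid(F.linear(z, w2))            # channel attention scores in [0, 1]
        return x * a.view(b, c, 1, 1)                 # excitation: rescale channel feature maps


if __name__ == "__main__":
    x = torch.randn(2, 64, 8, 8)
    print(BinarySELayer(64)(x).shape)  # torch.Size([2, 64, 8, 8])

As the abstract notes, the paper's contribution concerns accelerating the learning of such binarized channel attention and studying how the activation inside the SE module affects that learning curve; the sketch above only fixes the structural context.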
 
Keywords—Residual Network 14 (ResNet14), CIFAR-10, Squeeze-and-Excitation (SE) attention mechanism, 1-bit quantization, model compression, activation functions, channel feature map binarization, ultra-compact AI deployment

Cite: Wu Shaoqing and Hiroyuki Yamauchi, "A Speed-up Channel Attention Technique for Accelerating the Learning Curve of a Binarized Squeeze-and-Excitation (SE) Based ResNet Model," Journal of Advances in Information Technology, Vol. 15, No. 5, pp. 565-571, 2024.

Copyright © 2024 by the authors. This is an open access article distributed under the Creative Commons Attribution-NonCommercial-NoDerivatives License (CC BY-NC-ND 4.0), which permits use, distribution, and reproduction in any medium, provided that the article is properly cited, the use is non-commercial, and no modifications or adaptations are made.