Learn&Drop: Fast Learning of CNNs based on Layer Dropping

TL;DR

Learn&Drop accelerates CNN training by layer dropping, reducing ResNet-152 forward propagation FLOPs by 83.74%.

cs.CV · 2026-04-26 · 2 citations
Giorgio Cruciata, Luca Cruciata, Liliana Lo Presti, Jan Van Gemert, Marco La Cascia
deep learning · convolutional neural networks · training efficiency · layer dropping · forward propagation

Key Findings

Methodology

This paper introduces a novel method called Learn&Drop, which evaluates scores during training to determine whether each layer's parameters should continue learning. Based on these scores, the network is scaled down, reducing the number of parameters to be learned and speeding up training. Unlike existing methods, this approach focuses on reducing the number of operations during forward propagation in training rather than compressing the network for inference or limiting operations during backpropagation.
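The scoring step can be sketched as follows. The paper's exact scoring rule is not reproduced here, so the change-magnitude criterion, function names, and threshold below are illustrative assumptions only.

```python
# Illustrative sketch: score each layer by how much its parameters changed
# between two checkpoints, and flag layers whose parameters have nearly
# stopped changing as candidates for dropping. The scoring rule and the
# threshold are assumptions, not the paper's exact formulation.

def layer_score(prev_params, curr_params):
    """Mean absolute parameter change for one layer."""
    deltas = [abs(c - p) for p, c in zip(prev_params, curr_params)]
    return sum(deltas) / len(deltas)

def layers_to_drop(prev, curr, threshold=1e-3):
    """Indices of layers whose score fell below the drop threshold."""
    return [i for i, (p, c) in enumerate(zip(prev, curr))
            if layer_score(p, c) < threshold]
```

A layer whose weights have barely moved since the last checkpoint gets a near-zero score and becomes a candidate for removal from subsequent training iterations.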

Key Results

  • Forward propagation FLOPs during training are reduced by 17.83% for VGG-11 and by 83.74% for ResNet-152.
  • On the MNIST, CIFAR-10, and Imagenette datasets, training time is more than halved while maintaining accuracy comparable to conventional training; for ResNet-152 the reduction exceeds 50%.
  • Applying the method to both VGG and ResNet architectures validates its effectiveness across CNNs of different depths.

Significance

This research holds significant implications for both academia and industry. It addresses the long-standing issue of lengthy training times and high computational costs in deep learning models, particularly in applications requiring online training or model fine-tuning, such as visual tracking and recommendation systems. By reducing training time, this method enhances model real-time performance and adaptability.

Technical Contribution

The technical contribution of this paper lies in proposing a new training strategy that improves training efficiency by gradually dropping convolutional layers during training. This differs from traditional pruning methods, which typically compress the network during inference. The Learn&Drop method temporarily drops layers during training, accelerating both forward and backward propagation.

Novelty

This method is the first to improve CNN training efficiency through layer dropping during training. Unlike existing layer-freezing methods, it does not merely stop gradient computation but physically removes layers from the forward pass. The innovation lies in temporarily dropping layers during training rather than permanently compressing the network.

Limitations

  • In some cases, prematurely dropping layers may lead to decreased model accuracy, especially in complex or noisy datasets.
  • The method requires additional computational resources to store and process feature maps from dropped layers, which may increase memory usage.
  • The method's effectiveness may not meet expectations in certain specific network architectures or tasks.

Future Work

Future research directions include validating the method on additional network architectures and datasets, optimizing how feature maps from dropped layers are stored and processed to reduce the memory overhead, and combining the approach with other training-acceleration techniques, such as mixed-precision training, to further improve efficiency.

AI Executive Summary

The use of deep learning models is becoming increasingly widespread, particularly in the field of computer vision, where convolutional neural networks (CNNs) are highly regarded for their exceptional performance. However, the training time for CNNs is often lengthy, especially when dealing with large datasets or limited hardware resources. Existing methods primarily focus on network compression during inference or limiting operations during backpropagation, but this paper proposes a novel training strategy that focuses on reducing operations during forward propagation in training.

The Learn&Drop method evaluates scores during training to determine whether each layer's parameters should continue learning. Based on these scores, the network is scaled down, reducing the number of parameters to be learned and speeding up training. This method has been validated on VGG and ResNet architectures, with experiments showing that on the MNIST, CIFAR-10, and Imagenette datasets, training time is more than halved without significantly impacting accuracy.

The core technical principle of this method is to evaluate each layer's learning status through gradient monitoring and decide whether to drop the layer based on its score. Dropped layers no longer participate in subsequent training processes, reducing the computational load of both forward and backward propagation. Experiments demonstrate that with the Learn&Drop method, the forward propagation FLOPs of VGG-11 are reduced by 17.83%, while ResNet-152 sees a reduction of 83.74%.

This research holds significant implications for both academia and industry. It addresses the long-standing issue of lengthy training times and high computational costs in deep learning models, particularly in applications requiring online training or model fine-tuning, such as visual tracking and recommendation systems. By reducing training time, this method enhances model real-time performance and adaptability.

However, the method also has some limitations. For instance, prematurely dropping layers may lead to decreased model accuracy, especially in complex or noisy datasets. Additionally, the method requires additional computational resources to store and process feature maps from dropped layers, which may increase memory usage. Future research directions include validating the method's effectiveness on more network architectures and datasets, exploring feature map storage and processing optimization without increasing memory usage, and investigating the combination with other training acceleration techniques like mixed-precision training to further enhance efficiency.

Deep Analysis

Background

In recent years, the application of deep learning models has become increasingly widespread across various fields, particularly in computer vision, natural language processing, and speech recognition. Convolutional neural networks (CNNs) are a popular deep learning model known for their exceptional performance in processing image and video data. CNNs consist of multiple layers that process input data by mapping it to the expected output. However, the training time for CNNs is often lengthy, especially when dealing with large datasets or limited hardware resources. As modern deep learning models continue to increase in depth and size, although this typically leads to better performance, it also comes with significant computational costs. Therefore, improving CNN training efficiency without compromising accuracy has become an important research direction.

Core Problem

The training time for deep learning models is often lengthy, particularly when dealing with large datasets or limited hardware resources. Existing methods primarily focus on network compression during inference or limiting operations during backpropagation, but these methods have limited effectiveness in applications requiring online training or model fine-tuning, such as visual tracking and recommendation systems. Therefore, reducing computational load during training to improve efficiency has become a pressing issue.

Innovation

This paper introduces a novel method called Learn&Drop, which evaluates scores during training to determine whether each layer's parameters should continue learning. Unlike existing layer-freezing methods, it does not merely stop gradient computation but physically removes layers, accelerating both forward and backward propagation. Layers are dropped only temporarily during training rather than permanently, so the full model can still be used during inference.

Methodology

  • 1 Evaluate scores during training to determine whether each layer's parameters should continue learning.
  • 2 Scale down the network based on these scores, reducing the number of parameters to be learned.
  • 3 Dropped layers no longer participate in subsequent training, reducing the computational load of both forward and backward propagation.
  • 4 Use the full model during inference for predictions.
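The steps above can be sketched as a toy training setup. The `Layer` class, the identity pass-through for dropped layers, and the restore step are illustrative assumptions, not the paper's implementation.

```python
# Toy sketch of the Learn&Drop loop: dropped layers become identity
# pass-throughs during training, and the full stack is restored for
# inference. All names and drop mechanics here are illustrative.

class Layer:
    def __init__(self, name, scale):
        self.name = name
        self.scale = scale      # stand-in for the layer's computation
        self.dropped = False

    def forward(self, x):
        # A dropped layer costs nothing: input passes straight through.
        return x if self.dropped else x * self.scale

def forward_pass(layers, x):
    for layer in layers:
        x = layer.forward(x)
    return x

def drop_layers(layers, names):
    """Remove the named layers from training forward/backward passes."""
    for layer in layers:
        if layer.name in names:
            layer.dropped = True

def restore_all(layers):
    """Re-enable every layer so the full model is used at inference."""
    for layer in layers:
        layer.dropped = False
```

With two layers scaling by 2 and 3, dropping the second turns `forward_pass(layers, 1)` from 6 into 2; `restore_all` brings the full model back for inference.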

Experiments

Experiments were conducted on VGG and ResNet architectures using the MNIST, CIFAR-10, and Imagenette datasets. Results show that with the Learn&Drop method, the forward propagation FLOPs of VGG-11 are reduced by 17.83%, while ResNet-152 sees a reduction of 83.74%. Training time is more than halved on these datasets without significantly impacting accuracy, validating the method's effectiveness across different depths of CNNs.

Results

With Learn&Drop, forward propagation FLOPs during training drop by 17.83% for VGG-11 and by 83.74% for ResNet-152. Across MNIST, CIFAR-10, and Imagenette, training time is more than halved without significantly impacting accuracy, confirming the method's effectiveness across CNNs of different depths.

Applications

The method holds significant implications for applications requiring online training or model fine-tuning, such as visual tracking and recommendation systems. By reducing training time, the method enhances model real-time performance and adaptability.

Limitations & Outlook

In some cases, prematurely dropping layers may reduce model accuracy, especially on complex or noisy datasets. The method also needs extra computational resources to store and process the feature maps of dropped layers, which may increase memory usage. Future work includes validating the method on more network architectures and datasets, and optimizing feature-map storage and processing to limit that memory overhead.

Plain Language (accessible to non-experts)

Imagine you're in a kitchen cooking a meal. You have a lot of pots and pans, each capable of making different dishes, but you don't need to use all of them every time. The Learn&Drop method is like a smart chef who observes the usage of each pot and pan during cooking and decides which ones can be temporarily set aside to speed up the cooking process. This way, you not only save time but also ensure that each dish tastes just as good. In training neural networks, this method observes each layer's parameter changes and decides whether to continue learning that layer, reducing computational load and speeding up training.

ELI14 (explained like you're 14)

Hey there! Imagine you're playing a super complex game with lots of tasks to complete in each level. Usually, you'd spend a lot of time completing each task, but sometimes you notice that some tasks aren't that important and can be skipped. The Learn&Drop method is like a game assistant that helps you identify which tasks can be skipped, so you can level up faster! In training neural networks, it observes each layer's learning status and decides whether to continue training that layer, speeding up the whole process. Isn't that cool?

Glossary

Convolutional Neural Network (CNN)

A type of deep learning model particularly adept at processing image data. CNNs consist of multiple layers that extract features through convolution operations.

In this paper, CNNs are the primary focus for validating the effectiveness of the Learn&Drop method.

Forward Propagation

A step in neural network training where data is passed from the input layer to the output layer. Each layer processes input data based on its weight matrix and activation function.

In this paper, forward propagation's computational load is a key focus for optimization by the Learn&Drop method.

Backward Propagation

A step in neural network training where weights are updated by computing the gradient of the loss function with respect to each weight.

In this paper, backward propagation's computational load is also a key focus for optimization by the Learn&Drop method.

Gradient

The derivative of the loss function with respect to model parameters, indicating how changes in parameters affect the loss.

In this paper, gradients are used to evaluate each layer's learning status.
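As a concrete reminder of what a gradient measures, here is a central-difference estimate for a scalar function; this is purely illustrative and unrelated to the paper's code.

```python
# Numerical gradient by central differences: how a small change in a
# parameter changes the function value. Purely illustrative.

def numeric_grad(f, x, eps=1e-6):
    return (f(x + eps) - f(x - eps)) / (2 * eps)
```

For f(x) = x², the gradient at x = 3 is about 6; a layer whose gradients stay near zero is, in the paper's terms, no longer learning.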

Layer Dropping

Temporarily removing certain layers during training to reduce computational load and speed up training.

In this paper, layer dropping is the core technique of the Learn&Drop method.

VGG

A convolutional neural network architecture known for its depth and small convolutional filters.

In this paper, VGG is used to validate the effectiveness of the Learn&Drop method.

ResNet

A convolutional neural network architecture that uses residual blocks to alleviate the vanishing gradient problem.

In this paper, ResNet is used to validate the effectiveness of the Learn&Drop method.

FLOPs

Floating-point operations, a metric for measuring computational complexity.

In this paper, FLOPs are used to evaluate the optimization effect of the Learn&Drop method on computational load.
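A common way to count convolution FLOPs is shown below, counting one multiply-accumulate as two operations; conventions vary between papers, and this is not necessarily the accounting used by the authors.

```python
# FLOPs for one k x k convolution producing a c_out x h_out x w_out
# output; one multiply-accumulate (MAC) counts as 2 FLOPs.

def conv2d_flops(c_in, c_out, k, h_out, w_out):
    macs = c_in * k * k * c_out * h_out * w_out
    return 2 * macs
```

A 3×3 convolution from 3 to 64 channels on a 32×32 output map costs about 3.5 MFLOPs; removing such layers from the training forward pass is where Learn&Drop's savings come from.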

Feature Map

Multidimensional data output by convolutional layers, representing features of input data.

In this paper, feature maps are used to continue training the remaining network after layer dropping.

Batch Normalization

A method to accelerate neural network training by normalizing inputs to each layer, reducing internal covariate shift.

In this paper, batch normalization is used to enhance training efficiency in VGG networks.
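The normalization step itself is simple; a minimal sketch, omitting the learned scale and shift parameters that real batch-normalization layers also apply:

```python
# Normalize a 1-D batch to zero mean and (near) unit variance.
# Learned affine parameters (gamma, beta) are omitted for brevity.

def batch_norm(xs, eps=1e-5):
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return [(x - mean) / (var + eps) ** 0.5 for x in xs]
```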

Open Questions (unanswered questions from this research)

  • 1 While the Learn&Drop method performs well on multiple datasets and network architectures, its performance on more complex tasks and larger datasets still needs further validation. Particularly in applications involving multimodal data or requiring real-time responses, the method's effectiveness and applicability remain open questions.
  • 2 In some cases, prematurely dropping layers may lead to decreased model accuracy. How to more accurately evaluate each layer's learning status to avoid unnecessary layer dropping is a question that requires further research.
  • 3 The method requires additional computational resources to store and process feature maps from dropped layers, which may increase memory usage. How to optimize feature map storage and processing without increasing memory usage is a direction worth exploring.
  • 4 Although the method performs well on VGG and ResNet architectures, its applicability to other types of network architectures still needs further research. Particularly in applications involving recurrent neural networks (RNNs) or transformers, the method's effectiveness needs validation.
  • 5 In practical applications, how to combine other training acceleration techniques, such as mixed-precision training, to further enhance training efficiency is a direction worth exploring.

Applications

Immediate Applications

Visual Tracking

In visual tracking applications, the target's appearance changes over time, requiring the model to quickly adapt to these changes. The Learn&Drop method speeds up training, enhancing the model's real-time performance and adaptability.

Recommendation Systems

Recommendation systems require regular retraining to adapt to changes in user preferences. The Learn&Drop method reduces training time, improving the update efficiency of recommendation systems.

Online Learning

In scenarios where data arrives continuously, models need to perform online learning. The Learn&Drop method reduces training time, making online learning more efficient.

Long-term Vision

Autonomous Driving

In autonomous driving, vehicles need to process large amounts of sensor data in real-time. The Learn&Drop method can help speed up model training, enhancing system real-time performance and safety.

Smart Cities

Smart cities need to process large amounts of real-time data to optimize resource allocation and management. The Learn&Drop method can help improve data processing and analysis efficiency, driving the development of smart cities.

Abstract

This paper proposes a new method to improve the training efficiency of deep convolutional neural networks. During training, the method evaluates scores to measure how much each layer's parameters change and whether the layer will continue learning or not. Based on these scores, the network is scaled down such that the number of parameters to be learned is reduced, yielding a speed up in training. Unlike state-of-the-art methods that try to compress the network to be used in the inference phase or to limit the number of operations performed in the backpropagation phase, the proposed method is novel in that it focuses on reducing the number of operations performed by the network in the forward propagation during training. The proposed training strategy has been validated on two widely used architecture families: VGG and ResNet. Experiments on MNIST, CIFAR-10 and Imagenette show that, with the proposed method, the training time of the models is more than halved without significantly impacting accuracy. The FLOPs reduction in the forward propagation during training ranges from 17.83% for VGG-11 to 83.74% for ResNet-152. These results demonstrate the effectiveness of the proposed technique in speeding up learning of CNNs. The technique will be especially useful in applications where fine-tuning or online training of convolutional models is required, for instance because data arrive sequentially.

cs.CV cs.AI cs.NE

