
Understanding Capacity Saturation in Incremental Learning

Published on Jun 08, 2021


In class-incremental learning, a model continuously learns from a sequential data stream in which new classes are introduced. There are two main challenges in class-incremental learning: catastrophic forgetting and capacity saturation. In this work, we focus on capacity saturation, where a learner is unable to achieve good generalization due to its limited capacity. To understand how to increase model capacity, we present the continual architecture design problem: at any given step, a continual learner needs to adapt its architecture to achieve a good balance between performance, computational cost, and memory limitations. To address this problem, we propose Continual Neural Architecture Search (CNAS), which takes advantage of the sequential nature of class-incremental learning to efficiently identify strong architectures. CNAS consists of a task network for image classification and a reinforcement learning agent serving as the meta-controller for architecture adaptation. We also accelerate learning by transferring weights from the previous learning step, saving a large amount of computational resources. We evaluate CNAS on the CIFAR-100 dataset in several incremental learning scenarios with limited computational power (1 GPU). We empirically demonstrate that CNAS can mitigate capacity saturation and achieve performance comparable to full architecture search while being at least one order of magnitude more efficient.
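The adaptation loop described above can be illustrated with a minimal sketch. Everything here is assumed for illustration: the architecture is reduced to a list of layer widths, the meta-controller's decision is stubbed out with a random choice rather than a trained RL policy, and `transfer_weights` stands in for the paper's weight reuse between steps.

```python
# Illustrative sketch of a CNAS-style continual architecture adaptation loop.
# Names (adapt, transfer_weights, continual_search) are hypothetical, not the
# paper's implementation; the random action stands in for the RL meta-controller.
import random

ACTIONS = ("keep", "widen", "deepen")

def adapt(arch, action):
    """Return a new architecture (list of layer widths) after one action."""
    arch = list(arch)
    if action == "widen":
        arch[-1] *= 2            # double the width of the last layer
    elif action == "deepen":
        arch.append(arch[-1])    # append a new layer of the same width
    return arch                  # "keep" leaves the architecture unchanged

def transfer_weights(old_weights, new_arch):
    """Reuse old per-layer weights where available; initialize the rest."""
    return [old_weights[i] if i < len(old_weights) else 0.0
            for i in range(len(new_arch))]

def continual_search(num_steps, seed=0):
    rng = random.Random(seed)
    arch, weights = [16], [1.0]      # start from a small task network
    for _ in range(num_steps):       # one step per batch of new classes
        action = rng.choice(ACTIONS) # meta-controller's decision (stubbed)
        arch = adapt(arch, action)
        weights = transfer_weights(weights, arch)
    return arch, weights

if __name__ == "__main__":
    arch, weights = continual_search(5)
    print(arch, weights)
```

In the actual method the controller would be rewarded by validation accuracy under the compute and memory budget, and weight transfer would copy matching tensors into the adapted task network rather than per-layer scalars.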

Article ID: 2021L07

Month: May

Year: 2021

Address: Online

Venue: Canadian Conference on Artificial Intelligence

Publisher: Canadian Artificial Intelligence Association


