A key component of a Lifelong Learning Agent is the integration, or consolidation, of new task knowledge with prior task knowledge. Consolidation requires solving several problems, most notably catastrophic forgetting, where developing a representation for a new task degrades the accuracy of previously learned tasks. This paper extends our prior work on consolidation using multiple task learning (MTL) networks and a task rehearsal, or replay, approach. The goal is to maintain the functional stability of the MTL network's models of prior tasks while providing the representational plasticity needed to integrate new task knowledge into the same network. Our approach uses (1) a conditional variational autoencoder (CVAE) to generate accurate pseudo-examples (PEs) of prior tasks, (2) sweep rehearsal, which requires only a small number of PEs per training iteration, (3) appropriate weighting of PEs to ensure consolidation of new task knowledge with prior task knowledge, and (4) a novel network architecture we call MTL with context inputs (MTLc), which combines the best of the standard MTL and context-sensitive MTL (csMTL) architectures. Sequential learning of twenty classification tasks drawn from the MNIST and Fashion-MNIST datasets shows that our CVAE-based approach to generating accurate PEs is promising and that MTLc outperforms both MTL and csMTL, with minimal loss of task accuracy over the sequence of tasks.
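The sweep-rehearsal and PE-weighting ideas in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the function `make_sweep_batch`, the toy per-task generators standing in for trained CVAE decoders, and the specific weight values are all assumptions made for the sake of the example.

```python
import random

def make_sweep_batch(new_examples, pe_generators, pes_per_task=2, pe_weight=0.5):
    """Build one training iteration's batch for sweep rehearsal.

    New-task examples carry loss weight 1.0; only a few pseudo-examples
    (PEs) per prior task are drawn each iteration, each with a smaller
    weight so that new knowledge consolidates without overwhelming old.
    Returns a shuffled list of (x, y, weight) triples.
    """
    batch = [(x, y, 1.0) for x, y in new_examples]
    for gen in pe_generators:            # one PE generator per prior task
        for _ in range(pes_per_task):    # small number of PEs per iteration
            x, y = gen()
            batch.append((x, y, pe_weight))
    random.shuffle(batch)
    return batch

# Toy stand-ins for trained CVAE decoders of three prior tasks:
# each returns a (fake input, label) pair for its task.
priors = [lambda t=t: ((t, random.random()), t % 2) for t in range(3)]

# A minibatch of four examples from the new task.
new_batch = [((99, random.random()), 1) for _ in range(4)]

batch = make_sweep_batch(new_batch, priors)
```

In a real setting the per-example weights would multiply each example's loss term during backpropagation, and the generators would be the CVAE conditioned on a prior task's identity.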
Article ID: 2024S6
Month: May
Year: 2024
Address: Online
Venue: The 37th Canadian Conference on Artificial Intelligence
Publisher: Canadian Artificial Intelligence Association
URL: https://caiac.pubpub.org/pub/upkiq98d