
Liquid Ensemble Selection for Continual Learning

Published on May 27, 2024

Abstract

Continual learning aims to enable machine learning models to acquire new knowledge from a shifting data distribution without forgetting what has already been learned. Such shifting distributions can be broken into disjoint subsets of related examples called contexts; training each ensemble member on a different context makes it possible for the ensemble as a whole to achieve much higher accuracy with less forgetting than a naive model. We address the problem of selecting which models within an ensemble should learn on any given data and which should predict. Drawing on work on delegative voting, we develop an algorithm that uses delegation to dynamically select which models in an ensemble are active. We explore various delegation methods and performance metrics, ultimately finding that delegation can provide a significant performance boost over naive learning in the face of distribution shifts.
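The delegation idea sketched in the abstract can be illustrated with a toy implementation. This is a minimal sketch under assumptions, not the paper's algorithm: here each member tracks a running accuracy, and members below an assumed accuracy threshold delegate their voting weight to the currently best-performing member. The `Member` class, the threshold, and the delegation rule are all illustrative choices.

```python
class Member:
    """One ensemble member with a running accuracy estimate (illustrative)."""
    def __init__(self, name):
        self.name = name
        self.correct = 0
        self.seen = 0

    def record(self, was_correct):
        """Update the running accuracy with one prediction outcome."""
        self.seen += 1
        self.correct += int(was_correct)

    def accuracy(self):
        return self.correct / self.seen if self.seen else 0.0


def delegate(members, threshold=0.6):
    """Members below the accuracy threshold delegate their vote to the
    best-performing member; the rest keep their own vote. Returns a dict
    mapping member names to total voting weight (weight is conserved)."""
    best = max(members, key=lambda m: m.accuracy())
    weights = {}
    for m in members:
        target = m if (m is best or m.accuracy() >= threshold) else best
        weights[target.name] = weights.get(target.name, 0) + 1
    return weights


# Toy usage with deterministic track records: a is strong, b and c are weak,
# so both delegate their weight to a.
a, b, c = Member("a"), Member("b"), Member("c")
for _ in range(9):
    a.record(True)
a.record(False)           # a: 9/10 correct
for i in range(10):
    b.record(i < 5)       # b: 5/10 correct
    c.record(i < 2)       # c: 2/10 correct

weights = delegate([a, b, c])
print(weights)  # {'a': 3}
```

In this sketch, a member's vote weight would then scale its contribution to the ensemble's prediction, so poorly performing members neither predict nor dilute the vote while they retrain on the new context.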

Article ID: 2024S8

Month: May

Year: 2024

Address: Online

Venue: The 37th Canadian Conference on Artificial Intelligence

Publisher: Canadian Artificial Intelligence Association

URL: https://caiac.pubpub.org/pub/7gegug1h

