
Adaptive Learning Rates for Gradient Boosting Machines


Published on May 27, 2024

Abstract

Gradient Boosting Machines (GBMs) are widely applicable machine learning algorithms that have demonstrated top performance in a variety of fields. In this paper, we explore the potential of adaptive learning rates to accelerate convergence in GBMs. We introduce a novel boosting algorithm called Delta-Bar-Delta (DBD) Boosting that leverages insights from the steepest-descent algorithm of the same name. Through a series of experiments, we show improved performance over the baseline GBM model. We also show that our proposed DBD Boosting algorithm can be conveniently combined with other optimization improvements, such as momentum and Nesterov's Accelerated Gradient. We perform hyperparameter tuning and evaluate our algorithm on a series of classification and regression tasks. Our findings demonstrate an empirically improved convergence rate compared to existing approaches. Furthermore, we observe and discuss intriguing behaviors related to adaptive learning rates within the context of GBMs, highlighting the intricate dynamics of our proposed method. This research contributes to the ongoing advancement of gradient boosting techniques in machine learning, offering new perspectives and tools for improved convergence and faster training.
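
To make the idea concrete, the following is a minimal Python sketch of how a delta-bar-delta style rule could adapt the shrinkage of a squared-loss GBM: the learning rate grows additively while consecutive pseudo-residual directions agree and shrinks multiplicatively when they disagree. The weak learner, the parameter names (kappa, phi, theta), and the way the rule is attached to the boosting step are illustrative assumptions, not the exact DBD Boosting algorithm described in the paper.

# Illustrative sketch only: a squared-loss GBM whose shrinkage is adapted
# with a delta-bar-delta style rule. Parameter names and the mapping of the
# rule onto boosting are assumptions, not the authors' exact DBD Boosting.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def dbd_gbm_fit(X, y, n_rounds=100, lr0=0.1, kappa=0.01, phi=0.5, theta=0.7):
    F = np.full(len(y), y.mean())        # initial constant prediction
    lr = lr0                             # adaptive shrinkage (learning rate)
    bar_delta = np.zeros(len(y))         # exponentially averaged gradient
    trees, lrs = [], []
    for _ in range(n_rounds):
        residual = y - F                 # negative gradient of squared loss
        tree = DecisionTreeRegressor(max_depth=3).fit(X, residual)
        # Delta-bar-delta rule: compare the current gradient with its running
        # average; agreement -> additive increase, disagreement -> multiplicative decay.
        agreement = np.dot(residual, bar_delta)
        if agreement > 0:
            lr += kappa
        elif agreement < 0:
            lr *= (1.0 - phi)
        bar_delta = theta * bar_delta + (1.0 - theta) * residual
        F += lr * tree.predict(X)        # boosting update with adapted shrinkage
        trees.append(tree)
        lrs.append(lr)
    return trees, lrs, y.mean()

def dbd_gbm_predict(X, trees, lrs, base):
    pred = np.full(X.shape[0], base)
    for tree, lr in zip(trees, lrs):
        pred += lr * tree.predict(X)
    return pred

Because each round stores its own learning rate, the sketch also illustrates why such a scheme composes naturally with momentum or Nesterov-style updates: the per-round step size is just another scalar in the additive ensemble update.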

Article ID: 2024S1

Month: May

Year: 2024

Address: Online

Venue: The 37th Canadian Conference on Artificial Intelligence

Publisher: Canadian Artificial Intelligence Association

URL: https://caiac.pubpub.org/pub/py65wd3c

