no code implementations • 27 Aug 2023 • Chapman Siu
We study the online variant of GentleAdaBoost, in which weak learners are combined into a strong learner in an online fashion.
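As background for the batch algorithm being made online, a minimal GentleBoost-style sketch (not the paper's online variant): each round fits a weighted regression stump and adds its output directly to the ensemble score, with AdaBoost-family reweighting.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gentle_boost(X, y, n_rounds=10):
    """Minimal GentleBoost sketch; y takes values in {-1, +1}.

    Each round fits a regression stump by weighted least squares and
    adds it to the strong learner without a separate step size.
    """
    n = len(y)
    w = np.full(n, 1.0 / n)        # example weights
    F = np.zeros(n)                # current strong-learner scores
    stumps = []
    for _ in range(n_rounds):
        stump = DecisionTreeRegressor(max_depth=1)
        stump.fit(X, y, sample_weight=w)   # weighted regression to labels
        f = stump.predict(X)
        F += f                             # additive update
        w = w * np.exp(-y * f)             # reweight misclassified points up
        w /= w.sum()
        stumps.append(stump)
    return stumps, np.sign(F)
```

The online variant studied in the paper updates this combination incrementally rather than over a fixed batch.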
no code implementations • 19 Sep 2021 • Chapman Siu, Jason Traish, Richard Yi Da Xu
We propose using regularization for Multi-Agent Reinforcement Learning rather than learning explicit cooperative structures, an approach we call Multi-Agent Regularized Q-learning (MARQ).
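To illustrate the general shape of a regularized Q-learning objective (the specific regularizer used by MARQ is not reproduced here), a sketch of a standard TD loss plus an assumed penalty term on the agents' Q-values:

```python
import numpy as np

def regularized_td_loss(q, q_next, r, gamma=0.99, lam=0.1):
    """Illustrative regularized Q-learning objective.

    q, q_next: arrays of shape (n_agents, n_actions).
    The regularizer here (variance of the agents' greedy Q-values)
    is an assumption standing in for a learned cooperative structure,
    not the paper's actual MARQ penalty.
    """
    td_target = r + gamma * q_next.max(axis=-1)        # per-agent targets
    td_error = ((q.max(axis=-1) - td_target) ** 2).mean()
    reg = q.max(axis=-1).var()                          # assumed penalty
    return td_error + lam * reg
```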
no code implementations • 19 Sep 2021 • Chapman Siu, Jason Traish, Richard Yi Da Xu
We demonstrate the flexibility of this approach and how it can be adapted to online contexts where the environment is available to collect experiences and a variety of other contexts.
no code implementations • 19 Sep 2021 • Chapman Siu, Jason Traish, Richard Yi Da Xu
This paper introduces Greedy UnMix (GUM) for cooperative multi-agent reinforcement learning (MARL).
no code implementations • 25 Sep 2019 • Chapman Siu
We show that Residual Networks (ResNets) are equivalent to boosting feature representations, without any modification to the underlying ResNet training algorithm.
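The correspondence can be seen from the residual recurrence itself: stacking blocks gives x_T = x_0 + sum_t f_t(x_t), an additive ensemble of per-block feature updates, mirroring how boosting sums weak learners. A minimal numerical sketch (the tanh block is an illustrative choice, not the paper's architecture):

```python
import numpy as np

def residual_block(x, W):
    """One residual block: identity plus a learned update, x + f(x)."""
    return x + np.tanh(x @ W)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                              # batch of 4, dim 8
weights = [rng.normal(scale=0.1, size=(8, 8)) for _ in range(3)]

# Unrolling the stack: the final representation is the input plus the
# sum of each block's contribution -- an additive "boosted" ensemble.
out = x
for W in weights:
    out = residual_block(out, W)
```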
1 code implementation • 25 Apr 2019 • Chapman Siu
Gradient Boosted Decision Trees (GBDT) are popular machine learning algorithms, with dedicated implementations such as LightGBM and inclusion in widely used toolkits like Scikit-Learn.
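As a point of reference for how GBDT is typically used in such toolkits, a minimal Scikit-Learn example on synthetic data (dataset and hyperparameters are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic binary classification problem
X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Standard GBDT: additive ensemble of shallow regression trees
gbdt = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1,
                                  max_depth=3, random_state=0)
gbdt.fit(X_tr, y_tr)
acc = gbdt.score(X_te, y_te)     # held-out accuracy
```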
1 code implementation • 26 Nov 2018 • Chapman Siu
This work presents an approach to automatically induce non-greedy decision trees from a neural network architecture.
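One common way to express a decision tree with neural-network machinery is soft routing, where each inner node gates left/right with a sigmoid and the output is a probability-weighted sum over leaves; the whole tree is then differentiable and trainable end-to-end rather than grown greedily. A depth-2 sketch (an illustration of the general idea, not the paper's exact model):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def soft_tree_predict(x, inner_w, leaf_vals):
    """Depth-2 soft decision tree: 3 inner nodes, 4 leaves.

    x: (batch, d); inner_w: three weight vectors of shape (d,);
    leaf_vals: (4,) leaf outputs. Each inner node produces a
    go-right probability, and leaf probabilities multiply along paths.
    """
    p0 = sigmoid(x @ inner_w[0])          # root gate
    p1 = sigmoid(x @ inner_w[1])          # left child's gate
    p2 = sigmoid(x @ inner_w[2])          # right child's gate
    leaf_probs = np.stack([
        (1 - p0) * (1 - p1),              # leaf 0: left-left
        (1 - p0) * p1,                    # leaf 1: left-right
        p0 * (1 - p2),                    # leaf 2: right-left
        p0 * p2,                          # leaf 3: right-right
    ], axis=-1)
    return leaf_probs @ leaf_vals         # soft, differentiable prediction
```

Because every gate is smooth, the tree parameters can be fit by gradient descent, which is what makes non-greedy (jointly optimized) induction possible.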
1 code implementation • 12 Jun 2018 • Chapman Siu, Richard Yi Da Xu
The framework aims to promote diversity based on a kernel computed at the feature level, through up to three stages: feature sampling, local criteria, and global criteria for feature selection.
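The three-stage structure can be sketched as a simple pipeline. The criteria below (per-feature correlation with the target as the local criterion, pairwise decorrelation as a stand-in for kernel-based global diversity) are illustrative assumptions, not the paper's actual measures:

```python
import numpy as np

def staged_feature_selection(X, y, n_sample=20, n_keep=5, seed=0):
    """Illustrative three-stage feature selection pipeline.

    Stage 1: randomly sample a candidate feature subset.
    Stage 2: local criterion -- rank candidates by |corr(feature, y)|.
    Stage 3: global criterion -- greedily keep features that are not
             near-duplicates of already-kept ones (a stand-in for
             kernel-based diversity at the feature level).
    """
    rng = np.random.default_rng(seed)
    # Stage 1: feature sampling
    cand = rng.choice(X.shape[1], size=min(n_sample, X.shape[1]),
                      replace=False)
    # Stage 2: local criterion
    scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in cand]
    ranked = [j for _, j in sorted(zip(scores, cand), reverse=True)]
    # Stage 3: global criterion
    kept = []
    for j in ranked:
        if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) < 0.95
               for k in kept):
            kept.append(j)
        if len(kept) == n_keep:
            break
    return kept
```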