no code implementations • 22 Sep 2023 • Willa Potosnak, Cristian Challu, Kin G. Olivares, Artur Dubrawski
Our global-local architecture improves over patient-specific models by 9.2-14.6%.
1 code implementation • 11 May 2023 • Kin G. Olivares, David Luo, Cristian Challu, Stefania La Vattiata, Max Mergenthaler, Artur Dubrawski
Large collections of time series data are often organized into hierarchies with different levels of aggregation; examples include product and geographical groupings.
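The aggregation structure mentioned above can be made concrete with a small sketch. This is an illustrative example only (the toy data and two-level product hierarchy are hypothetical, not from the paper): bottom-level series sum into an aggregate via a summing matrix, and a coherent forecast must respect that constraint.

```python
import numpy as np

# Hypothetical two-level hierarchy: two product series aggregate into one
# total series via a summing matrix S.
bottom = np.array([
    [10.0, 12.0, 11.0],   # product A over 3 time steps
    [ 5.0,  4.0,  6.0],   # product B over 3 time steps
])

# Summing matrix: first row produces the total, the identity rows keep
# the bottom-level series themselves.
S = np.vstack([np.ones((1, 2)), np.eye(2)])

hierarchy = S @ bottom    # rows: total, product A, product B

# Coherence constraint: the aggregate equals the sum of its children.
assert np.allclose(hierarchy[0], bottom.sum(axis=0))
```

Hierarchical forecasting methods aim to produce predictions at every level that satisfy this coherence constraint simultaneously.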
1 code implementation • 22 Oct 2022 • Cristian Challu, Peihong Jiang, Ying Nian Wu, Laurent Callot
In this work, we tackle two widespread challenges in real applications for time-series forecasting that have been largely understudied: distribution shifts and missing data.
1 code implementation • 3 Oct 2022 • Mononito Goswami, Cristian Challu, Laurent Callot, Lenon Minorics, Andrey Kan
The practical problem of selecting the most accurate model for a given dataset without labels has received little attention in the literature.
1 code implementation • 15 Feb 2022 • Cristian Challu, Peihong Jiang, Ying Nian Wu, Laurent Callot
Multivariate time series anomaly detection has become an active area of research in recent years, with Deep Learning models outperforming previous approaches on benchmark datasets.
4 code implementations • 30 Jan 2022 • Cristian Challu, Kin G. Olivares, Boris N. Oreshkin, Federico Garza, Max Mergenthaler-Canseco, Artur Dubrawski
Recent progress in neural forecasting accelerated improvements in the performance of large-scale forecasting systems.
no code implementations • 7 Jun 2021 • Cristian Challu, Kin G. Olivares, Gus Welter, Artur Dubrawski
We validate our proposed method, DMIDAS, on high-frequency healthcare and electricity price data with long forecasting horizons (~1000 timestamps) where we improve the prediction accuracy by 5% over state-of-the-art models, reducing the number of parameters of NBEATS by nearly 70%.
2 code implementations • 12 Apr 2021 • Kin G. Olivares, Cristian Challu, Grzegorz Marcjasz, Rafał Weron, Artur Dubrawski
We extend the neural basis expansion analysis (NBEATS) to incorporate exogenous factors.
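As a rough illustration of the basis-expansion idea with exogenous inputs (a minimal sketch under assumed shapes and a linear stand-in for the block's network, not the paper's actual architecture): expansion coefficients are produced from the series history concatenated with exogenous covariates, then projected onto a basis over the forecast horizon.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions for illustration: history length, horizon,
# number of exogenous features, basis size.
L, H, E, K = 24, 12, 3, 4

y_hist = rng.standard_normal(L)     # past target values
x_exog = rng.standard_normal(E)     # exogenous features (e.g. prices, loads)

# Linear stand-in for the block's fully connected stack (hypothetical).
W = rng.standard_normal((K, L + E)) * 0.1
theta = W @ np.concatenate([y_hist, x_exog])   # expansion coefficients

# Polynomial (trend-style) basis over the forecast horizon.
t = np.linspace(0.0, 1.0, H)
basis = np.stack([t**k for k in range(K)])     # shape (K, H)

forecast = theta @ basis                       # shape (H,)
```

The key point is that exogenous information enters through the coefficients `theta`, while the basis fixes the shape of the components the model can express.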
no code implementations • 25 Sep 2019 • Kin Gutierrez, Cristian Challu, Jin Li, Artur Dubrawski
Adaptive moment methods have been remarkably successful for optimization in the presence of high-dimensional or sparse gradients. In parallel, adaptive sampling probabilities for SGD have allowed optimizers to improve convergence rates by prioritizing the examples from which to learn most efficiently.
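The adaptive-sampling idea can be sketched on a toy problem. This is a generic importance-sampling SGD illustration, not the paper's DASGrad algorithm: the score-update rule and all constants below are hypothetical, and the importance weight 1/(n·p_i) keeps the gradient estimate unbiased under a non-uniform sampling distribution.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy interpolating least-squares problem: minimize the mean of
# (a_i . w - b_i)^2 over n examples.
n, d = 100, 5
A = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
b = A @ w_true

w = np.zeros(d)
lr = 0.02
scores = np.ones(n)              # per-example importance scores
probs = np.full(n, 1.0 / n)     # sampling distribution over examples

for _ in range(2000):
    i = rng.choice(n, p=probs)
    g = 2.0 * (A[i] @ w - b[i]) * A[i]
    # Importance weight 1/(n * p_i) keeps the stochastic gradient unbiased.
    w -= lr * g / (n * probs[i])
    # Hypothetical score update: favor examples with large recent gradients,
    # mixed with the uniform distribution to lower-bound probabilities.
    scores[i] = np.linalg.norm(g) + 1e-8
    probs = 0.5 / n + 0.5 * scores / scores.sum()
```

Mixing with the uniform distribution bounds the importance weights (here by 2), which keeps individual steps from blowing up when an example's sampling probability is small.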
no code implementations • 6 Nov 2018 • Kin Gutierrez, Jin Li, Cristian Challu, Artur Dubrawski
We observe that the benefits of DASGrad increase with the model complexity and variability of the gradients, and we explore the resulting utility in extensions of distribution-matching multitask learning.