1 code implementation • 15 Feb 2024 • Romain Ilbert, Ambroise Odonnat, Vasilii Feofanov, Aladin Virmaux, Giuseppe Paolo, Themis Palpanas, Ievgen Redko
Transformer-based architectures have achieved breakthrough performance in natural language processing and computer vision, yet they remain inferior to simpler linear baselines in multivariate long-term forecasting.
no code implementations • 16 Nov 2023 • Romain Ilbert, Thai V. Hoang, Zonghua Zhang, Themis Palpanas
Our optimal model can retain up to $92.02\%$ of the performance of the original forecasting model in terms of Mean Squared Error (MSE) on clean data, while being more robust than standard adversarially trained models on perturbed data.