1 code implementation • 29 Apr 2024 • Yiyuan Yang, Ming Jin, Haomin Wen, Chaoli Zhang, Yuxuan Liang, Lintao Ma, Yi Wang, Chenghao Liu, Bin Yang, Zenglin Xu, Jiang Bian, Shirui Pan, Qingsong Wen
Conditioned models, on the other hand, utilize extra information to enhance performance and are similarly divided for both predictive and generative tasks.
1 code implementation • ICLR 2024 • Shiyu Wang, Haixu Wu, Xiaoming Shi, Tengge Hu, Huakun Luo, Lintao Ma, James Y. Zhang, Jun Zhou
Time series forecasting is widely used in a broad range of applications, such as traffic planning and weather forecasting.
5 code implementations • 10 Oct 2023 • Yong Liu, Tengge Hu, Haoran Zhang, Haixu Wu, Shiyu Wang, Lintao Ma, Mingsheng Long
These forecasters leverage Transformers to model the global dependencies over temporal tokens of time series, with each token formed by multiple variates of the same timestamp.
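The temporal tokenization described above can be illustrated with a minimal numpy sketch (shapes are hypothetical, not from the paper): each token is the vector of all variates at one timestamp, in contrast to an "inverted" view where each token is one variate's whole history.

```python
import numpy as np

# Hypothetical multivariate series: T timestamps, V variates.
T, V = 96, 7
series = np.random.randn(T, V)

# Temporal tokenization (as described above): each token is one timestamp's
# vector of all V variates, so attention runs over T tokens of dimension V.
temporal_tokens = series        # shape (T, V): T tokens, each of size V

# The inverted alternative treats each variate's whole history as one token,
# giving V tokens of dimension T instead.
variate_tokens = series.T       # shape (V, T): V tokens, each of size T
```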
no code implementations • 9 Oct 2023 • Yong Lin, Fan Zhou, Lu Tan, Lintao Ma, Jiameng Liu, Yansu He, Yuan Yuan, Yu Liu, James Zhang, Yujiu Yang, Hao Wang
To address this challenge, we then propose Continuous Invariance Learning (CIL), which extracts invariant features across continuously indexed domains.
1 code implementation • 3 Oct 2023 • Ming Jin, Shiyu Wang, Lintao Ma, Zhixuan Chu, James Y. Zhang, Xiaoming Shi, Pin-Yu Chen, Yuxuan Liang, Yuan-Fang Li, Shirui Pan, Qingsong Wen
We begin by reprogramming the input time series with text prototypes before feeding it into the frozen LLM to align the two modalities.
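One way to picture the reprogramming step is as cross-attention from embedded time-series patches to a small set of learned text-prototype embeddings, so the frozen LLM receives inputs expressed in its own token space. The sketch below is an assumption-laden toy (all shapes and the plain dot-product attention are illustrative, not the paper's exact architecture):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical shapes: P time-series patches, K text prototypes, model dim d.
P, K, d = 16, 8, 32
rng = np.random.default_rng(0)
patches = rng.standard_normal((P, d))      # embedded time-series patches
prototypes = rng.standard_normal((K, d))   # learned text-prototype embeddings

# Reprogramming sketch: express each patch as an attention-weighted mixture
# of text prototypes before feeding the result to the frozen LLM.
attn = softmax(patches @ prototypes.T / np.sqrt(d))  # (P, K), rows sum to 1
reprogrammed = attn @ prototypes                     # (P, d)
```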
1 code implementation • 19 Aug 2023 • Run Luo, Zikai Song, Lintao Ma, JinLin Wei, Wei Yang, Min Yang
At inference time, the model refines a set of randomly generated paired boxes into detection and tracking results through a flexible one-step or multi-step denoising diffusion process.
no code implementations • 11 Feb 2023 • Fan Zhou, Chen Pan, Lintao Ma, Yu Liu, Shiyu Wang, James Zhang, Xinxin Zhu, Xuanwei Hu, Yunhua Hu, Yangfei Zheng, Lei Lei, Yun Hu
Moreover, unlike most previous reconciliation methods, which either rely on strong assumptions or focus only on coherency constraints, we utilize deep neural optimization networks. These not only achieve coherency without any assumptions, but also allow more flexible and realistic constraints to meet task-based targets, e.g., a lower under-estimation penalty and a meaningful decision-making loss that facilitates subsequent downstream tasks.
1 code implementation • 28 Dec 2022 • Shiyu Wang, Fan Zhou, Yinbo Sun, Lintao Ma, James Zhang, Yangfei Zheng, Bo Zheng, Lei Lei, Yun Hu
Multivariate time series forecasting with hierarchical structure is pervasive in real-world applications, demanding not only a prediction at each level of the hierarchy, but also the reconciliation of all forecasts to ensure coherency, i.e., the forecasts should satisfy the hierarchical aggregation constraints.
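The coherency constraint can be made concrete with a toy two-level hierarchy and the classic OLS reconciliation projection (a standard textbook method, not the learned approach of the paper above): incoherent base forecasts are projected onto the subspace of forecasts that respect the aggregation structure.

```python
import numpy as np

# Toy 2-level hierarchy: total = A + B. The summing matrix S maps the
# 2 bottom series onto all 3 levels [total, A, B].
S = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])

# Incoherent base forecasts for [total, A, B]: note 10 != 4 + 5.
y_base = np.array([10.0, 4.0, 5.0])

# OLS reconciliation: orthogonally project the base forecasts onto the
# coherent subspace spanned by the columns of S.
P = S @ np.linalg.inv(S.T @ S) @ S.T
y_coherent = P @ y_base

# The reconciled forecasts satisfy the aggregation constraint exactly.
assert np.isclose(y_coherent[0], y_coherent[1] + y_coherent[2])
```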
no code implementations • 21 Nov 2022 • Siqiao Xue, Xiaoming Shi, Hongyan Hao, Lintao Ma, Shiyu Wang, Shijun Wang, James Zhang
Point processes are the dominant paradigm for modeling event sequences occurring at irregular intervals.
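As a concrete instance of such a point process, here is the conditional intensity of a textbook exponential-kernel Hawkes process (a standard example, not a model proposed in the paper): the event rate jumps after each event and decays between events.

```python
import math

def hawkes_intensity(t, history, mu=0.5, alpha=0.8, beta=1.2):
    """Conditional intensity lambda(t) of an exponential-kernel Hawkes process:
    lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i))."""
    return mu + alpha * sum(math.exp(-beta * (t - ti)) for ti in history if ti < t)

events = [0.4, 1.1, 1.5]
# Before any event the intensity equals the base rate mu; after events it is
# elevated and decays toward mu as time passes.
lam = hawkes_intensity(2.0, events)
```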
1 code implementation • 31 May 2022 • Siqiao Xue, Chao Qu, Xiaoming Shi, Cong Liao, Shiyi Zhu, Xiaoyu Tan, Lintao Ma, Shiyu Wang, Shijun Wang, Yun Hu, Lei Lei, Yangfei Zheng, Jianguo Li, James Zhang
Predictive autoscaling (autoscaling with workload forecasting) is an important mechanism that supports autonomous adjustment of computing resources in accordance with fluctuating workload demands in the Cloud.
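The forecast-then-scale loop behind predictive autoscaling can be sketched with a deliberately simple rule (the exponential-smoothing forecaster, capacity figure, and headroom factor are all illustrative assumptions, not the paper's method):

```python
import math

def forecast_next(workload_history):
    # Placeholder forecaster: simple exponential smoothing of past workload.
    alpha, level = 0.5, workload_history[0]
    for w in workload_history[1:]:
        level = alpha * w + (1 - alpha) * level
    return level

def replicas_needed(workload_history, capacity_per_replica=100.0, headroom=1.2):
    # Provision enough replicas for the forecast demand plus a safety margin.
    demand = forecast_next(workload_history) * headroom
    return max(1, math.ceil(demand / capacity_per_replica))
```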
no code implementations • 19 May 2020 • Shijun Wang, Baocheng Zhu, Lintao Ma, Yuan Qi
In this paper, we consider optimizing a smooth, convex, lower semicontinuous function in Riemannian space with constraints.
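The general recipe of Riemannian optimization can be illustrated on the unit sphere, one of the simplest Riemannian manifolds: project the Euclidean gradient onto the tangent space, take a step, and retract back onto the manifold. This toy (minimizing a quadratic on the sphere, which converges to the eigenvector of the smallest eigenvalue) only shows the generic mechanics, not the constrained algorithm of the paper:

```python
import numpy as np

A = np.diag([3.0, 2.0, 1.0])
x = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)  # start on the unit sphere

for _ in range(200):
    egrad = 2.0 * A @ x              # Euclidean gradient of f(x) = x^T A x
    rgrad = egrad - (x @ egrad) * x  # project onto the tangent space at x
    x = x - 0.1 * rgrad              # Riemannian gradient step
    x = x / np.linalg.norm(x)        # retraction back onto the sphere
```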