no code implementations • 1 Mar 2024 • Shiyi Qi, Liangjian Wen, Yiduo Li, Yuanhang Yang, Zhe Li, Zhongwen Rao, Lujia Pan, Zenglin Xu
To substantiate this claim, we introduce Cross-variable Decorrelation-Aware feature Modeling (CDAM) for channel-mixing approaches, which refines channel mixing by minimizing redundant information between channels while enhancing relevant mutual information.
no code implementations • 27 Feb 2024 • Yuanhang Yang, Shiyi Qi, Wenchao Gu, Chaozheng Wang, Cuiyun Gao, Zenglin Xu
To address this issue, we present \tool, a novel MoE designed to enhance both the efficacy and efficiency of sparse MoE models.
no code implementations • 25 Feb 2024 • Shiyi Qi, Zenglin Xu, Yiduo Li, Liangjian Wen, Qingsong Wen, Qifan Wang, Yuan Qi
Recent advancements in deep learning have led to the development of various models for long-term multivariate time-series forecasting (LMTF), many of which have shown promising results.
1 code implementation • 18 May 2023 • Zhe Li, Shiyi Qi, Yiduo Li, Zenglin Xu
In this paper, we thoroughly investigate the intrinsic effectiveness of recent approaches and make three key observations: 1) linear mapping is critical to prior long-term time series forecasting efforts; 2) RevIN (reversible normalization) and CI (Channel Independent) play a vital role in improving overall forecasting performance; and 3) linear mapping can effectively capture periodic features in time series and remains robust to different periods across channels as the input horizon increases.
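A minimal sketch of the pipeline these observations describe: normalize each channel by its own instance statistics (the reversible step of RevIN), apply a single linear map from input window to forecast horizon shared across channels (Channel Independence), then invert the normalization. The function and variable names here are illustrative, not the paper's code; the toy example hand-builds weights that copy the last observed period, one solution a linear map can represent for periodic series.

```python
import numpy as np

def revin_normalize(x, eps=1e-5):
    # x: (input_len, channels). Normalize each channel by its own
    # per-instance mean/std and return the stats so the forecast can be
    # de-normalized afterwards -- the "reversible" part of RevIN.
    mean = x.mean(axis=0, keepdims=True)
    std = x.std(axis=0, keepdims=True) + eps
    return (x - mean) / std, mean, std

def linear_forecast(x_norm, W, b):
    # Channel-Independent (CI) linear mapping: one weight matrix projects
    # every channel's input window to the forecast horizon.
    # x_norm: (input_len, channels), W: (horizon, input_len), b: (horizon, 1)
    return W @ x_norm + b

# Toy data: a single sine-wave channel with period 12.
input_len, horizon = 24, 12
t = np.arange(input_len + horizon)
series = np.sin(2 * np.pi * t / 12).reshape(-1, 1)
x, future = series[:input_len], series[input_len:]

x_norm, mean, std = revin_normalize(x)

# Hand-built weights: forecast = copy of the last 12 normalized inputs,
# i.e. repeat the most recent period.
W = np.zeros((horizon, input_len))
W[np.arange(horizon), input_len - 12 + np.arange(horizon)] = 1.0
b = np.zeros((horizon, 1))

pred = linear_forecast(x_norm, W, b) * std + mean  # RevIN inverse
print(np.allclose(pred, future, atol=1e-6))
```

In a learned setting `W` and `b` would be fit by least squares or gradient descent; the point of the sketch is only that a periodic pattern lies inside the linear map's hypothesis space, matching observation 3.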
1 code implementation • 11 Oct 2022 • Yuanhang Yang, Shiyi Qi, Chuanyi Liu, Qifan Wang, Cuiyun Gao, Zenglin Xu
Transformer-based models have achieved great success on sentence pair modeling tasks, such as answer selection and natural language inference (NLI).