1 code implementation • 3 May 2024 • Jiexia Ye, Weiqi Zhang, Ke Yi, Yongzi Yu, Ziyue Li, Jia Li, Fugee Tsung
There are two main research lines: pre-training foundation models from scratch for time series, and adapting large language foundation models to time series tasks.
1 code implementation • 18 Apr 2024 • Haoyuan Jiang, Ziyue Li, Hua Wei, Xuantang Xiong, Jingqing Ruan, Jiaming Lu, Hangyu Mao, Rui Zhao
The effectiveness of traffic light control has been significantly improved by current reinforcement learning-based approaches via better cooperation among multiple traffic lights.
no code implementations • 5 Apr 2024 • Jiuyun Hu, Ziyue Li, Chen Zhang, Fugee Tsung, Hao Yan
Moreover, a case study on station clustering based on real passenger flow data is conducted, yielding valuable insights.
1 code implementation • 13 Mar 2024 • Zhishuai Li, Xiang Wang, Jingjing Zhao, Sun Yang, Guoqing Du, Xiaoru Hu, Bin Zhang, Yuxiao Ye, Ziyue Li, Rui Zhao, Hangyu Mao
Then, in the first stage, question-SQL pairs are retrieved as few-shot demonstrations, prompting the LLM to generate a preliminary SQL (PreSQL).
Ranked #1 on Text-To-SQL on spider
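The first stage described above can be illustrated with a minimal sketch of retrieval-augmented few-shot prompting. The function name and prompt layout are hypothetical, not the paper's actual implementation; the point is only how retrieved question-SQL pairs are stitched into a prompt whose final line the LLM completes with a PreSQL draft.

```python
def build_presql_prompt(schema, demos, question):
    """Assemble a few-shot Text-to-SQL prompt.

    `demos` holds question-SQL pairs retrieved for the new question;
    the LLM is expected to complete the trailing 'SQL:' line with a
    preliminary SQL (PreSQL).
    """
    lines = [f"Database schema:\n{schema}", ""]
    for q, sql in demos:
        lines += [f"Question: {q}", f"SQL: {sql}", ""]
    lines += [f"Question: {question}", "SQL:"]
    return "\n".join(lines)
```

In practice the demonstrations would come from a nearest-neighbor search over a pool of annotated question-SQL pairs, so the examples shown to the model resemble the incoming question.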
no code implementations • 6 Mar 2024 • Ziyue Li, Tian Li, Virginia Smith, Jeff Bilmes, Tianyi Zhou
Optimizing the performance of many objectives (instantiated by tasks or clients) jointly with a few Pareto stationary solutions (models) is critical in machine learning.
no code implementations • 5 Mar 2024 • Bin Zhang, Yuxiao Ye, Guoqing Du, Xiaoru Hu, Zhishuai Li, Sun Yang, Chi Harold Liu, Rui Zhao, Ziyue Li, Hangyu Mao
Then we formulate five evaluation tasks to comprehensively assess the performance of diverse methods across various LLMs throughout the Text-to-SQL process. Our study highlights the performance disparities among LLMs and proposes optimal in-context learning solutions tailored to each task.
1 code implementation • 23 Jan 2024 • Zhishuai Li, Yunhao Nie, Ziyue Li, Lei Bai, Yisheng Lv, Rui Zhao
Adopting a pretraining paradigm, we approach the Kriging task from a new representation perspective: we first learn robust and general representations and then recover attributes from those representations.
no code implementations • 18 Jan 2024 • Chenxi Liu, Sun Yang, Qianxiong Xu, Zhishuai Li, Cheng Long, Ziyue Li, Rui Zhao
In this paper, we propose a Spatial-Temporal Large Language Model (ST-LLM) for traffic prediction.
1 code implementation • 8 Jan 2024 • Pengxin Guo, Pengrong Jin, Ziyue Li, Lei Bai, Yu Zhang
To make the model trained on historical data better adapt to future data in a fully online manner, this paper conducts the first study of the online test-time adaptation techniques for spatial-temporal traffic flow forecasting problems.
Ranked #4 on Traffic Prediction on PeMS07
2 code implementations • 26 Dec 2023 • Hangyu Mao, Rui Zhao, Ziyue Li, Zhiwei Xu, Hao Chen, Yiqun Chen, Bin Zhang, Zhen Xiao, Junge Zhang, Jiangjin Yin
Designing better deep networks and better reinforcement learning (RL) algorithms are both important for deep RL.
1 code implementation • 22 Dec 2023 • Jiaming Lu, Jingqing Ruan, Haoyuan Jiang, Ziyue Li, Hangyu Mao, Rui Zhao
Furthermore, we implement a scenario-shared Co-Train module to facilitate the learning of generalizable dynamics information across different scenarios.
1 code implementation • 11 Dec 2023 • Zhishuai Li, Ziyue Li, Xiaoru Hu, Guoqing Du, Yunhao Nie, Feng Zhu, Lei Bai, Rui Zhao
Trajectory recovery based on the snapshots from the city-wide multi-camera network facilitates urban mobility sensing and driveway optimization.
no code implementations • 23 Nov 2023 • Bin Zhang, Hangyu Mao, Jingqing Ruan, Ying Wen, Yang Li, Shao Zhang, Zhiwei Xu, Dapeng Li, Ziyue Li, Rui Zhao, Lijuan Li, Guoliang Fan
The remarkable progress in Large Language Models (LLMs) opens up new avenues for addressing planning and decision-making problems in Multi-Agent Systems (MAS).
no code implementations • 19 Nov 2023 • Yilun Kong, Jingqing Ruan, Yihong Chen, Bin Zhang, Tianpeng Bao, Shiwei Shi, Guoqing Du, Xiaoru Hu, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao
Large Language Models (LLMs) have demonstrated proficiency in addressing tasks that necessitate a combination of task planning and the use of external tools, such as APIs.
no code implementations • 5 Nov 2023 • Dedong Li, Ziyue Li, Zhishuai Li, Lei Bai, Qingyuan Gong, Lijun Sun, Wolfgang Ketter, Rui Zhao
Then, we propose a Multi-view Graph and Complexity Aware Transformer (MGCAT) model to encode these semantics in trajectory pre-training from two aspects: 1) adaptively aggregating the multi-view graph features considering the trajectory pattern, and 2) paying higher attention to critical nodes in a complex trajectory.
no code implementations • 5 Nov 2023 • Qianxiong Xu, Cheng Long, Ziyue Li, Sijie Ruan, Rui Zhao, Zhishuai Li
To address this issue, we first present a novel Increment training strategy: instead of masking nodes (and reconstructing them), we add virtual nodes into the training graph so as to mitigate the graph gap issue naturally.
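The Increment strategy above grows the training graph instead of masking nodes. A minimal sketch of the graph-augmentation step, on a plain 0/1 adjacency matrix: how the virtual nodes are wired is a design choice, and connecting each virtual node densely to all observed nodes (with no virtual-virtual edges) is an assumption made here for illustration only.

```python
def add_virtual_nodes(adj, k):
    """Return a copy of the 0/1 adjacency matrix `adj` (list of lists)
    augmented with k virtual nodes, each linked to every observed node.

    Observed-node rows gain k trailing columns of 1s (observed -> virtual);
    the k appended rows connect back (virtual -> observed) and carry no
    edges among the virtual nodes themselves.
    """
    n = len(adj)
    out = [row[:] + [1] * k for row in adj]       # observed -> virtual edges
    out += [[1] * n + [0] * k for _ in range(k)]  # virtual -> observed edges
    return out
```

After training, attributes at the virtual-node positions can be read off directly, which is how such a scheme sidesteps the gap between a masked training graph and the full inference graph.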
no code implementations • 31 Oct 2023 • Ziyue Li, Hao Yan, Chen Zhang, Lijun Sun, Wolfgang Ketter, Fugee Tsung
In this paper, we propose a novel tensor Dirichlet Process Multinomial Mixture model with graphs, which preserves the hierarchical structure of the multi-dimensional trip information and clusters the trips in a unified one-step manner, determining the number of clusters automatically.
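The "automatic number of clusters" property comes from the Dirichlet Process prior. A toy sketch of the underlying mechanism, via its Chinese Restaurant Process view (this is the generic nonparametric prior, not the paper's tensor model): each trip joins an existing cluster with probability proportional to that cluster's size, or opens a new cluster with probability proportional to a concentration parameter `alpha`.

```python
import random

def crp_assignments(n, alpha, seed=0):
    """Sample cluster labels for n items from a Chinese Restaurant Process.

    Clusters are created on the fly: item i joins cluster c with weight
    |c| (its current size) or starts a new cluster with weight alpha,
    so the number of clusters is inferred rather than fixed in advance.
    """
    rng = random.Random(seed)
    sizes = []    # current cluster sizes
    labels = []
    for _ in range(n):
        weights = sizes + [alpha]  # last slot = open a new cluster
        c = rng.choices(range(len(weights)), weights=weights)[0]
        if c == len(sizes):
            sizes.append(1)
        else:
            sizes[c] += 1
        labels.append(c)
    return labels
```

Larger `alpha` yields more clusters on average; in a full DPMM the same prior is combined with a per-cluster likelihood over the observed data.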
no code implementations • 28 Oct 2023 • Guanghu Sui, Zhishuai Li, Ziyue Li, Sun Yang, Jingqing Ruan, Hangyu Mao, Rui Zhao
Our experiments with Large Language Models (LLMs) illustrate the significant performance improvement on the business dataset and prove the substantial potential of our method.
no code implementations • 7 Aug 2023 • Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao
With recent advancements in natural language processing, Large Language Models (LLMs) have emerged as powerful tools for various real-world applications.
no code implementations • 1 Aug 2023 • Kaijian Liu, Shixiang Tang, Ziyue Li, Zhishuai Li, Lei Bai, Feng Zhu, Rui Zhao
The distribution representation of a clue is a vector consisting of the relations between this clue and all other clues from all modalities, making it modality-agnostic and well suited for person clustering.
no code implementations • 2 Jul 2023 • Ziyue Li, Yuchen Fang, You Li, Kan Ren, Yansen Wang, Xufang Luo, Juanyong Duan, Congrui Huang, Dongsheng Li, Lili Qiu
A timely detection of seizures for newborn infants with electroencephalogram (EEG) has been a common yet life-saving practice in the Neonatal Intensive Care Unit (NICU).
no code implementations • 23 Jun 2023 • Ziyue Li, Hao Yan, Chen Zhang, Andi Wang, Wolfgang Ketter, Lijun Sun, Fugee Tsung
In this paper, we propose a novel Tensor Dirichlet Process Multinomial Mixture model (Tensor-DPMM), which is designed to preserve the multi-mode and hierarchical structure of the multi-dimensional trip information via tensor, and cluster them in a unified one-step manner.
no code implementations • 15 Jun 2023 • YiRong Chen, Ziyue Li, Wanli Ouyang, Michael Lepech
In this work, we propose an Adaptive Hierarchical SpatioTemporal Network (AHSTN) to promote traffic forecasting by exploiting the spatial hierarchy and modeling multi-scale spatial correlations.
1 code implementation • 12 Jun 2023 • Junpeng Lin, Ziyue Li, Zhishuai Li, Lei Bai, Rui Zhao, Chen Zhang
In this work, we propose a novel approach for traffic prediction that embeds a time-varying dynamic Bayesian network to capture the fine spatiotemporal topology of traffic data.
Ranked #13 on Traffic Prediction on METR-LA
1 code implementation • 12 Jun 2023 • Luxuan Wang, Lei Bai, Ziyue Li, Rui Zhao, Fugee Tsung
We evaluate the effectiveness and flexibility of our representation learning framework on correlated time series forecasting and on cold-start transfer of the forecasting model to new instances with limited data.
Correlated Time Series Forecasting • Representation Learning +1
1 code implementation • 5 Jun 2023 • Tian Lan, Ziyue Li, Zhishuai Li, Lei Bai, Man Li, Fugee Tsung, Wolfgang Ketter, Rui Zhao, Chen Zhang
This encourages the multi-task design: with each DAG as a task, the MM-DAG tries to learn the multiple DAGs jointly so that their consensus and consistency are maximized.
1 code implementation • International Conference on Learning Representations 2023 • Ziyue Li, Kan Ren, Xinyang Jiang, Yifei Shen, Haipeng Zhang, Dongsheng Li
Moreover, our method is highly efficient and achieves more than 1000 times training speedup compared to the conventional DG methods with fine-tuning a pretrained model.
Ranked #1 on Domain Generalization on PACS
no code implementations • 29 Jan 2023 • Ziyue Li, Kan Ren, Yifan Yang, Xinyang Jiang, Yuqing Yang, Dongsheng Li
Ensemble methods can deliver surprising performance gains but also bring significantly higher computational costs, e.g., up to 2048× in large-scale ensemble tasks.
1 code implementation • 14 Sep 2022 • Zhenyu Mao, Ziyue Li, Dedong Li, Lei Bai, Rui Zhao
Unlike the existing cross-scale contrastive learning methods on graphs that only contrast a graph and its belonging nodes, the contrast between road segment and trajectory is elaborately tailored via novel positive sampling and adaptive weighting strategies.
no code implementations • 9 Mar 2022 • Ziyue Li, Kan Ren, Xinyang Jiang, Bo Li, Haipeng Zhang, Dongsheng Li
Fine-tuning pretrained models is a common practice in domain generalization (DG) tasks.
Ranked #8 on Domain Generalization on TerraIncognita
no code implementations • 29 Sep 2021 • Ziyue Li, Kan Ren, Xinyang Jiang, Mingzhe Han, Haipeng Zhang, Dongsheng Li
Real-world data is often generated by some complex distribution, which can be approximated by a composition of multiple simpler distributions.
no code implementations • 8 Nov 2020 • Taha Ameen ur Rahman, Alton S. Barbehenn, Xinan Chen, Hassan Dbouk, James A. Douglas, Yuncong Geng, Ian George, John B. Harvill, Sung Woo Jeon, Kartik K. Kansal, Kiwook Lee, Kelly A. Levick, Bochao Li, Ziyue Li, Yashaswini Murthy, Adarsh Muthuveeru-Subramaniam, S. Yagiz Olmez, Matthew J. Tomei, Tanya Veeravalli, Xuechao Wang, Eric A. Wayman, Fan Wu, Peng Xu, Shen Yan, Heling Zhang, Yibo Zhang, Yifan Zhang, Yibo Zhao, Sourya Basu, Lav R. Varshney
Many information sources are not just sequences of distinguishable symbols but rather have invariances governed by alternative counting paradigms such as permutations, combinations, and partitions.
Information Theory
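The entry above contrasts plain symbol sequences with sources governed by permutations, combinations, and partitions. The three counting paradigms grow at very different rates, which is what makes the distinction matter for coding; a quick stdlib check (ordered selections via `math.perm`, unordered via `math.comb`, and integer partitions via a small dynamic program) illustrates this:

```python
from math import comb, perm

def partitions(n):
    """Number of integer partitions of n (order of parts irrelevant),
    computed with a coin-change-style dynamic program: for each allowed
    part size, accumulate the ways to reach every total up to n."""
    p = [1] + [0] * n
    for part in range(1, n + 1):
        for total in range(part, n + 1):
            p[total] += p[total - part]
    return p[n]

# Selecting 3 of 5 distinguishable symbols:
#   ordered (permutations)  -> perm(5, 3) = 60
#   unordered (combinations) -> comb(5, 3) = 10
# Splitting a total of 5 indistinguishable units: partitions(5) = 7
```

The gap widens quickly with n, so a code designed for one invariance class can be far from optimal under another.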
no code implementations • 23 Apr 2020 • Ziyue Li, Hao Yan, Chen Zhang, Fugee Tsung
Spatiotemporal data is very common in many applications, such as manufacturing systems and transportation systems.
1 code implementation • 11 Dec 2019 • Ziyue Li, Nurettin Dorukhan Sergin, Hao Yan, Chen Zhang, Fugee Tsung
Low-rank tensor decomposition and completion have attracted significant interest from academia given the ubiquity of tensor data.