no code implementations • 31 Mar 2024 • Jialin Chen, Jan Eric Lenssen, Aosong Feng, Weihua Hu, Matthias Fey, Leandros Tassiulas, Jure Leskovec, Rex Ying
Motivated by our observation that a time series model's performance gain from channel mixing correlates with the intrinsic similarity between pairs of channels, we develop a novel and adaptable Channel Clustering Module (CCM).
no code implementations • 15 Mar 2024 • Haoyue Tang, Tian Xie, Aosong Feng, Hanyu Wang, Chenyang Zhang, Yang Bai
Solving image inverse problems (e.g., super-resolution and inpainting) requires generating a high-fidelity image that matches the given input (the low-resolution image or the masked image).
1 code implementation • 7 Mar 2024 • Aosong Feng, Weikang Qiu, Jinbin Bai, Kaicheng Zhou, Zhen Dong, Xiao Zhang, Rex Ying, Leandros Tassiulas
Building on the success of text-to-image diffusion probabilistic models (DPMs), image editing has become an important application for enabling human interaction with AI-generated content.
no code implementations • 7 Mar 2024 • Aosong Feng, Jialin Chen, Juan Garza, Brooklyn Berry, Francisco Salazar, Yifeng Gao, Rex Ying, Leandros Tassiulas
The high-resolution time series classification problem is essential due to the increasing availability of detailed temporal data in various domains.
1 code implementation • 22 Feb 2024 • Rui Yang, Boming Yang, Sixun Ouyang, Tianwei She, Aosong Feng, Yuang Jiang, Freddy Lecue, Jinghui Lu, Irene Li
We assess LLMs' zero-shot performance in creating domain-specific concept graphs and introduce TutorQA, a new expert-verified NLP-focused benchmark for scientific graph reasoning and QA.
1 code implementation • 24 Oct 2023 • Jinbin Bai, Zhen Dong, Aosong Feng, Xiao Zhang, Tian Ye, Kaicheng Zhou, Mike Zheng Shou
In the field of image processing, making intricate semantic modifications to existing images remains an enduring challenge.
no code implementations • 25 Jul 2023 • Linyao Chen, Aosong Feng, Boming Yang, Zihui Li
Recently, diffusion models have excelled in image generation tasks and have also been applied to natural language processing (NLP) for controllable text generation.
1 code implementation • 5 May 2023 • Irene Li, Aosong Feng, Dragomir Radev, Rex Ying
Encoding long sequences in Natural Language Processing (NLP) is a challenging problem.
1 code implementation • 21 Oct 2022 • Aosong Feng, Irene Li, Yuang Jiang, Rex Ying
Efficient Transformers have been developed for long sequence modeling, due to their subquadratic memory and time complexity.
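The subquadratic-complexity idea behind efficient Transformers can be illustrated with a minimal sliding-window attention sketch in NumPy: each position attends only to a fixed-size neighborhood, so cost scales as O(n·w) rather than O(n²). This is a generic efficient-attention pattern for illustration, not necessarily the mechanism proposed in this paper.

```python
import numpy as np

def sliding_window_attention(Q, K, V, w):
    """Attention restricted to a window of w neighbors on each side.

    Q, K, V: (n, d) arrays. Each position i attends only to positions
    in [i - w, i + w], giving O(n * w) time/memory instead of the
    O(n^2) cost of full attention.
    """
    n, d = Q.shape
    out = np.zeros_like(V)
    for i in range(n):
        lo, hi = max(0, i - w), min(n, i + w + 1)
        scores = Q[i] @ K[lo:hi].T / np.sqrt(d)
        probs = np.exp(scores - scores.max())  # numerically stable softmax
        probs /= probs.sum()
        out[i] = probs @ V[lo:hi]
    return out
```

With `w` at least the sequence length, the window covers every position and the result matches full softmax attention; smaller `w` trades exactness for linear scaling in sequence length.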
1 code implementation • 9 Jul 2022 • Aosong Feng, Leandros Tassiulas
Traffic flow forecasting on graphs has real-world applications in many fields, such as transportation systems and computer networks.
1 code implementation • 3 Jan 2022 • Aosong Feng, Chenyu You, Shiqiang Wang, Leandros Tassiulas
We also show that the trained graph filters in KerGNNs can reveal the local graph structures of the dataset, which significantly improves the model interpretability compared with conventional GNN models.
no code implementations • 28 Oct 2021 • Chenyu You, Lianyi Han, Aosong Feng, Ruihan Zhao, Hui Tang, Wei Fan
Space-time video super-resolution (STVSR) aims to construct a high space-time resolution video sequence from the corresponding low-frame-rate, low-resolution video sequence.
no code implementations • NAACL (DLG4NLP) 2022 • Irene Li, Aosong Feng, Hao Wu, Tianxiao Li, Toyotaro Suzumura, Ruihai Dong
In addition, the model offers better interpretability of predicted labels, as the token-label edges are exposed.
no code implementations • 2 Mar 2020 • Aosong Feng, Priyadarshini Panda
We achieve this by first training a small network (with fewer parameters) on a small subset of the original dataset, and then gradually expanding the network using the Net2Net transformation to train incrementally on larger subsets of the dataset.
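The expansion step rests on Net2Net's function-preserving widening: duplicated hidden units inherit the original incoming weights, and the outgoing weights are divided by the replica count so the network computes the same function before further training. A minimal NumPy sketch of this general idea (not the authors' code; the function name and layer shapes are illustrative):

```python
import numpy as np

def net2wider(W1, b1, W2, new_width, rng=None):
    """Function-preserving widening of one hidden layer (Net2WiderNet idea).

    W1: (d_in, h) weights into the hidden layer, b1: (h,) biases,
    W2: (h, d_out) weights out of it. Returns widened (W1', b1', W2')
    with new_width >= h hidden units computing the same function
    (exactly, for ReLU-style activations).
    """
    rng = np.random.default_rng(rng)
    h = W1.shape[1]
    assert new_width >= h
    # Mapping g: keep all original units, then sample duplicates at random.
    g = np.concatenate([np.arange(h), rng.integers(0, h, new_width - h)])
    counts = np.bincount(g, minlength=h)  # replica count per original unit
    W1_new = W1[:, g]
    b1_new = b1[g]
    # Split each outgoing weight across its replicas so the sum is unchanged.
    W2_new = W2[g, :] / counts[g][:, None]
    return W1_new, b1_new, W2_new
```

After widening, the larger network is a drop-in replacement for the smaller one and can continue training on the next, larger data subset without losing what it has already learned.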