no code implementations • 22 Mar 2024 • Wanli Xie, Ruibin Zhao, Zhenguo Xu, TingTing Liang
To tackle these challenges, this study proposes a grey-informed neural network (GINN).
1 code implementation • 3 Nov 2023 • Fangyuan Zhang, TingTing Liang, Zhengyuan Wu, Yuyu Yin
Recently, significant progress has been made in the development of Vision Language Models (VLMs), expanding the capabilities of LLMs and enabling them to execute more diverse instructions.
no code implementations • 7 Sep 2023 • Jiangshu Du, Congying Xia, Wenpeng Yin, TingTing Liang, Philip S. Yu
In intent detection tasks, leveraging meaningful semantic information from intent labels can be particularly beneficial for few-shot scenarios.
1 code implementation • 20 Nov 2022 • Jintang Li, Jiaying Peng, Liang Chen, Zibin Zheng, TingTing Liang, Qing Ling
In this work, we seek to address these challenges and propose Spectral Adversarial Training (SAT), a simple yet effective adversarial training approach for GNNs.
1 code implementation • 4 Jul 2022 • Zhiwei Lin, TingTing Liang, Taihong Xiao, Yongtao Wang, Zhi Tang, Ming-Hsuan Yang
To address this issue, we propose a neural architecture search method named FlowNAS to automatically find a better encoder architecture for the flow estimation task.
1 code implementation • 30 May 2022 • Kaicheng Yu, Tang Tao, Hongwei Xie, Zhiwei Lin, Zhongwei Wu, Zhongyu Xia, TingTing Liang, Haiyang Sun, Jiong Deng, Dayang Hao, Yongtao Wang, Xiaodan Liang, Bing Wang
There are two critical sensors for 3D perception in autonomous driving: the camera and the LiDAR.
2 code implementations • 27 May 2022 • TingTing Liang, Hongwei Xie, Kaicheng Yu, Zhongyu Xia, Zhiwei Lin, Yongtao Wang, Tao Tang, Bing Wang, Zhi Tang
Fusing camera and LiDAR information has become a de facto standard for 3D object detection tasks.
no code implementations • 1 Apr 2022 • TingTing Liang, Yixuan Jiang, Congying Xia, Ziqiang Zhao, Yuyu Yin, Philip S. Yu
Recently, conversational OpenQA has been proposed to address these issues by leveraging the abundant contextual information in the conversation.
5 code implementations • 1 Jul 2021 • TingTing Liang, Xiaojie Chu, Yudong Liu, Yongtao Wang, Zhi Tang, Wei Chu, Jingdong Chen, Haibin Ling
With multi-scale testing, we push the current best single-model result to a new record of 60.1% box AP and 52.3% mask AP without using extra training data.
Ranked #6 on Object Detection on COCO-O (using extra training data)
1 code implementation • CVPR 2021 • TingTing Liang, Yongtao Wang, Zhi Tang, Guosheng Hu, Haibin Ling
Encouraged by the success, we propose a novel One-Shot Path Aggregation Network Architecture Search (OPANAS) algorithm, which significantly improves both searching efficiency and detection accuracy.
no code implementations • COLING 2020 • Lichao Sun, Congying Xia, Wenpeng Yin, TingTing Liang, Philip S. Yu, Lifang He
Our studies show that mixup is a domain-independent data augmentation technique for pre-trained language models, resulting in significant performance improvements for transformer-based models.
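The mixup augmentation mentioned above can be sketched in its standard form: interpolate a pair of examples and their labels with a Beta-sampled coefficient. This is a generic illustration of the classic mixup formulation, not the paper's exact transformer-specific variant; the function name and vector shapes are illustrative.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Classic mixup: a convex combination of two examples and their labels.

    x1, x2 : feature vectors (e.g. pooled token embeddings)
    y1, y2 : one-hot label vectors
    alpha  : Beta-distribution concentration controlling interpolation strength
    """
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)          # mixing coefficient in [0, 1]
    x = lam * x1 + (1.0 - lam) * x2       # interpolated input
    y = lam * y1 + (1.0 - lam) * y2       # soft label, still sums to 1
    return x, y

# Toy usage: mix two labeled examples from different classes
xa, ya = np.array([1.0, 0.0, 0.0]), np.array([1.0, 0.0])
xb, yb = np.array([0.0, 1.0, 0.0]), np.array([0.0, 1.0])
x_mix, y_mix = mixup(xa, ya, xb, yb)
```

For text, the interpolation is typically applied to hidden representations rather than raw token ids, which is what makes the technique applicable to pre-trained language models.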
no code implementations • 19 Jan 2020 • Kaiyu Shan, Yongtao Wang, Zhuoying Wang, TingTing Liang, Zhi Tang, Ying Chen, Yangyan Li
To efficiently extract spatiotemporal features of video for action recognition, most state-of-the-art methods integrate 1D temporal convolution into a conventional 2D CNN backbone.
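The factorization described above, replacing a full 3D convolution with a 2D spatial convolution followed by a 1D temporal convolution, can be sketched on a toy clip. This is a minimal single-channel numpy illustration of the general idea, not the paper's actual backbone; kernel sizes and helper names are assumptions.

```python
import numpy as np

def conv2d_spatial(frames, k2d):
    """Apply the same 2D spatial kernel to every frame (valid padding)."""
    t, h, w = frames.shape
    kh, kw = k2d.shape
    out = np.zeros((t, h - kh + 1, w - kw + 1))
    for i in range(out.shape[1]):
        for j in range(out.shape[2]):
            out[:, i, j] = (frames[:, i:i + kh, j:j + kw] * k2d).sum(axis=(1, 2))
    return out

def conv1d_temporal(frames, k1d):
    """Apply a 1D kernel across the time axis at each spatial location."""
    kt = len(k1d)
    out = np.zeros((frames.shape[0] - kt + 1,) + frames.shape[1:])
    for s in range(out.shape[0]):
        out[s] = np.tensordot(k1d, frames[s:s + kt], axes=(0, 0))
    return out

# A toy clip: 4 frames of 5x5 pixels
clip = np.arange(4 * 5 * 5, dtype=float).reshape(4, 5, 5)
spatial = conv2d_spatial(clip, np.ones((3, 3)) / 9.0)           # per-frame 2D conv
fused = conv1d_temporal(spatial, np.array([0.25, 0.5, 0.25]))   # 1D temporal conv
print(fused.shape)  # (2, 3, 3)
```

The appeal of this decomposition is cost: a (kt x kh x kw) 3D kernel is replaced by kh x kw + kt weights per position, while still mixing information across time.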