1 code implementation • 23 May 2024 • Chuanyang Zheng, Yihang Gao, Han Shi, Minbin Huang, Jingyao Li, Jing Xiong, Xiaozhe Ren, Michael Ng, Xin Jiang, Zhenguo Li, Yu Li
Positional encoding plays a crucial role in transformers, significantly impacting model performance and length generalization.
no code implementations • 22 Feb 2024 • Jingyao Li, Pengguang Chen, Xuan Ju, Hong Xu, Jiaya Jia
Our research aims to bridge the domain gap between natural and artificial scenarios with efficient tuning strategies.
no code implementations • 5 Jan 2024 • Jingyao Li, Pengguang Chen, Shaozuo Yu, Shu Liu, Jiaya Jia
The crux of effective out-of-distribution (OOD) detection lies in acquiring a robust in-distribution (ID) representation, distinct from OOD samples.
Out-of-Distribution Detection
1 code implementation • 26 Dec 2023 • Jingyao Li, Pengguang Chen, Shaozuo Yu, Shu Liu, Jiaya Jia
Experimental results demonstrate that, when labeling 80% of the samples, the performance of the current SOTA method declines by 0.74%, whereas our proposed BAL achieves performance comparable to that of the full dataset.
1 code implementation • 26 Dec 2023 • Jingyao Li, Pengguang Chen, Jiaya Jia
Large Language Models (LLMs) have showcased impressive capabilities in handling straightforward programming tasks.
Ranked #1 on Code Generation on CodeContests (Test Set pass@1 metric)
no code implementations • 15 Apr 2023 • Jingyao Li, Pengguang Chen, Shengju Qian, Jiaya Jia
However, existing models easily misidentify input pixels from unseen classes, thus confusing novel classes with semantically similar ones.
1 code implementation • CVPR 2023 • Jingyao Li, Pengguang Chen, Shaozuo Yu, Zexin He, Shu Liu, Jiaya Jia
The core of out-of-distribution (OOD) detection is to learn the in-distribution (ID) representation, which is distinguishable from OOD samples.
Ranked #12 on Out-of-Distribution Detection on ImageNet-1k vs Places (AUROC metric)
1 code implementation • 25 Jul 2021 • Junjie Li, Jingyao Li, Wenbo Zhou, Shuai Lü
The training of generative adversarial networks (GANs) is usually vulnerable to mode collapse and vanishing gradients.