1 code implementation • 8 Mar 2024 • Hao Kang, Qingru Zhang, Souvik Kundu, Geonhwa Jeong, Zaoxing Liu, Tushar Krishna, Tuo Zhao
Key-value (KV) caching has become the de facto standard for accelerating generation in large language model (LLM) inference.
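KV caching avoids recomputing attention keys and values for past tokens during autoregressive decoding: only the new token's key/value pair is appended each step. A minimal single-head sketch (illustrative only; the weight matrices, dimensions, and helper names here are hypothetical, not from the paper):

```python
import numpy as np

def attention(q, K, V):
    # Single-head scaled dot-product attention for one query vector.
    # q: (d,), K and V: (t, d) where t = number of cached tokens.
    scores = K @ q / np.sqrt(q.shape[0])
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ V

d = 4
rng = np.random.default_rng(0)
W_q, W_k, W_v = (rng.standard_normal((d, d)) for _ in range(3))

K_cache = np.empty((0, d))
V_cache = np.empty((0, d))
outputs = []
for step in range(3):
    x = rng.standard_normal(d)  # hidden state of the newly generated token
    # Append only the new token's key/value; earlier rows are reused as-is,
    # so per-step cost stays linear in the current sequence length.
    K_cache = np.vstack([K_cache, (W_k @ x)[None, :]])
    V_cache = np.vstack([V_cache, (W_v @ x)[None, :]])
    outputs.append(attention(W_q @ x, K_cache, V_cache))

print(K_cache.shape)  # cache grows by one row per generated token → (3, 4)
```

The memory cost of this cache grows linearly with sequence length (and with layers and heads in a real model), which is the pressure that KV-cache compression work like the paper above targets.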
no code implementations • 26 Dec 2023 • Lu Ling, Yichen Sheng, Zhi Tu, Wentian Zhao, Cheng Xin, Kun Wan, Lantao Yu, Qianyu Guo, Zixun Yu, Yawen Lu, Xuanmao Li, Xingpeng Sun, Rohan Ashok, Aniruddha Mukherjee, Hao Kang, Xiangrui Kong, Gang Hua, Tianyi Zhang, Bedrich Benes, Aniket Bera
We have witnessed significant progress in deep learning-based 3D vision, ranging from neural radiance field (NeRF) based 3D representation learning to applications in novel view synthesis (NVS).
1 code implementation • 28 Nov 2023 • Jiaxin Lu, Hao Kang, Haoxiang Li, Bo Liu, Yiding Yang, QiXing Huang, Gang Hua
Generation-based methods that produce grasping postures conditioned on the object can often yield diverse grasps, but they fall short of high grasping success due to a lack of discriminative information.
1 code implementation • 15 Nov 2023 • Yutian Chen, Hao Kang, Vivian Zhai, Liangze Li, Rita Singh, Bhiksha Raj
This paper introduces a novel approach for identifying the possible large language models (LLMs) involved in text generation.
1 code implementation • 2 Jun 2023 • Yu Yang, Hao Kang, Baharan Mirzasoleiman
To improve the efficiency and sustainability of learning deep models, we propose CREST, the first scalable framework with rigorous theoretical guarantees to identify the most valuable examples for training non-convex models, particularly deep networks.
2 code implementations • 13 May 2023 • Yutian Chen, Hao Kang, Vivian Zhai, Liangze Li, Rita Singh, Bhiksha Raj
This paper presents a novel approach for detecting ChatGPT-generated vs. human-written text using language models.
1 code implementation • ICCV 2023 • Siming Yan, Zhenpei Yang, Haoxiang Li, Chen Song, Li Guan, Hao Kang, Gang Hua, QiXing Huang
The most popular and accessible 3D representation, i.e., point clouds, involves discrete samples of the underlying continuous 3D surface.
Ranked #5 on 3D Point Cloud Linear Classification on ModelNet40 (using extra training data)
no code implementations • 1 May 2021 • Bo Liu, Haoxiang Li, Hao Kang, Nuno Vasconcelos, Gang Hua
A consistency loss is introduced to limit the impact of unlabeled data while leveraging it to update the feature embedding.
no code implementations • 1 May 2021 • Bo Liu, Haoxiang Li, Hao Kang, Gang Hua, Nuno Vasconcelos
It is shown that, unlike class-balanced sampling, this is an adversarial augmentation strategy.
no code implementations • ICCV 2021 • Bo Liu, Haoxiang Li, Hao Kang, Gang Hua, Nuno Vasconcelos
A new learning algorithm is then proposed for GeometrIc Structure Transfer (GIST). It combines loss functions based on class-balanced and random sampling so that overfitting to the popular classes is restricted to geometric parameters, and that overfitting is leveraged to transfer class geometry from popular to few-shot classes.
no code implementations • 24 Mar 2021 • Wei Wei, Li Guan, Yue Liu, Hao Kang, Haoxiang Li, Ying Wu, Gang Hua
With the proposed physical regularization, our method can generate HDR images that are not only visually appealing but also physically plausible.
1 code implementation • CVPR 2020 • Bo Liu, Hao Kang, Haoxiang Li, Gang Hua, Nuno Vasconcelos
It is argued that the classic softmax classifier is a poor solution for open-set recognition, since it tends to overfit to the training classes.
no code implementations • 27 Sep 2016 • Sören Pirk, Vojtech Krs, Kaimo Hu, Suren Deepak Rajasekaran, Hao Kang, Bedrich Benes, Yusuke Yoshiyasu, Leonidas J. Guibas
We introduce a new general representation for proximal interactions among physical objects that is agnostic to the type of objects or interaction involved.