Search Results for author: Jiaheng Liu

Found 35 papers, 13 papers with code

MaterialSeg3D: Segmenting Dense Materials from 2D Priors for 3D Assets

no code implementations • 22 Apr 2024 • Zeyu Li, Ruitong Gan, Chuanchen Luo, Yuxi Wang, Jiaheng Liu, Ziwei Zhu, Man Zhang, Qing Li, XuCheng Yin, Zhaoxiang Zhang, Junran Peng

Driven by powerful image diffusion models, recent research has achieved the automatic creation of 3D objects from textual or visual guidance.

Chinese Tiny LLM: Pretraining a Chinese-Centric Large Language Model

no code implementations • 5 Apr 2024 • Xinrun Du, Zhouliang Yu, Songyang Gao, Ding Pan, Yuyang Cheng, Ziyang Ma, Ruibin Yuan, Xingwei Qu, Jiaheng Liu, Tianyu Zheng, Xinchen Luo, Guorui Zhou, Binhang Yuan, Wenhu Chen, Jie Fu, Ge Zhang

In this study, we introduce CT-LLM, a 2B large language model (LLM) that illustrates a pivotal shift towards prioritizing the Chinese language in developing LLMs.

Language Modelling, Large Language Model

Lemur: Log Parsing with Entropy Sampling and Chain-of-Thought Merging

1 code implementation • 28 Feb 2024 • Wei Zhang, Hongcheng Guo, Anjie Le, Jian Yang, Jiaheng Liu, Zhoujun Li, Tieqiao Zheng, Shi Xu, Runqiang Zang, Liangfan Zheng, Bo Zhang

Log parsing, which entails transforming raw log messages into structured templates, constitutes a critical phase in the automation of log analytics.

Log Parsing
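
For readers unfamiliar with the task, a minimal sketch of template extraction follows. This is not Lemur's entropy-sampling or chain-of-thought merging method, only the general idea of masking variable fields; the regex patterns and example message are illustrative assumptions.

    import re

    def extract_template(log_message: str) -> str:
        """Replace variable fields (IPs, hex ids, numbers) with a <*> wildcard."""
        patterns = [
            r"\b\d{1,3}(?:\.\d{1,3}){3}\b",  # IPv4 addresses
            r"\b0x[0-9a-fA-F]+\b",           # hex identifiers
            r"\b\d+\b",                      # plain numbers
        ]
        template = log_message
        for p in patterns:
            template = re.sub(p, "<*>", template)
        return template

    print(extract_template("Connection from 10.251.42.84 closed after 300 ms"))
    # -> Connection from <*> closed after <*> ms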

ConceptMath: A Bilingual Concept-wise Benchmark for Measuring Mathematical Reasoning of Large Language Models

1 code implementation • 22 Feb 2024 • Yanan Wu, Jie Liu, Xingyuan Bu, Jiaheng Liu, Zhanhui Zhou, Yuanxing Zhang, Chenchen Zhang, Zhiqi Bai, Haibin Chen, Tiezheng Ge, Wanli Ouyang, Wenbo Su, Bo Zheng

This paper introduces ConceptMath, a bilingual (English and Chinese), fine-grained benchmark that evaluates concept-wise mathematical reasoning of Large Language Models (LLMs).

Math, Mathematical Reasoning

MT-Bench-101: A Fine-Grained Benchmark for Evaluating Large Language Models in Multi-Turn Dialogues

no code implementations • 22 Feb 2024 • Ge Bai, Jie Liu, Xingyuan Bu, Yancheng He, Jiaheng Liu, Zhanhui Zhou, Zhuoran Lin, Wenbo Su, Tiezheng Ge, Bo Zheng, Wanli Ouyang

By conducting a detailed analysis of real multi-turn dialogue data, we construct a three-tier hierarchical ability taxonomy comprising 4208 turns across 1388 multi-turn dialogues in 13 distinct tasks.

Emulated Disalignment: Safety Alignment for Large Language Models May Backfire!

1 code implementation • 19 Feb 2024 • Zhanhui Zhou, Jie Liu, Zhichen Dong, Jiaheng Liu, Chao Yang, Wanli Ouyang, Yu Qiao

Large language models (LLMs) need to undergo safety alignment to ensure safe conversations with humans.

Language Modelling

MLAD: A Unified Model for Multi-system Log Anomaly Detection

no code implementations • 15 Jan 2024 • Runqiang Zang, Hongcheng Guo, Jian Yang, Jiaheng Liu, Zhoujun Li, Tieqiao Zheng, Xu Shi, Liangfan Zheng, Bo Zhang

Despite rapid advances in unsupervised log anomaly detection, current mainstream models still require specific training for individual system datasets, which is costly, scales poorly with dataset size, and leads to performance bottlenecks.

Anomaly Detection, Relational Reasoning, +1

E^2-LLM: Efficient and Extreme Length Extension of Large Language Models

no code implementations • 13 Jan 2024 • Jiaheng Liu, Zhiqi Bai, Yuanxing Zhang, Chenchen Zhang, Yu Zhang, Ge Zhang, Jiakai Wang, Haoran Que, Yukang Chen, Wenbo Su, Tiezheng Ge, Jie Fu, Wenhu Chen, Bo Zheng

Typically, training LLMs with long context sizes is computationally expensive, requiring extensive training hours and GPU resources.

4k, Position
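
The tags above hint at the mechanism: the paper manipulates RoPE position parameters so a model trained at a short length (e.g., 4k) can serve much longer contexts. The sketch below shows plain position interpolation, a related but simpler scheme than E^2-LLM's augmentation strategy; dimensions and lengths are illustrative assumptions.

    import torch

    def rope_angles(dim: int, max_pos: int, scale: float = 1.0, base: float = 10000.0):
        """Rotary-embedding angles; scale > 1 squeezes long positions into the
        position range the model saw during its short-context training."""
        inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
        positions = torch.arange(max_pos).float() / scale  # position interpolation
        angles = torch.outer(positions, inv_freq)          # (max_pos, dim // 2)
        return torch.cos(angles), torch.sin(angles)

    # Trained at 4k, evaluated at 16k: interpolate with scale = 16384 / 4096.
    cos, sin = rope_angles(dim=128, max_pos=16384, scale=16384 / 4096)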

xCoT: Cross-lingual Instruction Tuning for Cross-lingual Chain-of-Thought Reasoning

no code implementations • 13 Jan 2024 • Linzheng Chai, Jian Yang, Tao Sun, Hongcheng Guo, Jiaheng Liu, Bing Wang, Xiannian Liang, Jiaqi Bai, Tongliang Li, Qiyao Peng, Zhoujun Li

To bridge the gap among different languages, we propose a cross-lingual instruction fine-tuning framework (xCoT) to transfer knowledge from high-resource languages to low-resource languages.

Few-Shot Learning, Language Modelling, +1

M2C: Towards Automatic Multimodal Manga Complement

1 code implementation • 26 Oct 2023 • Hongcheng Guo, Boyang Wang, Jiaqi Bai, Jiaheng Liu, Jian Yang, Zhoujun Li

The Multimodal Manga Complement (M2C) task, which aims to address these issues by providing a shared semantic space for vision and language understanding, has not previously been investigated.

RoleLLM: Benchmarking, Eliciting, and Enhancing Role-Playing Abilities of Large Language Models

1 code implementation • 1 Oct 2023 • Zekun Moore Wang, Zhongyuan Peng, Haoran Que, Jiaheng Liu, Wangchunshu Zhou, Yuhan Wu, Hongcheng Guo, Ruitong Gan, Zehao Ni, Man Zhang, Zhaoxiang Zhang, Wanli Ouyang, Ke Xu, Wenhu Chen, Jie Fu, Junran Peng

The advent of Large Language Models (LLMs) has paved the way for complex tasks such as role-playing, which enhances user interactions by enabling models to imitate various characters.

Benchmarking

GripRank: Bridging the Gap between Retrieval and Generation via the Generative Knowledge Improved Passage Ranking

no code implementations • 29 May 2023 • Jiaqi Bai, Hongcheng Guo, Jiaheng Liu, Jian Yang, Xinnian Liang, Zhao Yan, Zhoujun Li

However, the retrieved passages are not ideal for guiding answer generation because of the discrepancy between retrieval and generation, i.e., candidate passages are all treated equally during retrieval, without considering their potential to generate a proper answer.

Answer Generation, Dialogue Generation, +6
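
The core intuition above, ranking candidate passages by how likely a generator is to produce the correct answer from them, can be sketched as follows. This is an illustrative scorer using a generic seq2seq model, not GripRank's trained passage ranker; the model name, prompt format, and example variables are assumptions.

    import torch
    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    tok = AutoTokenizer.from_pretrained("t5-small")
    gen = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

    def passage_score(question: str, passage: str, answer: str) -> float:
        """Higher score = generator finds the answer more likely given this passage."""
        enc = tok(f"question: {question} context: {passage}",
                  return_tensors="pt", truncation=True)
        labels = tok(answer, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = gen(**enc, labels=labels).loss  # mean token negative log-likelihood
        return -loss.item()

    question, gold = "Where was Marie Curie born?", "Warsaw"
    passages = ["Marie Curie was born in Warsaw in 1867.",
                "Curie won two Nobel Prizes."]
    ranked = sorted(passages, key=lambda p: passage_score(question, p, gold),
                    reverse=True)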

Multilingual Entity and Relation Extraction from Unified to Language-specific Training

no code implementations • 11 Jan 2023 • Zixiang Wang, Jian Yang, Tongliang Li, Jiaheng Liu, Ying Mo, Jiaqi Bai, Longtao He, Zhoujun Li

In this paper, we propose a two-stage multilingual training method and a joint model called Multilingual Entity and Relation Extraction framework (mERE) to mitigate language interference across languages.

Relation, Relation Extraction, +1

ICD-Face: Intra-class Compactness Distillation for Face Recognition

no code implementations • ICCV 2023 • Zhipeng Yu, Jiaheng Liu, Haoyu Qin, Yichao Wu, Kun Hu, Jiayi Tian, Ding Liang

Knowledge distillation is an effective model compression method to improve the performance of a lightweight student model by transferring the knowledge of a well-performed teacher model, which has been widely adopted in many computer vision tasks, including face recognition (FR).

Face Recognition, Knowledge Distillation, +1
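
For context, the generic distillation objective the snippet refers to is shown below (the classic softened-logits form). ICD-Face's actual contribution, intra-class compactness distillation on feature similarities, is a different loss built on the same teacher-student setup and is not reproduced here.

    import torch
    import torch.nn.functional as F

    def kd_loss(student_logits, teacher_logits, T: float = 4.0) -> torch.Tensor:
        """Hinton-style distillation: match the teacher's softened probabilities."""
        log_p_s = F.log_softmax(student_logits / T, dim=1)
        p_t = F.softmax(teacher_logits / T, dim=1)
        return F.kl_div(log_p_s, p_t, reduction="batchmean") * T * T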

GD-MAE: Generative Decoder for MAE Pre-training on LiDAR Point Clouds

1 code implementation • CVPR 2023 • Honghui Yang, Tong He, Jiaheng Liu, Hua Chen, Boxi Wu, Binbin Lin, Xiaofei He, Wanli Ouyang

In contrast to previous 3D MAE frameworks, which either design a complex decoder to infer masked information from maintained regions or adopt sophisticated masking strategies, we instead propose a much simpler paradigm.

3D-QueryIS: A Query-based Framework for 3D Instance Segmentation

no code implementations • 17 Nov 2022 • Jiaheng Liu, Tong He, Honghui Yang, Rui Su, Jiayi Tian, Junran Wu, Hongcheng Guo, Ke Xu, Wanli Ouyang

Previous top-performing methods for 3D instance segmentation often introduce inter-task dependencies and tend to lack robustness.

3D Instance Segmentation, Segmentation, +1

LVP-M3: Language-aware Visual Prompt for Multilingual Multimodal Machine Translation

no code implementations • 19 Oct 2022 • Hongcheng Guo, Jiaheng Liu, Haoyang Huang, Jian Yang, Zhoujun Li, Dongdong Zhang, Zheng Cui, Furu Wei

To this end, we first propose the Multilingual MMT task by establishing two new Multilingual MMT benchmark datasets covering seven languages.

Multimodal Machine Translation, Translation

LogLG: Weakly Supervised Log Anomaly Detection via Log-Event Graph Construction

no code implementations • 23 Aug 2022 • Hongcheng Guo, Yuhui Guo, Renjie Chen, Jian Yang, Jiaheng Liu, Zhoujun Li, Tieqiao Zheng, Weichao Hou, Liangfan Zheng, Bo Zhang

Experiments on five benchmarks validate the effectiveness of LogLG for detecting anomalies on unlabeled log data and demonstrate that LogLG, as the state-of-the-art weakly supervised method, achieves significant performance improvements compared to existing methods.

Anomaly Detection, Graph Construction, +1

Deep 3D Vessel Segmentation based on Cross Transformer Network

1 code implementation • 22 Aug 2022 • Chengwei Pan, Baolian Qi, Gangming Zhao, Jiaheng Liu, Chaowei Fang, Dingwen Zhang, Jinpeng Li

In CTN, a transformer module is constructed in parallel to a U-Net to learn long-distance dependencies between different anatomical regions; and these dependencies are communicated to the U-Net at multiple stages to endow it with global awareness.

Computed Tomography (CT), Segmentation
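
A toy sketch of that parallel-branch design follows: a transformer runs alongside a convolutional stage and its globally-attended features are added back into the CNN stream. The channel count, depth, and single fusion point are illustrative assumptions, not the paper's exact CTN architecture; full attention over every voxel is only affordable for small volumes.

    import torch
    import torch.nn as nn

    class CrossBranchStage(nn.Module):
        def __init__(self, channels: int, heads: int = 4):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv3d(channels, channels, 3, padding=1), nn.ReLU())
            layer = nn.TransformerEncoderLayer(
                d_model=channels, nhead=heads, batch_first=True)
            self.transformer = nn.TransformerEncoder(layer, num_layers=1)

        def forward(self, x):                      # x: (B, C, D, H, W)
            local = self.conv(x)                   # local convolutional features
            b, c, d, h, w = x.shape
            tokens = x.flatten(2).transpose(1, 2)  # (B, D*H*W, C)
            glob = self.transformer(tokens)        # long-distance dependencies
            glob = glob.transpose(1, 2).view(b, c, d, h, w)
            return local + glob                    # communicate global context

    stage = CrossBranchStage(channels=32)
    out = stage(torch.randn(1, 32, 8, 16, 16))     # 2048 tokens: keep volumes small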

Computer-aided Tuberculosis Diagnosis with Attribute Reasoning Assistance

1 code implementation • 1 Jul 2022 • Chengwei Pan, Gangming Zhao, Junjie Fang, Baolian Qi, Jiaheng Liu, Chaowei Fang, Dingwen Zhang, Jinpeng Li, Yizhou Yu

Although deep learning algorithms have been intensively developed for computer-aided tuberculosis diagnosis (CTD), they mainly depend on carefully annotated datasets, leading to much time and resource consumption.

Attribute, Relational Reasoning, +1

CoupleFace: Relation Matters for Face Recognition Distillation

no code implementations • 12 Apr 2022 • Jiaheng Liu, Haoyu Qin, Yichao Wu, Jinyang Guo, Ding Liang, Ke Xu

In this work, we observe that mutual relation knowledge between samples is also important for improving the discriminative ability of the student model's learned representation. We therefore propose an effective face recognition distillation method, CoupleFace, which additionally introduces Mutual Relation Distillation (MRD) into the existing distillation framework.

Face Recognition, Knowledge Distillation, +1
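
A simplified sketch of relation-level distillation in that spirit: align student-teacher cross similarities with the teacher's own similarity structure, so relations between samples, not just individual features, are transferred. CoupleFace additionally mines informative pairs and combines MRD with feature distillation, which this sketch omits.

    import torch
    import torch.nn.functional as F

    def mutual_relation_loss(f_s: torch.Tensor, f_t: torch.Tensor) -> torch.Tensor:
        """f_s, f_t: (batch, dim) student / teacher embeddings of the same images."""
        f_s = F.normalize(f_s, dim=1)
        f_t = F.normalize(f_t, dim=1)
        sim_st = f_s @ f_t.t()  # student anchors vs. teacher gallery
        sim_tt = f_t @ f_t.t()  # teacher's own relation structure
        return F.mse_loss(sim_st, sim_tt)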

Inter-class Discrepancy Alignment for Face Recognition

no code implementations • 2 Mar 2021 • Jiaheng Liu, Yudong Wu, Yichao Wu, Zhenmao Li, Chen Ken, Ding Liang, Junjie Yan

In this study, we make a key observation that the local context represented by the similarities between the instance and its inter-class neighbors plays an important role for FR.

Face Recognition

DAM: Discrepancy Alignment Metric for Face Recognition

no code implementations • ICCV 2021 • Jiaheng Liu, Yudong Wu, Yichao Wu, Chuming Li, Xiaolin Hu, Ding Liang, Mengyu Wang

To estimate the LID of each face image in the verification process, we propose two types of LID Estimation (LIDE) methods, which are reference-based and learning-based estimation methods, respectively.

Face Recognition

A Unified End-to-End Framework for Efficient Deep Image Compression

1 code implementation • 9 Feb 2020 • Jiaheng Liu, Guo Lu, Zhihao Hu, Dong Xu

Our EDIC method can also be readily incorporated with the Deep Video Compression (DVC) framework to further improve the video compression performance.

Image Compression, Video Compression

Learning to Auto Weight: Entirely Data-driven and Highly Efficient Weighting Framework

no code implementations • 27 May 2019 • Zhenmao Li, Yichao Wu, Ken Chen, Yudong Wu, Shunfeng Zhou, Jiaheng Liu, Junjie Yan

Example weighting is an effective solution to the training-bias problem; however, most previous methods are limited by human knowledge and require laborious tuning of hyperparameters.
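
For reference, the mechanism all such methods share is a per-example weight on the training loss; how the weights are produced is the research question. A minimal sketch, with the weighting policy itself left abstract:

    import torch
    import torch.nn.functional as F

    def weighted_ce(logits, targets, weights):
        """weights: (batch,) non-negative scores from some weighting policy."""
        per_example = F.cross_entropy(logits, targets, reduction="none")
        return (weights * per_example).sum() / weights.sum().clamp(min=1e-8)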

Knowledge Distillation via Route Constrained Optimization

1 code implementation • ICCV 2019 • Xiao Jin, Baoyun Peng, Yi-Chao Wu, Yu Liu, Jiaheng Liu, Ding Liang, Xiaolin Hu

However, we find that the representation of a converged heavy model is still a strong constraint for training a small student model, which leads to a high lower bound of congruence loss.

Face Recognition, Knowledge Distillation
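
The remedy RCO proposes is to distill along the teacher's optimization route rather than only from its converged endpoint. A compact sketch with tiny stand-in models; the checkpoint paths are hypothetical, and RCO's actual route selection is more involved than a fixed list.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    student, teacher = nn.Linear(16, 4), nn.Linear(16, 4)  # stand-ins for real nets
    opt = torch.optim.SGD(student.parameters(), lr=0.1)
    x, y = torch.randn(32, 16), torch.randint(0, 4, (32,))

    route = ["t_ep10.pt", "t_ep40.pt", "t_final.pt"]  # hypothetical checkpoint files
                                                      # saved during teacher training
    for ckpt in route:  # easy-to-hard targets along the teacher's training route
        teacher.load_state_dict(torch.load(ckpt))
        with torch.no_grad():
            t_logits = teacher(x)
        s_logits = student(x)
        loss = F.kl_div(F.log_softmax(s_logits, 1), F.softmax(t_logits, 1),
                        reduction="batchmean") + F.cross_entropy(s_logits, y)
        opt.zero_grad(); loss.backward(); opt.step()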

Correlation Congruence for Knowledge Distillation

2 code implementations • ICCV 2019 • Baoyun Peng, Xiao Jin, Jiaheng Liu, Shunfeng Zhou, Yi-Chao Wu, Yu Liu, Dongsheng Li, Zhaoning Zhang

Most teacher-student frameworks based on knowledge distillation (KD) depend on a strong congruence constraint at the instance level.

Face Recognition, Image Classification, +3
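
The correlation-congruence idea can be sketched in a few lines: instead of matching features one instance at a time, match the batch's instance-to-instance correlation matrices between teacher and student. CCKD also introduces kernel-based correlations and batch sampling strategies; this shows only the simplest linear variant.

    import torch
    import torch.nn.functional as F

    def correlation_congruence_loss(f_s: torch.Tensor, f_t: torch.Tensor) -> torch.Tensor:
        """f_s, f_t: (batch, dim) student / teacher embeddings."""
        c_s = F.normalize(f_s, dim=1) @ F.normalize(f_s, dim=1).t()
        c_t = F.normalize(f_t, dim=1) @ F.normalize(f_t, dim=1).t()
        return (c_s - c_t).pow(2).mean()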

Dynamic Multi-path Neural Network

no code implementations • 28 Feb 2019 • Yingcheng Su, Shunfeng Zhou, Yi-Chao Wu, Tian Su, Ding Liang, Jiaheng Liu, Dixin Zheng, Yingxu Wang, Junjie Yan, Xiaolin Hu

Although deeper and larger neural networks have achieved better performance, the complex network structure and increasing computational cost cannot meet the demands of many resource-constrained applications.
