Search Results for author: Jiaxiang Liu

Found 26 papers, 8 papers with code

Enhancing Large Language Models with Pseudo- and Multisource-Knowledge Graphs for Open-ended Question Answering

no code implementations • 15 Feb 2024 • Jiaxiang Liu, Tong Zhou, Yubo Chen, Kang Liu, Jun Zhao

In summary, our results pave the way for enhancing LLMs by incorporating Pseudo- and Multisource-KGs, particularly in the context of open-ended questions.

Graph Generation Knowledge Graphs +1

Scalable Geometric Fracture Assembly via Co-creation Space among Assemblers

1 code implementation • 19 Dec 2023 • Ruiyuan Zhang, Jiaxiang Liu, Zexi Li, Hao Dong, Jie Fu, Chao Wu

Therefore, there is a need to develop a scalable framework for geometric fracture assembly without relying on semantic information.

3D Assembly

A ChatGPT Aided Explainable Framework for Zero-Shot Medical Image Diagnosis

no code implementations • 5 Jul 2023 • Jiaxiang Liu, Tianxiang Hu, Yan Zhang, Xiaotang Gai, Yang Feng, Zuozhu Liu

Recent advances in pretrained vision-language models (VLMs) such as CLIP have shown great performance for zero-shot natural image recognition and exhibit benefits in medical applications.
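As context for how such zero-shot pipelines score an image against text, below is a minimal sketch of plain CLIP zero-shot classification using the Hugging Face transformers API. The checkpoint name, image path, and candidate prompts are illustrative; the paper's framework additionally uses ChatGPT-aided, symptom-level prompts rather than bare class names.

from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Illustrative checkpoint; any CLIP variant works the same way.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("scan.png")  # placeholder path to a medical image
# Hypothetical candidate prompts; an explainable framework would swap in
# richer, symptom-level descriptions of each diagnosis.
prompts = ["a photo of a benign lesion", "a photo of a malignant lesion"]

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
logits = model(**inputs).logits_per_image  # shape (1, num_prompts)
print(dict(zip(prompts, logits.softmax(dim=-1)[0].tolist())))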

Image Classification Medical Image Classification

JoinBoost: Grow Trees Over Normalized Data Using Only SQL

no code implementations • 1 Jul 2023 • Zezhou Huang, Rathijit Sen, Jiaxiang Liu, Eugene Wu

Although dominant for tabular data, ML libraries that train tree models over normalized databases (e.g., LightGBM, XGBoost) require the data to be denormalized as a single table, materialized, and exported.
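To make the denormalization problem concrete, here is a minimal sketch (not the JoinBoost API; the schema and column names are invented) of the core idea of pushing tree learning into SQL: the per-bin gradient/hessian sums needed to score a split candidate are a single aggregate over the join, so the wide denormalized table never has to be materialized.

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sales (item_id INT, grad REAL, hess REAL);  -- fact table
CREATE TABLE items (item_id INT, price_bin INT);          -- dimension table
INSERT INTO sales VALUES (1, 0.3, 1.0), (1, -0.1, 1.0), (2, 0.7, 1.0);
INSERT INTO items VALUES (1, 0), (2, 1);
""")

# Split statistics for a feature living in the dimension table,
# computed directly over the normalized schema via the join.
rows = con.execute("""
    SELECT i.price_bin, SUM(s.grad), SUM(s.hess)
    FROM sales s JOIN items i ON s.item_id = i.item_id
    GROUP BY i.price_bin
""").fetchall()
print(rows)  # per-bin (grad sum, hess sum): enough to score the split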

Efficient Text-Guided 3D-Aware Portrait Generation with Score Distillation Sampling on Distribution

no code implementations • 3 Jun 2023 • Yiji Cheng, Fei Yin, Xiaoke Huang, Xintong Yu, Jiaxiang Liu, Shikun Feng, Yujiu Yang, Yansong Tang

These elaborated designs enable our model to generate portraits with robust multi-view semantic consistency, eliminating the need for optimization-based methods.
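For background, score distillation sampling (SDS), which the title builds on, optimizes 3D parameters by backpropagating a diffusion model's denoising residual through the rendered image. Its gradient is commonly written as

\nabla_\theta \mathcal{L}_{\mathrm{SDS}}(\theta)
  = \mathbb{E}_{t,\epsilon}\!\left[ w(t)\,\bigl(\hat{\epsilon}_\phi(x_t; y, t) - \epsilon\bigr)\,\frac{\partial x}{\partial \theta} \right],

where x is the image rendered from parameters \theta, x_t its noised version at timestep t, \hat{\epsilon}_\phi the diffusion model's noise prediction conditioned on the text prompt y, and w(t) a timestep weighting. Per the title, the paper's contribution is a variant that applies this distillation on a distribution rather than a single sample.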

Text to 3D

ERNIE 3.0 Tiny: Frustratingly Simple Method to Improve Task-Agnostic Distillation Generalization

1 code implementation • 9 Jan 2023 • Weixin Liu, Xuyi Chen, Jiaxiang Liu, Shikun Feng, Yu Sun, Hao Tian, Hua Wu

Experimental results demonstrate that our method yields a student with much better generalization, significantly outperforms existing baselines, and establishes a new state-of-the-art result on in-domain, out-domain, and low-resource datasets in the setting of task-agnostic distillation.

Knowledge Distillation Language Modelling +1

Abstraction and Refinement: Towards Scalable and Exact Verification of Neural Networks

1 code implementation • 2 Jul 2022 • Jiaxiang Liu, Yunhan Xing, Xiaomu Shi, Fu Song, Zhiwu Xu, Zhong Ming

Our approach is orthogonal to and can be integrated with many existing verification techniques.
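As a generic illustration of the over-approximation style that abstraction-based verifiers build on (not this paper's specific abstraction or refinement loop), the sketch below propagates input intervals through a toy ReLU network to obtain sound output bounds:

import numpy as np

def interval_affine(lo, hi, W, b):
    # Interval arithmetic through y = Wx + b: positive weights keep
    # lower-to-lower, negative weights flip the bounds.
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def interval_relu(lo, hi):
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Toy 2-layer ReLU network with made-up weights.
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.zeros(2)
W2, b2 = np.array([[1.0, 1.0]]), np.zeros(1)

lo, hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])  # input region
lo, hi = interval_relu(*interval_affine(lo, hi, W1, b1))
lo, hi = interval_affine(lo, hi, W2, b2)
print(lo, hi)  # every input in the region maps inside these bounds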

ERNIE-Search: Bridging Cross-Encoder with Dual-Encoder via Self On-the-fly Distillation for Dense Passage Retrieval

no code implementations • 18 May 2022 • Yuxiang Lu, Yiding Liu, Jiaxiang Liu, Yunsheng Shi, Zhengjie Huang, Shikun Feng, Yu Sun, Hao Tian, Hua Wu, Shuaiqiang Wang, Dawei Yin, Haifeng Wang

Our method 1) introduces a self on-the-fly distillation method that can effectively distill late interaction (i.e., ColBERT) to a vanilla dual-encoder, and 2) incorporates a cascade distillation process to further improve the performance with a cross-encoder teacher.
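A minimal sketch of the distillation signal described above, assuming a ColBERT-style MaxSim teacher and in-batch passages; tensor shapes and the temperature are illustrative, and this is not the ERNIE-Search training code:

import torch
import torch.nn.functional as F

def maxsim_scores(q_tok, p_tok):
    # ColBERT-style late interaction: for every query-passage pair, sum
    # over query tokens of the max similarity to any passage token.
    sim = torch.einsum("qid,pjd->qpij", q_tok, p_tok)
    return sim.max(dim=-1).values.sum(dim=-1)  # (n_queries, n_passages)

def distill_loss(q_emb, p_emb, teacher_scores, tau=1.0):
    # KL from the teacher's score distribution over in-batch passages to
    # the dual-encoder's single-vector dot-product scores.
    student_scores = q_emb @ p_emb.T  # (B, B)
    s = F.log_softmax(student_scores / tau, dim=-1)
    t = F.softmax(teacher_scores / tau, dim=-1)
    return F.kl_div(s, t, reduction="batchmean")

B, n_tok, d = 4, 16, 32
teacher = maxsim_scores(torch.randn(B, n_tok, d), torch.randn(B, n_tok, d))
print(float(distill_loss(torch.randn(B, d), torch.randn(B, d), teacher)))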

Knowledge Distillation Open-Domain Question Answering +2

ERNIE-SPARSE: Learning Hierarchical Efficient Transformer Through Regularized Self-Attention

no code implementations • 23 Mar 2022 • Yang Liu, Jiaxiang Liu, Li Chen, Yuxiang Lu, Shikun Feng, Zhida Feng, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang

We argue that two factors, information bottleneck sensitivity and inconsistency between different attention topologies, could affect the performance of the Sparse Transformer.

Sparse Learning text-classification +1

AI-enabled Automatic Multimodal Fusion of Cone-Beam CT and Intraoral Scans for Intelligent 3D Tooth-Bone Reconstruction and Clinical Applications

no code implementations • 11 Mar 2022 • Jin Hao, Jiaxiang Liu, Jin Li, Wei Pan, Ruizhe Chen, Huimin Xiong, Kaiwei Sun, Hangzheng Lin, Wanlu Liu, Wanghui Ding, Jianfei Yang, Haoji Hu, Yueling Zhang, Yang Feng, Zeyu Zhao, Huikai Wu, Youyi Zheng, Bing Fang, Zuozhu Liu, Zhihe Zhao

Here, we present a Deep Dental Multimodal Analysis (DDMA) framework consisting of a CBCT segmentation model, an intraoral scan (IOS) segmentation model (the most accurate digital dental model), and a fusion model to generate 3D fused crown-root-bone structures with high fidelity and accurate occlusal and dentition information.

Segmentation

ERNIE-SPARSE: Robust Efficient Transformer Through Hierarchically Unifying Isolated Information

no code implementations • 29 Sep 2021 • Yang Liu, Jiaxiang Liu, Yuxiang Lu, Shikun Feng, Yu Sun, Zhida Feng, Li Chen, Hao Tian, Hua Wu, Haifeng Wang

The first factor is information bottleneck sensitivity, which is caused by the key feature of the Sparse Transformer: only a small number of global tokens can attend to all other tokens.
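A minimal sketch of the attention pattern this sentence describes: a handful of global tokens connect to every position while the remaining tokens only see a local window, which is why the global tokens become an information bottleneck (illustrative mask only, not the ERNIE-SPARSE implementation):

import numpy as np

def sparse_attention_mask(seq_len, n_global=2, window=3):
    # True = position i may attend to position j. Non-global tokens only
    # see a local window, so distant parts of the sequence can exchange
    # information only through the few global tokens.
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    for i in range(seq_len):
        lo, hi = max(0, i - window), min(seq_len, i + window + 1)
        mask[i, lo:hi] = True          # sliding local window
    mask[:n_global, :] = True          # global tokens attend everywhere
    mask[:, :n_global] = True          # everyone attends to global tokens
    return mask

print(sparse_attention_mask(8).astype(int))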

text-classification Text Classification

Alpha at SemEval-2021 Task 6: Transformer Based Propaganda Classification

no code implementations • SEMEVAL 2021 • Zhida Feng, Jiji Tang, Jiaxiang Liu, Weichong Yin, Shikun Feng, Yu Sun, Li Chen

This paper describes the system we submitted to Task 6 of SemEval-2021: the task focuses on multimodal propaganda technique classification and aims to classify a given image-text pair into 22 classes.
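As a generic illustration of the task setup rather than the authors' architecture, a fused image-text head over 22 labels might look like the following (feature dimensions are hypothetical; since techniques can co-occur, a sigmoid multi-label head may fit better than softmax):

import torch
import torch.nn as nn

class ImageTextClassifier(nn.Module):
    # Generic fusion head: concatenate precomputed image and text
    # features, then map to 22 technique logits. Illustrative only.
    def __init__(self, img_dim=512, txt_dim=768, n_classes=22):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(img_dim + txt_dim, 256), nn.ReLU(),
            nn.Linear(256, n_classes))

    def forward(self, img_feat, txt_feat):
        return self.head(torch.cat([img_feat, txt_feat], dim=-1))

logits = ImageTextClassifier()(torch.randn(2, 512), torch.randn(2, 768))
print(logits.shape)  # torch.Size([2, 22])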

Classification

ERNIE-Tiny: A Progressive Distillation Framework for Pretrained Transformer Compression

1 code implementation • 4 Jun 2021 • Weiyue Su, Xuyi Chen, Shikun Feng, Jiaxiang Liu, Weixin Liu, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang

Specifically, the first stage, General Distillation, performs distillation with guidance from the pretrained teacher, general data, and a latent distillation loss.
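A hedged sketch of what such a first-stage objective can look like, reading "latent distillation loss" as hidden-state matching combined with soft-logit distillation; this is a common formulation, not the exact ERNIE-Tiny loss:

import torch
import torch.nn.functional as F

def general_distill_loss(s_hidden, t_hidden, s_logits, t_logits,
                         proj, tau=2.0, alpha=0.5):
    # Latent term: match (projected) student hidden states to the
    # teacher's. Soft term: temperature-scaled KL on the logits.
    latent = F.mse_loss(proj(s_hidden), t_hidden)
    soft = F.kl_div(F.log_softmax(s_logits / tau, dim=-1),
                    F.softmax(t_logits / tau, dim=-1),
                    reduction="batchmean") * tau * tau
    return alpha * latent + (1.0 - alpha) * soft

# Toy usage: student hidden size 128 projected up to the teacher's 256.
proj = torch.nn.Linear(128, 256)
loss = general_distill_loss(torch.randn(4, 16, 128), torch.randn(4, 16, 256),
                            torch.randn(4, 100), torch.randn(4, 100), proj)
print(float(loss))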

Knowledge Distillation

Pose Discrepancy Spatial Transformer Based Feature Disentangling for Partial Aspect Angles SAR Target Recognition

no code implementations • 7 Mar 2021 • Zaidao Wen, Jiaxiang Liu, ZhunGa Liu, Quan Pan

This letter presents a novel framework termed DistSTN for the task of synthetic aperture radar (SAR) automatic target recognition (ATR).

ERNIE at SemEval-2020 Task 10: Learning Word Emphasis Selection by Pre-trained Language Model

no code implementations • SEMEVAL 2020 • Zhengjie Huang, Shikun Feng, Weiyue Su, Xuyi Chen, Shuohuan Wang, Jiaxiang Liu, Xuan Ouyang, Yu Sun

This paper describes the system designed by ERNIE Team which achieved the first place in SemEval-2020 Task 10: Emphasis Selection For Written Text in Visual Media.

Data Augmentation Feature Engineering +3

OleNet at SemEval-2019 Task 9: BERT based Multi-Perspective Models for Suggestion Mining

no code implementations • SEMEVAL 2019 • Jiaxiang Liu, Shuohuan Wang, Yu Sun

This paper describes the system we submitted to Task 9 of SemEval-2019: the task focuses on suggestion mining and aims to classify given sentences into suggestion and non-suggestion classes in domain-specific and cross-domain training settings, respectively.
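A minimal sketch of the underlying setup (BERT fine-tuned as a binary sentence classifier), assuming the Hugging Face transformers API; the submitted system layers multi-perspective models on top of this:

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # suggestion vs. non-suggestion

batch = tok(["Please add a dark mode to the editor."],
            return_tensors="pt", padding=True, truncation=True)
with torch.no_grad():
    logits = model(**batch).logits
print(logits.softmax(dim=-1))  # head is untrained; fine-tune on task data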

Sentence Suggestion mining
