Search Results for author: Liangzhi Li

Found 19 papers, 13 papers with code

Can multiple-choice questions really be useful in detecting the abilities of LLMs?

1 code implementation · 26 Mar 2024 · Wangyue Li, Liangzhi Li, Tong Xiang, Xiao Liu, Wei Deng, Noa Garcia

Additionally, we propose two methods to quantify the consistency and confidence of LLMs' output, which can be generalized to other QA evaluation benchmarks.

Multiple-choice Question Answering

BESTMVQA: A Benchmark Evaluation System for Medical Visual Question Answering

no code implementations · 13 Dec 2023 · Xiaojie Hong, Zixin Song, Liangzhi Li, Xiaoli Wang, Feiyan Liu

Medical Visual Question Answering (Med-VQA) is an important task in the healthcare industry, in which a natural language question is answered based on a medical image.

Medical Visual Question Answering · Question Answering +1

Towards Robust and Accurate Visual Prompting

no code implementations · 18 Nov 2023 · Qi Li, Liangzhi Li, Zhouqiang Jiang, Bowen Wang

Visual prompting, an efficient method for transfer learning, has shown its potential in vision tasks.

Adversarial Robustness · Transfer Learning +1
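Visual prompting is commonly realized by learning a small pixel-space perturbation, such as a border pattern, that is applied to every input of a frozen model. The sketch below illustrates only that input-side mechanic, under assumptions of mine (a fixed border-style prompt applied with NumPy); it is not the method of the paper above, where the prompt would be learned by backpropagation.

```python
import numpy as np

def apply_border_prompt(image, prompt, width):
    """Overwrite a `width`-pixel border of `image` with values from
    `prompt`, leaving the centre region untouched.

    Illustrative only: in actual visual-prompting methods the prompt
    tensor is a trainable parameter optimized through a frozen model.
    """
    out = image.copy()
    mask = np.zeros(image.shape[:2], dtype=bool)
    mask[:width, :] = mask[-width:, :] = True   # top and bottom strips
    mask[:, :width] = mask[:, -width:] = True   # left and right strips
    out[mask] = prompt[mask]
    return out

# Toy usage: a black image receives a constant 0.5-valued border prompt.
image = np.zeros((8, 8, 3))
prompt = np.full((8, 8, 3), 0.5)
prompted = apply_border_prompt(image, prompt, width=2)
```

The same frozen classifier then sees `prompted` instead of `image`; only the prompt values would be updated during training.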

Instruct Me More! Random Prompting for Visual In-Context Learning

1 code implementation · 7 Nov 2023 · Jiahao Zhang, Bowen Wang, Liangzhi Li, Yuta Nakashima, Hajime Nagahara

Our findings suggest that InMeMo offers a versatile and efficient way to enhance the performance of visual ICL with lightweight training.

Foreground Segmentation · In-Context Learning +2

Concatenated Masked Autoencoders as Spatial-Temporal Learner

1 code implementation · 2 Nov 2023 · Zhouqiang Jiang, Bowen Wang, Tong Xiang, Zhaofeng Niu, Hong Tang, Guangshun Li, Liangzhi Li

Learning representations from videos requires understanding continuous motion and visual correspondences between frames.

Action Recognition · Data Augmentation +3

TCRA-LLM: Token Compression Retrieval Augmented Large Language Model for Inference Cost Reduction

no code implementations · 24 Oct 2023 · Junyi Liu, Liangzhi Li, Tong Xiang, Bowen Wang, Yiming Qian

Our summarization compression can reduce the retrieval token size by 65% with a further 0.3% improvement in accuracy; semantic compression provides a more flexible way to trade off token size against performance, reducing the token size by 20% with only a 1.6% drop in accuracy.

Food recommendation · In-Context Learning +3
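The core idea of compressing retrieved context before it reaches the LLM can be sketched as follows. This is a minimal illustration of mine, not TCRA-LLM itself: tokens are approximated by whitespace splitting, and the relevance scores are assumed to come from the retriever.

```python
def compress_retrieved(passages, budget):
    """Keep the most relevant retrieved passages within a token budget.

    `passages` is a list of (relevance_score, text) pairs; passages that
    would overflow the budget are skipped. Token counts use a crude
    whitespace split, standing in for a real tokenizer.
    """
    kept = []
    used = 0
    for score, text in sorted(passages, key=lambda p: -p[0]):
        n_tokens = len(text.split())
        if used + n_tokens > budget:
            continue  # dropping this passage is the "compression"
        kept.append(text)
        used += n_tokens
    return " ".join(kept), used

# Toy usage with hypothetical retriever scores.
docs = [(0.9, "apple pie recipe uses six apples"),
        (0.2, "unrelated text about cars"),
        (0.7, "bake at 180 C for forty minutes")]
context, used = compress_retrieved(docs, budget=13)
```

The paper's summarization and semantic compression rewrite the retrieved text itself, which trades accuracy against token count more smoothly than the hard cutoff shown here.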

Dual-Feedback Knowledge Retrieval for Task-Oriented Dialogue Systems

no code implementations · 23 Oct 2023 · Tianyuan Shi, Liangzhi Li, Zijian Lin, Tao Yang, Xiaojun Quan, Qifan Wang

Efficient knowledge retrieval plays a pivotal role in ensuring the success of end-to-end task-oriented dialogue systems by facilitating the selection of relevant information necessary to fulfill user requests.

Open-Domain Question Answering · Response Generation +2

IncreLoRA: Incremental Parameter Allocation Method for Parameter-Efficient Fine-tuning

1 code implementation · 23 Aug 2023 · Feiyu Zhang, Liangzhi Li, JunHao Chen, Zhouqiang Jiang, Bowen Wang, Yiming Qian

This approach differs from pruning methods in that it is not limited by the initial number of trainable parameters, and each parameter matrix has a higher rank upper bound for the same training overhead.
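The incremental-allocation idea can be sketched as a simple loop that grows per-module LoRA ranks under a total rank budget. This is an illustrative sketch of mine, not the paper's algorithm: the module names and the importance-per-rank heuristic are assumptions.

```python
def allocate_ranks(importance, total_budget, start_rank=1):
    """Incrementally grow per-module LoRA ranks under a shared budget.

    `importance` maps module name -> importance score (assumed to come
    from some external estimator). Each step adds one rank unit to the
    module with the highest importance per already-allocated rank, until
    the summed rank budget is exhausted.
    """
    ranks = {name: start_rank for name in importance}
    while sum(ranks.values()) < total_budget:
        best = max(importance, key=lambda n: importance[n] / ranks[n])
        ranks[best] += 1
    return ranks

# Toy usage with hypothetical attention-projection modules.
importance = {"q_proj": 3.0, "k_proj": 1.0, "v_proj": 2.0}
ranks = allocate_ranks(importance, total_budget=8)
```

Starting every module at a small rank and only adding capacity where it appears useful is what distinguishes this style of allocation from pruning, which must start from the full parameter count and remove.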

CARE-MI: Chinese Benchmark for Misinformation Evaluation in Maternity and Infant Care

1 code implementation · NeurIPS 2023 · Tong Xiang, Liangzhi Li, Wangyue Li, Mingbai Bai, Lu Wei, Bowen Wang, Noa Garcia

In an effort to minimize the reliance on human resources for performance evaluation, we offer off-the-shelf judgment models for automatically assessing the LF output of LLMs given benchmark questions.

Misinformation

Learning Bottleneck Concepts in Image Classification

1 code implementation · CVPR 2023 · Bowen Wang, Liangzhi Li, Yuta Nakashima, Hajime Nagahara

Using some image classification tasks as our testbed, we demonstrate BotCL's potential to rebuild neural networks for better interpretability.

Classification · Image Classification

Match Them Up: Visually Explainable Few-shot Image Classification

1 code implementation · 25 Nov 2020 · Bowen Wang, Liangzhi Li, Manisha Verma, Yuta Nakashima, Ryo Kawasaki, Hajime Nagahara

Few-shot learning (FSL) approaches are usually based on the assumption that pre-trained knowledge obtained from base (seen) categories transfers well to novel (unseen) categories.

Classification · Few-Shot Image Classification +2
