Search Results for author: Fangkai Jiao

Found 14 papers, 10 papers with code

Describe-then-Reason: Improving Multimodal Mathematical Reasoning through Visual Comprehension Training

no code implementations • 22 Apr 2024 • Mengzhao Jia, Zhihan Zhang, Wenhao Yu, Fangkai Jiao, Meng Jiang

Open-source multimodal large language models (MLLMs) excel in various tasks involving textual and visual inputs but still struggle with complex multimodal mathematical reasoning, lagging behind proprietary models like GPT-4V(ision) and Gemini-Pro.

Math • Mathematical Reasoning

Relevant or Random: Can LLMs Truly Perform Analogical Reasoning?

no code implementations • 19 Apr 2024 • Chengwei Qin, Wenhan Xia, Tan Wang, Fangkai Jiao, Yuchen Hu, Bosheng Ding, Ruirui Chen, Shafiq Joty

A key finding in psychology is that, compared with irrelevant past experiences, recalling relevant ones can help humans better handle new tasks.

GSM8K

How Much are LLMs Contaminated? A Comprehensive Survey and the LLMSanitize Library

1 code implementation • 31 Mar 2024 • Mathieu Ravaut, Bosheng Ding, Fangkai Jiao, Hailin Chen, Xingxuan Li, Ruochen Zhao, Chengwei Qin, Caiming Xiong, Shafiq Joty

With the rise of Large Language Models (LLMs) in recent years, new opportunities are emerging alongside new challenges, and data contamination is quickly becoming a critical concern.

Question Answering

Learning Planning-based Reasoning by Trajectories Collection and Process Reward Synthesizing

2 code implementations • 1 Feb 2024 • Fangkai Jiao, Chengwei Qin, Zhengyuan Liu, Nancy F. Chen, Shafiq Joty

Large Language Models (LLMs) have demonstrated significant potential in handling complex reasoning tasks through step-by-step rationale generation.

Hallucination • Logical Reasoning

Improving In-context Learning via Bidirectional Alignment

no code implementations • 28 Dec 2023 • Chengwei Qin, Wenhan Xia, Fangkai Jiao, Shafiq Joty

Large language models (LLMs) have shown impressive few-shot generalization on many tasks via in-context learning (ICL).

In-Context Learning

UNK-VQA: A Dataset and a Probe into the Abstention Ability of Multi-modal Large Models

1 code implementation • 17 Oct 2023 • Yangyang Guo, Fangkai Jiao, Zhiqi Shen, Liqiang Nie, Mohan Kankanhalli

Teaching Visual Question Answering (VQA) models to refrain from answering unanswerable questions is necessary for building a trustworthy AI system.

Attribute • Question Answering • +1

Exploring Self-supervised Logic-enhanced Training for Large Language Models

2 code implementations • 23 May 2023 • Fangkai Jiao, Zhiyang Teng, Bosheng Ding, Zhengyuan Liu, Nancy F. Chen, Shafiq Joty

Existing efforts to improve the logical reasoning ability of language models have predominantly relied on supervised fine-tuning, hindering generalization to new domains and/or tasks.

In-Context Learning • Logical Reasoning

Panda LLM: Training Data and Evaluation for Open-Sourced Chinese Instruction-Following Large Language Models

1 code implementation • 4 May 2023 • Fangkai Jiao, Bosheng Ding, Tianze Luo, Zhanfeng Mo

This project focuses on enhancing open-source large language models through instruction-tuning and providing comprehensive evaluations of their performance.

Instruction Following

Retrieving Multimodal Information for Augmented Generation: A Survey

no code implementations • 20 Mar 2023 • Ruochen Zhao, Hailin Chen, Weishi Wang, Fangkai Jiao, Xuan Long Do, Chengwei Qin, Bosheng Ding, Xiaobao Guo, Minzhi Li, Xingxuan Li, Shafiq Joty

As Large Language Models (LLMs) have become popular, an important trend has emerged of using multimodality to augment their generation ability, enabling LLMs to better interact with the world.

Retrieval

A Self-Training Method for Machine Reading Comprehension with Soft Evidence Extraction

1 code implementation • ACL 2020 • Yilin Niu, Fangkai Jiao, Mantong Zhou, Ting Yao, Jingfang Xu, Minlie Huang

Neural models have achieved great success on machine reading comprehension (MRC); many of these models consist of two components: an evidence extractor and an answer predictor.

Machine Reading Comprehension • Multi-Choice MRC • +1
