Search Results for author: Man Luo

Found 33 papers, 9 papers with code

Towards Systematic Evaluation of Logical Reasoning Ability of Large Language Models

1 code implementation • 23 Apr 2024 • Mihir Parmar, Nisarg Patel, Neeraj Varshney, Mutsumi Nakamura, Man Luo, Santosh Mashetty, Arindam Mitra, Chitta Baral

Existing work investigating this reasoning ability of LLMs has focused only on a couple of inference rules (such as modus ponens and modus tollens) of propositional and first-order logic.
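The two inference rules named here can be made concrete with a brute-force truth-table check. The snippet below is a toy illustration (not from the paper): it verifies that modus ponens is a valid rule while the related fallacy of affirming the consequent is not.

```python
from itertools import product

def entails(premises, conclusion):
    """Brute-force propositional entailment over variables p and q:
    the premises entail the conclusion iff no truth assignment makes
    every premise true while the conclusion is false."""
    for p, q in product([False, True], repeat=2):
        env = {"p": p, "q": q}
        if all(f(env) for f in premises) and not conclusion(env):
            return False
    return True

implies = lambda a, b: (not a) or b

# modus ponens: from p -> q and p, infer q (valid)
mp = entails([lambda e: implies(e["p"], e["q"]), lambda e: e["p"]],
             lambda e: e["q"])

# affirming the consequent: from p -> q and q, infer p (invalid)
ac = entails([lambda e: implies(e["p"], e["q"]), lambda e: e["q"]],
             lambda e: e["p"])
```

Modus tollens (from p -> q and not q, infer not p) can be checked the same way.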

Logical Reasoning Question Answering

Refining Text-to-Image Generation: Towards Accurate Training-Free Glyph-Enhanced Image Generation

no code implementations • 25 Mar 2024 • Sanyam Lakhanpal, Shivang Chopra, Vinija Jain, Aman Chadha, Man Luo

We introduce a benchmark, LenCom-Eval, specifically designed for testing models' capability in generating images with Lengthy and Complex visual text.

Optical Character Recognition (OCR) Text-to-Image Generation

In-context Learning with Retrieved Demonstrations for Language Models: A Survey

no code implementations • 21 Jan 2024 • Man Luo, Xin Xu, Yue Liu, Panupong Pasupat, Mehran Kazemi

Language models, especially pre-trained large language models, have showcased remarkable abilities as few-shot in-context learners (ICL), adept at adapting to new tasks with just a few demonstrations in the input context.

In-Context Learning Retrieval

Towards LogiGLUE: A Brief Survey and A Benchmark for Analyzing Logical Reasoning Capabilities of Language Models

no code implementations • 2 Oct 2023 • Man Luo, Shrinidhi Kumbhar, Ming Shen, Mihir Parmar, Neeraj Varshney, Pratyay Banerjee, Somak Aditya, Chitta Baral

This work strives to understand the proficiency of LLMs in logical reasoning by offering a brief review of the latest progress in this area, with a focus on the logical reasoning datasets, tasks, and methods adopted to utilize LLMs for reasoning.

Knowledge Distillation Language Modelling +1

MDDial: A Multi-turn Differential Diagnosis Dialogue Dataset with Reliability Evaluation

1 code implementation • 16 Aug 2023 • Srija Macherla, Man Luo, Mihir Parmar, Chitta Baral

We introduce a unified score for the ADD system that takes into account the interplay between symptoms and diagnosis.

Natural Language Understanding

End-to-end Knowledge Retrieval with Multi-modal Queries

1 code implementation • 1 Jun 2023 • Man Luo, Zhiyuan Fang, Tejas Gokhale, Yezhou Yang, Chitta Baral

We investigate knowledge retrieval with multi-modal queries, i.e., queries containing information split across image and text inputs, a challenging task that differs from previous work on cross-modal retrieval.

Benchmarking Cross-Modal Retrieval +2

Dr.ICL: Demonstration-Retrieved In-context Learning

no code implementations • 23 May 2023 • Man Luo, Xin Xu, Zhuyun Dai, Panupong Pasupat, Mehran Kazemi, Chitta Baral, Vaiva Imbrasaite, Vincent Y Zhao

In-context learning (ICL), teaching a large language model (LLM) to perform a task with few-shot demonstrations rather than adjusting the model parameters, has emerged as a strong paradigm for using LLMs.
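As a rough illustration of demonstration-retrieved ICL (not the paper's actual retriever), demonstrations similar to the test input can be retrieved from a pool and packed into a few-shot prompt. Here plain word overlap stands in for a learned retrieval model, and all names and data are made up.

```python
# Toy demonstration-retrieved ICL: pick the k training examples most
# similar to the test input and format them as a few-shot prompt.

def overlap(a, b):
    """Toy similarity: number of shared lowercase tokens."""
    return len(set(a.lower().split()) & set(b.lower().split()))

def build_prompt(test_input, pool, k=2):
    """Select the k most similar (input, output) demos and prepend them."""
    demos = sorted(pool, key=lambda d: overlap(test_input, d[0]), reverse=True)[:k]
    lines = [f"Q: {q}\nA: {a}" for q, a in demos]
    lines.append(f"Q: {test_input}\nA:")  # the model completes after "A:"
    return "\n\n".join(lines)

pool = [
    ("What is the capital of France?", "Paris"),
    ("Translate 'hello' to Spanish.", "hola"),
    ("What is the capital of Japan?", "Tokyo"),
]
prompt = build_prompt("What is the capital of Italy?", pool, k=2)
```

With this pool, the two capital-city demonstrations are selected over the unrelated translation example, since they share more tokens with the test question.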

In-Context Learning Language Modelling +2

Can Open-Domain QA Reader Utilize External Knowledge Efficiently like Humans?

no code implementations • 23 Nov 2022 • Neeraj Varshney, Man Luo, Chitta Baral

Compared with the FiD reader, this approach matches its accuracy while utilizing just 18.32% of its reader inference cost, and also outperforms it by achieving up to 55.10% accuracy on NQ Open.
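One hedged sketch of how such a cost-efficient reader could be wired: try a cheap closed-book pass first and fall back to the expensive retrieval-augmented reader only when the first answer is low-confidence. The models, threshold, and confidence signal below are illustrative stand-ins, not the paper's setup.

```python
# Adaptive two-stage reading: cheap path when confident, costly fallback otherwise.

def adaptive_answer(question, closed_book, open_book, threshold=0.7):
    """Return the cheap answer if its confidence clears the threshold,
    otherwise invoke the retrieval-augmented reader."""
    answer, confidence = closed_book(question)
    if confidence >= threshold:
        return answer
    return open_book(question)

# Illustrative stubs standing in for real models.
cheap = lambda q: ("Paris", 0.9) if "France" in q else ("unknown", 0.1)
costly = lambda q: "retrieved answer"

a1 = adaptive_answer("Capital of France?", cheap, costly)  # confident: cheap path
a2 = adaptive_answer("Capital of Bhutan?", cheap, costly)  # uncertain: fallback
```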

Open-Domain Question Answering TriviaQA

Fleet Rebalancing for Expanding Shared e-Mobility Systems: A Multi-agent Deep Reinforcement Learning Approach

1 code implementation • 11 Nov 2022 • Man Luo, Bowen Du, Wenzhe Zhang, Tianyou Song, Kun Li, HongMing Zhu, Mark Birkin, Hongkai Wen

This is particularly challenging in the context of expanding systems, because i) the range of the EVs is limited while charging time is typically long, which constrains the viable rebalancing operations; and ii) the EV stations in the system are dynamically changing, i.e., the legitimate targets for rebalancing operations can vary over time.

Multi-agent Reinforcement Learning

A Study on the Efficiency and Generalization of Light Hybrid Retrievers

no code implementations • 4 Oct 2022 • Man Luo, Shashank Jain, Anchit Gupta, Arash Einolghozati, Barlas Oguz, Debojeet Chatterjee, Xilun Chen, Chitta Baral, Peyman Heidari

Driven by this question, we leverage an indexing-efficient dense retriever (i.e., DrBoost) and introduce a LITE retriever that further reduces the memory of DrBoost.

Adversarial Attack Contrastive Learning +1

BioTABQA: Instruction Learning for Biomedical Table Question Answering

no code implementations • 6 Jul 2022 • Man Luo, Sharad Saxena, Swaroop Mishra, Mihir Parmar, Chitta Baral

To the best of our knowledge, no TQA dataset exists in the biomedical domain, where tables are frequently used to present information.

Question Answering

Neural Retriever and Go Beyond: A Thesis Proposal

no code implementations • NAACL (ACL) 2022 • Man Luo

First, we introduce methods to address the above-mentioned issues of neural retrievers from three angles: new model architectures, IR-oriented pretraining tasks, and large-scale training data generation.

Open-Domain Question Answering

In-BoXBART: Get Instructions into Biomedical Multi-Task Learning

2 code implementations • Findings (NAACL) 2022 • Mihir Parmar, Swaroop Mishra, Mirali Purohit, Man Luo, M. Hassan Murad, Chitta Baral

Recently, instructional prompts have shown significant improvement towards multi-task generalization; however, the effect of instructional prompts and Multi-Task Learning (MTL) has not been systematically studied in the biomedical domain.

Few-Shot Learning Multi-Task Learning

Improving Contrastive Learning with Model Augmentation

1 code implementation • 25 Mar 2022 • Zhiwei Liu, Yongjun Chen, Jia Li, Man Luo, Philip S. Yu, Caiming Xiong

However, existing methods all construct views by adopting augmentation from data perspectives, while we argue that 1) optimal data augmentation methods are hard to devise, 2) data augmentation methods destroy sequential correlations, and 3) data augmentation fails to incorporate comprehensive self-supervised signals.
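A minimal sketch of the contrastive objective underlying this line of work (InfoNCE over two "views" of each sequence). Here the two views per item are supplied directly; under model augmentation they would come from two stochastic forward passes of the encoder (e.g., with different dropout masks) rather than from perturbing the input data, as the abstract argues. This is an illustration, not the paper's exact loss.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def info_nce(views_a, views_b, tau=0.5):
    """Average negative log-probability that view_a[i] matches view_b[i],
    with all of views_b serving as in-batch candidates (InfoNCE)."""
    loss = 0.0
    for i, u in enumerate(views_a):
        logits = [dot(u, v) / tau for v in views_b]
        log_norm = math.log(sum(math.exp(l) for l in logits))
        loss += -(logits[i] - log_norm)
    return loss / len(views_a)

views = [[1.0, 0.0], [0.0, 1.0]]
aligned = info_nce(views, views)                  # positives line up
shuffled = info_nce(views, list(reversed(views))) # positives mismatched
```

Aligned views yield a lower loss than mismatched ones, which is exactly the signal the contrastive objective optimizes.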

Contrastive Learning Data Augmentation +2

Choose Your QA Model Wisely: A Systematic Study of Generative and Extractive Readers for Question Answering

no code implementations • SpaNLP (ACL) 2022 • Man Luo, Kazuma Hashimoto, Semih Yavuz, Zhiwei Liu, Chitta Baral, Yingbo Zhou

Among several interesting findings, it is important to highlight that (1) the generative readers perform better in long context QA, (2) the extractive readers perform better in short context while also showing better out-of-domain generalization, and (3) the encoder of encoder-decoder PrLMs (e.g., T5) turns out to be a strong extractive reader and outperforms the standard choice of encoder-only PrLMs (e.g., RoBERTa).

Domain Generalization Multi-Task Learning +1

Improving Biomedical Information Retrieval with Neural Retrievers

no code implementations • 19 Jan 2022 • Man Luo, Arindam Mitra, Tejas Gokhale, Chitta Baral

We show that BM25 and our method can complement each other, and a simple hybrid model leads to further gains in the large corpus setting.
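The kind of hybrid scoring this abstract alludes to can be sketched as a linear combination of a lexical BM25 score with a dense similarity score. The BM25 implementation and the weight alpha below are illustrative, not the paper's exact formulation.

```python
import math

def bm25_score(query, doc, corpus, k1=1.5, b=0.75):
    """Standard BM25 of a tokenized query against one tokenized doc,
    with IDF statistics taken from the (tiny) corpus."""
    avgdl = sum(len(d) for d in corpus) / len(corpus)
    score = 0.0
    for term in query:
        df = sum(term in d for d in corpus)
        idf = math.log((len(corpus) - df + 0.5) / (df + 0.5) + 1)
        tf = doc.count(term)
        score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(doc) / avgdl))
    return score

def hybrid_score(q_terms, doc, q_vec, d_vec, corpus, alpha=0.5):
    """Linear fusion of dense (dot-product) and lexical (BM25) scores."""
    dense = sum(a * b for a, b in zip(q_vec, d_vec))
    return alpha * dense + (1 - alpha) * bm25_score(q_terms, doc, corpus)

corpus = [["neural", "retrieval", "model"], ["bm25", "lexical", "ranking"]]
q_terms, q_vec = ["neural"], [1.0, 0.0]
s0 = hybrid_score(q_terms, corpus[0], q_vec, [1.0, 0.0], corpus)  # matching doc
s1 = hybrid_score(q_terms, corpus[1], q_vec, [0.0, 1.0], corpus)  # unrelated doc
```

In practice the two score distributions are usually normalized before fusion; the raw combination here is kept deliberately simple.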

Biomedical Information Retrieval Information Retrieval +4

Deployment Optimization for Shared e-Mobility Systems with Multi-agent Deep Neural Search

no code implementations • 3 Nov 2021 • Man Luo, Bowen Du, Konstantin Klemmer, HongMing Zhu, Hongkai Wen

Shared e-mobility services have been widely tested and piloted in cities across the globe, and are already woven into the fabric of modern urban planning.

Self-supervised Learning for Sequential Recommendation with Model Augmentation

no code implementations • 29 Sep 2021 • Zhiwei Liu, Yongjun Chen, Jia Li, Man Luo, Philip S. Yu, Caiming Xiong

However, existing methods all construct views by adopting augmentation from data perspectives, while we argue that 1) optimal data augmentation methods are hard to devise, 2) data augmentation methods destroy sequential correlations, and 3) data augmentation fails to incorporate comprehensive self-supervised signals.

Contrastive Learning Data Augmentation +2

A Simple Approach to Jointly Rank Passages and Select Relevant Sentences in the OBQA Context

no code implementations • NAACL (ACL) 2022 • Man Luo, Shuguang Chen, Chitta Baral

Furthermore, we propose consistency and similarity constraints to promote the correlation and interaction between passage ranking and sentence selection. The experiments demonstrate that our framework can achieve competitive results with previous systems and outperform the baseline by 28% in terms of exact matching of relevant sentences on the HotpotQA dataset.

Passage Ranking Question Answering +1

Weakly-Supervised Visual-Retriever-Reader for Knowledge-based Question Answering

1 code implementation • EMNLP 2021 • Man Luo, Yankai Zeng, Pratyay Banerjee, Chitta Baral

The visual retriever aims to retrieve relevant knowledge, and the visual reader seeks to predict answers based on given knowledge.

Question Answering Retrieval +1

Deep Signature FBSDE Algorithm

no code implementations • 24 Aug 2021 • Qi Feng, Man Luo, Zhaoyu Zhang

We propose a deep signature/log-signature FBSDE algorithm to solve forward-backward stochastic differential equations (FBSDEs) with state and path dependent features.

Unitary Approximate Message Passing for Sparse Bayesian Learning

no code implementations • 25 Jan 2021 • Man Luo, Qinghua Guo, Ming Jin, Yonina C. Eldar, Defeng Huang, Xiangming Meng

Sparse Bayesian learning (SBL) can be implemented with low complexity based on the approximate message passing (AMP) algorithm.

Variational Inference

Can Transformers Reason About Effects of Actions?

no code implementations • 17 Dec 2020 • Pratyay Banerjee, Chitta Baral, Man Luo, Arindam Mitra, Kuntal Pal, Tran C. Son, Neeraj Varshney

A recent work has shown that transformers are able to "reason" with facts and rules in a limited setting where the rules are natural language expressions of conjunctions of conditions implying a conclusion.

Common Sense Reasoning Question Answering

Strong Equivalence for LPMLN Programs

no code implementations • 18 Sep 2019 • Joohyung Lee, Man Luo

We show that the verification of strong equivalence in LPMLN can be reduced to equivalence checking in classical logic via a reduct and choice rules as well as to equivalence checking under the "soft" logic of here-and-there.

Strong equivalence for $\rm LP^{MLN}$ programs

no code implementations • 18 May 2019 • Man Luo

Strong equivalence is a well-studied and important concept in answer set programming (ASP).

Logic in Computer Science

Demand Prediction for Electric Vehicle Sharing

no code implementations • 10 Mar 2019 • Man Luo, Hongkai Wen, Yi Luo, Bowen Du, Konstantin Klemmer, Hong-Ming Zhu

Electric Vehicle (EV) sharing systems have recently experienced unprecedented growth across the globe.

Decision Making
