Search Results for author: William Yang Wang

Found 228 papers, 116 papers with code

Counterfactual Vision-and-Language Navigation via Adversarial Path Sampler

no code implementations ECCV 2020 Tsu-Jui Fu, Xin Eric Wang, Matthew F. Peterson, Scott T. Grafton, Miguel P. Eckstein, William Yang Wang

In particular, we present a model-agnostic adversarial path sampler (APS) that learns to sample challenging paths that force the navigator to improve based on the navigation performance.

counterfactual Counterfactual Reasoning +2

Lost in Translation? Translation Errors and Challenges for Fair Assessment of Text-to-Image Models on Multilingual Concepts

no code implementations 17 Mar 2024 Michael Saxon, Yiran Luo, Sharon Levy, Chitta Baral, Yezhou Yang, William Yang Wang

Benchmarks of the multilingual capabilities of text-to-image (T2I) models compare generated images prompted in a test language to an expected image distribution over a concept set.

Translation

Reward Guided Latent Consistency Distillation

no code implementations 16 Mar 2024 Jiachen Li, Weixi Feng, Wenhu Chen, William Yang Wang

By distilling a latent consistency model (LCM) from a pre-trained teacher latent diffusion model (LDM), latent consistency distillation (LCD) facilitates the generation of high-fidelity images within merely 2 to 4 inference steps.

Image Generation

Hire a Linguist!: Learning Endangered Languages with In-Context Linguistic Descriptions

no code implementations 28 Feb 2024 Kexun Zhang, Yee Man Choi, Zhenqiao Song, Taiqi He, William Yang Wang, Lei LI

On the contrary, we observe that 2000 endangered languages, though without a large corpus, have a grammar book or a dictionary.

Perils of Self-Feedback: Self-Bias Amplifies in Large Language Models

no code implementations 18 Feb 2024 Wenda Xu, Guanglei Zhu, Xuandong Zhao, Liangming Pan, Lei LI, William Yang Wang

Recent studies show that self-feedback improves large language models (LLMs) on certain tasks while worsening performance on others.

Mathematical Reasoning Text Generation

Understanding the Reasoning Ability of Language Models From the Perspective of Reasoning Paths Aggregation

1 code implementation 5 Feb 2024 Xinyi Wang, Alfonso Amayuelas, Kexun Zhang, Liangming Pan, Wenhu Chen, William Yang Wang

To understand how pre-training with a next-token prediction objective contributes to the emergence of such reasoning capability, we propose that we can view an LM as deriving new conclusions by aggregating indirect reasoning paths seen at pre-training time.

Knowledge Graphs Math

Weak-to-Strong Jailbreaking on Large Language Models

1 code implementation 30 Jan 2024 Xuandong Zhao, Xianjun Yang, Tianyu Pang, Chao Du, Lei LI, Yu-Xiang Wang, William Yang Wang

In this paper, we propose the weak-to-strong jailbreaking attack, an efficient method to attack aligned LLMs to produce harmful text.

Tweets to Citations: Unveiling the Impact of Social Media Influencers on AI Research Visibility

no code implementations 24 Jan 2024 Iain Xie Weissburg, Mehir Arora, Xinyi Wang, Liangming Pan, William Yang Wang

As the number of accepted papers at AI and ML conferences reaches into the thousands, it has become unclear how researchers access and read research publications.

Causal Inference

Efficient Online Data Mixing For Language Model Pre-Training

no code implementations 5 Dec 2023 Alon Albalak, Liangming Pan, Colin Raffel, William Yang Wang

The data used to pretrain large language models has a decisive impact on a model's downstream performance, which has led to a large body of work on data selection methods that aim to automatically determine the most suitable data to use for pretraining.

Language Modelling

VIM: Probing Multimodal Large Language Models for Visual Embedded Instruction Following

no code implementations 29 Nov 2023 Yujie Lu, Xiujun Li, William Yang Wang, Yejin Choi

We introduce VISUAL EMBEDDED INSTRUCTION (VIM), a new framework designed to evaluate the visual instruction following capability of Multimodal Large Language Models (MLLMs).

In-Context Learning visual instruction following

GPT-4V(ision) as a Generalist Evaluator for Vision-Language Tasks

no code implementations 2 Nov 2023 Xinlu Zhang, Yujie Lu, Weizhi Wang, An Yan, Jun Yan, Lianke Qin, Heng Wang, Xifeng Yan, William Yang Wang, Linda Ruth Petzold

Automatically evaluating vision-language tasks is challenging, especially when it comes to reflecting human judgments due to limitations in accounting for fine-grained details.

Image Generation

A Survey on Detection of LLMs-Generated Content

1 code implementation 24 Oct 2023 Xianjun Yang, Liangming Pan, Xuandong Zhao, Haifeng Chen, Linda Petzold, William Yang Wang, Wei Cheng

The burgeoning capabilities of advanced large language models (LLMs) such as ChatGPT have led to an increase in synthetic content generation with implications across a variety of sectors, including media, cybersecurity, public discourse, and education.

ASSERT: Automated Safety Scenario Red Teaming for Evaluating the Robustness of Large Language Models

1 code implementation 14 Oct 2023 Alex Mei, Sharon Levy, William Yang Wang

As large language models are integrated into society, robustness toward a suite of prompts is increasingly important to maintain reliability in a high-variance environment. Robustness evaluations must comprehensively encapsulate the various settings in which a user may invoke an intelligent system.

Empowering Psychotherapy with Large Language Models: Cognitive Distortion Detection through Diagnosis of Thought Prompting

1 code implementation 11 Oct 2023 Zhiyu Chen, Yujie Lu, William Yang Wang

Mental illness remains one of the most critical public health issues of our time, due to the severe scarcity of professionals and the limited access to their care.

Guiding Language Model Math Reasoning with Planning Tokens

no code implementations 9 Oct 2023 Xinyi Wang, Lucas Caccia, Oleksiy Ostapenko, Xingdi Yuan, William Yang Wang, Alessandro Sordoni

Large language models (LLMs) have recently attracted considerable interest for their ability to perform complex reasoning tasks, such as chain-of-thought reasoning.

Language Modelling Math

Zero-Shot Detection of Machine-Generated Codes

1 code implementation 8 Oct 2023 Xianjun Yang, Kexun Zhang, Haifeng Chen, Linda Petzold, William Yang Wang, Wei Cheng

We then modify the previous zero-shot text detection method, DetectGPT (Mitchell et al., 2023), by utilizing a surrogate white-box model to estimate the probability of the rightmost tokens, allowing us to identify code snippets generated by language models.

Language Modelling Text Detection

Shadow Alignment: The Ease of Subverting Safely-Aligned Language Models

no code implementations 4 Oct 2023 Xianjun Yang, Xiao Wang, Qi Zhang, Linda Petzold, William Yang Wang, Xun Zhao, Dahua Lin

This study serves as a clarion call for a collective effort to overhaul and fortify the safety of open-source LLMs against malicious attackers.

Guiding Instruction-based Image Editing via Multimodal Large Language Models

2 code implementations 29 Sep 2023 Tsu-Jui Fu, Wenze Hu, Xianzhi Du, William Yang Wang, Yinfei Yang, Zhe Gan

Extensive experimental results demonstrate that expressive instructions are crucial to instruction-based image editing, and our MGIE can lead to a notable improvement in automatic metrics and human evaluation while maintaining competitive inference efficiency.

Image Manipulation Response Generation

VELMA: Verbalization Embodiment of LLM Agents for Vision and Language Navigation in Street View

1 code implementation 12 Jul 2023 Raphael Schumann, Wanrong Zhu, Weixi Feng, Tsu-Jui Fu, Stefan Riezler, William Yang Wang

In this work, we propose VELMA, an embodied LLM agent that uses a verbalization of the trajectory and of visual environment observations as contextual prompt for the next action.

Decision Making Natural Language Understanding +1

Multilingual Conceptual Coverage in Text-to-Image Models

1 code implementation 2 Jun 2023 Michael Saxon, William Yang Wang

We propose "Conceptual Coverage Across Languages" (CoCo-CroLa), a technique for benchmarking the degree to which any generative text-to-image system provides multilingual parity to its training language in terms of tangible nouns.

Benchmarking

DNA-GPT: Divergent N-Gram Analysis for Training-Free Detection of GPT-Generated Text

1 code implementation 27 May 2023 Xianjun Yang, Wei Cheng, Yue Wu, Linda Petzold, William Yang Wang, Haifeng Chen

However, this progress also presents a significant challenge in detecting the origin of a given text, and current research on detection methods lags behind the rapid evolution of LLMs.

ALGO: Synthesizing Algorithmic Programs with LLM-Generated Oracle Verifiers

1 code implementation NeurIPS 2023 Kexun Zhang, Danqing Wang, Jingtao Xia, William Yang Wang, Lei LI

To address these challenges, we propose ALGO, a framework that synthesizes Algorithmic programs with LLM-Generated Oracles to guide the generation and verify their correctness.

Code Generation

LayoutGPT: Compositional Visual Planning and Generation with Large Language Models

1 code implementation NeurIPS 2023 Weixi Feng, Wanrong Zhu, Tsu-Jui Fu, Varun Jampani, Arjun Akula, Xuehai He, Sugato Basu, Xin Eric Wang, William Yang Wang

When combined with a downstream image generation model, LayoutGPT outperforms text-to-image models/systems by 20-40% and achieves performance comparable to human users in designing visual layouts for numerical and spatial correctness.

Indoor Scene Synthesis Text-to-Image Generation

Let's Think Frame by Frame with VIP: A Video Infilling and Prediction Dataset for Evaluating Video Chain-of-Thought

1 code implementation 23 May 2023 Vaishnavi Himakunthala, Andy Ouyang, Daniel Rose, Ryan He, Alex Mei, Yujie Lu, Chinmay Sonar, Michael Saxon, William Yang Wang

Despite exciting recent results showing vision-language systems' capacity to reason about images using natural language, their capacity for video reasoning remains under-explored.

Descriptive Video Prediction

On the Risk of Misinformation Pollution with Large Language Models

1 code implementation 23 May 2023 Yikang Pan, Liangming Pan, Wenhu Chen, Preslav Nakov, Min-Yen Kan, William Yang Wang

In this paper, we comprehensively investigate the potential misuse of modern Large Language Models (LLMs) for generating credible-sounding misinformation and its subsequent impact on information-intensive applications, particularly Open-Domain Question Answering (ODQA) systems.

Misinformation Open-Domain Question Answering

INSTRUCTSCORE: Explainable Text Generation Evaluation with Finegrained Feedback

1 code implementation 23 May 2023 Wenda Xu, Danqing Wang, Liangming Pan, Zhenqiao Song, Markus Freitag, William Yang Wang, Lei LI

By harnessing both explicit human instruction and the implicit knowledge of GPT-4, we fine-tune a text evaluation metric based on LLaMA, producing both a score for generated text and a human readable diagnostic report.

Text Generation

EDIS: Entity-Driven Image Search over Multimodal Web Content

1 code implementation 23 May 2023 SiQi Liu, Weixi Feng, Tsu-Jui Fu, Wenhu Chen, William Yang Wang

Making image retrieval methods practical for real-world search applications requires significant progress in dataset scales, entity comprehension, and multimodal information fusion.

Image Retrieval Retrieval

Fact-Checking Complex Claims with Program-Guided Reasoning

1 code implementation 22 May 2023 Liangming Pan, Xiaobao Wu, Xinyuan Lu, Anh Tuan Luu, William Yang Wang, Min-Yen Kan, Preslav Nakov

Fact-checking real-world claims often requires collecting multiple pieces of evidence and applying complex multi-step reasoning.

Fact Checking In-Context Learning

Logic-LM: Empowering Large Language Models with Symbolic Solvers for Faithful Logical Reasoning

1 code implementation 20 May 2023 Liangming Pan, Alon Albalak, Xinyi Wang, William Yang Wang

We also introduce a self-refinement module, which utilizes the symbolic solver's error messages to revise symbolic formalizations.

Logical Reasoning

Data Augmentation for Diverse Voice Conversion in Noisy Environments

no code implementations 18 May 2023 Avani Tanna, Michael Saxon, Amr El Abbadi, William Yang Wang

Voice conversion (VC) models have demonstrated impressive few-shot conversion quality on the clean, native speech populations they're trained on.

Data Augmentation Denoising +1

Collaborative Generative AI: Integrating GPT-k for Efficient Editing in Text-to-Image Generation

no code implementations 18 May 2023 Wanrong Zhu, Xinyi Wang, Yujie Lu, Tsu-Jui Fu, Xin Eric Wang, Miguel Eckstein, William Yang Wang

We conduct a series of experiments to compare the common edits made by humans and GPT-k, evaluate the performance of GPT-k in prompting T2I, and examine factors that may influence this process.

Text Generation Text-to-Image Generation

LLMScore: Unveiling the Power of Large Language Models in Text-to-Image Synthesis Evaluation

1 code implementation NeurIPS 2023 Yujie Lu, Xianjun Yang, Xiujun Li, Xin Eric Wang, William Yang Wang

Existing automatic evaluation on text-to-image synthesis can only provide an image-text matching score, without considering the object-level compositionality, which results in poor correlation with human judgments.

Attribute Image Generation +2

Multimodal Procedural Planning via Dual Text-Image Prompting

1 code implementation 2 May 2023 Yujie Lu, Pan Lu, Zhiyu Chen, Wanrong Zhu, Xin Eric Wang, William Yang Wang

The key challenges of MPP are to ensure the informativeness, temporal coherence, and accuracy of plans across modalities.

Informativeness Text-to-Image Generation

Users are the North Star for AI Transparency

no code implementations 9 Mar 2023 Alex Mei, Michael Saxon, Shiyu Chang, Zachary C. Lipton, William Yang Wang

We conduct a broad literature survey, identifying many clusters of similar conceptions of transparency, tying each back to our north star with analysis of how it furthers or hinders our ideal AI transparency goals.

Improving Few-Shot Generalization by Exploring and Exploiting Auxiliary Data

1 code implementation NeurIPS 2023 Alon Albalak, Colin Raffel, William Yang Wang

In this work, we focus on Few-shot Learning with Auxiliary Data (FLAD), a training paradigm that assumes access to auxiliary data during few-shot learning in hopes of improving generalization.

Few-Shot Learning

SWING: Balancing Coverage and Faithfulness for Dialogue Summarization

1 code implementation 25 Jan 2023 Kung-Hsiang Huang, Siffi Singh, Xiaofei Ma, Wei Xiao, Feng Nan, Nicholas Dingwall, William Yang Wang, Kathleen McKeown

Missing information is a common issue of dialogue summarization where some information in the reference summaries is not covered in the generated summaries.

Natural Language Inference

CausalDialogue: Modeling Utterance-level Causality in Conversations

1 code implementation 20 Dec 2022 Yi-Lin Tuan, Alon Albalak, Wenda Xu, Michael Saxon, Connor Pryor, Lise Getoor, William Yang Wang

Despite their widespread adoption, neural conversation models have yet to exhibit natural chat capabilities with humans.

Dialogue Generation

Tokenization Consistency Matters for Generative Models on Extractive NLP Tasks

1 code implementation 19 Dec 2022 Kaiser Sun, Peng Qi, Yuhao Zhang, Lan Liu, William Yang Wang, Zhiheng Huang

We show that, with consistent tokenization, the model performs better in both in-domain and out-of-domain datasets, with a notable average of +1.7 F2 gain when a BART model is trained on SQuAD and evaluated on 8 QA datasets.

Extractive Question-Answering Hallucination +1

Foveate, Attribute, and Rationalize: Towards Physically Safe and Trustworthy AI

1 code implementation 19 Dec 2022 Alex Mei, Sharon Levy, William Yang Wang

Users' physical safety is an increasing concern as the market for intelligent systems continues to grow, where unconstrained systems may recommend users dangerous actions that can lead to serious injury.

Attribute

SESCORE2: Learning Text Generation Evaluation via Synthesizing Realistic Mistakes

1 code implementation 19 Dec 2022 Wenda Xu, Xian Qian, Mingxuan Wang, Lei LI, William Yang Wang

In this paper, we propose SESCORE2, a self-supervised approach for training a model-based metric for text generation evaluation.

Dialogue Generation Machine Translation +2

Improving Cross-task Generalization of Unified Table-to-text Models with Compositional Task Configurations

no code implementations 17 Dec 2022 Jifan Chen, Yuhao Zhang, Lan Liu, Rui Dong, Xinchi Chen, Patrick Ng, William Yang Wang, Zhiheng Huang

There has been great progress in unifying various table-to-text tasks using a single encoder-decoder model trained via multi-task learning (Xie et al., 2022).

Multi-Task Learning

Training-Free Structured Diffusion Guidance for Compositional Text-to-Image Synthesis

1 code implementation 9 Dec 2022 Weixi Feng, Xuehai He, Tsu-Jui Fu, Varun Jampani, Arjun Akula, Pradyumna Narayana, Sugato Basu, Xin Eric Wang, William Yang Wang

In this work, we improve the compositional skills of T2I models, specifically more accurate attribute binding and better image compositions.

Attribute Image Generation

Offline Reinforcement Learning with Closed-Form Policy Improvement Operators

no code implementations 29 Nov 2022 Jiachen Li, Edwin Zhang, Ming Yin, Qinxun Bai, Yu-Xiang Wang, William Yang Wang

Behavior constrained policy optimization has been demonstrated to be a successful paradigm for tackling Offline Reinforcement Learning.

D4RL Offline RL +2

Bridging the Training-Inference Gap for Dense Phrase Retrieval

no code implementations 25 Oct 2022 Gyuwan Kim, Jinhyuk Lee, Barlas Oguz, Wenhan Xiong, Yizhe Zhang, Yashar Mehdad, William Yang Wang

Building dense retrievers requires a series of standard procedures, including training and validating neural models and creating indexes for efficient search.

Open-Domain Question Answering Passage Retrieval +1

WikiWhy: Answering and Explaining Cause-and-Effect Questions

no code implementations 21 Oct 2022 Matthew Ho, Aditya Sharma, Justin Chang, Michael Saxon, Sharon Levy, Yujie Lu, William Yang Wang

As large language models (LLMs) grow larger and more sophisticated, assessing their "reasoning" capabilities in natural language grows more challenging.

Question Answering

An Exploration of Data Efficiency in Intra-Dataset Task Transfer for Dialog Understanding

no code implementations 21 Oct 2022 Josiah Ross, Luke Yoffe, Alon Albalak, William Yang Wang

Transfer learning is an exciting area of Natural Language Processing that has the potential to both improve model performance and increase data efficiency.

Transfer Learning

CPL: Counterfactual Prompt Learning for Vision and Language Models

no code implementations 19 Oct 2022 Xuehai He, Diji Yang, Weixi Feng, Tsu-Jui Fu, Arjun Akula, Varun Jampani, Pradyumna Narayana, Sugato Basu, William Yang Wang, Xin Eric Wang

Prompt tuning is a new few-shot transfer learning technique that only tunes the learnable prompt for pre-trained vision and language models such as CLIP.

counterfactual Visual Question Answering

SafeText: A Benchmark for Exploring Physical Safety in Language Models

no code implementations 18 Oct 2022 Sharon Levy, Emily Allaway, Melanie Subbiah, Lydia Chilton, Desmond Patton, Kathleen McKeown, William Yang Wang

Understanding what constitutes safe text is an important issue in natural language processing and can often prevent the deployment of models deemed harmful and unsafe.

Text Generation

ULN: Towards Underspecified Vision-and-Language Navigation

1 code implementation 18 Oct 2022 Weixi Feng, Tsu-Jui Fu, Yujie Lu, William Yang Wang

Vision-and-Language Navigation (VLN) is a task to guide an embodied agent moving to a target position using language instructions.

Vision and Language Navigation

Mitigating Covertly Unsafe Text within Natural Language Systems

no code implementations 17 Oct 2022 Alex Mei, Anisha Kabir, Sharon Levy, Melanie Subbiah, Emily Allaway, John Judge, Desmond Patton, Bruce Bimber, Kathleen McKeown, William Yang Wang

An increasingly prevalent problem for intelligent technologies is text safety, as uncontrolled systems may generate recommendations to their users that lead to injury or life-threatening consequences.

ConvFinQA: Exploring the Chain of Numerical Reasoning in Conversational Finance Question Answering

1 code implementation 7 Oct 2022 Zhiyu Chen, Shiyang Li, Charese Smiley, Zhiqiang Ma, Sameena Shah, William Yang Wang

With the recent advance in large pre-trained language models, researchers have achieved record performances in NLP tasks that mostly focus on language pattern matching.

Conversational Question Answering

Dynamic Latent Separation for Deep Learning

no code implementations 7 Oct 2022 Yi-Lin Tuan, Zih-Yun Chiu, William Yang Wang

A core problem in machine learning is to learn expressive latent variables for model prediction on complex data that involves multiple sub-components in a flexible and interpretable fashion.

Representation Learning

Flexible Attention-Based Multi-Policy Fusion for Efficient Deep Reinforcement Learning

1 code implementation NeurIPS 2023 Zih-Yun Chiu, Yi-Lin Tuan, William Yang Wang, Michael C. Yip

In this work, we present Knowledge-Grounded RL (KGRL), an RL paradigm fusing multiple knowledge policies and aiming for human-like efficiency and flexibility.

reinforcement-learning Reinforcement Learning (RL)

Anticipating the Unseen Discrepancy for Vision and Language Navigation

no code implementations 10 Sep 2022 Yujie Lu, Huiliang Zhang, Ping Nie, Weixi Feng, Wenda Xu, Xin Eric Wang, William Yang Wang

In this paper, we propose an Unseen Discrepancy Anticipating Vision and Language Navigation (DAVIS) that learns to generalize to unseen environments via encouraging test-time visual consistency.

Data Augmentation Decision Making +3

Causal Balancing for Domain Generalization

1 code implementation 10 Jun 2022 Xinyi Wang, Michael Saxon, Jiachen Li, Hongyang Zhang, Kun Zhang, William Yang Wang

While machine learning models rapidly advance the state-of-the-art on various real-world tasks, out-of-domain (OOD) generalization remains a challenging problem given the vulnerability of these models to spurious correlations.

Domain Generalization

Neuro-Symbolic Procedural Planning with Commonsense Prompting

no code implementations 6 Jun 2022 Yujie Lu, Weixi Feng, Wanrong Zhu, Wenda Xu, Xin Eric Wang, Miguel Eckstein, William Yang Wang

Procedural planning aims to implement complex high-level goals by decomposition into sequential simpler low-level steps.

Graph Sampling

Towards Understanding Gender-Seniority Compound Bias in Natural Language Generation

1 code implementation LREC 2022 Samhita Honnavalli, Aesha Parekh, Lily Ou, Sophie Groenwold, Sharon Levy, Vicente Ordonez, William Yang Wang

Our results show that GPT-2 amplifies bias by considering women as junior and men as senior more often than the ground truth in both domains.

Text Generation

FETA: A Benchmark for Few-Sample Task Transfer in Open-Domain Dialogue

1 code implementation 12 May 2022 Alon Albalak, Yi-Lin Tuan, Pegah Jandaghi, Connor Pryor, Luke Yoffe, Deepak Ramachandran, Lise Getoor, Jay Pujara, William Yang Wang

Task transfer, transferring knowledge contained in related tasks, holds the promise of reducing the quantity of labeled data required to fine-tune language models.

Dialogue Understanding Domain Adaptation +1

HybriDialogue: An Information-Seeking Dialogue Dataset Grounded on Tabular and Textual Data

no code implementations Findings (ACL) 2022 Kai Nakamura, Sharon Levy, Yi-Lin Tuan, Wenhu Chen, William Yang Wang

A pressing challenge in current dialogue systems is to successfully converse with users on topics with information distributed across different modalities.

Response Generation Retrieval

Imagination-Augmented Natural Language Understanding

1 code implementation NAACL 2022 Yujie Lu, Wanrong Zhu, Xin Eric Wang, Miguel Eckstein, William Yang Wang

Human brains integrate linguistic and perceptual information simultaneously to understand natural language, and hold the critical ability to render imaginations.

Natural Language Understanding

End-to-end Dense Video Captioning as Sequence Generation

no code implementations COLING 2022 Wanrong Zhu, Bo Pang, Ashish V. Thapliyal, William Yang Wang, Radu Soricut

Dense video captioning aims to identify the events of interest in an input video, and generate descriptive captions for each event.

Ranked #3 on Dense Video Captioning on ViTT (CIDEr metric, using extra training data)

Dense Video Captioning Descriptive

Addressing Issues of Cross-Linguality in Open-Retrieval Question Answering Systems For Emergent Domains

1 code implementation 26 Jan 2022 Alon Albalak, Sharon Levy, William Yang Wang

Open-retrieval question answering systems are generally trained and tested on large datasets in well-established domains.

Question Answering Retrieval +1

Relational Graph Learning for Grounded Video Description Generation

no code implementations 2 Dec 2021 Wenqiao Zhang, Xin Eric Wang, Siliang Tang, Haizhou Shi, Haocheng Shi, Jun Xiao, Yueting Zhuang, William Yang Wang

Such a setting can help explain the decisions of captioning models and prevents the model from hallucinating object words in its description.

Graph Learning Hallucination +2

VIOLET: End-to-End Video-Language Transformers with Masked Visual-token Modeling

1 code implementation 24 Nov 2021 Tsu-Jui Fu, Linjie Li, Zhe Gan, Kevin Lin, William Yang Wang, Lijuan Wang, Zicheng Liu

Further, unlike previous studies that found pre-training tasks on video inputs (e.g., masked frame modeling) not very effective, we design a new pre-training task, Masked Visual-token Modeling (MVM), for better video modeling.

Question Answering Retrieval +5

MIC: Model-agnostic Integrated Cross-channel Recommenders

no code implementations 22 Oct 2021 Yujie Lu, Ping Nie, Shengyu Zhang, Ming Zhao, Ruobing Xie, William Yang Wang, Yi Ren

However, existing work is primarily built upon pre-defined retrieval channels, including User-CF (U2U), Item-CF (I2I), and Embedding-based Retrieval (U2I), and thus captures only the limited correlations between users and items that arise from partial information about latent interactions.

Recommendation Systems Retrieval +2

Attacking Open-domain Question Answering by Injecting Misinformation

1 code implementation 15 Oct 2021 Liangming Pan, Wenhu Chen, Min-Yen Kan, William Yang Wang

We curate both human-written and model-generated false documents that we inject into the evidence corpus of QA models and assess the impact on the performance of these systems.

Misinformation Open-Domain Question Answering

Self-Supervised Knowledge Assimilation for Expert-Layman Text Style Transfer

1 code implementation 6 Oct 2021 Wenda Xu, Michael Saxon, Misha Sra, William Yang Wang

This is a particularly notable issue in the medical domain, where layman are often confused by medical text online.

Language Modelling Self-Supervised Learning +2

A Massively Multilingual Analysis of Cross-linguality in Shared Embedding Space

1 code implementation EMNLP 2021 Alex Jones, William Yang Wang, Kyle Mahowald

We verify some of our linguistic findings by looking at the effect of morphological segmentation on English-Inuktitut alignment, in addition to examining the effect of word order agreement on isomorphism for 66 zero-shot language pairs from a different corpus.

Retrieval Sentence

D-REX: Dialogue Relation Extraction with Explanations

1 code implementation NLP4ConvAI (ACL) 2022 Alon Albalak, Varun Embar, Yi-Lin Tuan, Lise Getoor, William Yang Wang

Existing studies on cross-sentence relation extraction in long-form multi-party conversations aim to improve relation extraction without considering the explainability of such methods.

Dialog Relation Extraction Relation +3

FinQA: A Dataset of Numerical Reasoning over Financial Data

1 code implementation EMNLP 2021 Zhiyu Chen, Wenhu Chen, Charese Smiley, Sameena Shah, Iana Borova, Dylan Langdon, Reema Moussa, Matt Beane, Ting-Hao Huang, Bryan Routledge, William Yang Wang

In contrast to existing tasks on general domain, the finance domain includes complex numerical reasoning and understanding of heterogeneous representations.

Question Answering

Neural Stylistic Response Generation with Disentangled Latent Variables

no code implementations ACL 2021 Qingfu Zhu, Wei-Nan Zhang, Ting Liu, William Yang Wang

Generating open-domain conversational responses in a desired style usually suffers from the lack of parallel data in that style.

Response Generation Sentence

Local Explanation of Dialogue Response Generation

1 code implementation NeurIPS 2021 Yi-Lin Tuan, Connor Pryor, Wenhu Chen, Lise Getoor, William Yang Wang

To gain insights into the reasoning process of a generation model, we propose a new method, local explanation of response generation (LERG) that regards the explanations as the mutual interaction of segments in input and output sentences.

Implicit Relations Response Generation +1

ImaginE: An Imagination-Based Automatic Evaluation Metric for Natural Language Generation

no code implementations 10 Jun 2021 Wanrong Zhu, Xin Eric Wang, An Yan, Miguel Eckstein, William Yang Wang

Automatic evaluations for natural language generation (NLG) conventionally rely on token-level or embedding-level comparisons with text references.

nlg evaluation Text Generation

Counterfactual Maximum Likelihood Estimation for Training Deep Networks

1 code implementation NeurIPS 2021 Xinyi Wang, Wenhu Chen, Michael Saxon, William Yang Wang

Although deep learning models have driven state-of-the-art performance on a wide array of tasks, they are prone to spurious correlations that should not be learned as predictive clues.

counterfactual Domain Generalization +2

Semi-Supervised Policy Initialization for Playing Games with Language Hints

1 code implementation NAACL 2021 Tsu-Jui Fu, William Yang Wang

Using natural language as a hint can supply an additional reward for playing sparse-reward games.

Language-Driven Image Style Transfer

1 code implementation 1 Jun 2021 Tsu-Jui Fu, Xin Eric Wang, William Yang Wang

We propose contrastive language visual artist (CLVA) that learns to extract visual semantics from style instructions and accomplish LDAST by the patch-wise style discriminator.

Style Transfer

Zero-shot Fact Verification by Claim Generation

1 code implementation ACL 2021 Liangming Pan, Wenhu Chen, Wenhan Xiong, Min-Yen Kan, William Yang Wang

However, for each new domain that requires fact verification, creating a dataset by manually writing claims and linking them to their supporting evidence is expensive.

Fact Verification

FoveaTer: Foveated Transformer for Image Classification

no code implementations 29 May 2021 Aditya Jonnalagadda, William Yang Wang, B. S. Manjunath, Miguel P. Eckstein

We propose the Foveated Transformer (FoveaTer) model, which uses pooling regions and eye movements to perform object classification tasks using a Vision Transformer architecture.

Classification Image Classification

Comparing Visual Reasoning in Humans and AI

no code implementations 29 Apr 2021 Shravan Murlidaran, William Yang Wang, Miguel P. Eckstein

Results show that machine/human agreement on scene descriptions is much lower than human/human agreement for our complex scenes.

Sentence Visual Reasoning

Gaze Perception in Humans and CNN-Based Model

no code implementations 17 Apr 2021 Nicole X. Han, William Yang Wang, Miguel P. Eckstein

Making accurate inferences about other individuals' locus of attention is essential for human social interactions and will be important for AI to effectively interact with humans.

M3L: Language-based Video Editing via Multi-Modal Multi-Level Transformers

no code implementations CVPR 2022 Tsu-Jui Fu, Xin Eric Wang, Scott T. Grafton, Miguel P. Eckstein, William Yang Wang

LBVE has two features: 1) the scenario of the source video is preserved instead of generating a completely different video; 2) the semantics are presented differently in the target video, and all changes are controlled by the given instruction.

Video Editing Video Understanding

On Hallucination and Predictive Uncertainty in Conditional Language Generation

no code implementations EACL 2021 Yijun Xiao, William Yang Wang

Despite improvements in performances on different natural language generation tasks, deep neural models are prone to hallucinating facts that are incorrect or nonexistent.

Data-to-Text Generation Hallucination +1

They, Them, Theirs: Rewriting with Gender-Neutral English

no code implementations 12 Feb 2021 Tony Sun, Kellie Webster, Apu Shah, William Yang Wang, Melvin Johnson

Responsible development of technology involves applications being inclusive of the diverse set of users they hope to support.

L2C: Describing Visual Differences Needs Semantic Understanding of Individuals

no code implementations EACL 2021 An Yan, Xin Eric Wang, Tsu-Jui Fu, William Yang Wang

Recent advances in language and vision push forward the research of captioning a single image to describing visual differences between image pairs.

Image Captioning

DOC2PPT: Automatic Presentation Slides Generation from Scientific Documents

no code implementations 28 Jan 2021 Tsu-Jui Fu, William Yang Wang, Daniel McDuff, Yale Song

Creating presentation materials requires complex multimodal reasoning skills to summarize key concepts and arrange them in a logical and visually pleasing manner.

Document Summarization Multimodal Reasoning +2

Modeling Disclosive Transparency in NLP Application Descriptions

1 code implementation EMNLP 2021 Michael Saxon, Sharon Levy, Xinyi Wang, Alon Albalak, William Yang Wang

Broader disclosive transparency (truth and clarity in communication regarding the function of AI systems) is widely considered desirable.

Fairness Language Modelling +1

Towards Understanding Sample Variance in Visually Grounded Language Generation: Evaluations and Observations

no code implementations EMNLP 2020 Wanrong Zhu, Xin Eric Wang, Pradyumna Narayana, Kazoo Sone, Sugato Basu, William Yang Wang

A major challenge in visually grounded language generation is to build robust benchmark datasets and models that can generalize well in real-world settings.

Text Generation

Investigating African-American Vernacular English in Transformer-Based Text Generation

1 code implementation EMNLP 2020 Sophie Groenwold, Lily Ou, Aesha Parekh, Samhita Honnavalli, Sharon Levy, Diba Mirza, William Yang Wang

The growth of social media has encouraged the written use of African American Vernacular English (AAVE), which has traditionally been used only in oral contexts.

Text Generation

KGPT: Knowledge-Grounded Pre-Training for Data-to-Text Generation

1 code implementation EMNLP 2020 Wenhu Chen, Yu Su, Xifeng Yan, William Yang Wang

We propose knowledge-grounded pre-training (KGPT), which consists of two parts: 1) a general knowledge-grounded generation model to generate knowledge-enriched text.

General Knowledge KG-to-Text Generation +1

Answering Complex Open-Domain Questions with Multi-Hop Dense Retrieval

1 code implementation ICLR 2021 Wenhan Xiong, Xiang Lorraine Li, Srini Iyer, Jingfei Du, Patrick Lewis, William Yang Wang, Yashar Mehdad, Wen-tau Yih, Sebastian Riedel, Douwe Kiela, Barlas Oğuz

We propose a simple and efficient multi-hop dense retrieval approach for answering complex open-domain questions, which achieves state-of-the-art performance on two multi-hop datasets, HotpotQA and multi-evidence FEVER.

Question Answering Retrieval
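The iterative retrieve-and-reformulate loop behind multi-hop dense retrieval (encode the question, retrieve a passage, append it to the query, re-encode, retrieve again) can be sketched in a few lines. This is a toy illustration, not the authors' system; the hash-based `embed` function merely stands in for a learned dense encoder.

```python
import hashlib
import numpy as np

def embed(text, dim=16):
    """Toy deterministic encoder standing in for a learned dense encoder."""
    seed = int(hashlib.md5(text.encode()).hexdigest(), 16) % (2**32)
    rng = np.random.default_rng(seed)
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

def multi_hop_retrieve(question, corpus, hops=2):
    """After each hop, append the retrieved passage to the query and
    re-encode, so the next retrieval is conditioned on the evidence so far."""
    query, path = question, []
    remaining = list(corpus)
    for _ in range(hops):
        q_vec = embed(query)
        scores = [q_vec @ embed(p) for p in remaining]
        best = remaining.pop(int(np.argmax(scores)))
        path.append(best)
        query = query + " " + best  # query reformulation with retrieved evidence
    return path

corpus = [
    "Alice was born in Paris.",
    "Paris is the capital of France.",
    "The Eiffel Tower is in Paris.",
]
path = multi_hop_retrieve("Where was Alice born, and what country is that city in?", corpus)
print(path)  # one distinct passage per hop
```

In the real system both the query encoder and the passage index are learned, and retrieval uses approximate nearest-neighbor search rather than exhaustive scoring.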

SSCR: Iterative Language-Based Image Editing via Self-Supervised Counterfactual Reasoning

1 code implementation EMNLP 2020 Tsu-Jui Fu, Xin Eric Wang, Scott Grafton, Miguel Eckstein, William Yang Wang

In this paper, we introduce a Self-Supervised Counterfactual Reasoning (SSCR) framework that incorporates counterfactual thinking to overcome data scarcity.

counterfactual Counterfactual Reasoning

Multimodal Text Style Transfer for Outdoor Vision-and-Language Navigation

1 code implementation EACL 2021 Wanrong Zhu, Xin Eric Wang, Tsu-Jui Fu, An Yan, Pradyumna Narayana, Kazoo Sone, Sugato Basu, William Yang Wang

Outdoor vision-and-language navigation (VLN) is such a task where an agent follows natural language instructions and navigates a real-life urban environment.

Ranked #4 on Vision and Language Navigation on Touchdown Dataset (using extra training data)

Style Transfer Text Style Transfer +1

Fakeddit: A New Multimodal Benchmark Dataset for Fine-grained Fake News Detection

no code implementations LREC 2020 Kai Nakamura, Sharon Levy, William Yang Wang

We construct hybrid text+image models and perform extensive experiments for multiple variations of classification, demonstrating the importance of the novel aspect of multimodality and fine-grained classification unique to Fakeddit.

Classification Cultural Vocal Bursts Intensity Prediction +2

Counterfactual Off-Policy Training for Neural Response Generation

no code implementations 29 Apr 2020 Qingfu Zhu, Wei-Nan Zhang, Ting Liu, William Yang Wang

Open-domain dialogue generation suffers from the data insufficiency problem due to the vast size of potential responses.

counterfactual Counterfactual Reasoning +2

Evaluating Transformer-Based Multilingual Text Classification

no code implementations 29 Apr 2020 Sophie Groenwold, Samhita Honnavalli, Lily Ou, Aesha Parekh, Sharon Levy, Diba Mirza, William Yang Wang

As NLP tools become ubiquitous in today's technological landscape, they are increasingly applied to languages with a variety of typological structures.

General Classification Language Modelling +4

Logical Natural Language Generation from Open-Domain Tables

1 code implementation ACL 2020 Wenhu Chen, Jianshu Chen, Yu Su, Zhiyu Chen, William Yang Wang

To facilitate the study of the proposed logical NLG problem, we use the existing TabFact dataset \cite{chen2019tabfact}, featured with a wide range of logical/symbolic inferences, as our testbed, and propose new automatic metrics to evaluate the fidelity of generation models w.r.t. logical inference.

Text Generation

On the Encoder-Decoder Incompatibility in Variational Text Modeling and Beyond

1 code implementation ACL 2020 Chen Wu, Prince Zizhuang Wang, William Yang Wang

To this end, we propose Coupled-VAE, which couples a VAE model with a deterministic autoencoder with the same structure and improves the encoder and decoder parameterizations via encoder weight sharing and decoder signal matching.

Dialogue Generation Language Modelling +1
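The encoder weight sharing idea in Coupled-VAE can be illustrated with a toy sketch. All names and shapes here are hypothetical, not the paper's implementation, and the decoder signal matching component is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

class Encoder:
    """One encoder instance used by BOTH models, i.e. literal weight sharing."""
    def __init__(self, d_in, d_z):
        self.W = rng.normal(size=(d_in, d_z)) * 0.1

    def __call__(self, x):
        return np.tanh(x @ self.W)

shared = Encoder(d_in=6, d_z=3)
x = rng.normal(size=(2, 6))

# VAE branch: the shared encoder output parameterizes the posterior mean.
z_vae_mu = shared(x)
# Deterministic branch: the same encoder feeds the plain autoencoder.
z_det = shared(x)

# Because the weights are shared, both branches produce identical codes,
# so gradients from either reconstruction loss update the same parameters.
print(np.allclose(z_vae_mu, z_det))  # True
```

Sharing one parameter object (rather than copying weights) is what lets the deterministic autoencoder's stronger reconstruction signal shape the VAE's encoder.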

Environment-agnostic Multitask Learning for Natural Language Grounded Navigation

1 code implementation ECCV 2020 Xin Eric Wang, Vihan Jain, Eugene Ie, William Yang Wang, Zornitsa Kozareva, Sujith Ravi

Recent research efforts enable study for natural language grounded navigation in photo-realistic environments, e.g., following natural language instructions or dialog.

Vision-Language Navigation

Disentangled Representation Learning with Wasserstein Total Correlation

no code implementations 30 Dec 2019 Yijun Xiao, William Yang Wang

However, Kullback-Leibler (KL) divergence-based total correlation is metric-agnostic and sensitive to data samples.

Disentanglement

Unsupervised Reinforcement Learning of Transferable Meta-Skills for Embodied Navigation

no code implementations CVPR 2020 Juncheng Li, Xin Wang, Siliang Tang, Haizhou Shi, Fei Wu, Yueting Zhuang, William Yang Wang

Visual navigation is a task of training an embodied agent by intelligently navigating to a target object (e.g., television) using only visual observations.

Object reinforcement-learning +3

Counterfactual Vision-and-Language Navigation via Adversarial Path Sampling

no code implementations 17 Nov 2019 Tsu-Jui Fu, Xin Eric Wang, Matthew Peterson, Scott Grafton, Miguel Eckstein, William Yang Wang

In particular, we present a model-agnostic adversarial path sampler (APS) that learns to sample challenging paths that force the navigator to improve based on the navigation performance.

counterfactual Counterfactual Reasoning +2

r/Fakeddit: A New Multimodal Benchmark Dataset for Fine-grained Fake News Detection

3 code implementations 10 Nov 2019 Kai Nakamura, Sharon Levy, William Yang Wang

We construct hybrid text+image models and perform extensive experiments for multiple variations of classification, demonstrating the importance of the novel aspect of multimodality and fine-grained classification unique to Fakeddit.

Classification Cultural Vocal Bursts Intensity Prediction +2

Towards Understanding Gender Bias in Relation Extraction

1 code implementation ACL 2020 Andrew Gaut, Tony Sun, Shirlyn Tang, Yuxin Huang, Jing Qian, Mai ElSherief, Jieyu Zhao, Diba Mirza, Elizabeth Belding, Kai-Wei Chang, William Yang Wang

We use WikiGenderBias to evaluate systems for bias and find that NRE systems exhibit gender biased predictions and lay groundwork for future evaluation of bias in NRE.

counterfactual Data Augmentation +3

Table-to-Text Natural Language Generation with Unseen Schemas

no code implementations 9 Nov 2019 Tianyu Liu, Wei Wei, William Yang Wang

In this paper, we propose the new task of table-to-text NLG with unseen schemas, which specifically aims to test the generalization of NLG for input tables with attribute types that never appear during training.

Attribute Text Generation

Cross-Lingual Vision-Language Navigation

2 code implementations 24 Oct 2019 An Yan, Xin Eric Wang, Jiangtao Feng, Lei Li, William Yang Wang

Commanding a robot to navigate with natural language instructions is a long-term goal for grounded language understanding and robotics.

Domain Adaptation Navigate +2

Simple yet Effective Bridge Reasoning for Open-Domain Multi-Hop Question Answering

no code implementations WS 2019 Wenhan Xiong, Mo Yu, Xiaoxiao Guo, Hong Wang, Shiyu Chang, Murray Campbell, William Yang Wang

To resolve this issue, we introduce a new sub-problem of open-domain multi-hop QA, which aims to recognize the bridge (i.e., the anchor that links to the answer passage) from the context of a set of start passages with a reading comprehension model.

Information Retrieval Multi-hop Question Answering +3

Neural Correction Model for Open-Domain Named Entity Recognition

1 code implementation 13 Sep 2019 Mengdi Zhu, Zheye Deng, Wenhan Xiong, Mo Yu, Ming Zhang, William Yang Wang

In this work, to address the low precision and recall problems, we first utilize DBpedia as the source of distant supervision to annotate abstracts from Wikipedia and design a neural correction model trained with a human-annotated NER dataset, DocRED, to correct the false entity labels.

Multi-Task Learning named-entity-recognition +4

A Benchmark Dataset for Learning to Intervene in Online Hate Speech

1 code implementation IJCNLP 2019 Jing Qian, Anna Bethke, Yinyin Liu, Elizabeth Belding, William Yang Wang

In this paper, we also analyze the datasets to understand the common intervention strategies and explore the performance of common automatic response generation methods on these new datasets to provide a benchmark for future research.

Response Generation

Neural Gaussian Copula for Variational Autoencoder

no code implementations IJCNLP 2019 Prince Zizhuang Wang, William Yang Wang

We argue that this would cause a typical training problem called posterior collapse observed in all other variational language models.

TabFact: A Large-scale Dataset for Table-based Fact Verification

1 code implementation ICLR 2020 Wenhu Chen, Hongmin Wang, Jianshu Chen, Yunkai Zhang, Hong Wang, Shiyang Li, Xiyou Zhou, William Yang Wang

To this end, we construct a large-scale dataset called TabFact with 16k Wikipedia tables as the evidence for 118k human-annotated natural language statements, which are labeled as either ENTAILED or REFUTED.

Fact Checking Fact Verification +3

Deep Reinforcement Learning with Distributional Semantic Rewards for Abstractive Summarization

no code implementations IJCNLP 2019 Siyao Li, Deren Lei, Pengda Qin, William Yang Wang

Deep reinforcement learning (RL) has been a commonly-used strategy for the abstractive summarization task to address both the exposure bias and non-differentiable task issues.

Abstractive Text Summarization reinforcement-learning +2

Text Modeling with Syntax-Aware Variational Autoencoders

no code implementations 27 Aug 2019 Yijun Xiao, William Yang Wang

We propose syntax-aware variational autoencoders (SAVAEs) that dedicate a subspace in the latent dimensions dubbed syntactic latent to represent syntactic structures of sentences.

Representation Learning
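The idea of dedicating a latent subspace to syntax can be sketched numerically. The 12-dimensional latent and the 8/4 semantic/syntactic split below are assumptions for illustration, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, logvar):
    """Standard VAE reparameterization: z = mu + sigma * eps."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

# Hypothetical encoder output for one sentence: 12 latent dims in total.
mu, logvar = rng.normal(size=12), rng.normal(size=12)
z = reparameterize(mu, logvar)

# Dedicate the last 4 dimensions to a "syntactic latent" subspace;
# the first 8 carry the remaining (semantic) content.
z_semantic, z_syntactic = z[:8], z[8:]

def kl_to_standard_normal(mu, logvar):
    """KL(q(z|x) || N(0, I)), computed per subspace so each latent
    can be regularized (or supervised) separately."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

kl_sem = kl_to_standard_normal(mu[:8], logvar[:8])
kl_syn = kl_to_standard_normal(mu[8:], logvar[8:])
print(z_semantic.shape, z_syntactic.shape)
```

Splitting the KL term per subspace is what allows a syntactic supervision signal (e.g., from parses) to target only the syntactic dimensions.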

Meta Reasoning over Knowledge Graphs

no code implementations 13 Aug 2019 Hong Wang, Wenhan Xiong, Mo Yu, Xiaoxiao Guo, Shiyu Chang, William Yang Wang

The ability to reason over learned knowledge is an innate ability for humans and humans can easily master new reasoning rules with only a few demonstrations.

Few-Shot Learning Knowledge Base Completion +1

What Should I Ask? Using Conversationally Informative Rewards for Goal-Oriented Visual Dialog

no code implementations 28 Jul 2019 Pushkar Shukla, Carlos Elmadjian, Richika Sharan, Vivek Kulkarni, Matthew Turk, William Yang Wang

In this work, we focus on the task of goal-oriented visual dialogue, aiming to automatically generate a series of questions about an image with a single objective.

Visual Dialog

TWEETQA: A Social Media Focused Question Answering Dataset

no code implementations ACL 2019 Wenhan Xiong, Jiawei Wu, Hong Wang, Vivek Kulkarni, Mo Yu, Shiyu Chang, Xiaoxiao Guo, William Yang Wang

With social media becoming increasingly popular, on which lots of news and real-time events are reported, developing automated question answering systems is critical to the effectiveness of many applications that rely on real-time knowledge.

Question Answering

What Should I Ask? Using Conversationally Informative Rewards for Goal-oriented Visual Dialog.

no code implementations ACL 2019 Pushkar Shukla, Carlos Elmadjian, Richika Sharan, Vivek Kulkarni, Matthew Turk, William Yang Wang

In this work, we focus on the task of goal-oriented visual dialogue, aiming to automatically generate a series of questions about an image with a single objective.

Visual Dialog

Self-Supervised Dialogue Learning

no code implementations ACL 2019 Jiawei Wu, Xin Wang, William Yang Wang

The sequential order of utterances is often meaningful in coherent dialogues, and the order changes of utterances could lead to low-quality and incoherent conversations.

Self-Supervised Learning

Self-Supervised Learning for Contextualized Extractive Summarization

2 code implementations ACL 2019 Hong Wang, Xin Wang, Wenhan Xiong, Mo Yu, Xiaoxiao Guo, Shiyu Chang, William Yang Wang

Existing models for extractive summarization are usually trained from scratch with a cross-entropy loss, which does not explicitly capture the global context at the document level.

Extractive Summarization Self-Supervised Learning

Deep Adversarial Learning for NLP

no code implementations NAACL 2019 William Yang Wang, Sameer Singh, Jiwei Li

Adversarial learning is a game-theoretic learning paradigm, which has achieved huge successes in the field of Computer Vision recently.

Improving Question Answering over Incomplete KBs with Knowledge-Aware Reader

2 code implementations ACL 2019 Wenhan Xiong, Mo Yu, Shiyu Chang, Xiaoxiao Guo, William Yang Wang

We propose a new end-to-end question answering model, which learns to aggregate answer evidence from an incomplete knowledge base (KB) and a set of retrieved text snippets.

Question Answering

REVERIE: Remote Embodied Visual Referring Expression in Real Indoor Environments

1 code implementation CVPR 2020 Yuankai Qi, Qi Wu, Peter Anderson, Xin Wang, William Yang Wang, Chunhua Shen, Anton Van Den Hengel

One of the long-term challenges of robotics is to enable robots to interact with humans in the visual world via natural language, as humans are visual animals that communicate through language.

Referring Expression Vision and Language Navigation

Few-Shot NLG with Pre-Trained Language Model

2 code implementations ACL 2020 Zhiyu Chen, Harini Eavani, Wenhu Chen, Yinyin Liu, William Yang Wang

Neural-based end-to-end approaches to natural language generation (NLG) from structured data or knowledge are data-hungry, making their adoption for real-world applications difficult with limited data.

Few-Shot Learning Language Modelling +1

VATEX: A Large-Scale, High-Quality Multilingual Dataset for Video-and-Language Research

2 code implementations ICCV 2019 Xin Wang, Jiawei Wu, Junkun Chen, Lei Li, Yuan-Fang Wang, William Yang Wang

We also introduce two tasks for video-and-language research based on VATEX: (1) Multilingual Video Captioning, aimed at describing a video in various languages with a compact unified captioning model, and (2) Video-guided Machine Translation, to translate a source language description into the target language using the video information as additional spatiotemporal context.

Machine Translation Translation +3

Extract and Edit: An Alternative to Back-Translation for Unsupervised Neural Machine Translation

no code implementations NAACL 2019 Jiawei Wu, Xin Wang, William Yang Wang

The overreliance on large parallel corpora significantly limits the applicability of machine translation systems to the majority of language pairs.

Sentence Translation +1

Learning to Decipher Hate Symbols

no code implementations NAACL 2019 Jing Qian, Mai ElSherief, Elizabeth Belding, William Yang Wang

Furthermore, we propose a novel Variational Decipher and show how it can generalize better to unseen hate symbols in a more challenging testing setting.

General Classification

Riemannian Normalizing Flow on Variational Wasserstein Autoencoder for Text Modeling

1 code implementation NAACL 2019 Prince Zizhuang Wang, William Yang Wang

The RNF transforms a latent variable into a space that respects the geometric characteristics of the input space, which makes it impossible for the posterior to collapse to the non-informative prior.

Language Modelling Text Generation

Sentence Embedding Alignment for Lifelong Relation Extraction

2 code implementations NAACL 2019 Hong Wang, Wenhan Xiong, Mo Yu, Xiaoxiao Guo, Shiyu Chang, William Yang Wang

We formulate such a challenging problem as lifelong relation extraction and investigate memory-efficient incremental learning methods without catastrophically forgetting knowledge learned from previous tasks.

Incremental Learning Relation +4

Imposing Label-Relational Inductive Bias for Extremely Fine-Grained Entity Typing

1 code implementation NAACL 2019 Wenhan Xiong, Jiawei Wu, Deren Lei, Mo Yu, Shiyu Chang, Xiaoxiao Guo, William Yang Wang

Existing entity typing systems usually exploit the type hierarchy provided by knowledge base (KB) schema to model label correlations and thus improve the overall performance.

Entity Typing Inductive Bias

Quantifying Uncertainties in Natural Language Processing Tasks

no code implementations 18 Nov 2018 Yijun Xiao, William Yang Wang

Reliable uncertainty quantification is a first step towards building explainable, transparent, and accountable artificial intelligent systems.

Language Modelling named-entity-recognition +4
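A standard way to quantify predictive uncertainty, as studied in this line of work, is the entropy of the mean predictive distribution over stochastic forward passes (e.g., MC dropout or an ensemble). The sketch below is illustrative; the probability values are made up.

```python
import numpy as np

def predictive_entropy(prob_samples):
    """Entropy of the mean predictive distribution across stochastic
    forward passes; higher entropy means higher uncertainty."""
    mean_probs = np.mean(prob_samples, axis=0)
    return -np.sum(mean_probs * np.log(mean_probs + 1e-12))

# Three stochastic passes over a 3-class problem (hypothetical values).
confident = np.array([[0.97, 0.02, 0.01],
                      [0.95, 0.03, 0.02],
                      [0.96, 0.02, 0.02]])
uncertain = np.array([[0.40, 0.35, 0.25],
                      [0.20, 0.45, 0.35],
                      [0.33, 0.33, 0.34]])

print(predictive_entropy(confident) < predictive_entropy(uncertain))  # True
```

Averaging before taking the entropy captures both the model's per-pass confidence and the disagreement between passes.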

Learning to Compose Topic-Aware Mixture of Experts for Zero-Shot Video Captioning

no code implementations 7 Nov 2018 Xin Wang, Jiawei Wu, Da Zhang, Yu Su, William Yang Wang

Although promising results have been achieved in video captioning, existing models are limited to the fixed inventory of activities in the training corpus, and do not generalize to open vocabulary scenarios.

Video Captioning

SafeRoute: Learning to Navigate Streets Safely in an Urban Environment

1 code implementation 3 Nov 2018 Sharon Levy, Wenhan Xiong, Elizabeth Belding, William Yang Wang

We propose SafeRoute, a novel solution to the problem of navigating cities and avoiding street harassment and crime.

Navigate Representation Learning

A Survey on Natural Language Processing for Fake News Detection

1 code implementation LREC 2020 Ray Oshikawa, Jing Qian, William Yang Wang

We also highlight the difference between fake news detection and other related tasks, and the importance of NLP solutions for fake news detection.

Fake News Detection

Towards Explainable NLP: A Generative Explanation Framework for Text Classification

no code implementations ACL 2019 Hui Liu, Qingyu Yin, William Yang Wang

Building explainable systems is a critical problem in the field of Natural Language Processing (NLP), since most machine learning models provide no explanations for the predictions.

BIG-bench Machine Learning General Classification +2

DOLORES: Deep Contextualized Knowledge Graph Embeddings

no code implementations AKBC 2020 Haoyu Wang, Vivek Kulkarni, William Yang Wang

We introduce a new method DOLORES for learning knowledge graph embeddings that effectively captures contextual cues and dependencies among entities and relations.

Knowledge Graph Embeddings Knowledge Graphs +3

Dirichlet Variational Autoencoder for Text Modeling

no code implementations 31 Oct 2018 Yijun Xiao, Tiancheng Zhao, William Yang Wang

We introduce an improved variational autoencoder (VAE) for text modeling with topic information explicitly modeled as a Dirichlet latent variable.
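The core of a Dirichlet latent variable is that it lives on the probability simplex, so a sample can be read directly as a topic mixture. A minimal sketch, with made-up concentration parameters standing in for an encoder's output:

```python
import numpy as np

rng = np.random.default_rng(42)

# Concentration parameters for one document (hypothetical values;
# in the model these would be produced by a neural encoder).
alpha = np.array([2.0, 0.5, 1.0, 3.0])

# A Dirichlet sample is a point on the simplex: non-negative, sums to 1,
# so each coordinate is interpretable as a topic proportion.
theta = rng.dirichlet(alpha)

print(theta.sum())  # 1.0 (up to float precision): a valid topic mixture
```

This is what distinguishes the Dirichlet latent from a Gaussian one, whose samples carry no simplex structure.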

WikiHow: A Large Scale Text Summarization Dataset

9 code implementations 18 Oct 2018 Mahnaz Koupaee, William Yang Wang

Sequence-to-sequence models have recently achieved state-of-the-art performance in summarization.

Text Summarization

Hierarchical CVAE for Fine-Grained Hate Speech Classification

no code implementations EMNLP 2018 Jing Qian, Mai ElSherief, Elizabeth Belding, William Yang Wang

Existing work on automated hate speech detection typically focuses on binary classification or on differentiating among a small set of categories.

Binary Classification Classification +2

One-Shot Relational Learning for Knowledge Graphs

1 code implementation EMNLP 2018 Wenhan Xiong, Mo Yu, Shiyu Chang, Xiaoxiao Guo, William Yang Wang

Knowledge graphs (KGs) are the key components of various natural language processing applications.

Relational Reasoning

XL-NBT: A Cross-lingual Neural Belief Tracking Framework

1 code implementation EMNLP 2018 Wenhu Chen, Jianshu Chen, Yu Su, Xin Wang, Dong Yu, Xifeng Yan, William Yang Wang

Then, we pre-train a state tracker for the source language as a teacher, which is able to exploit easy-to-access parallel data.

Transfer Learning

Zero Pronoun Resolution with Attention-based Neural Network

1 code implementation COLING 2018 Qingyu Yin, Yu Zhang, Wei-Nan Zhang, Ting Liu, William Yang Wang

Recent neural network methods for zero pronoun resolution explore multiple models for generating representation vectors for zero pronouns and their candidate antecedents.

Chinese Zero Pronoun Resolution

Deep Reinforcement Learning for NLP

no code implementations ACL 2018 William Yang Wang, Jiwei Li, Xiaodong He

Many Natural Language Processing (NLP) tasks (including generation, language grounding, reasoning, information extraction, coreference resolution, and dialog) can be formulated as deep reinforcement learning (DRL) problems.

Atari Games coreference-resolution +7

Scheduled Policy Optimization for Natural Language Communication with Intelligent Agents

3 code implementations 16 Jun 2018 Wenhan Xiong, Xiaoxiao Guo, Mo Yu, Shiyu Chang, Bo-Wen Zhou, William Yang Wang

We investigate the task of learning to follow natural language instructions by jointly reasoning with visual observations and language inputs.

Efficient Exploration reinforcement-learning +1

Scalable Construction and Reasoning of Massive Knowledge Bases

no code implementations NAACL 2018 Xiang Ren, Nanyun Peng, William Yang Wang

In today's information-based society, there is abundant knowledge out there carried in the form of natural language texts (e.g., news articles, social media posts, scientific publications), which span various domains (e.g., corporate documents, advertisements, legal acts, medical reports) and grow at an astonishing rate.

Simple Models for Word Formation in Slang

1 code implementation NAACL 2018 Vivek Kulkarni, William Yang Wang

We propose the first generative models for three types of extra-grammatical word formation phenomena abounding in slang: Blends, Clippings, and Reduplicatives.

Robust Distant Supervision Relation Extraction via Deep Reinforcement Learning

2 code implementations ACL 2018 Pengda Qin, Weiran Xu, William Yang Wang

The experimental results show that the proposed strategy significantly improves the performance of distant supervision comparing to state-of-the-art systems.

reinforcement-learning Reinforcement Learning (RL) +3

No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling

2 code implementations ACL 2018 Xin Wang, Wenhu Chen, Yuan-Fang Wang, William Yang Wang

Though impressive results have been achieved in visual captioning, the task of generating abstract stories from photo streams is still a little-tapped problem.

Image Captioning Visual Storytelling

Reinforced Co-Training

no code implementations NAACL 2018 Jiawei Wu, Lei Li, William Yang Wang

However, the selection of samples in existing co-training methods is based on a predetermined policy, which ignores the sampling bias between the unlabeled and the labeled subsets, and fails to explore the data space.

Clickbait Detection General Classification +3

Hate Lingo: A Target-based Linguistic Analysis of Hate Speech in Social Media

2 code implementations 11 Apr 2018 Mai ElSherief, Vivek Kulkarni, Dana Nguyen, William Yang Wang, Elizabeth Belding

While social media empowers freedom of expression and individual voices, it also enables anti-social behavior, online harassment, cyberbullying, and hate speech.
