Search Results for author: Jae Sung Park

Found 16 papers, 11 papers with code

Exposing the Limits of Video-Text Models through Contrast Sets

1 code implementation NAACL 2022 Jae Sung Park, Sheng Shen, Ali Farhadi, Trevor Darrell, Yejin Choi, Anna Rohrbach

We test the robustness of recent methods on the proposed automatic contrast sets, and compare them to additionally collected human-generated counterparts, to assess their effectiveness.

Language Modelling · Multiple-choice · +2

Agent AI: Surveying the Horizons of Multimodal Interaction

1 code implementation 7 Jan 2024 Zane Durante, Qiuyuan Huang, Naoki Wake, Ran Gong, Jae Sung Park, Bidipta Sarkar, Rohan Taori, Yusuke Noda, Demetri Terzopoulos, Yejin Choi, Katsushi Ikeuchi, Hoi Vo, Li Fei-Fei, Jianfeng Gao

To accelerate research on agent-based multimodal intelligence, we define "Agent AI" as a class of interactive systems that can perceive visual stimuli, language inputs, and other environmentally-grounded data, and can produce meaningful embodied actions.

Localized Symbolic Knowledge Distillation for Visual Commonsense Models

2 code implementations NeurIPS 2023 Jae Sung Park, Jack Hessel, Khyathi Raghavi Chandu, Paul Pu Liang, Ximing Lu, Peter West, Youngjae Yu, Qiuyuan Huang, Jianfeng Gao, Ali Farhadi, Yejin Choi

Empirical results and human evaluations in a zero-shot setup demonstrate that our distillation method results in more precise VL models of reasoning compared to a baseline of passing a generated referring expression to an LLM.

Instruction Following · Knowledge Distillation · +3

ArK: Augmented Reality with Knowledge Interactive Emergent Ability

no code implementations 1 May 2023 Qiuyuan Huang, Jae Sung Park, Abhinav Gupta, Paul Bennett, Ran Gong, Subhojit Som, Baolin Peng, Owais Khan Mohammed, Chris Pal, Yejin Choi, Jianfeng Gao

In this study, we develop an infinite agent that learns to transfer knowledge memory from general foundation models (e.g., GPT4, DALLE) to novel domains or scenarios for scene understanding and generation in the physical or virtual world.

Mixed Reality · Scene Generation · +1

Fusing Pre-Trained Language Models With Multimodal Prompts Through Reinforcement Learning

1 code implementation CVPR 2023 Youngjae Yu, Jiwan Chung, Heeseung Yun, Jack Hessel, Jae Sung Park, Ximing Lu, Rowan Zellers, Prithviraj Ammanabrolu, Ronan Le Bras, Gunhee Kim, Yejin Choi

Language models are capable of commonsense reasoning: domain-specific models can learn from explicit knowledge (e.g., commonsense graphs [6], ethical norms [25]), while larger models like GPT-3 manifest broad commonsense reasoning capacity.

Language Modelling · reinforcement-learning · +2

The Abduction of Sherlock Holmes: A Dataset for Visual Abductive Reasoning

no code implementations 10 Feb 2022 Jack Hessel, Jena D. Hwang, Jae Sung Park, Rowan Zellers, Chandra Bhagavatula, Anna Rohrbach, Kate Saenko, Yejin Choi

We present Sherlock, an annotated corpus of 103K images for testing machine capacity for abductive reasoning beyond literal image contents.

Visual Abductive Reasoning · Visual Reasoning

MERLOT: Multimodal Neural Script Knowledge Models

1 code implementation NeurIPS 2021 Rowan Zellers, Ximing Lu, Jack Hessel, Youngjae Yu, Jae Sung Park, Jize Cao, Ali Farhadi, Yejin Choi

As humans, we understand events in the visual world contextually, performing multimodal reasoning across time to make inferences about the past, present, and future.

Multimodal Reasoning · Visual Commonsense Reasoning

Natural Language Rationales with Full-Stack Visual Reasoning: From Pixels to Semantic Frames to Commonsense Graphs

1 code implementation Findings of the Association for Computational Linguistics 2020 Ana Marasović, Chandra Bhagavatula, Jae Sung Park, Ronan Le Bras, Noah A. Smith, Yejin Choi

Natural language rationales could provide intuitive, higher-level explanations that are easily understandable by humans, complementing the more broadly studied lower-level explanations based on gradients or attention weights.

Language Modelling · Natural Language Inference · +4

Identity-Aware Multi-Sentence Video Description

1 code implementation ECCV 2020 Jae Sung Park, Trevor Darrell, Anna Rohrbach

This auxiliary task allows us to propose a two-stage approach to Identity-Aware Video Description.

Gender Prediction · Sentence · +1

LSTM-based Anomaly Detection for Non-linear Dynamical System

no code implementations 5 Jun 2020 Yue Tan, Chunjing Hu, Kuan Zhang, Kan Zheng, Ethan A. Davis, Jae Sung Park

Anomaly detection for non-linear dynamical systems plays an important role in ensuring system stability.

Anomaly Detection

VisualCOMET: Reasoning about the Dynamic Context of a Still Image

no code implementations ECCV 2020 Jae Sung Park, Chandra Bhagavatula, Roozbeh Mottaghi, Ali Farhadi, Yejin Choi

In addition, we provide person-grounding (i.e., co-reference links) between people appearing in the image and people mentioned in the textual commonsense descriptions, allowing for tighter integration between images and text.
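
For illustration only, a hypothetical sketch of how one such person-grounding link between a person mentioned in the text and a region in the image might be represented; the field names and values below are invented for this sketch and are not VisualCOMET's actual annotation schema.

    # Hypothetical person-grounding record (field names are illustrative,
    # not VisualCOMET's actual schema): each co-reference link ties a person
    # tag used in the text to a detected person region in the image.
    example_annotation = {
        "image_id": "example_0001",
        "event": "[Person1] hands [Person2] a set of keys",
        "person_grounding": [
            {"tag": "Person1", "bbox": [120, 80, 260, 400]},  # x1, y1, x2, y2 in pixels
            {"tag": "Person2", "bbox": [300, 95, 450, 410]},
        ],
    }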

Visual Commonsense Reasoning

Joint High Dynamic Range Imaging and Super-Resolution from a Single Image

1 code implementation 2 May 2019 Jae Woong Soh, Jae Sung Park, Nam Ik Cho

This paper presents a new framework for jointly enhancing the resolution and the dynamic range of an image, i.e., simultaneous super-resolution (SR) and high dynamic range imaging (HDRI), based on a convolutional neural network (CNN).

Super-Resolution

Adversarial Inference for Multi-Sentence Video Description

1 code implementation CVPR 2019 Jae Sung Park, Marcus Rohrbach, Trevor Darrell, Anna Rohrbach

Among the main issues are the fluency and coherence of the generated descriptions, and their relevance to the video.

Image Captioning · Sentence · +1

Generation of High Dynamic Range Illumination from a Single Image for the Enhancement of Undesirably Illuminated Images

1 code implementation 2 Aug 2017 Jae Sung Park, Nam Ik Cho

This paper presents an algorithm that enhances undesirably illuminated images by generating and fusing multi-level illuminations from a single image. The input image is first decomposed into illumination and reflectance components by using an edge-preserving smoothing filter.
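
As a rough, hypothetical sketch of the decomposition step described above (not the paper's implementation): under a multiplicative Retinex-style model I = L * R, the illumination layer can be estimated with an edge-preserving smoothing filter and the reflectance recovered by division. The choice of OpenCV's bilateral filter and all parameter values below are illustrative assumptions.

    # Minimal sketch of illumination/reflectance decomposition with an
    # edge-preserving smoothing filter (Retinex-style assumption I = L * R).
    # The bilateral filter and its parameters are illustrative, not the paper's.
    import cv2
    import numpy as np

    def decompose(image_bgr: np.ndarray, eps: float = 1e-4):
        """Split an image into an illumination map and a reflectance layer."""
        img = image_bgr.astype(np.float32) / 255.0
        # Use the luminance channel as a proxy for scene illumination.
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        # Edge-preserving smoothing yields a piecewise-smooth illumination estimate.
        illumination = cv2.bilateralFilter(gray, d=9, sigmaColor=0.1, sigmaSpace=15)
        # Reflectance = image / illumination under the multiplicative model.
        reflectance = img / (illumination[..., None] + eps)
        return illumination, reflectance

    # "input.jpg" is a placeholder path for an example input image.
    illum, refl = decompose(cv2.imread("input.jpg"))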

Efficient Generation of Motion Plans from Attribute-Based Natural Language Instructions Using Dynamic Constraint Mapping

no code implementations 8 Jul 2017 Jae Sung Park, Biao Jia, Mohit Bansal, Dinesh Manocha

We generate a factor graph from natural language instructions called the Dynamic Grounding Graph (DGG), which takes latent parameters into account.

Robotics
