Search Results for author: Jeongeun Park

Found 7 papers, 3 papers with code

Towards Embedding Dynamic Personas in Interactive Robots: Masquerading Animated Social Kinematics (MASK)

no code implementations • 15 Mar 2024 • Jeongeun Park, Taemoon Jeong, Hyeonseong Kim, Taehyun Byun, Seungyoon Shin, Keunjun Choi, Jaewoon Kwon, Taeyoon Lee, Matthew Pan, Sungjoon Choi

This paper presents the design and development of an interactive robotic system that enhances audience engagement through character-like personas.

SPOTS: Stable Placement of Objects with Reasoning in Semi-Autonomous Teleoperation Systems

no code implementations • 25 Sep 2023 • Joonhyung Lee, Sangbeom Park, Jeongeun Park, Kyungjae Lee, Sungjoon Choi

Particularly, we focus on two aspects of the place task: stability robustness and contextual reasonableness of object placements.

CLARA: Classifying and Disambiguating User Commands for Reliable Interactive Robotic Agents

1 code implementation • 17 Jun 2023 • Jeongeun Park, Seungwon Lim, Joonhyung Lee, Sangbeom Park, Minsuk Chang, Youngjae Yu, Sungjoon Choi

In this paper, we focus on inferring whether the given user command is clear, ambiguous, or infeasible in the context of interactive robotic agents utilizing large language models (LLMs).

Question Generation, Uncertainty Quantification
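The CLARA setup above — deciding whether a free-form user command is clear, ambiguous, or infeasible before the robot acts — can be sketched as a prompt-and-parse loop around an LLM. Everything below (the prompt template, the label parsing, and the `classify_command` helper) is an illustrative assumption for exposition, not the paper's actual implementation; the `mock_llm` stub stands in for a real chat-completion API.

```python
from enum import Enum

class CommandStatus(Enum):
    CLEAR = "clear"
    AMBIGUOUS = "ambiguous"
    INFEASIBLE = "infeasible"

# Hypothetical prompt template; a real system would include scene context,
# few-shot examples, and robot capabilities.
PROMPT_TEMPLATE = (
    "You are a robot assistant. Given the visible objects {objects} and the "
    'user command "{command}", answer with exactly one word: '
    "clear, ambiguous, or infeasible."
)

def build_prompt(command, objects):
    """Format a classification prompt for an LLM."""
    return PROMPT_TEMPLATE.format(objects=objects, command=command)

def parse_response(text):
    """Map a raw LLM reply onto one of the three labels."""
    lowered = text.strip().lower()
    for status in CommandStatus:
        if status.value in lowered:
            return status
    # Unparseable reply: safest to treat as ambiguous and ask the user.
    return CommandStatus.AMBIGUOUS

def mock_llm(prompt):
    """Stand-in for a real LLM call, used only for this sketch."""
    return "ambiguous"

def classify_command(command, objects, llm=mock_llm):
    return parse_response(llm(build_prompt(command, objects)))
```

An ambiguous result would trigger a clarifying question back to the user, while an infeasible one would be rejected outright — the disambiguation loop the paper's title refers to.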

SOCRATES: Text-based Human Search and Approach using a Robot Dog

no code implementations • 10 Feb 2023 • Jeongeun Park, Jefferson Silveria, Matthew Pan, Sungjoon Choi

In this paper, we propose the SOCratic model for Robots Approaching humans based on TExt System (SOCRATES), which addresses human search and approach from a free-form textual description: the robot first searches for the target user, then approaches in a human-friendly manner.

Knowledge Distillation

Zero-shot Active Visual Search (ZAVIS): Intelligent Object Search for Robotic Assistants

1 code implementation • 19 Sep 2022 • Jeongeun Park, Taerim Yoon, Jejoon Hong, Youngjae Yu, Matthew Pan, Sungjoon Choi

In this paper, we focus on the problem of efficiently locating a target object described with free-form language using a mobile robot equipped with vision sensors (e.g., an RGB-D camera).

Object, Robot Navigation
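The core grounding step in a system like ZAVIS — matching a free-form description against detected objects — can be illustrated with a toy scorer. The word-overlap metric below is a deliberately simple stand-in for the learned vision-language matching (e.g., CLIP-style embeddings) such systems typically rely on; it is not the paper's pipeline.

```python
def score(description, label):
    """Toy relevance score: fraction of the label's words that appear in the
    free-form description. A real system would compare learned embeddings."""
    desc_words = set(description.lower().split())
    label_words = set(label.lower().split())
    return len(desc_words & label_words) / max(1, len(label_words))

def locate(description, detections):
    """Pick the detected object label that best matches the description."""
    return max(detections, key=lambda label: score(description, label))
```

For example, `locate("the red mug on the table", ["blue chair", "red mug", "door"])` selects `"red mug"`; the active-search part of the problem (where to move the robot to find candidates in the first place) is outside this sketch.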

Elucidating Robust Learning with Uncertainty-Aware Corruption Pattern Estimation

1 code implementation • 2 Nov 2021 • Jeongeun Park, Seungyoun Shin, Sangheum Hwang, Sungjoon Choi

Robust learning methods aim to learn a clean target distribution from noisy and corrupted training data where a specific corruption pattern is often assumed a priori.
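The corruption pattern the abstract refers to is commonly formalized as a class-transition matrix T, where T[y][j] is the probability that clean label y is observed as noisy label j. The sketch below only illustrates that setup — sampling noisy labels from a known matrix and recovering it by row-normalized counting when clean labels are available — and is not the paper's uncertainty-aware estimator, which must work without access to clean labels.

```python
import random

def corrupt_labels(labels, transition, rng):
    """Resample each clean label y as j with probability transition[y][j]."""
    noisy = []
    for y in labels:
        r, cum = rng.random(), 0.0
        for j, p in enumerate(transition[y]):
            cum += p
            if r < cum:
                noisy.append(j)
                break
        else:  # guard against floating-point shortfall in the row sum
            noisy.append(len(transition[y]) - 1)
    return noisy

def estimate_transition(clean, noisy, num_classes):
    """Row-normalized count matrix: empirical estimate of the corruption pattern."""
    counts = [[0] * num_classes for _ in range(num_classes)]
    for y, z in zip(clean, noisy):
        counts[y][z] += 1
    return [[c / max(1, sum(row)) for c in row] for row in counts]

# Demo: corrupt 20k binary labels with a known matrix, then recover it.
rng = random.Random(0)
true_T = [[0.9, 0.1], [0.2, 0.8]]
clean = [rng.randrange(2) for _ in range(20000)]
noisy = corrupt_labels(clean, true_T, rng)
est_T = estimate_transition(clean, noisy, 2)
```

With 20k samples the empirical estimate lands within a few hundredths of the true matrix; the hard part, which the paper addresses, is doing this jointly with classifier training when no clean labels exist.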

Semi-Autonomous Teleoperation via Learning Non-Prehensile Manipulation Skills

no code implementations • 27 Sep 2021 • Sangbeom Park, Yoonbyung Chai, Sunghyun Park, Jeongeun Park, Kyungjae Lee, Sungjoon Choi

In this paper, we present a semi-autonomous teleoperation framework for a pick-and-place task using an RGB-D sensor.
