Search Results for author: Hyemin Ahn

Found 12 papers, 6 papers with code

Can only LLMs do Reasoning?: Potential of Small Language Models in Task Planning

1 code implementation • 5 Apr 2024 • Gawon Choi, Hyemin Ahn

This leads us to pose a question: if small LMs can be trained to reason in chains within a single domain, could even small LMs be good task planners for robots?

A Unified Masked Autoencoder with Patchified Skeletons for Motion Synthesis

no code implementations • 14 Aug 2023 • Esteve Valls Mascaro, Hyemin Ahn, Dongheui Lee

Experimental results show that our model successfully forecasts human motion on the Human3.6M dataset.

Motion Synthesis

Can We Use Diffusion Probabilistic Models for 3D Motion Prediction?

no code implementations • 28 Feb 2023 • Hyemin Ahn, Esteve Valls Mascaro, Dongheui Lee

Following the promising results that many researchers have observed from recent diffusion probabilistic models, their effectiveness in image generation is now being actively studied.

Image Generation · Motion Prediction

Robust Human Motion Forecasting using Transformer-based Model

no code implementations • 16 Feb 2023 • Esteve Valls Mascaro, Shuo Ma, Hyemin Ahn, Dongheui Lee

In addition, our model is tested in conditions where the human motion is severely occluded, demonstrating its robustness in reconstructing and predicting 3D human motion in a highly noisy environment.

Motion Forecasting

Intention-Conditioned Long-Term Human Egocentric Action Forecasting

1 code implementation • 25 Jul 2022 • Esteve Valls Mascaro, Hyemin Ahn, Dongheui Lee

Our framework first extracts two levels of human action information over the N observed videos through a Hierarchical Multi-task MLP Mixer (H3M).

Action Anticipation · Long Term Action Anticipation

Self-Supervised Motion Retargeting with Safety Guarantee

no code implementations • 11 Mar 2021 • Sungjoon Choi, Min Jae Song, Hyemin Ahn, Joohyung Kim

In this paper, we present self-supervised shared latent embedding (S3LE), a data-driven motion retargeting method that enables the generation of natural motions in humanoid robots from motion capture data or RGB videos.

Motion Retargeting · Position +1

Refining Action Segmentation With Hierarchical Video Representations

1 code implementation • ICCV 2021 • Hyemin Ahn, Dongheui Lee

In this paper, we propose Hierarchical Action Segmentation Refiner (HASR), which can refine temporal action segmentation results from various models by understanding the overall context of a given video in a hierarchical way.

Action Segmentation · Segmentation

Visually Grounding Language Instruction for History-Dependent Manipulation

no code implementations • 16 Dec 2020 • Hyemin Ahn, Obin Kwon, Kyoungdo Kim, Jaeyeon Jeong, Howoong Jun, Hongjung Lee, Dongheui Lee, Songhwai Oh

We also present a relevant dataset and a model that can serve as a baseline, and show that our model trained on the proposed dataset can be applied to the real world via CycleGAN.

Generative Autoregressive Networks for 3D Dancing Move Synthesis from Music

no code implementations • 11 Nov 2019 • Hyemin Ahn, Jaehun Kim, Kihyun Kim, Songhwai Oh

The trained dance pose generator, a generative autoregressive model, is able to synthesize a dance sequence longer than 5,000 pose frames.

Interactive Text2Pickup Network for Natural Language based Human-Robot Collaboration

2 code implementations • 28 May 2018 • Hyemin Ahn, Sungjoon Choi, Nuri Kim, Geonho Cha, Songhwai Oh

To handle the inherent ambiguity of human language commands, the network generates a suitable question that can resolve the ambiguity.

Object · Position

Text2Action: Generative Adversarial Synthesis from Language to Action

1 code implementation • 15 Oct 2017 • Hyemin Ahn, Timothy Ha, Yunho Choi, Hwiyeon Yoo, Songhwai Oh

We demonstrate that the network can generate human-like actions which can be transferred to a Baxter robot, such that the robot performs an action based on a provided sentence.

Generative Adversarial Network · Sentence
