no code implementations • 12 Nov 2021 • Dongchan Kim, Kunsoo Huh
This paper presents a hybrid motion planning strategy that combines a deep generative network with a conventional motion planning method.
no code implementations • 8 Apr 2020 • Hayoung Kim, Dongchan Kim, Gihoon Kim, Jeongmin Cho, Kunsoo Huh
This paper presents an online-capable deep learning model for probabilistic vehicle trajectory prediction.
no code implementations • 13 Dec 2018 • Jihwan Lee, Dongchan Kim, Ruhi Sarikaya, Young-Bum Kim
Our proposed model learns the vector representation of intents based on the slots tied to these intents by aggregating the representations of the slots.
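The idea of deriving an intent representation from its associated slots can be sketched roughly as follows. All slot names, vectors, and the choice of averaging as the aggregation are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

# Hypothetical slot embeddings (invented for illustration).
slot_vectors = {
    "city":    np.array([1.0, 0.0, 0.0]),
    "date":    np.array([0.0, 1.0, 0.0]),
    "airline": np.array([0.0, 0.0, 1.0]),
}

# Hypothetical mapping from intents to the slots tied to them.
intent_slots = {
    "book_flight":   ["city", "date", "airline"],
    "check_weather": ["city", "date"],
}

def intent_representation(intent: str) -> np.ndarray:
    """Aggregate (here: average) the embeddings of an intent's slots."""
    vecs = [slot_vectors[s] for s in intent_slots[intent]]
    return np.mean(vecs, axis=0)

print(intent_representation("check_weather"))  # average of "city" and "date"
```

In this sketch, intents sharing slots end up with nearby vectors, which is the intuition behind tying intent representations to their slots.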
no code implementations • ACL 2018 • Young-Bum Kim, Dongchan Kim, Anjishnu Kumar, Ruhi Sarikaya
In this paper, we explore the task of mapping spoken language utterances to one of thousands of natural language understanding domains in intelligent personal digital assistants (IPDAs).
no code implementations • NAACL 2018 • Young-Bum Kim, Dongchan Kim, Joo-Kyung Kim, Ruhi Sarikaya
Intelligent personal digital assistants (IPDAs), a popular real-life application with spoken language understanding capabilities, can cover potentially thousands of overlapping domains for natural language understanding, and the task of finding the best domain to handle an utterance becomes a challenging problem on a large scale.
no code implementations • ACL 2017 • Young-Bum Kim, Karl Stratos, Dongchan Kim
When given domain K + 1, our model uses a weighted combination of the K domain experts' feedback along with its own opinion to make predictions on the new domain.
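Combining K experts' feedback with the model's own opinion via learned weights can be sketched as below. The softmax-normalized weighting and all numeric values are illustrative assumptions; the paper's exact attention/weighting scheme may differ:

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Hypothetical class distributions predicted by K = 3 domain experts
# for one utterance (rows sum to 1; values invented for illustration).
expert_probs = np.array([
    [0.7, 0.3],
    [0.6, 0.4],
    [0.2, 0.8],
])

# The new domain's own (K + 1)-th prediction.
own_probs = np.array([0.5, 0.5])

# Unnormalized scores over the K experts plus the model's own opinion;
# in practice these would be learned, e.g. via an attention mechanism.
scores = np.array([1.0, 0.5, 0.2, 2.0])
weights = softmax(scores)

# Weighted combination of expert feedback and own opinion.
combined = weights[:3] @ expert_probs + weights[3] * own_probs
print(combined)
```

Because the weights sum to 1 and each input is a distribution, the combined output is itself a valid class distribution.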
no code implementations • ACL 2017 • Young-Bum Kim, Karl Stratos, Dongchan Kim
Both cause a distribution mismatch between training and evaluation, leading to a model that overfits the flawed training data and performs poorly on the test data.