1 code implementation • 25 Apr 2024 • Huiyu Zhai, Mo Chen, Xingxing Yang, Gusheng Kang
The NIR-to-RGB spectral domain translation is a formidable task due to the inherent ambiguity of the spectral mapping between NIR inputs and RGB outputs.
no code implementations • 12 Apr 2024 • Frank J. Jiang, Kaj Munhoz Arfvidsson, Chong He, Mo Chen, Karl H. Johansson
By ensuring that a temporal logic tree has no leaking corners, we know that it correctly verifies the existence of control policies that satisfy the specified task.
1 code implementation • 20 Mar 2024 • Yimeng Fan, Pedram Agand, Mo Chen, Edward J. Park, Allison Kennedy, Chanwoo Bae
The maritime industry's continuous commitment to sustainability has led to a dedicated exploration of methods to reduce vessel fuel consumption.
1 code implementation • 19 Oct 2023 • Pedram Agand, Alexey Iskrov, Mo Chen
Nowadays, transportation networks face the challenge of sub-optimal control policies that can have adverse effects on human health and the environment, and can contribute to traffic congestion.
1 code implementation • 19 Oct 2023 • Pedram Agand, Allison Kennedy, Trevor Harris, Chanwoo Bae, Mo Chen, Edward J Park
As the importance of eco-friendly transportation increases, providing an efficient approach for marine vessel operation is essential.
1 code implementation • 19 Oct 2023 • Pedram Agand, Mohammad Mahdavian, Manolis Savva, Mo Chen
In end-to-end autonomous driving, the utilization of existing sensor fusion techniques and navigational control methods for imitation learning proves inadequate in challenging situations that involve numerous dynamic agents.
no code implementations • 28 Sep 2023 • Xubo Lyu, Hanyang Hu, Seth Siriya, Ye Pu, Mo Chen
We present task-oriented Koopman-based control that utilizes end-to-end reinforcement learning and contrastive encoder to simultaneously learn the Koopman latent embedding, operator, and associated linear controller within an iterative loop.
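The core idea — learn linear dynamics in a latent space, then control linearly there — can be sketched in miniature. The toy below fits scalar latent dynamics z' = k·z + b·u by least squares and inverts the fit for a controller; it is a stand-in, not the paper's method, which learns the embedding jointly with a contrastive encoder inside an end-to-end RL loop. The names `fit_latent_dynamics` and `controller` are illustrative.

```python
def fit_latent_dynamics(zs, us, z_nexts):
    """Least-squares fit of scalar latent dynamics z' = k*z + b*u
    via the 2x2 normal equations (stand-in for learning a Koopman
    operator from latent transition data)."""
    szz = sum(z * z for z in zs)
    szu = sum(z * u for z, u in zip(zs, us))
    suu = sum(u * u for u in us)
    rz = sum(z * zn for z, zn in zip(zs, z_nexts))
    ru = sum(u * zn for u, zn in zip(us, z_nexts))
    det = szz * suu - szu * szu
    return (rz * suu - szu * ru) / det, (szz * ru - szu * rz) / det

# data generated by true dynamics z' = 0.9 z + 0.5 u
zs, us = [1.0, 0.0, 2.0, -1.0], [0.0, 1.0, 1.0, 2.0]
z_nexts = [0.9 * z + 0.5 * u for z, u in zip(zs, us)]
k, b = fit_latent_dynamics(zs, us, z_nexts)

def controller(z):
    # because the latent dynamics are linear, choosing u = -(k/b) z
    # drives the latent state to zero in one step
    return -(k / b) * z

z = 1.5
z_next = k * z + b * controller(z)  # should be (numerically) zero
```

The payoff the paper targets is exactly this: once the dynamics are linear in the latent space, controller synthesis becomes a linear-systems problem.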
1 code implementation • 22 Sep 2023 • Hanyang Hu, Minh Bui, Mo Chen
Utilizing this result and previous results for the 1 vs. 1 game, we further propose solving the general multi-agent reach-avoid game by determining the defender assignments that can maximize the number of attackers captured via a Mixed Integer Program (MIP).
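A simplified view of the assignment step: if the 1 vs. 1 analysis tells us which attackers each defender is guaranteed to capture, then maximizing the number of attackers captured (with each defender handling one attacker) reduces to maximum bipartite matching. The sketch below uses Kuhn's augmenting-path algorithm as a stand-in for the paper's MIP formulation; `can_capture` is an assumed precomputed input from the pairwise games.

```python
def max_captures(can_capture, n_attackers):
    """Maximum bipartite matching via Kuhn's augmenting paths.
    can_capture[d] lists the attackers defender d is guaranteed to
    capture (assumed output of the 1 vs. 1 reach-avoid analysis).
    Returns the max number of attackers captured."""
    match = [-1] * n_attackers  # attacker -> assigned defender

    def try_assign(d, seen):
        for a in can_capture[d]:
            if a in seen:
                continue
            seen.add(a)
            # attacker a is free, or its defender can be re-routed
            if match[a] == -1 or try_assign(match[a], seen):
                match[a] = d
                return True
        return False

    return sum(try_assign(d, set()) for d in range(len(can_capture)))

# defenders 0 and 1 both cover attacker 0; rerouting still captures all 3
captured = max_captures([[0, 1], [0], [2]], n_attackers=3)
```

The MIP in the paper can encode richer constraints (e.g., one defender covering multiple attackers); plain matching only covers the one-to-one case.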
no code implementations • 11 Nov 2022 • Xinyu Zhao, Razvan C. Fetecau, Mo Chen
Our proposed network architecture includes the incorporation of LSTM and self-attention, which allows the trained policy to adapt to a variable number of agents.
Multi-agent Reinforcement Learning • Reinforcement Learning (RL)
no code implementations • 23 Oct 2022 • Pedram Agand, Michael Chang, Mo Chen
However, these cues can be misleading for objects with wide-range variation or adversarial situations, which is a challenging aspect of object-agnostic distance estimation.
1 code implementation • 23 Oct 2022 • Pedram Agand, Mo Chen, Hamid D. Taghirad
We suggest the Adaptive Recursive Markov Chain Monte Carlo (ARMCMC) method, which eliminates the shortcomings of conventional online techniques while computing the entire probability density function of model parameters.
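The Markov chain Monte Carlo core can be illustrated with a plain Metropolis-Hastings chain on a one-parameter model; this is a simplified stand-in, since ARMCMC additionally adapts the proposal and handles streaming data recursively rather than rerunning from scratch. The model y = a·x + noise and all names below are illustrative.

```python
import math
import random

def log_lik(a, data, sigma=0.1):
    """Gaussian log-likelihood for the toy model y = a*x + noise."""
    return sum(-0.5 * ((y - a * x) / sigma) ** 2 for x, y in data)

def mh_chain(data, a0=0.0, n_iter=2000, step=0.05, seed=0):
    """Plain Metropolis-Hastings over the single parameter a.
    In a recursive/online setting, the chain would be warm-started
    from the previous estimate as each new observation arrives."""
    rng = random.Random(seed)
    a, ll = a0, log_lik(a0, data)
    samples = []
    for _ in range(n_iter):
        cand = a + rng.gauss(0.0, step)
        ll_cand = log_lik(cand, data)
        # accept uphill moves always, downhill moves with MH probability
        if ll_cand >= ll or rng.random() < math.exp(ll_cand - ll):
            a, ll = cand, ll_cand
        samples.append(a)
    return samples

data = [(x, 2.0 * x) for x in range(1, 6)]  # noise-free data, true a = 2
samples = mh_chain(data)
estimate = sum(samples[1000:]) / 1000  # posterior mean after burn-in
```

Unlike a point estimator, the retained samples approximate the entire posterior density of the parameter, which is the property the paper exploits.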
1 code implementation • 15 Sep 2022 • Mohammad Mahdavian, Payam Nikdel, Mahdi TaherAhmadi, Mo Chen
The proposed architecture divides human motion prediction into two parts: 1) the human trajectory, i.e., the hip joint's 3D position over time, and 2) the human pose, i.e., the 3D positions of all other joints over time with respect to a fixed hip joint.
no code implementations • 13 Sep 2022 • Payam Nikdel, Mohammad Mahdavian, Mo Chen
We show that our system outperforms the state-of-the-art in human motion prediction, while it can predict diverse multi-motion future trajectories with hip movements.
no code implementations • 15 Aug 2022 • Saba Akhyani, Mehryar Abbasi Boroujeni, Mo Chen, Angelica Lim
Robots and artificial agents that interact with humans should be able to do so without bias and inequity, but facial perception systems have notoriously been found to work more poorly for certain groups of people than others.
1 code implementation • 12 Apr 2022 • Minh Bui, George Giovanis, Mo Chen, Arrvindh Shriraman
This paper introduces OptimizedDP, a high-performance software library that solves time-dependent Hamilton-Jacobi (HJ) partial differential equations (PDEs), computes backward reachable sets with applications in robotics, and implements value iteration for continuous state-action space Markov Decision Processes (MDPs), all while leveraging the user-friendliness of Python for problem specification without sacrificing the efficiency of the core computation.
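The MDP side of the library can be illustrated with the tabular value-iteration recursion that such implementations accelerate. The sketch below is a generic dynamic-programming loop, not OptimizedDP's actual API; the transition/reward encoding is an assumption for the example.

```python
def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Tabular value iteration.
    P[s][a] = list of (prob, next_state); R[s][a] = immediate reward.
    Iterates the Bellman optimality backup until convergence."""
    n = len(P)
    V = [0.0] * n
    while True:
        V_new = [max(R[s][a] + gamma * sum(p * V[s2] for p, s2 in P[s][a])
                     for a in range(len(P[s])))
                 for s in range(n)]
        if max(abs(x - y) for x, y in zip(V, V_new)) < tol:
            return V_new
        V = V_new

# two-state example: from state 0, action 1 moves to absorbing state 1
# with reward 1; all other rewards are 0
P = [[[(1.0, 0)], [(1.0, 1)]], [[(1.0, 1)]]]
R = [[0.0, 1.0], [0.0]]
V = value_iteration(P, R)
```

OptimizedDP's contribution is doing these sweeps (and the HJ PDE analogues) over large grids efficiently; the recursion itself is unchanged.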
no code implementations • 29 Mar 2022 • Xubo Lyu, Amin Banitalebi-Dehkordi, Mo Chen, Yong Zhang
In complex problems with large state and action spaces, it is advantageous to extend MAPG methods to use higher-level actions, also known as options, to improve the policy search efficiency.
Hierarchical Reinforcement Learning • Multi-agent Reinforcement Learning • +3
no code implementations • 29 Sep 2021 • Pedram Agand, Mo Chen, Hamid Taghirad
Our method shows at least a 70% improvement in parameter point-estimation accuracy and an approximately 55% reduction in tracking error of the value of interest compared to recursive least squares and conventional MCMC.
1 code implementation • 29 Sep 2021 • Payam Jome Yazdian, Mo Chen, Angelica Lim
We propose a vector-quantized variational autoencoder structure as well as training techniques to learn a rigorous representation of gesture sequences.
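The vector-quantization step at the heart of a VQ-VAE is simple to state: each continuous encoder output is snapped to its nearest codebook entry, yielding a discrete token. The sketch below shows only that lookup, not the encoder, decoder, or training losses; all names are illustrative.

```python
def quantize(latents, codebook):
    """VQ step of a VQ-VAE: map each encoder output vector to the
    index of its nearest codebook entry (squared-L2 distance).
    The resulting token sequence is the discrete representation."""
    def sqdist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))

    tokens = []
    for z in latents:
        idx = min(range(len(codebook)), key=lambda k: sqdist(z, codebook[k]))
        tokens.append(idx)
    return tokens

# two latent vectors snap to the two codebook entries
codebook = [[0.0, 0.0], [1.0, 1.0]]
tokens = quantize([[0.1, 0.2], [0.9, 0.8]], codebook)
```

In training, the non-differentiable `min` is typically bypassed with a straight-through gradient estimator plus codebook and commitment losses; those details are omitted here.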
no code implementations • 1 Jan 2021 • Pedram Agand, Mo Chen, Hamid D. Taghirad
We demonstrate our approach on a challenging benchmark: estimation of parameters in the Hunt-Crossley dynamic model, which models both on/off contact forces applied to soft materials.
no code implementations • 5 Nov 2020 • Payam Nikdel, Richard Vaughan, Mo Chen
Our deep RL module implicitly estimates human trajectory and produces short-term navigational goals to guide the robot.
no code implementations • 4 Nov 2020 • Xubo Lyu, Site Li, Seth Siriya, Ye Pu, Mo Chen
At the other end of the spectrum, "classical methods" such as optimal control generate solutions without collecting data, but they assume that an accurate model of the system and environment is known, and they are mostly limited to problems with low-dimensional state spaces.
no code implementations • 28 Oct 2020 • Zhitian Zhang, Jimin Rhim, Taher Ahmadi, Kefan Yang, Angelica Lim, Mo Chen
This article describes a dataset collected in a set of experiments that involves human participants and a robot.
no code implementations • L4DC 2020 • Anjian Li, Somil Bansal, Georgios Giovanis, Varun Tolani, Claire Tomlin, Mo Chen
In Bansal et al. (2019), a novel visual navigation framework that combines learning-based and model-based approaches has been proposed.
no code implementations • WS 2018 • Kaiyin Zhou, Sheng Zhang, Xiangyu Meng, Qi Luo, Yuxing Wang, Ke Ding, Yukun Feng, Mo Chen, Kevin Cohen, Jingbo Xia
Sequence labeling of biomedical entities, e.g., side effects or phenotypes, has been a long-term task in the BioNLP and MedNLP communities.
1 code implementation • 16 Jun 2018 • Boris Ivanovic, James Harrison, Apoorva Sharma, Mo Chen, Marco Pavone
Our Backward Reachability Curriculum (BaRC) begins policy training from states that require a small number of actions to accomplish the task, and expands the initial state distribution backwards in a dynamically-consistent manner once the policy optimization algorithm demonstrates sufficient performance.
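The expanding-start-set loop can be sketched on a toy one-dimensional task. Here the "policy" is a fixed greedy walk toward the goal, "training" is modeled as the reachable horizon growing each round, and the backward expansion is a unit step; in BaRC proper, the expansion uses approximate backward reachable sets from HJ analysis and the policy is trained by an RL algorithm. Everything below is an illustrative stand-in.

```python
def rollout_succeeds(start, horizon, goal=0):
    """Greedy policy on the integer line: step one unit toward the
    goal each time step; succeed if the goal is reached in time."""
    pos = start
    for _ in range(horizon):
        if pos == goal:
            return True
        pos += 1 if pos < goal else -1
    return pos == goal

def barc_curriculum(rounds=5):
    """Skeleton of a backward curriculum: start states begin adjacent
    to the goal and are pushed one step further back each time the
    current start set is fully solved."""
    frontier = 1  # current max |start| in the initial-state distribution
    horizon = 1   # stand-in for the policy's competence after training
    for _ in range(rounds):
        starts = range(-frontier, frontier + 1)
        if all(rollout_succeeds(s, horizon) for s in starts):
            frontier += 1  # expand backward (BaRC uses HJ reachability here)
        horizon += 1       # stand-in: training on these starts improves policy
    return frontier

final_frontier = barc_curriculum()
```

The point of the real algorithm is the same as in this toy: the start distribution only expands in a dynamically consistent way, and only once the policy has mastered the current starts.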
1 code implementation • 21 Sep 2017 • Somil Bansal, Mo Chen, Sylvia Herbert, Claire J. Tomlin
Hamilton-Jacobi (HJ) reachability analysis is an important formal verification method for guaranteeing performance and safety properties of dynamical systems; it has been applied to many small-scale systems in the past decade.
Systems and Control • Dynamical Systems • Optimization and Control
no code implementations • 21 Mar 2017 • Sylvia L. Herbert, Mo Chen, SooJean Han, Somil Bansal, Jaime F. Fisac, Claire J. Tomlin
We propose a new algorithm, FaSTrack: Fast and Safe Tracking for high-dimensional systems.
Robotics
no code implementations • 10 Nov 2016 • Frank Jiang, Glen Chou, Mo Chen, Claire J. Tomlin
To sidestep the curse of dimensionality when computing solutions to Hamilton-Jacobi-Bellman partial differential equations (HJB PDEs), we propose an algorithm that leverages a neural network to approximate the value function.
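A minimal illustration of the approximation idea: fit a tiny neural network to sampled value-function data instead of storing the value on a grid. The target V(x) = |x| (the minimum-time value for a unit-speed single integrator reaching the origin) and all hyperparameters below are assumptions for the example; the paper's training signal comes from the HJB PDE itself, not supervised value samples.

```python
import math
import random

def train_value_net(samples, hidden=16, lr=0.01, epochs=300, seed=0):
    """Fit a 1-hidden-layer tanh network V_theta(x) to sampled
    value-function data by SGD with manual backprop. Returns the
    mean-squared error before and after training."""
    rng = random.Random(seed)
    w1 = [rng.uniform(-1, 1) for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [rng.uniform(-1, 1) for _ in range(hidden)]
    b2 = 0.0

    def forward(x):
        h = [math.tanh(w1[j] * x + b1[j]) for j in range(hidden)]
        return h, b2 + sum(w2[j] * h[j] for j in range(hidden))

    def mse():
        return sum((forward(x)[1] - v) ** 2 for x, v in samples) / len(samples)

    start_err = mse()
    for _ in range(epochs):
        for x, v in samples:
            h, y = forward(x)
            g = 2.0 * (y - v)  # dLoss/dy for squared error
            for j in range(hidden):
                gh = g * w2[j] * (1.0 - h[j] ** 2)  # backprop through tanh
                w2[j] -= lr * g * h[j]
                w1[j] -= lr * gh * x
                b1[j] -= lr * gh
            b2 -= lr * g
    return start_err, mse()

# sample the toy value function V(x) = |x| on x in [-2, 2]
samples = [(x / 2, abs(x / 2)) for x in range(-4, 5)]
start_err, end_err = train_value_net(samples)
```

The network's storage cost is independent of any grid resolution, which is the property that lets this approach scale where grid-based HJB solvers cannot.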