no code implementations • 18 Jan 2024 • Tochukwu Elijah Ogri, Muzaffar Qureshi, Zachary I. Bell, Kristy Waters, Rushikesh Kamalapurkar
This paper presents an integral concurrent learning (ICL)-based observer for a monocular camera to accurately estimate the Euclidean distance to features on a stationary object, under the restriction that state information is unavailable.
no code implementations • 24 Jul 2023 • Jared Town, Zachary Morrison, Rushikesh Kamalapurkar
The observer is shown to converge to one of the equivalent solutions of the IRL problem.
no code implementations • 4 Apr 2023 • Tochukwu Elijah Ogri, Zachary I. Bell, Rushikesh Kamalapurkar
Real-world control applications in complex and uncertain environments require adaptability to handle model uncertainties and robustness against disturbances.
1 code implementation • 20 Mar 2023 • Zachary Morrison, Benjamin P. Russo, Yingzhao Lian, Rushikesh Kamalapurkar
The reliable operation of automatic systems is heavily dependent on the ability to detect faults in the underlying dynamical system.
no code implementations • 28 Oct 2022 • Jared Town, Zachary Morrison, Rushikesh Kamalapurkar
A key challenge in solving the deterministic inverse reinforcement learning (IRL) problem online and in real-time is the existence of multiple solutions.
no code implementations • 13 Oct 2022 • Tochukwu Elijah Ogri, S. M. Nahid Mahmud, Zachary I. Bell, Rushikesh Kamalapurkar
Real-world control applications in complex and uncertain environments require adaptability to handle model uncertainties and robustness against disturbances.
Tasks: Model-based Reinforcement Learning, Reinforcement Learning (+1)
no code implementations • 30 May 2022 • Moad Abudia, Joel A. Rosenfeld, Rushikesh Kamalapurkar
This paper concerns the identification of uncontrolled or closed-loop nonlinear systems using a set of trajectories that are generated by the system in a domain of attraction.
no code implementations • 4 Apr 2022 • S M Nahid Mahmud, Moad Abudia, Scott A Nivison, Zachary I. Bell, Rushikesh Kamalapurkar
Safe model-based reinforcement learning techniques based on a barrier transformation have recently been developed to address this problem.
Tasks: Model-based Reinforcement Learning, Reinforcement Learning (+1)
no code implementations • 1 Oct 2021 • S M Nahid Mahmud, Moad Abudia, Scott A Nivison, Zachary I. Bell, Rushikesh Kamalapurkar
The ability to learn and execute optimal control policies safely is critical to the realization of complex autonomy, especially where task restarts are not available and/or the systems are safety-critical.
Tasks: Model-based Reinforcement Learning, Reinforcement Learning (+1)
no code implementations • 6 Jun 2021 • Joel A. Rosenfeld, Rushikesh Kamalapurkar
This manuscript is aimed at addressing several long-standing limitations of dynamic mode decompositions in the application of Koopman analysis.
no code implementations • 31 May 2021 • Efrain Gonzalez, Moad Abudia, Michael Jury, Rushikesh Kamalapurkar, Joel A. Rosenfeld
This manuscript revisits theoretical assumptions concerning dynamic mode decomposition (DMD) of Koopman operators, including the existence of lattices of eigenfunctions, common eigenfunctions between Koopman operators, and boundedness and compactness of Koopman operators.
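The two entries above concern dynamic mode decomposition (DMD) of Koopman operators. As background, a minimal least-squares DMD sketch on a known linear system (not the kernel-based constructions studied in the papers): the matrix `A_true`, the initial state, and the trajectory length are all hypothetical choices for illustration.

```python
import numpy as np

# Generate a single trajectory x_{k+1} = A_true x_k of a known 2x2 system.
rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.2],
                   [0.0, 0.8]])
x = rng.standard_normal(2)
snapshots = [x]
for _ in range(20):
    x = A_true @ x
    snapshots.append(x)

# Paired snapshot matrices: columns of Y are the one-step advances of X.
X = np.column_stack(snapshots[:-1])
Y = np.column_stack(snapshots[1:])

# Least-squares fit of the linear operator: A_dmd = Y X^+
A_dmd = Y @ np.linalg.pinv(X)

# The DMD eigenvalues recover the spectrum of the true operator.
dmd_eigvals = np.sort(np.linalg.eigvals(A_dmd).real)
```

For nonlinear systems the same regression is applied to observables of the state, which is where the Koopman-operator assumptions revisited in the manuscript come into play.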
no code implementations • 31 May 2021 • Moad Abudia, Tejasvi Channagiri, Joel A. Rosenfeld, Rushikesh Kamalapurkar
As the fundamental basis elements leveraged in approximation, higher-order control occupation kernels represent iterated integration after multiplication by a given controller in a vector-valued reproducing kernel Hilbert space.
no code implementations • 7 Jan 2021 • Joel A. Rosenfeld, Rushikesh Kamalapurkar
A given feedback controller is represented through a multiplication operator and a composition of the control Liouville operator and the multiplication operator is used to express the nonlinear closed-loop system as a linear total derivative operator on RKHSs.
Subjects: Optimization and Control; Functional Analysis (MSC: 37N35, 93B30)
no code implementations • 7 Jan 2021 • Benjamin P. Russo, Rushikesh Kamalapurkar, Dongsik Chang, Joel A. Rosenfeld
The goal of motion tomography is to recover the description of a vector flow field using information about the trajectory of a sensing unit.
Subjects: Optimization and Control; Functional Analysis (MSC: 93-08, 46E22)
no code implementations • 7 Jan 2021 • Joel A. Rosenfeld, Rushikesh Kamalapurkar, Benjamin P. Russo
Conventionally, data-driven identification and control problems for higher-order dynamical systems are solved by augmenting the system state by the derivatives of the output to formulate first-order dynamical systems in higher dimensions.
Subjects: Optimization and Control; Functional Analysis (MSC: 93-08, 46E22)
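The conventional state-augmentation step that the entry above contrasts itself with can be sketched in a few lines: a second-order system is rewritten as a first-order system by stacking the output and its derivative. The toy system y'' = -y and all names below are hypothetical choices for illustration.

```python
import numpy as np

# Augmented state z = [y, y']; the second-order ODE y'' = -y becomes the
# first-order system z' = [z[1], -z[0]].
def f_aug(z):
    return np.array([z[1], -z[0]])

# Forward-Euler integration of the augmented first-order system.
def simulate(z0, dt=1e-4, steps=10_000):
    z = np.array(z0, dtype=float)
    for _ in range(steps):
        z = z + dt * f_aug(z)
    return z

# From y(0) = 1, y'(0) = 0 the exact solution is y(t) = cos(t),
# so at t = 1 the augmented state approximates [cos(1), -sin(1)].
z_final = simulate([1.0, 0.0], dt=1e-4, steps=10_000)
```

The paper's point is that this augmentation can be avoided by working with higher-order kernels directly; the sketch only shows the baseline being avoided.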
no code implementations • 3 Nov 2020 • Ryan Self, Kevin Coleman, He Bai, Rushikesh Kamalapurkar
In this paper, a novel approach to the output-feedback inverse reinforcement learning (IRL) problem is developed by casting the IRL problem for linear systems with quadratic cost functions as a state estimation problem.
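To make the linear-quadratic IRL setting concrete, a scalar toy sketch (illustrative only, not the paper's observer-based method): the forward problem computes the optimal feedback gain from a known cost, and the inverse problem recovers the unknown state weight from the observed gain. All parameter values are hypothetical.

```python
# Known scalar dynamics x+ = a x + b u and known input weight r;
# the state weight q_true is treated as unknown in the inverse step.
a, b, r = 0.95, 0.5, 1.0
q_true = 2.0

# Forward problem: solve the scalar discrete-time Riccati equation
# p = q + a^2 p - (a b p)^2 / (r + b^2 p) by fixed-point iteration.
p = q_true
for _ in range(500):
    p = q_true + a**2 * p - (a * b * p) ** 2 / (r + b**2 * p)

# Optimal feedback gain, u = -k x.
k = a * b * p / (r + b**2 * p)

# Inverse problem: given the observed gain k (and known a, b, r),
# invert the gain formula for p, then the Riccati equation for q.
p_hat = k * r / (b * (a - b * k))
q_hat = p_hat * (1.0 - a * (a - b * k))
```

In the scalar case the inversion is exact; the paper's contribution is handling the multivariable, output-feedback version of this recovery as state estimation.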
no code implementations • 24 Jul 2020 • S M Nahid Mahmud, Scott A Nivison, Zachary I. Bell, Rushikesh Kamalapurkar
In recent years, reinforcement learning approaches that rely on persistent excitation have been combined with a barrier transformation to learn the optimal control policies under state constraints.
Tasks: Model-based Reinforcement Learning, Reinforcement Learning (+2)
no code implementations • 9 Feb 2015 • Rushikesh Kamalapurkar, Joel A. Rosenfeld, Warren E. Dixon
In this paper, the infinite-horizon optimal regulation problem is solved online for a deterministic control-affine nonlinear dynamical system using the state-following (StaF) kernel method to approximate the value function.
Tasks: Model-based Reinforcement Learning, Reinforcement Learning (+1)