Search Results for author: Rushikesh Kamalapurkar

Found 18 papers, 1 paper with code

An adaptive optimal control approach to monocular depth observability maximization

no code implementations • 18 Jan 2024 • Tochukwu Elijah Ogri, Muzaffar Qureshi, Zachary I. Bell, Kristy Waters, Rushikesh Kamalapurkar

This paper presents an integral concurrent learning (ICL)-based observer for a monocular camera to accurately estimate the Euclidean distance to features on a stationary object, under the restriction that state information is unavailable.

State and Parameter Estimation for Affine Nonlinear Systems

no code implementations • 4 Apr 2023 • Tochukwu Elijah Ogri, Zachary I. Bell, Rushikesh Kamalapurkar

Real-world control applications in complex and uncertain environments require adaptability to handle model uncertainties and robustness against disturbances.

Model-based Reinforcement Learning • reinforcement-learning

Fault Detection via Occupation Kernel Principal Component Analysis

1 code implementation • 20 Mar 2023 • Zachary Morrison, Benjamin P. Russo, Yingzhao Lian, Rushikesh Kamalapurkar

The reliable operation of automatic systems is heavily dependent on the ability to detect faults in the underlying dynamical system.

Fault Detection
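
Since this is the one entry with released code, a rough picture of the pipeline helps: an occupation kernel assigns each trajectory the feature Γ_θ(x) = ∫ k(x, θ(t)) dt, so inner products between trajectories reduce to double integrals of the base kernel, and standard kernel PCA then runs on that Gram matrix. The sketch below is a minimal illustration assuming uniformly sampled trajectories and a Gaussian kernel; it is not the authors' released implementation, and all names are hypothetical.

```python
# Minimal occupation-kernel PCA sketch (illustrative, not the paper's code).
import numpy as np

def rbf(X, Y, sigma=1.0):
    # Pairwise Gaussian kernel between sample matrices X (m, d) and Y (n, d).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def occupation_gram(trajs, dt, sigma=1.0):
    # G[i, j] approximates the double integral of k(theta_i(s), theta_j(t))
    # over both time axes, via a Riemann sum on uniform samples.
    n = len(trajs)
    G = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            G[i, j] = rbf(trajs[i], trajs[j], sigma).sum() * dt * dt
    return G

def kpca_scores(G, n_components=2):
    # Standard kernel PCA on the centered occupation-kernel Gram matrix.
    n = G.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    Gc = H @ G @ H
    vals, vecs = np.linalg.eigh(Gc)
    idx = np.argsort(vals)[::-1][:n_components]
    return Gc @ vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))

# Toy usage: nine nominal circular trajectories plus one faulty trajectory
# with a flattened second component.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
nominal = [np.c_[np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)] +
           0.01 * rng.standard_normal((50, 2)) for _ in range(9)]
faulty = [np.c_[np.sin(2 * np.pi * t), 0.5 * np.cos(2 * np.pi * t)]]
scores = kpca_scores(occupation_gram(nominal + faulty, dt=t[1] - t[0]))
print(scores[-1], scores[:-1].mean(axis=0))  # faulty trajectory stands apart
```

A trajectory whose score falls far from the nominal cluster is flagged as a candidate fault.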

Nonuniqueness and Convergence to Equivalent Solutions in Observer-based Inverse Reinforcement Learning

no code implementations • 28 Oct 2022 • Jared Town, Zachary Morrison, Rushikesh Kamalapurkar

A key challenge in solving the deterministic inverse reinforcement learning (IRL) problem online and in real time is the existence of multiple solutions.

reinforcement-learning • Reinforcement Learning (RL)

Carleman Lifting for Nonlinear System Identification with Guaranteed Error Bounds

no code implementations • 30 May 2022 • Moad Abudia, Joel A. Rosenfeld, Rushikesh Kamalapurkar

This paper concerns the identification of uncontrolled or closed-loop nonlinear systems using a set of trajectories that are generated by the system in a domain of attraction.
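
For intuition on the lifting itself: Carleman linearization embeds a polynomial system into a linear system on the monomials of the state, truncated at some order N. Below is a minimal sketch for a scalar quadratic system with illustrative constants; the paper's guaranteed error bounds are not reproduced here.

```python
# Truncated Carleman linearization of x' = a1*x + a2*x**2 (illustrative only).
import numpy as np
from scipy.integrate import solve_ivp

def carleman_matrix(a1, a2, N):
    # Lifted state z = (x, x^2, ..., x^N); d(x^k)/dt = k*(a1*x^k + a2*x^(k+1)),
    # with the x^(N+1) term dropped by truncation.
    A = np.zeros((N, N))
    for k in range(1, N + 1):
        A[k - 1, k - 1] = k * a1
        if k < N:
            A[k - 1, k] = k * a2
    return A

a1, a2, x0, N = -1.0, 0.5, 0.8, 6
A = carleman_matrix(a1, a2, N)
z0 = x0 ** np.arange(1, N + 1)

true = solve_ivp(lambda t, x: a1 * x + a2 * x ** 2, (0, 3), [x0],
                 dense_output=True)
lifted = solve_ivp(lambda t, z: A @ z, (0, 3), z0, dense_output=True)
ts = np.linspace(0, 3, 7)
print(np.abs(true.sol(ts)[0] - lifted.sol(ts)[0]).max())  # small truncation error
```

The first component of the lifted linear system tracks the nonlinear solution, with an error governed by the truncation order.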

Safety aware model-based reinforcement learning for optimal control of a class of output-feedback nonlinear systems

no code implementations • 1 Oct 2021 • S M Nahid Mahmud, Moad Abudia, Scott A Nivison, Zachary I. Bell, Rushikesh Kamalapurkar

The ability to learn and execute optimal control policies safely is critical to the realization of complex autonomy, especially where task restarts are not available and/or the systems are safety-critical.

Model-based Reinforcement Learning • reinforcement-learning +1

Singular Dynamic Mode Decompositions

no code implementations • 6 Jun 2021 • Joel A. Rosenfeld, Rushikesh Kamalapurkar

This manuscript is aimed at addressing several long-standing limitations of dynamic mode decompositions in the application of Koopman analysis.

The kernel perspective on dynamic mode decomposition

no code implementations • 31 May 2021 • Efrain Gonzalez, Moad Abudia, Michael Jury, Rushikesh Kamalapurkar, Joel A. Rosenfeld

This manuscript revisits theoretical assumptions concerning dynamic mode decomposition (DMD) of Koopman operators, including the existence of lattices of eigenfunctions, common eigenfunctions between Koopman operators, and boundedness and compactness of Koopman operators.

Misconceptions
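
For context, the computational recipe whose assumptions the paper scrutinizes is kernel-based DMD: build Gram matrices over snapshot pairs and take eigenvalues of a finite-rank surrogate of the Koopman operator. A minimal sketch in the spirit of kernel EDMD follows, with an assumed Gaussian kernel and a toy linear map; it illustrates the recipe, not the paper's analysis.

```python
# Minimal kernel-DMD sketch (kernel and data are toys, names illustrative).
import numpy as np

def kernel_dmd(X, Y, k, rcond=1e-8):
    # X, Y: (m, d) snapshot pairs with Y[i] = F(X[i]); k: kernel function.
    G = k(X, X)  # Gram matrix G[i, j] = k(x_i, x_j)
    A = k(Y, X)  # interaction matrix A[i, j] = k(y_i, x_j)
    # Eigenvalues of a rank-truncated pinv(G) @ A approximate the Koopman
    # spectrum on the learned subspace.
    return np.linalg.eigvals(np.linalg.pinv(G, rcond=rcond) @ A)

rbf = lambda X, Y: np.exp(-((X[:, None] - Y[None, :]) ** 2).sum(-1) / 2)
X = np.random.default_rng(1).standard_normal((40, 1))
Y = 0.9 * X  # toy linear map; Koopman eigenvalues are 0.9**n, n = 0, 1, ...
print(np.sort(np.abs(kernel_dmd(X, Y, rbf)))[-3:])  # roughly 0.81, 0.9, 1.0
```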

Control Occupation Kernel Regression for Nonlinear Control-Affine Systems

no code implementations • 31 May 2021 • Moad Abudia, Tejasvi Channagiri, Joel A. Rosenfeld, Rushikesh Kamalapurkar

As the fundamental basis elements leveraged in approximation, higher-order control occupation kernels represent iterated integration after multiplication by a given controller in a vector-valued reproducing kernel Hilbert space.

regression

Dynamic Mode Decomposition with Control Liouville Operators

no code implementations • 7 Jan 2021 • Joel A. Rosenfeld, Rushikesh Kamalapurkar

A given feedback controller is represented through a multiplication operator, and the composition of the control Liouville operator with the multiplication operator is used to express the nonlinear closed-loop system as a linear total derivative operator on RKHSs.

Optimization and Control • Functional Analysis • 37N35, 93B30

Motion Tomography via Occupation Kernels

no code implementations • 7 Jan 2021 • Benjamin P. Russo, Rushikesh Kamalapurkar, Dongsik Chang, Joel A. Rosenfeld

The goal of motion tomography is to recover the description of a vector flow field using information about the trajectory of a sensing unit.

Optimization and Control • Functional Analysis • 93-08, 46E22
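
One way to see the occupation-kernel formulation: if each measured displacement equals the integral of the unknown field along a known path, then modeling the field as a combination of occupation kernels turns recovery into a linear solve against the trajectory Gram matrix. The sketch below is a hypothetical minimal instance, not the paper's algorithm; the constants, kernel, and path shapes are made up.

```python
# Minimal occupation-kernel flow recovery sketch (illustrative names only).
import numpy as np

def rbf(X, Y, sigma=0.5):
    return np.exp(-((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
                  / (2 * sigma ** 2))

def fit_flow(paths, disps, dt, sigma=0.5, reg=1e-8):
    # Model each component of F as sum_j w_j * Gamma_{gamma_j}; the constraint
    # int F(gamma_i(t)) dt = d_i becomes the linear system G w = d, where
    # G[i, j] is the double time integral of k(gamma_i(t), gamma_j(s)).
    n = len(paths)
    G = np.array([[rbf(p, q, sigma).sum() * dt * dt for q in paths]
                  for p in paths])
    W = np.linalg.solve(G + reg * np.eye(n), np.stack(disps))  # (n, 2)

    def F(x):
        # Evaluate the recovered field at a point x via the occupation kernels.
        feats = np.array([rbf(x[None, :], p, sigma).sum() * dt for p in paths])
        return feats @ W
    return F

# Toy usage: straight-line paths through a constant field F* = (0.3, -0.1).
t = np.linspace(0, 1, 30)
paths = [np.c_[t, c * np.ones_like(t)] for c in (-0.5, 0.0, 0.5)]
disps = [np.array([0.3, -0.1]) for _ in paths]  # int of F* over T = 1
F = fit_flow(paths, disps, dt=t[1] - t[0])
print(F(np.array([0.5, 0.2])))  # roughly the constant field; the fit matches
                                # the integral constraints, pointwise values
                                # depend on path coverage
```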

Theoretical Foundations for the Dynamic Mode Decomposition of High Order Dynamical Systems

no code implementations • 7 Jan 2021 • Joel A. Rosenfeld, Rushikesh Kamalapurkar, Benjamin P. Russo

Conventionally, data-driven identification and control problems for higher-order dynamical systems are solved by augmenting the system state with the derivatives of the output to formulate first-order dynamical systems in higher dimensions.

Optimization and Control • Functional Analysis • 93-08, 46E22
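
The conventional workaround the abstract refers to is easy to state: augment the output with its derivatives so an n-th order system becomes first order in a larger state. A one-screen example of that baseline follows (the paper's point is to avoid this step); constants are illustrative.

```python
# Conventional state augmentation: x'' = -omega**2 * x rewritten as a
# first-order system in the augmented state z = (x, x').
import numpy as np
from scipy.integrate import solve_ivp

omega = 2.0

def augmented_rhs(t, z):
    x, xdot = z
    return [xdot, -omega ** 2 * x]  # z' = (x', x'') as a first-order system

sol = solve_ivp(augmented_rhs, (0, 1), [1.0, 0.0], dense_output=True)
print(sol.sol(1.0))  # matches (cos(omega*t), -omega*sin(omega*t)) at t = 1
```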

Online Observer-Based Inverse Reinforcement Learning

no code implementations • 3 Nov 2020 • Ryan Self, Kevin Coleman, He Bai, Rushikesh Kamalapurkar

In this paper, a novel approach to the output-feedback inverse reinforcement learning (IRL) problem is developed by casting the IRL problem, for linear systems with quadratic cost functions, as a state estimation problem.

reinforcement-learning • Reinforcement Learning (RL)
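
A toy scalar instance shows the inverse-LQR relationship underlying IRL for linear systems with quadratic costs: the demonstrator's optimal gain pins down the Riccati solution, which in turn pins down the unknown cost weight. This sketch is illustrative only; the paper's contribution is estimating such parameters online from output feedback via a state estimator, which is not shown here.

```python
# Scalar inverse-LQR sketch (constants illustrative, not the paper's method).
import numpy as np

a, b, r = -1.0, 1.0, 1.0  # known dynamics x' = a*x + b*u, known input cost r
q_true = 2.0              # unknown state cost to be recovered

# Forward problem: the scalar Riccati equation 2*a*p - b**2 * p**2 / r + q = 0
# yields the optimal gain k = b*p/r that an expert demonstrator would use.
p = r * (a + np.sqrt(a ** 2 + b ** 2 * q_true / r)) / b ** 2
k_observed = b * p / r

# Inverse problem: invert k = b*p/r for p, then solve the Riccati equation
# for the cost weight q.
p_hat = r * k_observed / b
q_hat = b ** 2 * p_hat ** 2 / r - 2 * a * p_hat
print(q_hat)  # recovers q_true = 2.0
```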

Safe Model-Based Reinforcement Learning for Systems with Parametric Uncertainties

no code implementations • 24 Jul 2020 • S M Nahid Mahmud, Scott A Nivison, Zachary I. Bell, Rushikesh Kamalapurkar

In recent years, reinforcement learning approaches that rely on persistent excitation have been combined with a barrier transformation to learn the optimal control policies under state constraints.

Model-based Reinforcement Learning • reinforcement-learning +2
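
The barrier transformation mentioned here is a change of coordinates that maps a constrained interval bijectively onto the real line, so an unconstrained learner in the transformed state automatically respects the original constraint. Below is a minimal numerical sketch of one common such transformation; the constants are illustrative and the paper's full learning scheme is not reproduced.

```python
# Barrier transformation sketch: b maps (a, A), with a < 0 < A, onto R.
import numpy as np

a, A = -2.0, 3.0  # state constraint x in (a, A)

def b(x):
    # Barrier transformation: b(a+) = -inf, b(0) = 0, b(A-) = +inf.
    return np.log((A / a) * (a - x) / (A - x))

def b_inv(s):
    # Closed-form inverse, from solving exp(s) = (A*(a - x)) / (a*(A - x)).
    return a * A * (1 - np.exp(s)) / (A - a * np.exp(s))

x = np.linspace(a + 1e-3, A - 1e-3, 5)
print(np.allclose(b_inv(b(x)), x))  # True: round trip recovers the state
```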

Efficient model-based reinforcement learning for approximate online optimal control

no code implementations • 9 Feb 2015 • Rushikesh Kamalapurkar, Joel A. Rosenfeld, Warren E. Dixon

In this paper, the infinite-horizon optimal regulation problem is solved online for a deterministic control-affine nonlinear dynamical system using the state following (StaF) kernel method to approximate the value function.

Model-based Reinforcement Learning • reinforcement-learning +1
