1 code implementation • 19 Apr 2024 • Jing Cheng, Ruigang Wang, Ian R. Manchester
We take a recently proposed Polyak-Łojasiewicz network (PLNet) as a Lyapunov function and then parameterize the vector field as a descent direction of the Lyapunov function.
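The core idea can be sketched in a few lines. This is a toy illustration only, not the PLNet parameterization itself: we assume a simple quadratic Lyapunov candidate and take the vector field to be its negative gradient, so the function decreases along trajectories.

```python
import math

def V(x):
    """Quadratic Lyapunov candidate V(x) = ||x||^2 (assumed for illustration)."""
    return sum(xi * xi for xi in x)

def grad_V(x):
    return [2.0 * xi for xi in x]

def f(x):
    """Vector field chosen as a descent direction of V (here, -grad V)."""
    return [-g for g in grad_V(x)]

def simulate(x, dt=0.01, steps=500):
    # Forward-Euler integration; V decreases monotonically along the trajectory.
    for _ in range(steps):
        dx = f(x)
        x = [xi + dt * dxi for xi, dxi in zip(x, dx)]
    return x

x_final = simulate([1.0, -2.0])
```

Because V is radially unbounded and strictly decreasing away from the origin, the simulated state converges toward zero.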
no code implementations • 22 Mar 2024 • Dongjun Wu, Bowen Yi, Ian R. Manchester
The results extend the applicability of the CCM approach and provide a framework for analyzing the behavior of control systems with Lie group structures.
no code implementations • 2 Feb 2024 • Ruigang Wang, Krishnamurthy Dvijotham, Ian R. Manchester
This paper presents a new \emph{bi-Lipschitz} invertible neural network, the BiLipNet, which has the ability to control both its \emph{Lipschitzness} (output sensitivity to input perturbations) and \emph{inverse Lipschitzness} (input distinguishability from different outputs).
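The bi-Lipschitz property can be checked numerically on a toy map (this is not BiLipNet itself, just a scalar example with a known sandwich bound): mu·|x − y| ≤ |g(x) − g(y)| ≤ L·|x − y| for all inputs.

```python
import math
import random

# Illustrative scalar map: g(x) = x + 0.5*tanh(x) has derivative
# g'(x) = 1 + 0.5*sech^2(x) in (1, 1.5], so it is bi-Lipschitz
# with inverse-Lipschitz bound mu = 1 and Lipschitz bound L = 1.5.

def g(x):
    return x + 0.5 * math.tanh(x)

random.seed(0)
ratios = []
for _ in range(1000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    if abs(x - y) > 1e-9:
        ratios.append(abs(g(x) - g(y)) / abs(x - y))

# Every empirical slope stays within [mu, L]: distinct inputs remain
# distinguishable (lower bound) and outputs stay stable (upper bound).
```

The lower bound is what makes the map invertible: no two inputs can collapse onto the same output.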
no code implementations • 16 Jan 2024 • Fletcher Fan, Bowen Yi, David Rye, Guodong Shi, Ian R. Manchester
Most existing works on Koopman learning do not take into account the stability or stabilizability of the model -- two fundamental pieces of prior knowledge about a given system to be identified. In this paper, we propose new classes of Koopman models that have built-in guarantees of these properties.
no code implementations • 9 Jul 2023 • Bowen Yi, Ian R. Manchester
The inertial measurement unit (IMU) preintegration approach is now widely used in various robotic applications.
no code implementations • 22 Jun 2023 • Bowen Yi, Chi Jin, Lei Wang, Guodong Shi, Viorela Ila, Ian R. Manchester
This paper introduces a new linear parameterization of the problem of visual inertial simultaneous localization and mapping (VI-SLAM) -- without any approximation -- for the case using only information from a single monocular camera and an inertial measurement unit.
1 code implementation • 22 Jun 2023 • Nicholas H. Barbara, Max Revay, Ruigang Wang, Jing Cheng, Ian R. Manchester
Neural networks are typically sensitive to small input perturbations, leading to unexpected or brittle behaviour.
1 code implementation • 12 Apr 2023 • Nicholas H. Barbara, Ruigang Wang, Ian R. Manchester
This paper presents a policy parameterization for learning-based control on nonlinear, partially-observed dynamical systems.
1 code implementation • 6 Apr 2023 • Daniele Martinelli, Clara Lucía Galimberti, Ian R. Manchester, Luca Furieri, Giancarlo Ferrari-Trecate
We validate the properties of NodeRENs, including the possibility of handling irregularly sampled data, in a case study in nonlinear system identification.
1 code implementation • 20 Mar 2023 • Patricia Pauli, Ruigang Wang, Ian R. Manchester, Frank Allgöwer
We establish a layer-wise parameterization for 1D convolutional neural networks (CNNs) with built-in end-to-end robustness guarantees.
no code implementations • 4 Feb 2023 • Vera L. J. Somers, Ian R. Manchester
Spreading processes, e.g. epidemics, wildfires and rumors, are often modeled on static networks.
2 code implementations • 27 Jan 2023 • Ruigang Wang, Ian R. Manchester
This paper introduces a new parameterization of deep neural networks (both fully-connected and convolutional) with guaranteed $\ell^2$ Lipschitz bounds, i.e. limited sensitivity to input perturbations.
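A simpler, related idea (not the paper's parameterization) makes the notion of a guaranteed bound concrete: for a feedforward network with 1-Lipschitz activations such as ReLU, the product of the layers' spectral norms upper-bounds the network's $\ell^2$ Lipschitz constant. The weights below are illustrative.

```python
import math
import random

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def norm(x):
    return math.sqrt(sum(xi * xi for xi in x))

def spectral_norm(A, iters=200):
    """Largest singular value of A via power iteration on A^T A."""
    random.seed(0)
    v = [random.random() for _ in A[0]]
    At = transpose(A)
    for _ in range(iters):
        w = matvec(At, matvec(A, v))
        n = norm(w)
        v = [wi / n for wi in w]
    return norm(matvec(A, v))

W1 = [[1.0, 0.5], [0.0, 2.0]]
W2 = [[0.5, -1.0], [1.0, 0.0]]
# Upper bound on the Lipschitz constant of x -> W2 relu(W1 x).
lip_bound = spectral_norm(W1) * spectral_norm(W2)
```

Such layer-wise products can be conservative; the appeal of a direct parameterization is that the bound is built in rather than estimated after training.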
no code implementations • 27 Jun 2022 • Bowen Yi, Lei Wang, Ian R. Manchester
The paper addresses the problem of attitude estimation for rigid bodies using (possibly time-varying) vector measurements, for which we provide a necessary and sufficient condition of distinguishability.
no code implementations • 23 Dec 2021 • Bowen Yi, Chi Jin, Ian R. Manchester
The design of a globally convergent position observer for feature points from visual information is a challenging problem that remained open for a long time, especially for the case with only inertial measurements and no assumption of uniform observability.
no code implementations • 8 Dec 2021 • Ruigang Wang, Nicholas H. Barbara, Max Revay, Ian R. Manchester
This paper proposes a nonlinear policy architecture for control of partially-observed linear dynamical systems providing built-in closed-loop stability guarantees.
no code implementations • 2 Dec 2021 • Ruigang Wang, Ian R. Manchester
This paper presents a parameterization of nonlinear controllers for uncertain systems building on a recently developed neural network architecture, called the recurrent equilibrium network (REN), and a nonlinear version of the Youla parameterization.
no code implementations • 14 Oct 2021 • Vera L. J. Somers, Ian R. Manchester
In this paper we propose a method for sparse dynamic allocation of resources to bound the risk of spreading processes, such as epidemics and wildfires, using convex optimization and dynamic programming techniques.
1 code implementation • 13 Oct 2021 • Fletcher Fan, Bowen Yi, David Rye, Guodong Shi, Ian R. Manchester
In this paper, we present a new data-driven method for learning stable models of nonlinear systems.
no code implementations • 1 Oct 2021 • Ian R. Manchester, Max Revay, Ruigang Wang
This tutorial paper provides an introduction to recently developed tools for machine learning, especially learning dynamical systems (system identification), with stability and robustness constraints.
no code implementations • 29 Jul 2021 • Max Revay, Jack Umenberger, Ian R. Manchester
This paper proposes methods for identification of large-scale networked systems with guarantees that the resulting model will be contracting -- a strong form of nonlinear stability -- and/or monotone, i.e. order relations between states are preserved.
no code implementations • 13 Jul 2021 • Vera L. J. Somers, Ian R. Manchester
Here, risk is defined as the risk of an undetected outbreak, i.e. the product of the probability of an outbreak and the impact of that outbreak; this risk can be bounded or minimized through resource allocation and persistent monitoring schedules.
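The risk definition is a simple product, which a small numeric example makes concrete (the numbers below are purely illustrative, not from the paper):

```python
# Risk of an undetected outbreak = P(outbreak) * impact(outbreak).
p_outbreak = 0.02       # assumed probability of an outbreak at a node
impact = 500.0          # assumed impact (e.g. cost) if it occurs
risk = p_outbreak * impact  # -> 10.0

# Allocating monitoring resources reduces the probability that a
# growing outbreak goes undetected, and hence reduces the product.
p_after_monitoring = 0.005
risk_after = p_after_monitoring * impact  # -> 2.5
```

Bounding the risk then amounts to choosing allocations and monitoring schedules that keep this product below a threshold at every node.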
1 code implementation • 13 Apr 2021 • Max Revay, Ruigang Wang, Ian R. Manchester
RENs are otherwise very flexible: they can represent all stable linear systems, all previously-known sets of contracting recurrent neural networks and echo state networks, all deep feedforward neural networks, and all stable Wiener/Hammerstein models, and can approximate all fading-memory and contracting nonlinear systems.
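What "contracting" buys can be seen on a toy scalar system (illustrative, not a REN): any two trajectories driven by the same input converge to each other, so the model forgets its initial condition exponentially fast.

```python
import math

def step(x, u):
    # |d step / dx| = |0.5 + 0.25*cos(x)| <= 0.75 < 1, so the map is a
    # contraction in the state x, uniformly in the input u.
    return 0.5 * x + 0.25 * math.sin(x) + u

inputs = [math.sin(0.1 * t) for t in range(200)]
xa, xb = 5.0, -5.0  # two very different initial states
for u in inputs:
    xa, xb = step(xa, u), step(xb, u)

gap = abs(xa - xb)  # shrinks by at least a factor of 0.75 per step
```

After 200 steps the gap is at most 10 × 0.75^200, i.e. negligibly small; this fading-memory behavior is exactly the property the REN parameterization guarantees by construction.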
no code implementations • 11 Apr 2021 • Ruigang Wang, Patrick J. W. Koelwijn, Ian R. Manchester, Roland Tóth
In this paper, we present a virtual control contraction metric (VCCM) based nonlinear parameter-varying (NPV) approach to design a state-feedback controller for a control moment gyroscope (CMG) to track a user-defined trajectory set.
no code implementations • 28 Mar 2021 • Bowen Yi, Ian R. Manchester
… control contraction metric) for the nonlinear system.
no code implementations • 5 Oct 2020 • Max Revay, Ruigang Wang, Ian R. Manchester
In image classification experiments we show that the Lipschitz bounds are very accurate and improve robustness to adversarial attacks.
no code implementations • 11 Apr 2020 • Max Revay, Ruigang Wang, Ian R. Manchester
Recurrent neural networks (RNNs) are a class of nonlinear dynamical systems often used to model sequence-to-sequence maps.
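Viewing an RNN as a nonlinear dynamical system is direct: the hidden state evolves under the input sequence and each step emits an output. A minimal sketch (scalar state, illustrative weights):

```python
import math

def rnn_step(h, u, w_h=0.5, w_u=1.0, b=0.1):
    # Vanilla RNN cell: next hidden state is a nonlinear function of the
    # current state h and the current input u.
    return math.tanh(w_h * h + w_u * u + b)

def rnn_map(inputs, h0=0.0):
    """The sequence-to-sequence map realized by the recurrence."""
    h, outputs = h0, []
    for u in inputs:
        h = rnn_step(h, u)
        outputs.append(h)
    return outputs

ys = rnn_map([0.0, 1.0, -1.0, 0.5])
```

Stability questions about the RNN (e.g. whether it is contracting) are then questions about this state-space recurrence.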
no code implementations • 18 Mar 2020 • Ruigang Wang, Roland Tóth, Patrick J. W. Koelwijn, Ian R. Manchester
This paper presents a systematic approach to nonlinear state-feedback control design that has three main advantages: (i) it ensures exponential stability and $ \mathcal{L}_2 $-gain performance with respect to a user-defined set of reference trajectories; (ii) it provides constructive conditions based on convex optimization and a path-integral-based control realization; and (iii) it is less restrictive than previous similar approaches.
1 code implementation • 17 Mar 2020 • Vera L. J. Somers, Ian R. Manchester
In this letter we propose a method for sparse allocation of resources to control spreading processes -- such as epidemics and wildfires -- using convex optimization, in particular exponential cone programming.
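A toy version of the allocation problem (not the paper's exponential cone program) conveys the structure: suppose the risk at node i decays as beta_i·exp(−x_i) in the resource x_i, with a total budget B. Assuming all x_i are strictly positive at the optimum, the KKT conditions give a closed form.

```python
import math

def allocate(betas, budget):
    """Minimize sum_i beta_i * exp(-x_i) s.t. sum_i x_i = budget, x_i > 0.

    At an interior optimum, beta_i * exp(-x_i) = mu for all i, so
    x_i = log(beta_i) - log(mu), with mu fixed by the budget constraint.
    """
    n = len(betas)
    log_mu = (sum(math.log(b) for b in betas) - budget) / n
    x = [math.log(b) - log_mu for b in betas]
    assert all(xi > 0 for xi in x), "interior-optimum assumption violated"
    return x

betas = [1.0, math.e]              # relative outbreak risks (illustrative)
x = allocate(betas, budget=3.0)    # higher-risk node gets more resource
total_risk = sum(b * math.exp(-xi) for b, xi in zip(betas, x))
```

Note the equalizing behavior: at the optimum every node carries the same residual risk, which is the intuition the convex (exponential cone) formulation generalizes to networks.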
Systems and Control • Dynamical Systems • Optimization and Control
1 code implementation • L4DC 2020 • Max Revay, Ian R. Manchester
Stability of recurrent models is closely linked with trainability, generalizability and in some applications, safety.
no code implementations • 2 Mar 2018 • Jack Umenberger, Ian R. Manchester
Estimation of nonlinear dynamic models from data poses many challenges, including model instability and non-convexity of long-term simulation fidelity.
no code implementations • 22 Nov 2017 • Ian R. Manchester
A new approach to design of nonlinear observers (state estimators) is proposed.
no code implementations • 23 Jan 2017 • Mark M. Tobenkin, Ian R. Manchester, Alexandre Megretski
Model instability and poor prediction of long-term behavior are common problems when modeling dynamical systems using nonlinear "black-box" techniques.