Search Results for author: Stas Tiomkin

Found 17 papers, 4 papers with code

Taming Waves: A Physically-Interpretable Machine Learning Framework for Realizable Control of Wave Dynamics

no code implementations • 27 Nov 2023 • Tristan Shah, Feruza Amirkulova, Stas Tiomkin

Specifically, control of wave dynamics is challenging due to additional physical constraints and intrinsic properties of wave phenomena such as dissipation, attenuation, reflection, and scattering.

Interpretable Machine Learning

Controllability-Constrained Deep Network Models for Enhanced Control of Dynamical Systems

1 code implementation • 11 Nov 2023 • Suruchi Sharma, Volodymyr Makarenko, Gautam Kumar, Stas Tiomkin

That is achieved by augmenting the model estimation objective with a controllability constraint, which penalizes models with a low degree of controllability.
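The idea of penalizing models with a low degree of controllability can be sketched with a linearized model (A, B), using the smallest singular value of its controllability matrix as the controllability measure. This is an illustrative NumPy sketch, not the paper's implementation; the function name and the reciprocal penalty form are assumptions.

```python
import numpy as np

def controllability_penalty(A, B, eps=1e-6):
    """Penalty that grows when the linearized model (A, B) is
    weakly controllable. Uses the smallest singular value of the
    controllability matrix [B, AB, ..., A^{n-1}B] as the measure."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    C = np.hstack(blocks)
    # sigma_min near zero means some state direction is (nearly)
    # unreachable through the inputs; penalize such models heavily.
    sigma_min = np.linalg.svd(C, compute_uv=False).min()
    return 1.0 / (sigma_min + eps)

# Hypothetical combined model-fitting objective:
# loss = prediction_error + lam * controllability_penalty(A, B)
```

A fully controllable pair (e.g. a double integrator) yields a small penalty, while an uncontrollable pair yields a penalty on the order of 1/eps.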

Multi-Resolution Diffusion for Privacy-Sensitive Recommender Systems

no code implementations • 6 Nov 2023 • Derek Lilienthal, Paul Mello, Magdalini Eirinaki, Stas Tiomkin

While recommender systems have become an integral component of the Web experience, their heavy reliance on user data raises privacy and security concerns.

Recommendation Systems

Bounding the Optimal Value Function in Compositional Reinforcement Learning

1 code implementation • 5 Mar 2023 • Jacob Adamczyk, Volodymyr Makarenko, Argenis Arriojas, Stas Tiomkin, Rahul V. Kulkarni

In order to quickly obtain solutions to unseen problems with new reward functions, a popular approach involves functional composition of previously solved tasks.
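For intuition on such bounds: in entropy-regularized (soft) RL with shared dynamics, the optimal soft Q-function for a summed reward r1 + r2 is upper-bounded elementwise by the sum of the subtasks' optimal soft Q-functions. The tabular sketch below uses an illustrative toy MDP with made-up numbers, not the paper's experiments or its full set of bounds.

```python
import numpy as np

def soft_q_iteration(P, r, gamma=0.9, iters=500):
    """Tabular soft (entropy-regularized) Q-iteration at temperature 1.
    P: (S, A, S) transition tensor, r: (S, A) reward table."""
    S, A, _ = P.shape
    Q = np.zeros((S, A))
    for _ in range(iters):
        V = np.log(np.exp(Q).sum(axis=1))  # soft value: logsumexp over actions
        Q = r + gamma * (P @ V)            # soft Bellman backup
    return Q

# Toy 2-state, 2-action MDP (hypothetical numbers).
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
r1 = np.array([[1.0, 0.0], [0.0, 0.5]])
r2 = np.array([[0.0, 1.0], [0.3, 0.0]])

Q1 = soft_q_iteration(P, r1)
Q2 = soft_q_iteration(P, r2)
Q12 = soft_q_iteration(P, r1 + r2)
# Composing the solved subtasks upper-bounds the new task's optimum:
assert np.all(Q12 <= Q1 + Q2 + 1e-6)
```

The bound follows because logsumexp(Q1 + Q2) <= logsumexp(Q1) + logsumexp(Q2), so the composed estimate Q1 + Q2 dominates the soft Bellman fixed point for r1 + r2.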

Reinforcement Learning (RL)

Intrinsic Motivation in Dynamical Control Systems

no code implementations29 Dec 2022 Stas Tiomkin, Ilya Nemenman, Daniel Polani, Naftali Tishby

Biological systems often choose actions without an explicit reward signal, a phenomenon known as intrinsic motivation.

Multi-Objective Policy Gradients with Topological Constraints

no code implementations • 15 Sep 2022 • Kyle Hollins Wray, Stas Tiomkin, Mykel J. Kochenderfer, Pieter Abbeel

Multi-objective optimization models that encode ordered sequential constraints provide a solution to model various challenging problems including encoding preferences, modeling a curriculum, and enforcing measures of safety.
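Ordered sequential constraints can be illustrated by lexicographic selection: each objective is optimized only among candidates that are within a slack of the previous, higher-priority objectives' best values. This is a minimal sketch of the general idea; the function name and slack formulation are assumptions, not the paper's algorithm.

```python
def lexicographic_best(candidates, objectives, slack):
    """Pick a candidate under ordered objectives: objective i may only
    be traded off within slack[i] of the best value achievable among
    candidates that survived the higher-priority objectives."""
    pool = list(candidates)
    for f, eta in zip(objectives, slack):
        best = max(f(c) for c in pool)
        pool = [c for c in pool if f(c) >= best - eta]  # keep near-optimal
    return pool[0]

# Hypothetical example: maximize safety first, then reward.
policies = [1, 2, 3]
safety = lambda x: -(x > 2)   # policy 3 is "unsafe"
reward = lambda x: x
print(lexicographic_best(policies, [safety, reward], [0, 0]))  # -> 2
```

With zero slack on safety, the unsafe high-reward policy is filtered out before reward is considered; relaxing the safety slack lets reward dominate.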

Entropy Regularized Reinforcement Learning Using Large Deviation Theory

2 code implementations • 7 Jun 2021 • Argenis Arriojas, Jacob Adamczyk, Stas Tiomkin, Rahul V. Kulkarni

The mapping established in this work connects current research in reinforcement learning and non-equilibrium statistical mechanics, thereby opening new avenues for the application of analytical and computational approaches from one field to cutting-edge problems in the other.

Reinforcement Learning (RL)

GEM: Group Enhanced Model for Learning Dynamical Control Systems

no code implementations • 7 Apr 2021 • Philippe Hansen-Estruch, Wenling Shang, Lerrel Pinto, Pieter Abbeel, Stas Tiomkin

In this work, we take advantage of these structures to build effective dynamical models that are amenable to sample-based learning.

Continuous Control, Model-based Reinforcement Learning

Dynamics Generalization via Information Bottleneck in Deep Reinforcement Learning

no code implementations • 3 Aug 2020 • Xingyu Lu, Kimin Lee, Pieter Abbeel, Stas Tiomkin

Despite the significant progress of deep reinforcement learning (RL) in solving sequential decision making problems, RL agents often overfit to training environments and struggle to adapt to new, unseen environments.

Decision Making, Reinforcement Learning (RL) +1

Efficient Empowerment Estimation for Unsupervised Stabilization

no code implementations • ICLR 2021 • Ruihan Zhao, Kevin Lu, Pieter Abbeel, Stas Tiomkin

We demonstrate our solution for sample-based unsupervised stabilization on different dynamical control systems and show the advantages of our method by comparing it to the existing VLB approaches.

AvE: Assistance via Empowerment

1 code implementation • NeurIPS 2020 • Yuqing Du, Stas Tiomkin, Emre Kiciman, Daniel Polani, Pieter Abbeel, Anca Dragan

One difficulty in using artificial agents for human-assistive applications lies in the challenge of accurately assisting with a person's goal(s).

Preventing Imitation Learning with Adversarial Policy Ensembles

no code implementations • 31 Jan 2020 • Albert Zhan, Stas Tiomkin, Pieter Abbeel

To our knowledge, this is the first work regarding the protection of policies in Reinforcement Learning.

Imitation Learning, Reinforcement Learning (RL) +1

Predictive Coding for Boosting Deep Reinforcement Learning with Sparse Rewards

no code implementations • 21 Dec 2019 • Xingyu Lu, Stas Tiomkin, Pieter Abbeel

While recent progress in deep reinforcement learning has enabled robots to learn complex behaviors, tasks with long horizons and sparse rewards remain an ongoing challenge.

Reinforcement Learning (RL)

Learning Efficient Representation for Intrinsic Motivation

no code implementations • 4 Dec 2019 • Ruihan Zhao, Stas Tiomkin, Pieter Abbeel

The core idea is to represent the relation between action sequences and future states using a stochastic dynamic model in latent space with a specific form.

Dynamical System Embedding for Efficient Intrinsically Motivated Artificial Agents

no code implementations • 25 Sep 2019 • Ruihan Zhao, Stas Tiomkin, Pieter Abbeel

In this work, we develop a novel approach for the estimation of empowerment in unknown arbitrary dynamics from visual stimulus only, without sampling for the estimation of MIAS.
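Empowerment, the quantity behind MIAS, is the channel capacity from action sequences to future states. When the action-to-state channel is known and discrete it can be computed directly with the Blahut-Arimoto algorithm; the sketch below illustrates that baseline computation, rather than the paper's estimator for unknown dynamics from visual input.

```python
import numpy as np

def _kl_rows(p, q):
    """D_KL(p[a] || q) for each row a, treating 0*log(0) as 0."""
    mask = p > 0
    out = np.zeros_like(p)
    qb = np.broadcast_to(q, p.shape)
    out[mask] = p[mask] * np.log(p[mask] / qb[mask])
    return out.sum(axis=1)

def empowerment_blahut_arimoto(p_s_given_a, iters=200, tol=1e-10):
    """Channel capacity max_{p(a)} I(A; S') for a discrete
    action -> future-state channel (Blahut-Arimoto), in nats.
    p_s_given_a: (num_actions, num_states) row-stochastic matrix."""
    A = p_s_given_a.shape[0]
    p_a = np.full(A, 1.0 / A)            # start from the uniform action dist.
    for _ in range(iters):
        p_s = p_a @ p_s_given_a          # marginal next-state distribution
        c = np.exp(_kl_rows(p_s_given_a, p_s))  # per-action reweighting factor
        new_p_a = p_a * c / (p_a @ c)
        if np.abs(new_p_a - p_a).max() < tol:
            p_a = new_p_a
            break
        p_a = new_p_a
    p_s = p_a @ p_s_given_a
    return float(p_a @ _kl_rows(p_s_given_a, p_s))  # I(A; S') at the optimum

# A perfectly distinguishing 2-action channel has capacity log 2;
# a channel whose rows are identical has zero empowerment.
print(empowerment_blahut_arimoto(np.eye(2)))  # -> ~0.693 (log 2)
```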
