Search Results for author: Ruihan Zhao

Found 11 papers, 1 paper with code

An Efficient Reconstructed Differential Evolution Variant by Some of the Current State-of-the-art Strategies for Solving Single Objective Bound Constrained Problems

no code implementations · 25 Apr 2024 · Sichen Tao, Ruihan Zhao, Kaiyu Wang, Shangce Gao

In this paper, we propose a strategy-recombination-and-reconstruction differential evolution algorithm, called reconstructed differential evolution (RDE), to solve single-objective bound-constrained optimization problems.
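RDE's particular strategy combination is not reproduced here, but the classic DE/rand/1/bin loop that such variants build on can be sketched in a few lines of NumPy; the population size, scale factor F, crossover rate CR, and test function below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def de_rand_1_bin(f, bounds, pop_size=50, F=0.5, CR=0.9, max_gens=500, seed=0):
    """Minimal DE/rand/1/bin for a single-objective bound-constrained problem.

    f      -- objective mapping a 1-D array to a scalar
    bounds -- array of shape (dim, 2) holding per-dimension [low, high]
    """
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    low, high = bounds[:, 0], bounds[:, 1]
    dim = len(bounds)
    pop = rng.uniform(low, high, size=(pop_size, dim))
    fit = np.array([f(x) for x in pop])

    for _ in range(max_gens):
        for i in range(pop_size):
            # Mutation: base vector plus a scaled difference of two others,
            # all three drawn distinct from the current individual.
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, size=3, replace=False)]
            mutant = np.clip(a + F * (b - c), low, high)
            # Binomial crossover, guaranteeing at least one mutant gene.
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True
            trial = np.where(mask, mutant, pop[i])
            # Greedy selection: keep the trial only if it does not worsen.
            f_trial = f(trial)
            if f_trial <= fit[i]:
                pop[i], fit[i] = trial, f_trial

    return pop[fit.argmin()], fit.min()

# Example: minimize the sphere function on [-5, 5]^10.
best_x, best_f = de_rand_1_bin(lambda x: float(np.sum(x * x)),
                               np.tile([-5.0, 5.0], (10, 1)))
```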

Online Poisoning Attacks Against Data-Driven Predictive Control

no code implementations · 19 Sep 2022 · Yue Yu, Ruihan Zhao, Sandeep Chinchali, Ufuk Topcu

Data-driven predictive control (DPC) is a feedback control method for systems with unknown dynamics.
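The paper's poisoning attack itself is not shown here, but as one concrete illustration of the data-driven idea, a DeePC-style predictor forms Hankel matrices from recorded input-output trajectories and predicts future outputs by least squares, with no identified model in between. Function names and dimensions in this sketch are assumptions:

```python
import numpy as np

def hankel(w, L):
    """Stack length-L windows of signal w (shape (T, dim)) as columns."""
    T = len(w)
    return np.column_stack([w[t:t + L].ravel() for t in range(T - L + 1)])

def dpc_predict(u_data, y_data, u_ini, y_ini, u_future):
    """Predict future outputs directly from recorded data.

    Uses the behavioral (Willems' lemma) construction behind DeePC: any
    trajectory of a linear system is a combination of the columns of a
    Hankel matrix built from one sufficiently rich recording.
    """
    T_ini, N = len(u_ini), len(u_future)
    L = T_ini + N
    U, Y = hankel(u_data, L), hankel(y_data, L)
    m, p = u_data.shape[1], y_data.shape[1]
    U_p, U_f = U[:T_ini * m], U[T_ini * m:]
    Y_p, Y_f = Y[:T_ini * p], Y[T_ini * p:]
    # Solve for the combination g matching the recent past and the
    # candidate future input, then read off the predicted future output.
    A = np.vstack([U_p, Y_p, U_f])
    b = np.concatenate([u_ini.ravel(), y_ini.ravel(), u_future.ravel()])
    g, *_ = np.linalg.lstsq(A, b, rcond=None)
    return (Y_f @ g).reshape(N, p)
```

Because the predictor is computed directly from the recorded trajectories, that data stream is exactly the surface an online poisoning attack can target.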

MEGAN: Memory Enhanced Graph Attention Network for Space-Time Video Super-Resolution

no code implementations · 28 Oct 2021 · Chenyu You, Lianyi Han, Aosong Feng, Ruihan Zhao, Hui Tang, Wei Fan

Space-time video super-resolution (STVSR) aims to construct a high space-time resolution video sequence from the corresponding low-frame-rate, low-resolution video sequence.

Tasks: Graph Attention, Space-Time Video Super-Resolution, +1

SimCVD: Simple Contrastive Voxel-Wise Representation Distillation for Semi-Supervised Medical Image Segmentation

no code implementations · 13 Aug 2021 · Chenyu You, Yuan Zhou, Ruihan Zhao, Lawrence Staib, James S. Duncan

However, most existing learning-based approaches suffer from limited manually annotated medical data, which poses a major practical obstacle to accurate and robust medical image segmentation.

Tasks: Data Augmentation, Image Generation, +5

Hierarchical Few-Shot Imitation with Skill Transition Models

1 code implementation · ICML Workshop URL 2021 · Kourosh Hakhamaneshi, Ruihan Zhao, Albert Zhan, Pieter Abbeel, Michael Laskin

To this end, we present Few-shot Imitation with Skill Transition Models (FIST), an algorithm that extracts skills from offline data and utilizes them to generalize to unseen tasks given a few downstream demonstrations.

Momentum Contrastive Voxel-wise Representation Learning for Semi-supervised Volumetric Medical Image Segmentation

no code implementations · 14 May 2021 · Chenyu You, Ruihan Zhao, Lawrence Staib, James S. Duncan

In this work, we present a novel Contrastive Voxel-wise Representation Learning (CVRL) method to effectively learn low-level and high-level features by capturing 3D spatial context and rich anatomical information along both the feature and the batch dimensions.

Tasks: Contrastive Learning, Image Segmentation, +4
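CVRL's voxel-wise objective is not reproduced here, but contrastive methods in this family typically build on an InfoNCE-style loss. A minimal PyTorch sketch, with the temperature value and batch construction as illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """Standard InfoNCE loss over a batch of paired embeddings.

    z1, z2 -- (batch, dim) embeddings of two views of the same inputs;
    row i of z1 and row i of z2 form the positive pair, and all other
    rows in the batch act as negatives.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                 # (batch, batch) similarities
    targets = torch.arange(z1.size(0), device=z1.device)  # positives on the diagonal
    return F.cross_entropy(logits, targets)
```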

Learning Visual Robotic Control Efficiently with Contrastive Pre-training and Data Augmentation

no code implementations · 14 Dec 2020 · Albert Zhan, Ruihan Zhao, Lerrel Pinto, Pieter Abbeel, Michael Laskin

We present Contrastive Pre-training and Data Augmentation for Efficient Robotic Learning (CoDER), a method that utilizes data augmentation and unsupervised learning to achieve sample-efficient training of real-robot arm policies from sparse rewards.

Tasks: Data Augmentation, Reinforcement Learning, +2
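The contrastive term resembles the InfoNCE sketch above; the other ingredient, data augmentation for pixel-based control, is commonly implemented as a random crop on stacked frames. A minimal sketch, assuming (B, C, H, W) NumPy image batches and an 84x84 output size, both of which are illustrative choices rather than the paper's configuration:

```python
import numpy as np

def random_crop(imgs, out_size=84):
    """Randomly crop each image in a (B, C, H, W) batch to out_size x out_size,
    a standard augmentation for pixel-based RL."""
    b, c, h, w = imgs.shape
    tops = np.random.randint(0, h - out_size + 1, size=b)
    lefts = np.random.randint(0, w - out_size + 1, size=b)
    out = np.empty((b, c, out_size, out_size), dtype=imgs.dtype)
    for i, (t, l) in enumerate(zip(tops, lefts)):
        out[i] = imgs[i, :, t:t + out_size, l:l + out_size]
    return out
```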

Efficient Empowerment Estimation for Unsupervised Stabilization

no code implementations · ICLR 2021 · Ruihan Zhao, Kevin Lu, Pieter Abbeel, Stas Tiomkin

We demonstrate our solution for sample-based unsupervised stabilization on different dynamical control systems and show the advantages of our method by comparing it to existing variational lower bound (VLB) approaches.

Learning Efficient Representation for Intrinsic Motivation

no code implementations · 4 Dec 2019 · Ruihan Zhao, Stas Tiomkin, Pieter Abbeel

The core idea is to represent the relation between action sequences and future states using a stochastic dynamic model in latent space with a specific form.
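The paper's exact latent form is not reproduced here. One common way to realize a stochastic latent dynamics model linking action sequences to future states, shown purely as an assumed illustration, is a Gaussian whose mean is affine in the flattened action sequence:

```python
import torch
import torch.nn as nn

class LatentGaussianDynamics(nn.Module):
    """Illustrative stochastic latent dynamics model (an assumption, not
    the paper's exact form): encode the observation, then model the future
    latent state as a Gaussian whose mean is affine in the action sequence."""

    def __init__(self, obs_dim, act_dim, horizon, latent_dim=32):
        super().__init__()
        self.latent_dim = latent_dim
        self.seq_dim = act_dim * horizon
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim))
        self.drift = nn.Linear(latent_dim, latent_dim)                    # f(z)
        self.channel = nn.Linear(latent_dim, latent_dim * self.seq_dim)  # A(z)
        self.log_std = nn.Parameter(torch.zeros(latent_dim))

    def forward(self, obs, actions):
        # obs: (B, obs_dim); actions: (B, horizon, act_dim)
        z = self.encoder(obs)
        A = self.channel(z).view(-1, self.latent_dim, self.seq_dim)
        a = actions.flatten(1).unsqueeze(-1)               # (B, seq_dim, 1)
        mean = self.drift(z) + torch.bmm(A, a).squeeze(-1)  # affine in actions
        return torch.distributions.Normal(mean, self.log_std.exp())
```

Such a model would be trained by maximizing the log-likelihood of the encoded future observation; under this form the action-to-future-state channel is Gaussian, which makes downstream mutual-information quantities tractable.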

Dynamical System Embedding for Efficient Intrinsically Motivated Artificial Agents

no code implementations · 25 Sep 2019 · Ruihan Zhao, Stas Tiomkin, Pieter Abbeel

In this work, we develop a novel approach for the estimation of empowerment in unknown, arbitrary dynamics from visual stimulus alone, without sampling for the estimation of the mutual information between action sequences and future states (MIAS).
