Search Results for author: Adam Crespi

Found 5 papers, 4 papers with code

PSP-HDRI$+$: A Synthetic Dataset Generator for Pre-Training of Human-Centric Computer Vision Models

1 code implementation • 11 Jul 2022 • Salehe Erfanian Ebadi, Saurav Dhakad, Sanjay Vishwakarma, Chunpu Wang, You-Cyuan Jhang, Maciek Chociej, Adam Crespi, Alex Thaman, Sujoy Ganguly

We introduce a new synthetic data generator PSP-HDRI$+$ that proves to be a superior pre-training alternative to ImageNet and other large-scale synthetic data counterparts.

Keypoint Estimation

PeopleSansPeople: A Synthetic Data Generator for Human-Centric Computer Vision

1 code implementation • 17 Dec 2021 • Salehe Erfanian Ebadi, You-Cyuan Jhang, Alex Zook, Saurav Dhakad, Adam Crespi, Pete Parisi, Steven Borkman, Jonathan Hogins, Sujoy Ganguly

We found that pre-training a network using synthetic data and fine-tuning on various sizes of real-world data resulted in a keypoint AP increase of $+38.03$ ($44.43 \pm 0.17$ vs. $6.40$) for few-shot transfer (limited subsets of COCO-person train [2]), and an increase of $+1.47$ ($63.47 \pm 0.19$ vs. $62.00$) for abundant real data regimes, outperforming models trained with the same real data alone.

Human Detection Pose Estimation +2
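
As a rough illustration of the pre-train-then-fine-tune recipe described in the abstract above, the Python sketch below builds a Keypoint R-CNN without ImageNet weights, loads a checkpoint assumed to come from synthetic-data pre-training, and runs one fine-tuning step on a dummy batch standing in for a COCO-person subset. The checkpoint filename and the dummy batch are placeholders, not artifacts released with the paper.

```python
# Minimal sketch of synthetic pre-training followed by real-data fine-tuning.
# "psp_synthetic_pretrained.pth" is a hypothetical checkpoint path; the dummy batch
# stands in for a (possibly few-shot) COCO-person subset in torchvision's detection format.
import torch
from torchvision.models.detection import keypointrcnn_resnet50_fpn

# Keypoint R-CNN with the 17 COCO keypoints, initialized without ImageNet weights.
# (Argument names follow torchvision >= 0.13; older releases use pretrained=/pretrained_backbone=.)
model = keypointrcnn_resnet50_fpn(weights=None, weights_backbone=None, num_keypoints=17)

# Load weights assumed to come from pre-training on synthetic (PeopleSansPeople-style) data.
state = torch.load("psp_synthetic_pretrained.pth", map_location="cpu")  # hypothetical file
model.load_state_dict(state, strict=False)

optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9, weight_decay=1e-4)
model.train()

# One dummy image/target pair shaped like a COCO-person sample; replace with a real DataLoader.
images = [torch.rand(3, 256, 256)]
targets = [{
    "boxes": torch.tensor([[30.0, 40.0, 200.0, 220.0]]),  # [x1, y1, x2, y2] in pixels
    "labels": torch.tensor([1]),                           # person class
    "keypoints": torch.rand(1, 17, 3),                     # (x, y, visibility) per keypoint
}]

loss_dict = model(images, targets)   # dict of detection and keypoint losses
loss = sum(loss_dict.values())
optimizer.zero_grad()
loss.backward()
optimizer.step()
```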

Unity Perception: Generate Synthetic Data for Computer Vision

no code implementations • 9 Jul 2021 • Steve Borkman, Adam Crespi, Saurav Dhakad, Sujoy Ganguly, Jonathan Hogins, You-Cyuan Jhang, Mohsen Kamalzadeh, Bowen Li, Steven Leal, Pete Parisi, Cesar Romero, Wesley Smith, Alex Thaman, Samuel Warren, Nupur Yadav

We introduce the Unity Perception package which aims to simplify and accelerate the process of generating synthetic datasets for computer vision tasks by offering an easy-to-use and highly customizable toolset.

Object Detection +1
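
The Perception package itself is driven from the Unity Editor (C#), but its output is plain JSON plus rendered frames, so a small Python reader is often the first thing written against it. The sketch below walks a generated dataset folder and prints 2D bounding boxes; the folder layout, file names, and field names reflect one version of the Perception output schema and are assumptions that may differ in other releases.

```python
# Hedged sketch: reading 2D bounding boxes from a dataset generated by the Unity
# Perception package. "DatasetOutput" is a hypothetical output directory; the
# "captures_*.json" layout and box field names are assumptions and may vary by version.
import json
from pathlib import Path

dataset_dir = Path("DatasetOutput")  # wherever the Perception run wrote its dataset

for capture_file in sorted(dataset_dir.glob("**/captures_*.json")):
    payload = json.loads(capture_file.read_text())
    for capture in payload.get("captures", []):
        rgb_path = capture.get("filename")  # relative path of the rendered RGB frame
        for annotation in capture.get("annotations", []):
            # Bounding-box labeler records carry pixel-space x/y/width/height per object.
            for value in annotation.get("values", []):
                if isinstance(value, dict) and {"x", "y", "width", "height"} <= value.keys():
                    print(rgb_path, value.get("label_name"),
                          value["x"], value["y"], value["width"], value["height"])
```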

Obstacle Tower: A Generalization Challenge in Vision, Control, and Planning

3 code implementations • 4 Feb 2019 • Arthur Juliani, Ahmed Khalifa, Vincent-Pierre Berges, Jonathan Harper, Ervin Teng, Hunter Henry, Adam Crespi, Julian Togelius, Danny Lange

Unlike other benchmarks such as the Arcade Learning Environment, evaluation of agent performance in Obstacle Tower is based on an agent's ability to perform well on unseen instances of the environment.

Atari Games Board Games
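
To make the "unseen instances" evaluation concrete, here is a hedged Python sketch of scoring an agent on tower seeds held out from training, using the gym wrapper from the Unity-Technologies/obstacle-tower-env repository. The binary path, constructor arguments, and specific seed range are assumptions based on that repository and the paper, and a random policy stands in for a trained agent.

```python
# Hedged sketch: evaluating on procedurally generated towers the agent never trained on.
# The binary path, constructor arguments, and seed range are assumptions; check the
# obstacle-tower-env repository for the exact interface of your release.
from obstacle_tower_env import ObstacleTowerEnv

env = ObstacleTowerEnv("./ObstacleTower/obstacletower", retro=True, realtime_mode=False)

unseen_seeds = range(1001, 1006)   # seeds assumed to be excluded from the training set
episode_returns = []
for seed in unseen_seeds:
    env.seed(seed)                 # fixes the procedural layout of this tower instance
    obs = env.reset()
    done, episode_return = False, 0.0
    while not done:
        action = env.action_space.sample()         # stand-in for a trained policy
        obs, reward, done, info = env.step(action)
        episode_return += reward
    episode_returns.append(episode_return)
env.close()

print("mean return over unseen towers:", sum(episode_returns) / len(episode_returns))
```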
