DeepFashion is a dataset of around 800K diverse fashion images with rich annotations (46 categories, 1,000 descriptive attributes, bounding boxes, and landmark information), ranging from well-posed product shots to real-world consumer photos.
364 PAPERS • 6 BENCHMARKS
VITON was a dataset for virtual try-on of clothing items, consisting of 16,253 image pairs of a person and a clothing item.
77 PAPERS • 1 BENCHMARK
VITON-HD is a dataset for high-resolution (1024×768) virtual try-on of clothing items. It consists of 13,679 pairs of frontal-view woman images and top clothing images.
42 PAPERS • 1 BENCHMARK
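Try-on datasets such as VITON-HD are typically distributed as matched person/garment image pairs. The sketch below shows one way to enumerate such pairs from disk; the `image/` and `cloth/` directory names and the filename-matching convention are assumptions for illustration, not the dataset's documented layout.

```python
from pathlib import Path

def list_tryon_pairs(root):
    """Yield (person, cloth) image pairs that share a filename stem.

    Assumes a hypothetical layout in which `image/` holds person
    photos and `cloth/` holds garment photos, matched by filename.
    """
    root = Path(root)
    cloths = {p.stem: p for p in (root / "cloth").glob("*.jpg")}
    for person in sorted((root / "image").glob("*.jpg")):
        if person.stem in cloths:
            yield person, cloths[person.stem]
```

Matching by stem (rather than assuming identical file lists) keeps the loader robust to person images that lack a garment counterpart.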
Dress Code is a dataset for image-based virtual try-on composed of image pairs drawn from different catalogs of YOOX NET-A-PORTER. It contains more than 50k high-resolution model/clothing image pairs divided into three categories: dresses, upper-body clothes, and lower-body clothes.
15 PAPERS • NO BENCHMARKS YET
Contains 60 female and 30 male actors, each performing a collection of 20 predefined everyday actions and sports movements, plus one self-chosen movement.
10 PAPERS • 1 BENCHMARK
MPV consists of 37,723 person images and 14,360 clothes images at a resolution of 256×192; each person appears in multiple poses. The data are split into 52,236 training and 10,544 test three-tuples. The dataset can be downloaded at MPV (Google Drive).
4 PAPERS • 1 BENCHMARK
StreetTryOn is an in-the-wild virtual try-on dataset consisting of 12,364 street person images for training and 2,089 for validation. It is derived from the large fashion retrieval dataset DeepFashion2, from which over 90% of the images are filtered out as infeasible for try-on tasks (e.g., non-frontal view, heavy occlusion, dark environments). Combined with the garment and person images in VITON-HD, it forms a comprehensive suite of in-domain and cross-domain try-on tasks whose garment and person inputs come from various sources: Shop2Model, Model2Model, Shop2Street, and Street2Street.
1 PAPER • 4 BENCHMARKS
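The StreetTryOn curation step above (discarding non-frontal, heavily occluded, or dark images) can be sketched as a per-image feasibility check. The field names, thresholds, and example records below are illustrative assumptions, not the paper's actual criteria.

```python
def is_tryon_feasible(record,
                      min_visible_landmarks=0.8,
                      max_occlusion=0.2,
                      min_brightness=60):
    """Heuristic filter in the spirit of StreetTryOn's curation.

    `record` is a hypothetical dict of per-image metadata; the
    threshold names and values are illustrative, not the paper's.
    """
    return (
        record.get("viewpoint") == "frontal"
        and record.get("visible_landmark_ratio", 0.0) >= min_visible_landmarks
        and record.get("occlusion_ratio", 1.0) <= max_occlusion
        and record.get("brightness", 0) >= min_brightness
    )

# Hypothetical metadata records: only the first passes all checks.
records = [
    {"viewpoint": "frontal", "visible_landmark_ratio": 0.95,
     "occlusion_ratio": 0.05, "brightness": 120},
    {"viewpoint": "side", "visible_landmark_ratio": 0.90,
     "occlusion_ratio": 0.10, "brightness": 110},   # non-frontal view
    {"viewpoint": "frontal", "visible_landmark_ratio": 0.50,
     "occlusion_ratio": 0.40, "brightness": 30},    # occluded and dark
]
kept = [r for r in records if is_tryon_feasible(r)]
```

Expressing each rejection reason as an independent threshold makes it easy to report how much of the source data each criterion removes.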