Domain adaptation is the task of adapting models trained on a source domain so that they perform well on a different but related target domain.
(Image credit: Unsupervised Image-to-Image Translation Networks)
Deep residual nets are the foundations of our submissions to the ILSVRC & COCO 2015 competitions, where we also won 1st place on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
Collecting well-annotated image datasets to train modern machine learning algorithms is prohibitively expensive for many tasks.
We propose using high-level semantic and contextual features, including segmentation and detection masks obtained from off-the-shelf state-of-the-art vision models, as observations, and use a deep network to learn the navigation policy.
With recent progress in graphics, it has become more tractable to train models on synthetic images, potentially avoiding the need for expensive annotations.
#4 best model for Image-to-Image Translation on Cityscapes Photo-to-Labels
In this paper, we propose a new loss function called generalized end-to-end (GE2E) loss, which makes the training of speaker verification models more efficient than our previous tuple-based end-to-end (TE2E) loss function.
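A minimal numpy sketch of the softmax variant of a GE2E-style loss: embeddings are grouped per speaker, compared against speaker centroids (with a leave-one-out centroid for the utterance's own speaker), and scored with a softmax over speakers. The function name, shapes, and the scale/bias values `w`, `b` are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def ge2e_softmax_loss(embeddings, w=10.0, b=-5.0):
    """Sketch of a GE2E-style softmax loss (illustrative, not the reference code).

    embeddings: array of shape (N, M, d) -- N speakers, M utterances each,
                d-dimensional (ideally L2-normalized) embeddings.
    w, b: learnable similarity scale and bias (w > 0 in the paper).
    """
    N, M, _ = embeddings.shape
    centroids = embeddings.mean(axis=1)  # (N, d), one centroid per speaker
    # Leave-one-out centroids so an utterance is never compared against
    # a centroid that already contains it.
    loo = (embeddings.sum(axis=1, keepdims=True) - embeddings) / (M - 1)  # (N, M, d)

    total = 0.0
    for j in range(N):          # true speaker
        for i in range(M):      # utterance index
            e = embeddings[j, i]
            sims = np.empty(N)
            for k in range(N):  # candidate speaker
                c = loo[j, i] if k == j else centroids[k]
                cos = e @ c / (np.linalg.norm(e) * np.linalg.norm(c) + 1e-8)
                sims[k] = w * cos + b
            # Negative log-softmax probability of the correct speaker.
            total += -(sims[j] - np.log(np.sum(np.exp(sims))))
    return total / (N * M)
```

In practice the loss is computed over a batch of N speakers x M utterances and backpropagated through the embedding network; the nested loops here trade efficiency for readability.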
Unsupervised image-to-image translation aims at learning a joint distribution of images in different domains by using images from the marginal distributions in individual domains.
#2 best model for Multimodal Unsupervised Image-To-Image Translation on Cats-and-Dogs
Since the source and the target domains are usually from different distributions, existing methods mainly focus on adapting the cross-domain marginal or conditional distributions.
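One common way to adapt the cross-domain marginal distributions is to minimize a discrepancy measure between source and target feature distributions during training. As an illustration only (not the method of any paper listed here), a biased RBF-kernel estimate of the squared maximum mean discrepancy (MMD) can be sketched as:

```python
import numpy as np

def mmd_rbf(X, Y, gamma=1.0):
    """Biased RBF-kernel MMD^2 estimate between source features X (n, d)
    and target features Y (m, d). Minimizing this as an auxiliary training
    loss pulls the two marginal feature distributions together.
    Illustrative sketch; gamma is an assumed kernel bandwidth parameter."""
    def kernel(A, B):
        # Pairwise squared distances via ||a-b||^2 = ||a||^2 + ||b||^2 - 2 a.b
        d2 = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2.0 * A @ B.T
        return np.exp(-gamma * d2)
    return kernel(X, X).mean() + kernel(Y, Y).mean() - 2.0 * kernel(X, Y).mean()
```

The estimate is zero when the two samples coincide and grows as the feature distributions drift apart, which is what makes it usable as an alignment penalty.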
#5 best model for Domain Adaptation on ImageCLEF-DA
In this paper, we propose a practical Easy Transfer Learning (EasyTL) approach which requires no model selection and no hyperparameter tuning, while achieving competitive performance.
#3 best model for Transfer Learning on Office-Home