Search Results for author: Mert Bulent Sariyildiz

Found 6 papers, 4 papers with code

Fake it till you make it: Learning transferable representations from synthetic ImageNet clones

no code implementations CVPR 2023 Mert Bulent Sariyildiz, Karteek Alahari, Diane Larlus, Yannis Kalantidis

We show that with minimal, class-agnostic prompt engineering, ImageNet clones are able to close a large part of the gap between models trained on synthetic images and models trained on real images, on the several standard classification benchmarks considered in this study.

Classification, Image Generation, +1
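The abstract above describes generating ImageNet "clones" with a single, class-agnostic prompt template shared across all classes. A minimal sketch of that idea, assuming a hypothetical template (not the actual prompts used in the paper) whose output would then be fed to a text-to-image model:

```python
# Class-agnostic prompt engineering: one shared template, filled per class.
# The template string here is a hypothetical illustration.
def make_prompts(class_names, template="a photo of a {}"):
    """Build one generation prompt per class from a single shared template."""
    return [template.format(name) for name in class_names]

prompts = make_prompts(["tabby cat", "golden retriever"])
# Each prompt would be passed to a text-to-image generator to synthesize
# training images for that class.
```

Because the template never encodes class-specific knowledge, the same pipeline scales to any label set without manual per-class tuning.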

No Reason for No Supervision: Improved Generalization in Supervised Models

1 code implementation, 30 Jun 2022 Mert Bulent Sariyildiz, Yannis Kalantidis, Karteek Alahari, Diane Larlus

We consider the problem of training a deep neural network on a given classification task, e.g., ImageNet-1K (IN1K), so that it excels both at the training task and at other (future) transfer tasks.

Data Augmentation, Self-Supervised Learning, +1

Concept Generalization in Visual Representation Learning

1 code implementation ICCV 2021 Mert Bulent Sariyildiz, Yannis Kalantidis, Diane Larlus, Karteek Alahari

In this paper, we argue that the semantic relationships between seen and unseen concepts affect generalization performance and propose ImageNet-CoG, a novel benchmark on the ImageNet-21K (IN-21K) dataset that enables measuring concept generalization in a principled way.

Representation Learning, Self-Supervised Learning

Hard Negative Mixing for Contrastive Learning

1 code implementation NeurIPS 2020 Yannis Kalantidis, Mert Bulent Sariyildiz, Noe Pion, Philippe Weinzaepfel, Diane Larlus

Based on these observations, and motivated by the success of data mixing, we propose hard negative mixing strategies at the feature level that can be computed on-the-fly with minimal computational overhead.

Contrastive Learning, Data Augmentation, +5
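The abstract above describes mixing hard negatives at the feature level. A minimal sketch in the spirit of that idea, assuming unit-normalized embeddings and a simple convex-combination mixing rule (the function name and hyperparameters are illustrative, not the paper's exact formulation):

```python
import numpy as np

def mix_hard_negatives(query, negatives, num_synth=4, rng=None):
    """Synthesize hard negatives by mixing features of the hardest ones.

    Ranks the negative embeddings by cosine similarity to the query
    (all vectors assumed L2-normalized), then forms synthetic negatives
    as random convex combinations of pairs of the hardest negatives,
    re-normalized to unit length. Runs on-the-fly: no extra forward passes.
    """
    rng = rng or np.random.default_rng(0)
    sims = negatives @ query                     # cosine similarity to query
    hardest = negatives[np.argsort(-sims)[:max(2, num_synth)]]
    synth = []
    for _ in range(num_synth):
        i, j = rng.choice(len(hardest), size=2, replace=False)
        alpha = rng.uniform(0.0, 1.0)
        h = alpha * hardest[i] + (1.0 - alpha) * hardest[j]
        synth.append(h / np.linalg.norm(h))      # project back to unit sphere
    return np.stack(synth)
```

The synthetic vectors would be appended to the negative set of a contrastive loss; because they are built from already-computed features, the overhead is a few vector operations per batch.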

Learning Visual Representations with Caption Annotations

no code implementations ECCV 2020 Mert Bulent Sariyildiz, Julien Perez, Diane Larlus

Starting from the observation that captioned images are easily crawlable, we argue that this overlooked source of information can be exploited to supervise the training of visual representations.

Image Captioning, Language Modelling, +1

Gradient Matching Generative Networks for Zero-Shot Learning

1 code implementation CVPR 2019 Mert Bulent Sariyildiz, Ramazan Gokberk Cinbis

In contrast, we propose a generative model that can naturally learn from unsupervised examples and synthesize training examples for unseen classes purely from their class embeddings, thereby reducing the zero-shot learning problem to a supervised classification task.

Domain Adaptation, General Classification, +3
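The abstract above reduces zero-shot learning to supervised classification by synthesizing examples for unseen classes from their class embeddings. A toy sketch of that overall pipeline, assuming a hypothetical (already-trained) linear generator `W` mapping class embeddings to the visual feature space; this illustrates the reduction only, not the paper's gradient-matching training objective:

```python
import numpy as np

def synthesize_features(class_embeddings, W, num_per_class=32,
                        noise_std=0.1, rng=None):
    """Generate labeled synthetic feature vectors for unseen classes.

    Each class embedding is mapped into feature space via the generator
    matrix W (hypothetical stand-in for a learned generative network),
    and Gaussian noise adds per-example diversity. The returned (X, y)
    pair can train any ordinary supervised classifier.
    """
    rng = rng or np.random.default_rng(0)
    feats, labels = [], []
    for label, emb in enumerate(class_embeddings):
        mean = emb @ W                                   # class prototype
        samples = mean + noise_std * rng.standard_normal(
            (num_per_class, W.shape[1]))                 # noisy examples
        feats.append(samples)
        labels.append(np.full(num_per_class, label))
    return np.concatenate(feats), np.concatenate(labels)
```

With synthetic (X, y) in hand, recognizing unseen classes becomes standard supervised classification, e.g., fitting a linear classifier or nearest-centroid rule on the generated features.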
