Search Results for author: Oğuzhan Fatih Kar

Found 7 papers, 1 paper with code

BRAVE: Broadening the visual encoding of vision-language models

no code implementations10 Apr 2024 Oğuzhan Fatih Kar, Alessio Tonioni, Petra Poklukar, Achin Kulshrestha, Amir Zamir, Federico Tombari

Our results highlight the potential of incorporating different visual biases for a broader and more contextualized visual understanding in VLMs.

Tasks: Hallucination, Language Modelling, +1
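
The idea of broadening the visual encoding can be illustrated with a minimal sketch: token sequences from several vision encoders with different inductive biases are combined into a single sequence of visual tokens for the language model. The `ToyEncoder` and `BroadVisualEncoding` classes, the dimensions, and the simple concatenate-and-project fusion below are placeholders for illustration, not BRAVE's actual architecture (which uses a dedicated fusion module).

```python
# Minimal sketch of combining multiple vision encoders for a VLM.
# The encoder classes, dimensions, and fusion scheme are placeholders,
# not BRAVE's actual components.
import torch
import torch.nn as nn


class ToyEncoder(nn.Module):
    """Stand-in for a pretrained vision backbone (e.g. a ViT variant)."""

    def __init__(self, out_dim: int):
        super().__init__()
        self.proj = nn.Conv2d(3, out_dim, kernel_size=16, stride=16)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        feats = self.proj(images)                  # (B, D, H/16, W/16)
        return feats.flatten(2).transpose(1, 2)    # (B, N_tokens, D)


class BroadVisualEncoding(nn.Module):
    """Concatenate token sequences from several encoders and project them
    to the language model's embedding width."""

    def __init__(self, encoders, dims, lm_dim: int):
        super().__init__()
        self.encoders = nn.ModuleList(encoders)
        self.projections = nn.ModuleList(nn.Linear(d, lm_dim) for d in dims)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        tokens = [proj(enc(images))
                  for enc, proj in zip(self.encoders, self.projections)]
        return torch.cat(tokens, dim=1)            # (B, sum of N_tokens, lm_dim)


if __name__ == "__main__":
    fuse = BroadVisualEncoding([ToyEncoder(256), ToyEncoder(384)], [256, 384], lm_dim=512)
    visual_tokens = fuse(torch.randn(2, 3, 224, 224))
    print(visual_tokens.shape)                     # torch.Size([2, 392, 512])
```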

Unraveling the Key Components of OOD Generalization via Diversification

no code implementations26 Dec 2023 Harold Benoit, Liangze Jiang, Andrei Atanov, Oğuzhan Fatih Kar, Mattia Rigotti, Amir Zamir

We show that (1) diversification methods are highly sensitive to the distribution of the unlabeled data used for diversification and can underperform significantly when moving away from a method-specific sweet spot.

4M: Massively Multimodal Masked Modeling

no code implementations NeurIPS 2023 David Mizrahi, Roman Bachmann, Oğuzhan Fatih Kar, Teresa Yeo, Mingfei Gao, Afshin Dehghan, Amir Zamir

Current machine learning models for vision are often highly specialized and limited to a single modality and task.

Tasks: Decoder

3D Common Corruptions and Data Augmentation

1 code implementation CVPR 2022 Oğuzhan Fatih Kar, Teresa Yeo, Andrei Atanov, Amir Zamir

We introduce a set of image transformations that can be used as corruptions to evaluate the robustness of models as well as data augmentation mechanisms for training neural networks.

Tasks: Benchmarking, Data Augmentation
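
As an illustration of using corruptions both to evaluate robustness and as training-time augmentation, the sketch below applies a simple 2D noise corruption at several severity levels. The `gaussian_noise` and `augment_batch` functions are generic placeholders, not the paper's 3D-aware transformations, which additionally use scene geometry (e.g. depth) to generate 3D-consistent effects.

```python
# Minimal sketch of using an image corruption as a data augmentation.
# This is a generic 2D corruption for illustration, not one of the paper's
# 3D-aware transformations (those also use scene geometry such as depth).
import numpy as np


def gaussian_noise(image: np.ndarray, severity: int = 1) -> np.ndarray:
    """Add zero-mean Gaussian noise; higher severity means stronger noise.

    `image` is expected as float32 in [0, 1] with shape (H, W, 3).
    """
    sigma = [0.04, 0.06, 0.08, 0.10, 0.12][severity - 1]
    noisy = image + np.random.normal(scale=sigma, size=image.shape)
    return np.clip(noisy, 0.0, 1.0).astype(np.float32)


def augment_batch(batch: np.ndarray, p: float = 0.5, max_severity: int = 5) -> np.ndarray:
    """Corrupt each image in the batch with probability `p` at a random severity."""
    out = batch.copy()
    for i in range(len(out)):
        if np.random.rand() < p:
            out[i] = gaussian_noise(out[i], severity=np.random.randint(1, max_severity + 1))
    return out


if __name__ == "__main__":
    images = np.random.rand(4, 224, 224, 3).astype(np.float32)
    augmented = augment_batch(images)
    print(augmented.shape, augmented.dtype)  # (4, 224, 224, 3) float32
```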

Robustness via Cross-Domain Ensembles

no code implementations ICCV 2021 Teresa Yeo, Oğuzhan Fatih Kar, Alexander Sax, Amir Zamir

We present a method for making neural network predictions robust to shifts from the training data distribution.
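
The core idea of merging predictions obtained through different intermediate paths, weighting each by its estimated reliability, can be sketched as an inverse-variance weighted average. The array shapes, the number of paths, and the way the variances are obtained below are placeholders for illustration, not the paper's exact formulation.

```python
# Minimal sketch of merging predictions from several prediction paths using
# inverse-variance weighting. The shapes and the source of the uncertainty
# estimates are placeholders, not the paper's exact formulation.
import numpy as np


def merge_predictions(preds: np.ndarray, variances: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Combine per-path predictions with weights proportional to 1 / variance.

    preds, variances: shape (num_paths, H, W) holding pixel-wise predictions
    and their estimated predictive variances.
    """
    weights = 1.0 / (variances + eps)                    # low variance -> high weight
    weights = weights / weights.sum(axis=0, keepdims=True)
    return (weights * preds).sum(axis=0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    preds = rng.normal(size=(3, 64, 64))                 # e.g. 3 paths via different middle domains
    variances = rng.uniform(0.01, 1.0, size=(3, 64, 64))
    merged = merge_predictions(preds, variances)
    print(merged.shape)                                  # (64, 64)
```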
