Search Results for author: Ondrej Bohdal

Found 14 papers, 10 papers with code

VL-ICL Bench: The Devil in the Details of Benchmarking Multimodal In-Context Learning

1 code implementation • 19 Mar 2024 • Yongshuo Zong, Ondrej Bohdal, Timothy Hospedales

Built on top of LLMs, vision large language models (VLLMs) have advanced significantly in areas such as recognition, reasoning, and grounding.

Benchmarking · Image Captioning +3

Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models

1 code implementation • 3 Feb 2024 • Yongshuo Zong, Ondrej Bohdal, Tingyang Yu, Yongxin Yang, Timothy Hospedales

Our experiments demonstrate that integrating this dataset into standard vision-language fine-tuning, or using it for post-hoc fine-tuning, effectively aligns VLLMs for safety.

Instruction Following

Label Calibration for Semantic Segmentation Under Domain Shift

no code implementations • 20 Jul 2023 • Ondrej Bohdal, Da Li, Timothy Hospedales

Performance of a pre-trained semantic segmentation model is likely to substantially decrease on data from a new domain.

Segmentation · Semantic Segmentation

Feed-Forward Source-Free Domain Adaptation via Class Prototypes

no code implementations • 20 Jul 2023 • Ondrej Bohdal, Da Li, Timothy Hospedales

Source-free domain adaptation has become popular because it is practically useful and requires no access to source data.

Source-Free Domain Adaptation
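The class-prototype idea named in the title can be illustrated with a generic nearest-prototype classifier. A minimal numpy sketch (the function names are our own, not the paper's API): build a per-class mean feature vector, then assign each target feature to the class of its closest prototype.

```python
import numpy as np

def class_prototypes(features, labels, n_classes):
    """Per-class mean feature vectors, stacked into an (n_classes, dim) array."""
    return np.stack([features[labels == c].mean(axis=0) for c in range(n_classes)])

def predict_nearest_prototype(features, prototypes):
    """Assign each feature vector to the class of its nearest prototype."""
    dists = np.linalg.norm(features[:, None, :] - prototypes[None, :, :], axis=-1)
    return dists.argmin(axis=1)
```

Because prototypes are just per-class averages, they can be computed in a single feed-forward pass over the data, which fits the paper's emphasis on feed-forward adaptation.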

Navigating Noise: A Study of How Noise Influences Generalisation and Calibration of Neural Networks

1 code implementation • 30 Jun 2023 • Martin Ferianc, Ondrej Bohdal, Timothy Hospedales, Miguel Rodrigues

Enhancing the generalisation abilities of neural networks (NNs) by integrating noise such as MixUp or Dropout during training has emerged as a powerful and adaptable technique.

Data Augmentation
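MixUp, one of the noise-injection techniques studied here, blends pairs of training examples and their labels with a Beta-distributed coefficient. A minimal sketch (our own helper, not the paper's code):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """MixUp: convex combination of two inputs and their one-hot labels."""
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)  # mixing coefficient in [0, 1]
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2
```

With one-hot inputs the mixed label stays a valid probability distribution, which is why MixUp also tends to influence calibration, one of the paper's two axes of study.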

Meta Omnium: A Benchmark for General-Purpose Learning-to-Learn

1 code implementation • CVPR 2023 • Ondrej Bohdal, Yinbing Tian, Yongshuo Zong, Ruchika Chavhan, Da Li, Henry Gouk, Li Guo, Timothy Hospedales

Meta-learning and other approaches to few-shot learning are widely studied for image recognition, and are increasingly applied to other vision tasks such as pose estimation and dense prediction.

Few-Shot Learning · Pose Estimation +1

Fairness in AI and Its Long-Term Implications on Society

no code implementations • 16 Apr 2023 • Ondrej Bohdal, Timothy Hospedales, Philip H. S. Torr, Fazl Barez

Successful deployment of artificial intelligence (AI) in various settings has led to numerous positive outcomes for individuals and society.

Decision Making · Fairness

Feed-Forward Latent Domain Adaptation

no code implementations • 15 Jul 2022 • Ondrej Bohdal, Da Li, Shell Xu Hu, Timothy Hospedales

Recognizing that a device's data are likely to come from multiple latent domains, including a mixture of unlabelled domain-relevant and domain-irrelevant examples, we focus on the comparatively under-studied problem of latent domain adaptation.

Source-Free Domain Adaptation

PASHA: Efficient HPO and NAS with Progressive Resource Allocation

2 code implementations • 14 Jul 2022 • Ondrej Bohdal, Lukas Balles, Martin Wistuba, Beyza Ermis, Cédric Archambeau, Giovanni Zappella

Hyperparameter optimization (HPO) and neural architecture search (NAS) are methods of choice to obtain the best-in-class machine learning models, but in practice they can be costly to run.

BIG-bench Machine Learning · Hyperparameter Optimization +1

A Channel Coding Benchmark for Meta-Learning

1 code implementation • 15 Jul 2021 • Rui Li, Ondrej Bohdal, Rajesh Mishra, Hyeji Kim, Da Li, Nicholas Lane, Timothy Hospedales

We use our MetaCC benchmark to study several aspects of meta-learning, including the impact of task distribution breadth and shift, which can be controlled in the coding problem.

Meta-Learning

EvoGrad: Efficient Gradient-Based Meta-Learning and Hyperparameter Optimization

1 code implementation • NeurIPS 2021 • Ondrej Bohdal, Yongxin Yang, Timothy Hospedales

Gradient-based meta-learning and hyperparameter optimization have seen significant progress recently, enabling practical end-to-end training of neural networks together with many hyperparameters.

cross-domain few-shot learning · Hyperparameter Optimization

Meta-Calibration: Learning of Model Calibration Using Differentiable Expected Calibration Error

1 code implementation • 17 Jun 2021 • Ondrej Bohdal, Yongxin Yang, Timothy Hospedales

The problem is especially noticeable when using modern neural networks, for which there is a significant difference between the confidence of the model and the probability of correct prediction.

Meta-Learning
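The paper's contribution is a differentiable variant of expected calibration error (ECE); the standard binned ECE it starts from measures, per confidence bin, the gap between the model's average confidence and its accuracy. A minimal sketch (function name is ours):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: |accuracy - mean confidence| per bin, weighted by bin size."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(confidences)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += (mask.sum() / n) * gap
    return ece
```

The hard binning makes this version non-differentiable, which is precisely the obstacle the paper addresses before ECE can be used as a meta-learning objective.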

Flexible Dataset Distillation: Learn Labels Instead of Images

2 code implementations • 15 Jun 2020 • Ondrej Bohdal, Yongxin Yang, Timothy Hospedales

In particular, we study the problem of label distillation - creating synthetic labels for a small set of real images, and show it to be more effective than the prior image-based approach to dataset distillation.

Data Summarization · Meta-Learning
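A toy illustration of the label-distillation idea (entirely hypothetical setup, not the paper's implementation): soft labels for a few fixed base examples are treated as learnable parameters and optimized so that a linear softmax model trained for one gradient step on them fits a larger real dataset. Because the one-step model is linear in the soft labels, the outer gradient can be written in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n_base, n_real = 8, 3, 10, 200
X_base = rng.normal(size=(n_base, d))           # fixed "base" images (features)
W_true = rng.normal(size=(d, k))
X_real = rng.normal(size=(n_real, d))           # real dataset to fit
y_real = np.argmax(X_real @ W_true, axis=1)
Y_real = np.eye(k)[y_real]

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def outer_loss_and_grad(L, inner_lr=0.5):
    labels = softmax(L)                          # distilled soft labels
    p0 = np.full((n_base, k), 1.0 / k)           # zero-init model's predictions
    W = inner_lr * X_base.T @ (labels - p0) / n_base   # one inner SGD step
    p = softmax(X_real @ W)
    loss = -np.mean(np.log(p[np.arange(n_real), y_real] + 1e-12))
    g_W = X_real.T @ (p - Y_real) / n_real       # outer gradient w.r.t. W
    g_labels = inner_lr * X_base @ g_W / n_base  # chain rule: W linear in labels
    g_L = labels * (g_labels - (labels * g_labels).sum(axis=1, keepdims=True))
    return loss, g_L

L = np.zeros((n_base, k))                        # logits of the learnable labels
losses = []
for _ in range(300):
    loss, g = outer_loss_and_grad(L)
    losses.append(loss)
    L -= 0.5 * g                                 # outer gradient step on labels
```

The paper instead backpropagates through the inner training step with meta-learning machinery and handles real image datasets; this sketch only shows why optimizing labels rather than images can be enough to steer what a model learns.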
