no code implementations • 15 May 2023 • Ibrahim Batuhan Akkaya, Senthilkumar S. Kathiresan, Elahe Arani, Bahram Zonooz
Vision transformers (ViTs) achieve remarkable performance on large datasets, but tend to perform worse than convolutional neural networks (CNNs) when trained from scratch on smaller datasets, possibly due to a lack of local inductive bias in the architecture.
no code implementations • 8 Dec 2022 • Ibrahim Batuhan Akkaya, Ugur Halici
Instead of thresholding predictions, we introduce a method that weights the gradients computed from pseudo-labels by the reliability of the teacher's predictions.
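The core idea, replacing a hard confidence threshold with soft per-sample weighting of the pseudo-label loss, can be sketched as follows. This is a minimal illustration in NumPy, not the paper's implementation; the function names and the choice of max softmax probability as the reliability score are assumptions for the sketch.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def confidence_weighted_loss(student_logits, teacher_logits):
    """Cross-entropy against the teacher's hard pseudo-labels, with each
    sample's contribution scaled by the teacher's confidence (max softmax
    probability) rather than filtered by a hard threshold.
    Note: this reliability score is an illustrative assumption."""
    t_probs = softmax(teacher_logits)
    pseudo = t_probs.argmax(axis=1)        # hard pseudo-labels from teacher
    weights = t_probs.max(axis=1)          # per-sample teacher confidence
    s_probs = softmax(student_logits)
    nll = -np.log(s_probs[np.arange(len(pseudo)), pseudo] + 1e-12)
    # Low-confidence samples are down-weighted instead of discarded.
    return float((weights * nll).sum() / weights.sum())
```

Because every sample keeps a nonzero weight, no pseudo-label is thrown away outright; unreliable teacher predictions simply contribute smaller gradients.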
1 code implementation • 14 Jun 2021 • Ibrahim Batuhan Akkaya, Fazil Altinel, Ugur Halici
To this end, we propose a self-training guided adversarial domain adaptation method that improves the generalization capability of adversarial domain adaptation approaches.
no code implementations • 2 Sep 2020 • Ibrahim Batuhan Akkaya, Ugur Halici
In this work, we propose an unsupervised image-to-image translation (IIT) method that preserves uniform regions after translation.