no code implementations • 1 Jan 2021 • Ozan Arkan Can, Ilker Kesen, Deniz Yuret
How to best integrate linguistic and perceptual processing in multimodal tasks is an important open problem.
1 code implementation • 28 Mar 2020 • İlker Kesen, Ozan Arkan Can, Erkut Erdem, Aykut Erdem, Deniz Yuret
Our experiments reveal that using language to control the filters for bottom-up visual processing, in addition to top-down attention, improves results on both tasks and achieves competitive performance.
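The idea of letting language modulate bottom-up visual processing can be sketched minimally: a language embedding is mapped through a learned matrix to the taps of a convolution kernel, which is then applied to visual features. This is a hypothetical toy illustration, not the paper's actual architecture; all names, dimensions, and the example weights below are assumptions.

```python
def language_conditioned_filter(lang_embedding, weight_matrix):
    """Generate a 1D convolution kernel from a language embedding
    via a (hypothetical) learned linear map, weight_matrix."""
    kernel = []
    for row in weight_matrix:  # each row produces one kernel tap
        tap = sum(w * e for w, e in zip(row, lang_embedding))
        kernel.append(tap)
    return kernel

def conv1d(features, kernel):
    """Valid 1D convolution of a feature sequence with the generated kernel."""
    k = len(kernel)
    return [sum(kernel[j] * features[i + j] for j in range(k))
            for i in range(len(features) - k + 1)]

# Toy example: a 2-dim "language embedding" selects an edge-detecting
# kernel [1, -1], so the filter responds to transitions in the features.
lang = [1.0, 0.0]
W = [[1.0, 0.0], [-1.0, 0.0]]
kernel = language_conditioned_filter(lang, W)
feats = [0.0, 0.0, 1.0, 1.0, 0.0]
print(conv1d(feats, kernel))  # → [0.0, -1.0, 0.0, 1.0]
```

A different language embedding would yield a different kernel from the same weight matrix, which is the sense in which language "controls" the bottom-up filters.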
no code implementations • SEMEVAL 2019 • Osman Mutlu, Ozan Arkan Can, Erenay Dayanik
This paper describes our system for SemEval-2019 Task 4: Hyperpartisan News Detection (Kiesel et al., 2019).
no code implementations • WS 2019 • Ozan Arkan Can, Pedro Zuidberg Dos Martires, Andreas Persson, Julian Gaal, Amy Loutfi, Luc De Raedt, Deniz Yuret, Alessandro Saffiotti
We propose Bayesian learning to resolve inconsistencies between the natural language grounding and a robot's world representation by exploiting the spatio-relational information implicitly present in instructions given by a human.
1 code implementation • 21 May 2018 • Ozan Arkan Can, Deniz Yuret
Our goal is to develop a model that can learn to follow new instructions given prior instruction-perception-action examples.
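A trivial retrieval baseline for this setting (not the paper's model; the examples and similarity score below are assumptions for illustration) is to return the action of the most similar prior instruction-perception-action triple:

```python
def follow_instruction(examples, instruction, perception):
    """Pick the action of the most similar prior example, scoring by
    word overlap on the instruction plus an exact match on perception."""
    def score(ex):
        prior_instruction, prior_perception, _ = ex
        overlap = len(set(prior_instruction.split()) & set(instruction.split()))
        return overlap + (1 if prior_perception == perception else 0)
    return max(examples, key=score)[2]

# Hypothetical prior (instruction, perception, action) triples.
examples = [
    ("go to the red door", "hall", "move_north"),
    ("pick up the key", "room", "grab"),
]
print(follow_instruction(examples, "go to the blue door", "hall"))  # → move_north
```

A learned model generalizes where this baseline cannot, e.g. to instructions whose wording shares no tokens with any prior example.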
1 code implementation • COLING 2016 • Onur Kuru, Ozan Arkan Can, Deniz Yuret
We describe and evaluate a character-level tagger for language-independent Named Entity Recognition (NER).
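A character-level tagger assigns a tag to every character and then reduces those to one tag per word. The sketch below illustrates only that reduction step, using a simple majority vote as a stand-in for the paper's actual decoder; the sentence, tags, and collapse rule are assumptions for illustration.

```python
from collections import Counter

def collapse_char_tags(words, char_tags):
    """Reduce per-character tags to one tag per word by majority vote.
    char_tags covers the whitespace-joined sentence, with the single
    spaces between words tagged 'O'."""
    word_tags, i = [], 0
    for word in words:
        span = char_tags[i:i + len(word)]
        word_tags.append(Counter(span).most_common(1)[0][0])
        i += len(word) + 1  # skip the space separator
    return word_tags

words = ["Deniz", "works", "at", "Koc"]
# Hypothetical character-tagger output for "Deniz works at Koc":
# PER over "Deniz", ORG over "Koc", O elsewhere (including spaces).
tags = (["PER"] * 5 + ["O"] +
        ["O"] * 5 + ["O"] +
        ["O"] * 2 + ["O"] +
        ["ORG"] * 3)
print(collapse_char_tags(words, tags))  # → ['PER', 'O', 'O', 'ORG']
```

Working at the character level sidesteps word-level tokenization and vocabulary issues, which is what makes the approach language-independent.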