1 code implementation • 5 Mar 2024 • Yakir Yehuda, Itzik Malkiel, Oren Barkan, Jonathan Weill, Royi Ronen, Noam Koenigstein
Despite the many advances of Large Language Models (LLMs) and their unprecedentedly rapid evolution, their impact on and integration into every facet of our daily lives are limited for various reasons.
1 code implementation • ICCV 2023 • Oren Barkan, Tal Reiss, Jonathan Weill, Ori Katz, Roy Hirsch, Itzik Malkiel, Noam Koenigstein
Given an image of a certain object, the goal of VSD is to retrieve images of different objects with high perceptual visual similarity.
no code implementations • 28 Jun 2023 • Oren Barkan, Avi Caciularu, Idan Rejwan, Ori Katz, Jonathan Weill, Itzik Malkiel, Noam Koenigstein
We present Variational Bayesian Network (VBN), a novel Bayesian entity representation learning model that utilizes hierarchical and relational side information and is particularly useful for modeling entities in the "long-tail", where data is scarce.
no code implementations • 9 Jun 2023 • Itzik Malkiel, Uri Alon, Yakir Yehuda, Shahar Keren, Oren Barkan, Royi Ronen, Noam Koenigstein
The online phase is applied to every call separately and scores the similarity between the transcribed conversation and the topic anchors found in the offline phase.
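The online scoring step described above can be sketched as embedding similarity against a set of topic anchors. This is a minimal illustration, not the paper's method: the function names, the use of cosine similarity, and the anchor representation are all assumptions.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def score_call(conversation_vec, topic_anchors):
    """Score one transcribed call against every topic anchor and return
    the best-matching (topic, score) pair.

    `topic_anchors` maps topic name -> anchor embedding; both the mapping
    and the max-score decision rule are hypothetical simplifications.
    """
    return max(
        ((topic, cosine(conversation_vec, anchor))
         for topic, anchor in topic_anchors.items()),
        key=lambda pair: pair[1],
    )
```

In practice the conversation vector would come from an encoder over the call transcript; here it is just a placeholder list of floats.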
no code implementations • 13 Aug 2022 • Itzik Malkiel, Dvir Ginzburg, Oren Barkan, Avi Caciularu, Jonathan Weill, Noam Koenigstein
Recently, there has been growing interest in the ability of Transformer-based models to produce meaningful embeddings of text with several applications, such as text similarity.
no code implementations • 13 Aug 2022 • Itzik Malkiel, Dvir Ginzburg, Oren Barkan, Avi Caciularu, Yoni Weill, Noam Koenigstein
We present MetricBERT, a BERT-based model that learns to embed text under a well-defined similarity metric while simultaneously adhering to the "traditional" masked-language task.
no code implementations • 23 Apr 2022 • Oren Barkan, Edan Hauon, Avi Caciularu, Ori Katz, Itzik Malkiel, Omri Armstrong, Noam Koenigstein
Transformer-based language models significantly advanced the state-of-the-art in many linguistic tasks.
2 code implementations • 10 Dec 2021 • Itzik Malkiel, Gony Rosenman, Lior Wolf, Talma Hendler
We present TFF, which is a Transformer framework for the analysis of functional Magnetic Resonance Imaging (fMRI) data.
1 code implementation • EMNLP 2021 • Efrat Blaier, Itzik Malkiel, Lior Wolf
The recently introduced hateful meme challenge demonstrates the difficulty of determining whether a meme is hateful or not.
no code implementations • 2 Sep 2021 • Oren Barkan, Omri Armstrong, Amir Hertz, Avi Caciularu, Ori Katz, Itzik Malkiel, Noam Koenigstein
The algorithmic advantages of GAM are explained in detail and validated empirically, showing that GAM outperforms its alternatives across various tasks and datasets.
1 code implementation • Findings (ACL) 2021 • Dvir Ginzburg, Itzik Malkiel, Oren Barkan, Avi Caciularu, Noam Koenigstein
Hence, we introduce SDR, a self-supervised method for document similarity that can be applied to documents of arbitrary length.
1 code implementation • 5 Apr 2021 • Itzik Malkiel, Sangtae Ahn, Valentina Taviani, Anne Menini, Lior Wolf, Christopher J. Hardy
Recent accelerated MRI reconstruction models have used Deep Neural Networks (DNNs) to reconstruct relatively high-quality images from highly undersampled k-space data, enabling much faster MRI scanning.
no code implementations • EACL 2021 • Itzik Malkiel, Lior Wolf
Language modeling with BERT consists of two phases of (i) unsupervised pre-training on unlabeled text, and (ii) fine-tuning for a specific supervised task.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Itzik Malkiel, Oren Barkan, Avi Caciularu, Noam Razin, Ori Katz, Noam Koenigstein
In addition, we introduce a new language understanding task for wine recommendations using similarities based on professional wine reviews.
1 code implementation • EMNLP 2021 • Itzik Malkiel, Lior Wolf
When training neural models, it is common to combine multiple loss terms.
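The standard practice referenced above is a hand-weighted sum of loss terms; balancing the weights is the difficulty such work addresses. A minimal sketch of that baseline (not the paper's proposed method; the function name and signature are illustrative):

```python
def combine_losses(losses, weights):
    """Standard weighted sum of loss terms, L = sum_i w_i * L_i.

    The weights w_i are hand-tuned hyperparameters; choosing them well
    is the problem motivating automatic loss balancing.
    """
    if len(losses) != len(weights):
        raise ValueError("one weight per loss term")
    return sum(w * l for w, l in zip(weights, losses))
```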
no code implementations • 26 Nov 2019 • Itzik Malkiel, Michael Mrejen, Lior Wolf, Haim Suchowski
Our model architecture is not limited to a closed set of nanostructure shapes, and can be trained for the design of any geometry.
1 code implementation • 5 Nov 2019 • Itzik Malkiel, Lior Wolf
In this work, we present a method that leverages BERT's fine-tuning phase to its fullest by applying an extensive number of parallel classifier heads, constrained to be orthogonal, while adaptively eliminating the weaker heads during training.
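The orthogonality constraint on the parallel heads can be illustrated with a simple pairwise penalty over the heads' flattened weight vectors. This is a hedged sketch under that assumption, not the paper's exact regularizer:

```python
def orthogonality_penalty(heads):
    """Sum of squared pairwise dot products between flattened head
    weight vectors.

    The penalty is zero iff all heads are mutually orthogonal; adding
    it to the training loss pushes the heads apart. A hypothetical
    formulation of the constraint, using plain lists for clarity.
    """
    penalty = 0.0
    for i in range(len(heads)):
        for j in range(i + 1, len(heads)):
            dot = sum(a * b for a, b in zip(heads[i], heads[j]))
            penalty += dot * dot
    return penalty
```

Two orthogonal heads (e.g. `[1, 0]` and `[0, 1]`) incur zero penalty, while two identical heads are penalized by the square of their dot product.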
1 code implementation • 14 Aug 2019 • Oren Barkan, Noam Razin, Itzik Malkiel, Ori Katz, Avi Caciularu, Noam Koenigstein
In this paper, we introduce Distilled Sentence Embedding (DSE), a model that is based on knowledge distillation from cross-attentive models, focusing on sentence-pair tasks.
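The distillation idea above can be sketched as training a student, which embeds each sentence independently, to reproduce the pair score of a cross-attentive teacher. The dot-product student score and squared-error objective are assumptions for illustration, not necessarily the paper's exact loss:

```python
def distillation_loss(student_u, student_v, teacher_score):
    """Squared error between the student's similarity and the teacher's.

    `student_u` and `student_v` are the two sentence embeddings computed
    independently (no cross-attention); `teacher_score` is the score the
    cross-attentive teacher assigned to the same pair. All names are
    illustrative.
    """
    student_score = sum(a * b for a, b in zip(student_u, student_v))
    return (student_score - teacher_score) ** 2
```

Because the student never attends across the pair, its embeddings can be precomputed and pair scoring reduces to a dot product, which is the efficiency motivation for this family of methods.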
no code implementations • 2 May 2019 • Itzik Malkiel, Sangtae Ahn, Valentina Taviani, Anne Menini, Lior Wolf, Christopher J. Hardy
Recent sparse MRI reconstruction models have used Deep Neural Networks (DNNs) to reconstruct relatively high-quality images from highly undersampled k-space data, enabling much faster MRI scanning.