Search Results for author: Itzik Malkiel

Found 19 papers, 9 papers with code

In Search of Truth: An Interrogation Approach to Hallucination Detection

1 code implementation • 5 Mar 2024 • Yakir Yehuda, Itzik Malkiel, Oren Barkan, Jonathan Weill, Royi Ronen, Noam Koenigstein

Despite the many advances of Large Language Models (LLMs) and their unprecedented rapid evolution, their impact and integration into every facet of our daily lives are limited for various reasons.

Hallucination

Representation Learning via Variational Bayesian Networks

no code implementations • 28 Jun 2023 • Oren Barkan, Avi Caciularu, Idan Rejwan, Ori Katz, Jonathan Weill, Itzik Malkiel, Noam Koenigstein

We present Variational Bayesian Network (VBN) - a novel Bayesian entity representation learning model that utilizes hierarchical and relational side information and is particularly useful for modeling entities in the ``long-tail'', where the data is scarce.

Bayesian Inference • Representation Learning
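
A minimal sketch of the hierarchical-prior idea behind VBN, assuming a Gaussian posterior per entity and a unit-variance prior centered on the entity's parent category; all names here are hypothetical, not the authors' code:

    # Hierarchical Gaussian prior for entity embeddings: each entity's
    # posterior is pulled toward its parent category's embedding, letting
    # rare ("long-tail") entities borrow statistical strength.
    import torch
    import torch.nn as nn

    class HierarchicalGaussianEmbedding(nn.Module):
        def __init__(self, n_entities, n_parents, dim, parent_of):
            super().__init__()
            self.mu = nn.Embedding(n_entities, dim)        # posterior means
            self.log_var = nn.Embedding(n_entities, dim)   # posterior log-variances
            self.parent_mu = nn.Embedding(n_parents, dim)  # prior means per category
            self.register_buffer("parent_of", parent_of)   # entity -> parent index

        def forward(self, entity_ids):
            mu = self.mu(entity_ids)
            log_var = self.log_var(entity_ids)
            z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()  # reparameterization
            prior_mu = self.parent_mu(self.parent_of[entity_ids])
            # KL( N(mu, sigma^2) || N(prior_mu, I) ), summed over dimensions
            kl = 0.5 * ((mu - prior_mu).pow(2) + log_var.exp() - log_var - 1).sum(-1)
            return z, kl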

GPT-Calls: Enhancing Call Segmentation and Tagging by Generating Synthetic Conversations via Large Language Models

no code implementations • 9 Jun 2023 • Itzik Malkiel, Uri Alon, Yakir Yehuda, Shahar Keren, Oren Barkan, Royi Ronen, Noam Koenigstein

The online phase is applied to every call separately and scores the similarity between the transcribed conversation and the topic anchors found in the offline phase.

Segmentation • TAG
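
A minimal sketch of the online scoring step described above, with hypothetical names (this is not the paper's implementation): each utterance of the transcribed call is scored against the topic anchors produced by the offline phase.

    # Assign each utterance the topic whose anchor embedding it matches best.
    import numpy as np

    def score_call(utterance_vecs, anchor_vecs, topic_names):
        # utterance_vecs: (U, d) embeddings of the call's utterances
        # anchor_vecs:    (T, d) topic-anchor embeddings from the offline phase
        u = utterance_vecs / np.linalg.norm(utterance_vecs, axis=1, keepdims=True)
        a = anchor_vecs / np.linalg.norm(anchor_vecs, axis=1, keepdims=True)
        sims = u @ a.T                      # (U, T) cosine similarities
        best = sims.argmax(axis=1)          # best topic per utterance
        return [(topic_names[t], float(sims[i, t])) for i, t in enumerate(best)]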

Interpreting BERT-based Text Similarity via Activation and Saliency Maps

no code implementations • 13 Aug 2022 • Itzik Malkiel, Dvir Ginzburg, Oren Barkan, Avi Caciularu, Jonathan Weill, Noam Koenigstein

Recently, there has been growing interest in the ability of Transformer-based models to produce meaningful embeddings of text with several applications, such as text similarity.

text similarity

MetricBERT: Text Representation Learning via Self-Supervised Triplet Training

no code implementations • 13 Aug 2022 • Itzik Malkiel, Dvir Ginzburg, Oren Barkan, Avi Caciularu, Yoni Weill, Noam Koenigstein

We present MetricBERT, a BERT-based model that learns to embed text under a well-defined similarity metric while simultaneously adhering to the ``traditional'' masked-language task.

Representation Learning
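
A minimal sketch of the joint objective described above, assuming a triplet margin loss over pooled embeddings combined with the standard masked-language-modeling loss; weights and names are illustrative, not the released model:

    # Triplet metric learning + MLM, optimized together as one loss.
    import torch
    import torch.nn.functional as F

    def metric_bert_loss(anchor, positive, negative, mlm_logits, mlm_labels,
                         margin=0.5, alpha=1.0):
        # anchor/positive/negative: (B, d) pooled embeddings of three texts
        d_pos = 1.0 - F.cosine_similarity(anchor, positive)
        d_neg = 1.0 - F.cosine_similarity(anchor, negative)
        triplet = F.relu(d_pos - d_neg + margin).mean()
        # mlm_logits: (B, L, V); mlm_labels: (B, L), -100 on unmasked tokens
        mlm = F.cross_entropy(mlm_logits.transpose(1, 2), mlm_labels,
                              ignore_index=-100)
        return triplet + alpha * mlm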

Grad-SAM: Explaining Transformers via Gradient Self-Attention Maps

no code implementations • 23 Apr 2022 • Oren Barkan, Edan Hauon, Avi Caciularu, Ori Katz, Itzik Malkiel, Omri Armstrong, Noam Koenigstein

Transformer-based language models have significantly advanced the state of the art in many linguistic tasks.
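
Based on the title alone, a plausible sketch of gradient-weighted self-attention importance (an assumption, not the authors' exact formulation): attention maps are weighted by the rectified gradients of the model score with respect to them, then averaged into per-token scores.

    # Token importance from attention maps and their gradients.
    import torch

    def grad_sam_scores(attentions, score):
        # attentions: list of (B, H, L, L) attention tensors in the graph
        # score: scalar model output to explain
        grads = torch.autograd.grad(score, attentions, retain_graph=True)
        weighted = [a * g.clamp(min=0) for a, g in zip(attentions, grads)]
        stacked = torch.stack(weighted)        # (layers, B, H, L, L)
        return stacked.mean(dim=(0, 2, 3))     # (B, L) per-token importance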

Self-Supervised Transformers for fMRI representation

2 code implementations • 10 Dec 2021 • Itzik Malkiel, Gony Rosenman, Lior Wolf, Talma Hendler

We present TFF, which is a Transformer framework for the analysis of functional Magnetic Resonance Imaging (fMRI) data.

Gender Prediction
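
A hypothetical sketch of a Transformer over fMRI frame sequences (not the released TFF architecture): each frame is projected to a token embedding and a CLS-style token is pooled for prediction, e.g. the gender-prediction task tagged above.

    # Sequence model over per-frame fMRI features.
    import torch
    import torch.nn as nn

    class FMRISequenceTransformer(nn.Module):
        def __init__(self, voxel_dim, d_model=256, n_heads=8, n_layers=4,
                     n_classes=2):
            super().__init__()
            self.proj = nn.Linear(voxel_dim, d_model)      # per-frame embedding
            self.cls = nn.Parameter(torch.zeros(1, 1, d_model))
            layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, n_layers)
            self.head = nn.Linear(d_model, n_classes)

        def forward(self, frames):                          # (B, T, voxel_dim)
            x = self.proj(frames)
            x = torch.cat([self.cls.expand(x.size(0), -1, -1), x], dim=1)
            return self.head(self.encoder(x)[:, 0])         # predict from CLS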

Caption Enriched Samples for Improving Hateful Memes Detection

1 code implementation • EMNLP 2021 • Efrat Blaier, Itzik Malkiel, Lior Wolf

The recently introduced hateful meme challenge demonstrates the difficulty of determining whether a meme is hateful or not.

Image Captioning
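
A minimal sketch of the caption-enrichment idea implied by the title (hypothetical helpers, not the paper's pipeline): an off-the-shelf captioner describes the meme image, and the caption is fused with the meme's overlaid text before classification.

    # Enrich the meme text with a generated caption, then classify.
    def classify_meme(image, meme_text, captioner, classifier):
        caption = captioner(image)                  # e.g. "a smiling man in a suit"
        enriched = f"{meme_text} [SEP] {caption}"   # fuse text and visual context
        return classifier(enriched)                 # probability the meme is hateful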

GAM: Explainable Visual Similarity and Classification via Gradient Activation Maps

no code implementations • 2 Sep 2021 • Oren Barkan, Omri Armstrong, Amir Hertz, Avi Caciularu, Ori Katz, Itzik Malkiel, Noam Koenigstein

The algorithmic advantages of GAM are explained in detail, and validated empirically, where it is shown that GAM outperforms its alternatives across various tasks and datasets.

Classification
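
In the spirit of the title (an assumption, not the paper's exact algorithm), a Grad-CAM-style sketch of a gradient activation map: feature activations are weighted element-wise by their rectified gradients with respect to the similarity or class score, then summed over channels into a heatmap.

    # Spatial heatmap from activations and their gradients.
    import torch

    def gradient_activation_map(activations, score):
        # activations: (B, C, H, W) feature maps in the graph; score: scalar
        grads, = torch.autograd.grad(score, activations, retain_graph=True)
        heatmap = (activations * grads.clamp(min=0)).sum(dim=1)  # (B, H, W)
        return heatmap.clamp(min=0)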

Adaptive Gradient Balancing for Undersampled MRI Reconstruction and Image-to-Image Translation

1 code implementation • 5 Apr 2021 • Itzik Malkiel, Sangtae Ahn, Valentina Taviani, Anne Menini, Lior Wolf, Christopher J. Hardy

Recent accelerated MRI reconstruction models have used Deep Neural Networks (DNNs) to reconstruct relatively high-quality images from highly undersampled k-space data, enabling much faster MRI scanning.

Generative Adversarial Network • Image-to-Image Translation • +2
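
A simplified sketch of one way to balance the adversarial and reconstruction terms (illustrative constants and names, not the paper's exact rule): the adversarial loss is rescaled so the norm of its gradient stays a fixed fraction of the reconstruction-loss gradient norm, preventing either term from dominating training.

    # Rescale the adversarial term by the ratio of gradient norms.
    import torch

    def balanced_loss(recon_loss, adv_loss, params, ratio=0.1):
        g_rec = torch.autograd.grad(recon_loss, params, retain_graph=True)
        g_adv = torch.autograd.grad(adv_loss, params, retain_graph=True)
        n_rec = torch.sqrt(sum(g.pow(2).sum() for g in g_rec))
        n_adv = torch.sqrt(sum(g.pow(2).sum() for g in g_adv))
        scale = (ratio * n_rec / (n_adv + 1e-12)).detach()
        return recon_loss + scale * adv_loss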

Maximal Multiverse Learning for Promoting Cross-Task Generalization of Fine-Tuned Language Models

no code implementations • EACL 2021 • Itzik Malkiel, Lior Wolf

Language modeling with BERT consists of two phases of (i) unsupervised pre-training on unlabeled text, and (ii) fine-tuning for a specific supervised task.

Language Modelling • Unsupervised Pre-training
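
A minimal sketch of phase (ii), fine-tuning a pre-trained encoder on a supervised task (hypothetical names; `encoder` stands for a BERT-style model whose weights come from phase (i)):

    # One fine-tuning step: classify from the [CLS] token and update
    # both the encoder and the task head.
    import torch
    import torch.nn as nn

    def finetune_step(encoder, head, batch, optimizer):
        # batch: dict with input_ids, attention_mask (B, L) and labels (B,)
        hidden = encoder(batch["input_ids"], batch["attention_mask"])  # (B, L, d)
        logits = head(hidden[:, 0])                 # classify from [CLS] token
        loss = nn.functional.cross_entropy(logits, batch["labels"])
        loss.backward()                             # updates encoder AND head
        optimizer.step()
        optimizer.zero_grad()
        return loss.item()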

MTAdam: Automatic Balancing of Multiple Training Loss Terms

1 code implementation • EMNLP 2021 • Itzik Malkiel, Lior Wolf

When training neural models, it is common to combine multiple loss terms.
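
A simplified sketch of the balancing principle (not the full MTAdam algorithm, which folds this into Adam's moment estimates): each loss term's gradient is rescaled to the magnitude of an anchor term before the update, removing the need for hand-tuned loss weights.

    # Combine per-term gradients after normalizing them to the anchor's scale.
    import torch

    def balanced_gradient(losses, params, anchor=0):
        grads = [torch.autograd.grad(l, params, retain_graph=True) for l in losses]
        norms = [torch.sqrt(sum(g.pow(2).sum() for g in gs)) for gs in grads]
        total = [torch.zeros_like(p) for p in params]
        for gs, n in zip(grads, norms):
            scale = norms[anchor] / (n + 1e-12)     # match anchor magnitude
            for t, g in zip(total, gs):
                t.add_(scale * g)
        return total                                 # feed into any optimizer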

Spectra2pix: Generating Nanostructure Images from Spectra

no code implementations • 26 Nov 2019 • Itzik Malkiel, Michael Mrejen, Lior Wolf, Haim Suchowski

Our model architecture is not limited to a closed set of nanostructure shapes, and can be trained for the design of any geometry.

MML: Maximal Multiverse Learning for Robust Fine-Tuning of Language Models

1 code implementation • 5 Nov 2019 • Itzik Malkiel, Lior Wolf

In this work, we present a method that leverages BERT's fine-tuning phase to its fullest, by applying an extensive number of parallel classifier heads, which are enforced to be orthogonal, while adaptively eliminating the weaker heads during training.

Unsupervised Pre-training
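
A minimal sketch of the parallel-heads idea described above (assumptions, not the released code): many classifier heads share the encoder, an orthogonality penalty decorrelates their weights, and the weakest heads are eliminated as training proceeds.

    # Parallel orthogonal classifier heads with adaptive pruning.
    import torch
    import torch.nn as nn

    class MultiverseHeads(nn.Module):
        def __init__(self, hidden, n_classes, n_heads=16):
            super().__init__()
            self.heads = nn.ModuleList(nn.Linear(hidden, n_classes)
                                       for _ in range(n_heads))
            self.active = list(range(n_heads))    # indices of surviving heads

        def forward(self, pooled):                # (B, hidden)
            return [self.heads[i](pooled) for i in self.active]

        def orthogonality_penalty(self):
            W = torch.stack([self.heads[i].weight.flatten() for i in self.active])
            W = nn.functional.normalize(W, dim=1)
            gram = W @ W.T                        # pairwise cosine similarities
            off_diag = gram - torch.eye(len(self.active), device=gram.device)
            return off_diag.pow(2).sum()

        def prune_weakest(self, head_scores):     # e.g. per-head accuracy
            if len(self.active) > 1:
                worst = min(self.active, key=lambda i: head_scores[i])
                self.active.remove(worst)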

Scalable Attentive Sentence-Pair Modeling via Distilled Sentence Embedding

1 code implementation • 14 Aug 2019 • Oren Barkan, Noam Razin, Itzik Malkiel, Ori Katz, Avi Caciularu, Noam Koenigstein

In this paper, we introduce Distilled Sentence Embedding (DSE) - a model that is based on knowledge distillation from cross-attentive models, focusing on sentence-pair tasks.

Knowledge Distillation • Natural Language Understanding • +4
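
A minimal sketch of the distillation objective (hypothetical names): a fast bi-encoder scores a sentence pair by the similarity of independently computed embeddings, regressed toward the score of the slow cross-attentive teacher (e.g. BERT over the concatenated pair).

    # Distill cross-encoder pair scores into a bi-encoder.
    import torch.nn.functional as F

    def dse_distillation_loss(student_emb_a, student_emb_b, teacher_scores):
        # student_emb_a/b: (B, d) independent sentence embeddings
        student_scores = F.cosine_similarity(student_emb_a, student_emb_b)
        return F.mse_loss(student_scores, teacher_scores)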

Conditional WGANs with Adaptive Gradient Balancing for Sparse MRI Reconstruction

no code implementations • 2 May 2019 • Itzik Malkiel, Sangtae Ahn, Valentina Taviani, Anne Menini, Lior Wolf, Christopher J. Hardy

Recent sparse MRI reconstruction models have used Deep Neural Networks (DNNs) to reconstruct relatively high-quality images from highly undersampled k-space data, enabling much faster MRI scanning.

Generative Adversarial Network • MRI Reconstruction
