Search Results for author: Noam Koenigstein

Found 29 papers, 13 papers with code

In Search of Truth: An Interrogation Approach to Hallucination Detection

1 code implementation 5 Mar 2024 Yakir Yehuda, Itzik Malkiel, Oren Barkan, Jonathan Weill, Royi Ronen, Noam Koenigstein

Despite the many advances of Large Language Models (LLMs) and their unprecedented rapid evolution, their impact on and integration into every facet of our daily lives remain limited for various reasons.

Hallucination

DiffMoog: a Differentiable Modular Synthesizer for Sound Matching

1 code implementation 23 Jan 2024 Noy Uzrad, Oren Barkan, Almog Elharar, Shlomi Shvartzman, Moshe Laufer, Lior Wolf, Noam Koenigstein

We introduce an open-source platform that comprises DiffMoog and an end-to-end sound matching framework.

Audio Synthesis

Visual Explanations via Iterated Integrated Attributions

1 code implementation ICCV 2023 Oren Barkan, Yehonatan Elisha, Yuval Asher, Amit Eshel, Noam Koenigstein

We introduce Iterated Integrated Attributions (IIA) - a generic method for explaining the predictions of vision models.
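
The name and abstract point to an attribution method in the integrated-gradients family, iterated over the model's internal representations. As background only (this is not the IIA algorithm itself), here is a minimal sketch of plain integrated gradients for a vision model; the model handle and the zero baseline are illustrative assumptions.

```python
# Background sketch only: plain integrated gradients, the attribution scheme the
# paper's name builds on. This is NOT the IIA algorithm itself; `model` and the
# zero baseline are illustrative assumptions.
import torch

def integrated_gradients(model, x, target_class, steps=50):
    """Approximate (x - baseline) * integral_0^1 dF/dx(baseline + a*(x - baseline)) da."""
    baseline = torch.zeros_like(x)                        # a common, but not mandatory, baseline
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1, 1, 1)
    path = baseline + alphas * (x - baseline)             # straight-line path: (steps, C, H, W)
    path.requires_grad_(True)
    score = model(path)[:, target_class].sum()            # class score at every point on the path
    grads = torch.autograd.grad(score, path)[0]           # dF/dx along the path
    return (x - baseline).squeeze(0) * grads.mean(dim=0)  # Riemann-sum attribution map

# usage (hypothetical classifier and image tensor of shape (1, 3, H, W)):
# heatmap = integrated_gradients(vision_model, image, target_class=243)
```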

Learning to Explain: A Model-Agnostic Framework for Explaining Black Box Models

1 code implementation 25 Oct 2023 Oren Barkan, Yuval Asher, Amit Eshel, Yehonatan Elisha, Noam Koenigstein

We present Learning to Explain (LTX), a model-agnostic framework designed to provide post-hoc explanations for vision models.

counterfactual

Deep Integrated Explanations

1 code implementation 23 Oct 2023 Oren Barkan, Yehonatan Elisha, Jonathan Weill, Yuval Asher, Amit Eshel, Noam Koenigstein

This paper presents Deep Integrated Explanations (DIX) - a universal method for explaining vision models.

Representation Learning via Variational Bayesian Networks

no code implementations 28 Jun 2023 Oren Barkan, Avi Caciularu, Idan Rejwan, Ori Katz, Jonathan Weill, Itzik Malkiel, Noam Koenigstein

We present Variational Bayesian Network (VBN) - a novel Bayesian entity representation learning model that utilizes hierarchical and relational side information and is particularly useful for modeling entities in the "long-tail", where the data is scarce.

Bayesian Inference Representation Learning

GPT-Calls: Enhancing Call Segmentation and Tagging by Generating Synthetic Conversations via Large Language Models

no code implementations 9 Jun 2023 Itzik Malkiel, Uri Alon, Yakir Yehuda, Shahar Keren, Oren Barkan, Royi Ronen, Noam Koenigstein

The online phase is applied to every call separately and scores the similarity between the transcribed conversation and the topic anchors found in the offline phase.

Segmentation TAG
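
The online phase described above amounts to scoring each transcribed utterance against the topic anchors learned offline. A minimal sketch, under the assumption that utterances and anchors are both represented as sentence-embedding vectors; the anchor construction, embedding model, and shapes are hypothetical, not the paper's exact pipeline.

```python
# Illustrative sketch only: cosine-similarity scoring of call utterances against
# topic "anchor" vectors, assuming both are already embedded by some sentence
# encoder. Anchor construction and shapes here are hypothetical.
import numpy as np

def score_call(utterance_embs: np.ndarray, topic_anchors: dict) -> dict:
    """Return, per topic, a per-utterance similarity score in [-1, 1]."""
    # L2-normalize utterances once so dot products become cosine similarities.
    utt = utterance_embs / np.linalg.norm(utterance_embs, axis=1, keepdims=True)
    scores = {}
    for topic, anchors in topic_anchors.items():
        anc = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
        sims = utt @ anc.T                    # (num_utterances, num_anchors)
        scores[topic] = sims.max(axis=1)      # best-matching anchor per utterance
    return scores

# usage (hypothetical shapes): 40 utterances, two topics with a handful of anchors each
# scores = score_call(np.random.randn(40, 384), {"billing": np.random.randn(5, 384),
#                                                "support": np.random.randn(7, 384)})
```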

Detecting Security Patches via Behavioral Data in Code Repositories

1 code implementation 4 Feb 2023 Nitzan Farhi, Noam Koenigstein, Yuval Shavitt

The vast majority of software today is developed collaboratively using version control tools such as Git.

Time Series

MetricBERT: Text Representation Learning via Self-Supervised Triplet Training

no code implementations 13 Aug 2022 Itzik Malkiel, Dvir Ginzburg, Oren Barkan, Avi Caciularu, Yoni Weill, Noam Koenigstein

We present MetricBERT, a BERT-based model that learns to embed text under a well-defined similarity metric while simultaneously adhering to the "traditional" masked-language task.

Representation Learning
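
The snippet describes training under a similarity metric while still performing the masked-language task, which suggests a joint objective. A minimal sketch of such a joint loss, assuming a triplet margin loss over pooled BERT embeddings plus the standard MLM loss; the [CLS] pooling, margin, and weighting are assumptions, not necessarily MetricBERT's exact choices.

```python
# Sketch of a joint "triplet + masked-LM" objective, in the spirit of the snippet.
# Pooling choice, margin and the weighting `lambda_mlm` are assumptions.
import torch
import torch.nn.functional as F
from transformers import BertForMaskedLM

model = BertForMaskedLM.from_pretrained("bert-base-uncased")

def pooled(input_ids, attention_mask):
    out = model.bert(input_ids=input_ids, attention_mask=attention_mask)
    return out.last_hidden_state[:, 0]        # [CLS] embedding as the text representation

def joint_loss(anchor, positive, negative, mlm_batch, lambda_mlm=1.0, margin=1.0):
    # Metric part: pull anchor/positive together, push anchor/negative apart.
    triplet = F.triplet_margin_loss(pooled(**anchor), pooled(**positive), pooled(**negative),
                                    margin=margin)
    # "Traditional" masked-language part: mlm_batch must include `labels`
    # with -100 outside the masked positions.
    mlm = model(**mlm_batch).loss
    return triplet + lambda_mlm * mlm
```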

Interpreting BERT-based Text Similarity via Activation and Saliency Maps

no code implementations 13 Aug 2022 Itzik Malkiel, Dvir Ginzburg, Oren Barkan, Avi Caciularu, Jonathan Weill, Noam Koenigstein

Recently, there has been growing interest in the ability of Transformer-based models to produce meaningful embeddings of text with several applications, such as text similarity.

text similarity

Grad-SAM: Explaining Transformers via Gradient Self-Attention Maps

no code implementations 23 Apr 2022 Oren Barkan, Edan Hauon, Avi Caciularu, Ori Katz, Itzik Malkiel, Omri Armstrong, Noam Koenigstein

Transformer-based language models significantly advanced the state-of-the-art in many linguistic tasks.

Cold Item Integration in Deep Hybrid Recommenders via Tunable Stochastic Gates

no code implementations 12 Dec 2021 Oren Barkan, Roy Hirsch, Ori Katz, Avi Caciularu, Jonathan Weill, Noam Koenigstein

Next, we propose a novel hybrid recommendation algorithm that bridges these two conflicting objectives and enables a harmonized balance between preserving high accuracy for warm items and effectively promoting completely cold items.

Collaborative Filtering
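
The title refers to tunable stochastic gates for integrating cold items into a hybrid recommender. A rough illustration of the general idea, not the paper's exact architecture: a Bernoulli gate randomly hides the collaborative (warm) embedding during training so the model also learns to rely on content features, with the keep probability acting as the tunable knob; all names and numbers below are hypothetical.

```python
# Rough illustration only: a Bernoulli "stochastic gate" that randomly drops the
# collaborative-filtering embedding during training, forcing the model to also
# learn from content features so completely cold items can still be served.
# The keep probability `p_keep_cf` plays the role of a tunable knob; the paper's
# exact gating mechanism may differ.
import torch
import torch.nn as nn

class GatedItemEncoder(nn.Module):
    def __init__(self, num_items, num_content_features, dim=64, p_keep_cf=0.8):
        super().__init__()
        self.cf_emb = nn.Embedding(num_items, dim)            # warm, behaviour-based signal
        self.content = nn.Linear(num_content_features, dim)   # always available, also for cold items
        self.p_keep_cf = p_keep_cf

    def forward(self, item_ids, content_feats):
        cf = self.cf_emb(item_ids)
        ct = self.content(content_feats)
        if self.training:
            # Sample one gate per item in the batch; gate == 0 simulates a cold item.
            gate = torch.bernoulli(torch.full((item_ids.shape[0], 1), self.p_keep_cf,
                                              device=cf.device))
        else:
            gate = torch.ones((item_ids.shape[0], 1), device=cf.device)
        return gate * cf + ct                                  # combined item representation
```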

GAM: Explainable Visual Similarity and Classification via Gradient Activation Maps

no code implementations 2 Sep 2021 Oren Barkan, Omri Armstrong, Amir Hertz, Avi Caciularu, Ori Katz, Itzik Malkiel, Noam Koenigstein

The algorithmic advantages of GAM are explained in detail and validated empirically, showing that GAM outperforms its alternatives across various tasks and datasets.

Classification

Forecasting CPI Inflation Components with Hierarchical Recurrent Neural Networks

1 code implementation 16 Nov 2020 Oren Barkan, Jonathan Benchimol, Itamar Caspi, Eliya Cohen, Allon Hammer, Noam Koenigstein

We present a hierarchical architecture based on Recurrent Neural Networks (RNNs) for predicting disaggregated inflation components of the Consumer Price Index (CPI).
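
As a point of reference only, here is a plain GRU forecaster for a single inflation component; it does not reproduce the hierarchical coupling across CPI components that the paper describes, and the input window and horizon are arbitrary assumptions.

```python
# Baseline sketch only: a single-component GRU forecaster. The paper's model couples
# components through the CPI hierarchy, which is not reproduced here.
import torch
import torch.nn as nn

class ComponentForecaster(nn.Module):
    def __init__(self, input_size=1, hidden_size=32, horizon=1):
        super().__init__()
        self.rnn = nn.GRU(input_size, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, horizon)    # predict the next `horizon` inflation values

    def forward(self, history):                        # history: (batch, time, input_size)
        _, h_n = self.rnn(history)                     # h_n: (1, batch, hidden_size)
        return self.head(h_n.squeeze(0))               # (batch, horizon)

# usage (hypothetical data): 12 months of a component's monthly inflation rate
# y_hat = ComponentForecaster()(torch.randn(8, 12, 1))
```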

Bayesian Hierarchical Words Representation Learning

no code implementations ACL 2020 Oren Barkan, Idan Rejwan, Avi Caciularu, Noam Koenigstein

BHWR facilitates Variational Bayes word representation learning combined with semantic taxonomy modeling via hierarchical priors.

Representation Learning

Autoencoders

no code implementations 12 Mar 2020 Dor Bank, Noam Koenigstein, Raja Giryes

An autoencoder is a specific type of neural network that is mainly designed to encode the input into a compressed and meaningful representation, and then decode it back such that the reconstructed input is as similar as possible to the original.
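
The definition above translates directly into code: an encoder compresses the input, a decoder reconstructs it, and training minimizes a reconstruction loss. A minimal sketch; the layer sizes and MSE loss are arbitrary illustrative choices.

```python
# Minimal autoencoder sketch matching the definition above: encode into a compressed
# representation, decode back, and train so the reconstruction is as close as
# possible to the original input. Layer sizes are arbitrary.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))     # compressed code
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, input_dim))      # reconstruction

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
x = torch.rand(16, 784)                       # e.g. a batch of flattened images
loss = nn.functional.mse_loss(model(x), x)    # reconstruction objective
loss.backward()
```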

Neural Attentive Multiview Machines

no code implementations 18 Feb 2020 Oren Barkan, Ori Katz, Noam Koenigstein

An important problem in multiview representation learning is finding the optimal combination of views with respect to the specific task at hand.

Representation Learning
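
Finding a task-specific combination of views is commonly handled by attending over the per-view embeddings. A generic illustrative sketch of that pattern, not necessarily the exact architecture of the paper; the dimensions are assumptions.

```python
# Generic illustration of attention-weighted view combination: each view contributes
# an embedding, and learned attention scores decide how to mix them for the task.
# This is a common pattern, not necessarily the paper's exact architecture.
import torch
import torch.nn as nn

class AttentiveViewCombiner(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.scorer = nn.Linear(dim, 1)                      # one relevance score per view

    def forward(self, views):                                # views: (batch, num_views, dim)
        weights = torch.softmax(self.scorer(views), dim=1)   # (batch, num_views, 1)
        return (weights * views).sum(dim=1)                  # (batch, dim) combined representation

# usage: combine 3 views of 64-dimensional embeddings
# combined = AttentiveViewCombiner(64)(torch.randn(8, 3, 64))
```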

Attentive Item2Vec: Neural Attentive User Representations

no code implementations 15 Feb 2020 Oren Barkan, Avi Caciularu, Ori Katz, Noam Koenigstein

However, a certain early movie may suddenly become more relevant in the presence of a popular sequel.

Recommendation Systems

Scalable Attentive Sentence-Pair Modeling via Distilled Sentence Embedding

1 code implementation 14 Aug 2019 Oren Barkan, Noam Razin, Itzik Malkiel, Ori Katz, Avi Caciularu, Noam Koenigstein

In this paper, we introduce Distilled Sentence Embedding (DSE) - a model that is based on knowledge distillation from cross-attentive models, focusing on sentence-pair tasks.

Knowledge Distillation Natural Language Understanding +4
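
Distillation from a cross-attentive teacher to a sentence-embedding student can be sketched as follows: the teacher scores the concatenated sentence pair, the student encodes each sentence independently, and the student is trained to reproduce the teacher's score. The pooling, scoring function, and loss below are assumptions rather than DSE's exact recipe, and the teacher is assumed to be already fine-tuned on the sentence-pair task.

```python
# Illustrative distillation step in the spirit of the snippet: a cross-attention
# teacher scores sentence pairs jointly, a siamese student embeds each sentence
# separately and is trained to match the teacher's score. Pooling, scoring and
# scale calibration are simplified assumptions; the teacher is assumed fine-tuned.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
teacher = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")  # cross-attentive scorer
student = AutoModel.from_pretrained("bert-base-uncased")                           # encodes sentences independently

def embed(texts):
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    return student(**batch).last_hidden_state[:, 0]                 # [CLS] pooling (assumption)

def distillation_loss(sents_a, sents_b):
    with torch.no_grad():                                           # teacher sees both sentences jointly
        pair = tok(sents_a, sents_b, padding=True, truncation=True, return_tensors="pt")
        teacher_score = teacher(**pair).logits[:, 0]
    student_score = (embed(sents_a) * embed(sents_b)).sum(dim=-1)   # dot-product similarity
    return F.mse_loss(student_score, teacher_score)                 # student mimics the teacher
```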

The Bayesian Low-Rank Determinantal Point Process Mixture Model

no code implementations 15 Aug 2016 Mike Gartrell, Ulrich Paquet, Noam Koenigstein

Determinantal point processes (DPPs) are an elegant model for encoding probabilities over subsets (such as shopping baskets) of a ground set (such as an item catalog).

Point Processes Product Recommendation
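
Concretely, an L-ensemble DPP with kernel L assigns a subset Y of the ground set the probability det(L_Y) / det(L + I), where L_Y is the submatrix of L indexed by Y. A small numpy sketch of that quantity; the toy kernel in the usage comment is hypothetical.

```python
# Basic DPP subset probability: P(Y) = det(L_Y) / det(L + I), where L is a PSD kernel
# over the ground set (e.g. an item catalog) and L_Y the submatrix indexed by Y.
import numpy as np

def dpp_log_prob(L: np.ndarray, subset) -> float:
    n = L.shape[0]
    _, logdet_y = np.linalg.slogdet(L[np.ix_(subset, subset)])   # log det(L_Y)
    _, logdet_z = np.linalg.slogdet(L + np.eye(n))               # normalizer over all subsets
    return logdet_y - logdet_z

# toy usage: a random PSD kernel over 5 items, probability of the basket {0, 2}
# A = np.random.randn(5, 5); L = A @ A.T
# print(np.exp(dpp_log_prob(L, [0, 2])))
```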

Item2Vec: Neural Item Embedding for Collaborative Filtering

7 code implementations 14 Mar 2016 Oren Barkan, Noam Koenigstein

Many Collaborative Filtering (CF) algorithms are item-based in the sense that they analyze item-item relations in order to produce item similarities.

Collaborative Filtering
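
Item2Vec applies skip-gram with negative sampling (SGNS), as in word2vec, to sets of items such as baskets or playlists, so that items consumed together become each other's context. A minimal sketch with gensim on hypothetical basket data; the hyperparameters are illustrative.

```python
# Minimal Item2Vec-style sketch: train SGNS (word2vec) embeddings where each
# "sentence" is a set of items consumed together, so co-occurring items become
# each other's context. The basket data below is hypothetical.
from gensim.models import Word2Vec

baskets = [
    ["item_12", "item_7", "item_33"],
    ["item_7", "item_90"],
    ["item_12", "item_33", "item_54", "item_7"],
]

model = Word2Vec(
    sentences=baskets,
    vector_size=32,     # embedding dimension
    sg=1, negative=5,   # skip-gram with negative sampling
    window=100,         # large window: items in a set have no order, so the whole set is context
    min_count=1,
    epochs=50,
)

# Item-item similarities, usable for item-based CF:
print(model.wv.most_similar("item_7", topn=3))
```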

Low-Rank Factorization of Determinantal Point Processes for Recommendation

1 code implementation 17 Feb 2016 Mike Gartrell, Ulrich Paquet, Noam Koenigstein

In this work we present a new method for learning the DPP kernel from observed data using a low-rank factorization of this kernel.

Point Processes Product Recommendation
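
With a low-rank factorization L = V V^T (V of size n x k), the normalizer det(L + I_n) equals det(I_k + V^T V) by the matrix determinant identity, so the likelihood never requires forming an n x n matrix. A small numpy sketch of the resulting subset log-likelihood; the toy factor matrix in the usage comment is hypothetical.

```python
# Sketch of the low-rank DPP likelihood: with kernel L = V V^T (V is n x k, k << n),
# the normalizer det(L + I_n) equals det(I_k + V^T V), so no n x n matrix is formed.
import numpy as np

def low_rank_dpp_log_prob(V: np.ndarray, subset) -> float:
    k = V.shape[1]
    V_y = V[subset]                                        # rows of V for the observed subset
    _, logdet_y = np.linalg.slogdet(V_y @ V_y.T)           # log det(L_Y)
    _, logdet_z = np.linalg.slogdet(np.eye(k) + V.T @ V)   # log det(L + I) via the k x k identity
    return logdet_y - logdet_z

# toy usage: a catalog of 1000 items with rank-10 factors, likelihood of basket {3, 42, 7}
# V = 0.1 * np.random.randn(1000, 10)
# print(low_rank_dpp_log_prob(V, [3, 42, 7]))
```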

Scalable Bayesian Modelling of Paired Symbols

no code implementations 9 Sep 2014 Ulrich Paquet, Noam Koenigstein, Ole Winther

We present a novel, scalable Bayesian approach to modelling the occurrence of pairs of symbols (i, j) drawn from a large vocabulary.

One-class Collaborative Filtering with Random Graphs: Annotated Version

no code implementations 26 Sep 2013 Ulrich Paquet, Noam Koenigstein

The bane of one-class collaborative filtering is interpreting and modelling the latent signal from the missing class.

Collaborative Filtering Variational Inference
