no code implementations • 2 Nov 2023 • Ella Rabinovich, Samuel Ackerman, Orna Raz, Eitan Farchi, Ateret Anaby-Tavor
Semantic consistency of a language model is broadly defined as the model's ability to produce semantically-equivalent outputs, given semantically-equivalent inputs.
1 code implementation • 23 Oct 2023 • Samuel Ackerman, George Kour, Eitan Farchi
We quantify this quality by constructing a Known-Similarity Corpora set from two paraphrase corpora and calculating the distance between paired corpora from it.
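The idea of a Known-Similarity Corpora set can be sketched as follows: mix two corpora at known proportions, so that the distance between the pure corpus and each mixture should grow with the mixing fraction. This is a minimal illustrative sketch only — the bag-of-words cosine distance and the helper names (`bow_vector`, `known_similarity_corpora`) are stand-ins, not the paper's actual distance measure.

```python
from collections import Counter
import math

def bow_vector(corpus):
    """Bag-of-words relative-frequency vector for a list of sentences."""
    counts = Counter(w for s in corpus for w in s.lower().split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def cosine_distance(u, v):
    """1 - cosine similarity between two sparse frequency vectors."""
    dot = sum(x * v.get(w, 0.0) for w, x in u.items())
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return 1.0 - dot / (nu * nv)

def known_similarity_corpora(corpus_a, corpus_b, steps=5):
    """Build corpora mixing A and B at known proportions 0, 1/steps, ..., 1.

    At proportion p, a fraction p of the sentences comes from B; a good
    corpus-distance measure should grow roughly monotonically with p.
    """
    n = min(len(corpus_a), len(corpus_b))
    mixes = []
    for i in range(steps + 1):
        k = round(n * i / steps)
        mixes.append((i / steps, corpus_b[:k] + corpus_a[k:n]))
    return mixes
```

With two toy corpora of disjoint vocabulary, the distance from the pure corpus to each mixture increases with the mixing proportion, which is the calibration property the known-similarity construction provides.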
no code implementations • 17 Oct 2023 • Dipak Wani, Samuel Ackerman, Eitan Farchi, Xiaotong Liu, Hau-wen Chang, Sarasi Lalithsena
Logs enable the monitoring of infrastructure status and the performance of associated applications.
no code implementations • 28 May 2023 • Ella Rabinovich, Matan Vetzler, Samuel Ackerman, Ateret Anaby-Tavor
Data drift is a change in a model's input data, and is one of the key factors leading to the degradation of machine learning model performance over time.
no code implementations • 14 May 2023 • Samuel Ackerman, Axel Bendavid, Eitan Farchi, Orna Raz
The approach we propose is to separate the observations that are the most likely to be predicted incorrectly into 'attention sets'.
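A minimal sketch of the attention-set idea, assuming the likelihood of incorrect prediction is proxied by the model's top-class confidence: observations are partitioned into buckets, with the low-confidence bucket flagged for attention. The thresholds and bucket names here are illustrative assumptions, not the paper's procedure.

```python
def attention_sets(confidences, low=0.6, high=0.9):
    """Partition observations by predicted-class confidence.

    confidences: max-class probability per observation.
    Returns index buckets: 'reject' (most likely wrong, needs review),
    'uncertain', and 'accept'. Thresholds are illustrative only.
    """
    sets = {"reject": [], "uncertain": [], "accept": []}
    for i, p in enumerate(confidences):
        if p < low:
            sets["reject"].append(i)
        elif p < high:
            sets["uncertain"].append(i)
        else:
            sets["accept"].append(i)
    return sets
```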
2 code implementations • 29 Nov 2022 • George Kour, Samuel Ackerman, Orna Raz, Eitan Farchi, Boaz Carmeli, Ateret Anaby-Tavor
The ability to compare the semantic similarity between text corpora is important in a variety of natural language processing applications.
no code implementations • 2 Jan 2022 • Samuel Ackerman, Guy Barash, Eitan Farchi, Orna Raz, Onn Shehory
The crafting of machine learning (ML) based systems requires statistical control throughout their life cycle.
no code implementations • 22 Dec 2021 • George Kour, Marcel Zalmanovici, Orna Raz, Samuel Ackerman, Ateret Anaby-Tavor
Testing Machine Learning (ML) models and AI-Infused Applications (AIIAs), or systems that contain ML models, is highly challenging.
no code implementations • 10 Nov 2021 • Samuel Ackerman, Orna Raz, Marcel Zalmanovici, Aviad Zlotnick
The assumption underlying the theoretical and empirical performance guarantees of statistical ML is that the distribution of the training data is representative of the production data distribution.
no code implementations • 9 Nov 2021 • Samuel Ackerman, Parijat Dube, Eitan Farchi
It is thus desirable to monitor usage patterns and identify when the system is used in a way it was never used before.
no code implementations • 24 Oct 2021 • Eliran Roffe, Samuel Ackerman, Orna Raz, Eitan Farchi
We thus use a set of learned strong polynomial relations to identify drift.
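The mechanism can be sketched like this: learn a strong relation between features on reference data, then flag drift when the relation's residuals blow up on new data. As a hedged stand-in, the sketch below uses a degree-1 least-squares fit in place of the learned polynomial relations; the `factor` threshold is an assumption for illustration.

```python
def fit_line(xs, ys):
    """Least-squares fit y ~ a*x + b (a degree-1 stand-in for a
    learned polynomial relation between two features)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sxy / sxx
    return a, my - a * mx

def relation_drift(train, new, factor=3.0):
    """Flag drift when residuals of the learned relation grow on new data.

    train, new: (xs, ys) pairs of feature columns.
    Returns True if the new RMSE exceeds `factor` times the training RMSE.
    """
    a, b = fit_line(*train)

    def rmse(xs, ys):
        return (sum((y - (a * x + b)) ** 2
                    for x, y in zip(xs, ys)) / len(xs)) ** 0.5

    base = rmse(*train)
    return rmse(*new) > factor * max(base, 1e-12)
```

Data obeying the same relation passes quietly, while data violating it triggers the drift flag.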
no code implementations • 11 Oct 2021 • Samuel Ackerman, Eitan Farchi, Orna Raz, Marcel Zalmanovici, Maya Zohar
A user may want to know where in the feature space observations are concentrated, and where the space is sparse or empty.
no code implementations • 6 Sep 2021 • Samuel Ackerman, Sanjib Choudhury, Nirmit Desai, Eitan Farchi, Dan Gisolfi, Andrew Hicks, Saritha Route, Diptikalyan Saha
The API economy is driving the digital transformation of business applications across hybrid Cloud and edge environments.
no code implementations • 12 Aug 2021 • Samuel Ackerman, Orna Raz, Marcel Zalmanovici
In this paper we show the feasibility of automatically extracting feature models that result in explainable data slices over which the ML solution under-performs.
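A minimal sketch of the slice-extraction idea, under simplifying assumptions: restrict to single-feature categorical slices and report those whose error rate exceeds the overall rate by a lift factor. The function name, `min_support`, and `lift` parameters are illustrative, not the paper's algorithm.

```python
from collections import defaultdict

def underperforming_slices(rows, errors, min_support=3, lift=1.5):
    """Find single-feature slices where the model under-performs.

    rows: list of dicts mapping feature name -> categorical value.
    errors: parallel list of 0/1 flags (1 = model was wrong).
    Returns (feature, value) slices with at least `min_support` rows whose
    error rate is at least `lift` times the overall rate, worst first.
    """
    overall = sum(errors) / len(errors)
    buckets = defaultdict(list)
    for row, e in zip(rows, errors):
        for feat, val in row.items():
            buckets[(feat, val)].append(e)
    found = []
    for key, es in buckets.items():
        rate = sum(es) / len(es)
        if len(es) >= min_support and rate >= lift * max(overall, 1e-12):
            found.append((key, rate))
    return sorted(found, key=lambda kv: -kv[1])
```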
no code implementations • 11 Aug 2021 • Samuel Ackerman, Parijat Dube, Eitan Farchi, Orna Raz, Marcel Zalmanovici
Detecting drift in performance of Machine Learning (ML) models is an acknowledged challenge.
no code implementations • 16 Dec 2020 • Samuel Ackerman, Eitan Farchi, Orna Raz, Marcel Zalmanovici, Parijat Dube
Drift is a distribution change between the training and deployment data, which is a concern when it changes model performance.
no code implementations • 31 Jul 2020 • Samuel Ackerman, Parijat Dube, Eitan Farchi
We utilize neural network embeddings to detect data drift by formulating the drift detection within an appropriate sequential decision framework.
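One standard sequential decision procedure that fits this formulation is a one-sided CUSUM over a stream of per-batch drift scores (e.g., distances between embedding distributions). The sketch below is a generic CUSUM illustration under assumed `slack` and `threshold` values, not the paper's exact detector.

```python
def cusum_detect(scores, baseline=0.0, slack=0.5, threshold=5.0):
    """One-sided CUSUM change detector over a stream of drift scores.

    Accumulates excess of each score over (baseline + slack), clipped at
    zero, and declares drift when the accumulator crosses `threshold`.
    Returns the index at which drift is declared, or None.
    """
    s = 0.0
    for i, x in enumerate(scores):
        s = max(0.0, s + (x - baseline - slack))
        if s > threshold:
            return i
    return None
```

On a stream that jumps from small scores to large ones, the detector fires a few observations after the change point, trading detection delay for robustness to single-sample noise.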