Search Results for author: Ariel Larey

Found 7 papers, 1 paper with code

Facial Expression Re-targeting from a Single Character

no code implementations • 21 Jun 2023 • Ariel Larey, Omri Asraf, Adam Kelder, Itzik Wilf, Ofer Kruzel, Nati Daniel

Video retargeting for digital face animation is used in virtual reality, social media, gaming, movies, and video conferencing, aiming to animate avatars' facial expressions based on videos of human faces.

Between Generating Noise and Generating Images: Noise in the Correct Frequency Improves the Quality of Synthetic Histopathology Images for Digital Pathology

no code implementations • 13 Feb 2023 • Nati Daniel, Eliel Aknin, Ariel Larey, Yoni Peretz, Guy Sela, Yael Fisher, Yonatan Savir

In this work, we show that introducing random single-pixel noise with the appropriate spatial frequency into a polygon semantic mask can dramatically improve the quality of the synthetic images.

Semantic Segmentation
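The idea of injecting single-pixel noise at a chosen spatial frequency can be sketched as follows. This is an illustrative reconstruction, not the paper's exact recipe: all function names and parameters (`add_frequency_noise`, `low_freq`, `high_freq`, `flip_fraction`) are assumptions, and the noise is shaped here by a simple Fourier-domain band-pass filter.

```python
import numpy as np

def add_frequency_noise(mask, low_freq=0.05, high_freq=0.2,
                        flip_fraction=0.01, seed=0):
    """Flip single pixels of a binary mask where band-pass-filtered
    random noise responds most strongly (hypothetical sketch, not the
    authors' implementation)."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(mask.shape)
    # Keep only noise components inside the chosen frequency band.
    fy = np.fft.fftfreq(mask.shape[0])[:, None]
    fx = np.fft.fftfreq(mask.shape[1])[None, :]
    radius = np.sqrt(fx ** 2 + fy ** 2)
    band = (radius >= low_freq) & (radius <= high_freq)
    filtered = np.real(np.fft.ifft2(np.fft.fft2(noise) * band))
    # Flip the mask at the strongest-response pixels.
    k = int(flip_fraction * mask.size)
    idx = np.argpartition(np.abs(filtered).ravel(), -k)[-k:]
    noisy = mask.copy().ravel()
    noisy[idx] = 1 - noisy[idx]
    return noisy.reshape(mask.shape)

polygon_mask = np.zeros((64, 64), dtype=int)
noisy_mask = add_frequency_noise(polygon_mask)
```

Restricting the noise to a frequency band, rather than flipping pixels uniformly at random, is what lets the perturbation match the spatial statistics the generator expects.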

DEPAS: De-novo Pathology Semantic Masks using a Generative Model

no code implementations • 13 Feb 2023 • Ariel Larey, Nati Daniel, Eliel Aknin, Yael Fisher, Yonatan Savir

In this work, we introduce a scalable generative model, coined as DEPAS, that captures tissue structure and generates high-resolution semantic masks with state-of-the-art quality.

Decision Making · Translation

Harnessing Artificial Intelligence to Infer Novel Spatial Biomarkers for the Diagnosis of Eosinophilic Esophagitis

no code implementations • 26 May 2022 • Ariel Larey, Eliel Aknin, Nati Daniel, Garrett A. Osswald, Julie M. Caldwell, Mark Rochman, Tanya Wasserman, Margaret H. Collins, Nicoleta C. Arva, Guang-Yu Yang, Marc E. Rothenberg, Yonatan Savir

Our approach highlights the importance of systematically analyzing the distribution of biopsy features over the entire slide and paves the way towards a personalized decision support system that will assist not only in counting cells but can also potentially improve diagnosis and provide treatment prediction.

Semantic Segmentation · Specificity

Prune Once for All: Sparse Pre-Trained Language Models

2 code implementations • 10 Nov 2021 • Ofir Zafrir, Ariel Larey, Guy Boudoukh, Haihao Shen, Moshe Wasserblat

We show how the compressed sparse pre-trained models we trained transfer their knowledge to five different downstream natural language tasks with minimal accuracy loss.

Natural Language Inference · Quantization · +3
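The basic mechanism behind sparse pre-trained models is unstructured magnitude pruning: the smallest-magnitude weights are zeroed, and the surviving sparsity pattern is kept through fine-tuning on downstream tasks. The sketch below shows only that core pruning step in NumPy; the function name and signature are assumptions, and it omits the paper's actual training schedule and knowledge distillation.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude entries of a weight matrix
    (unstructured magnitude pruning; a simplified sketch, not the
    paper's training loop)."""
    k = int(sparsity * weights.size)
    # Threshold = k-th smallest absolute value.
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask, mask

rng = np.random.default_rng(0)
w = rng.standard_normal((100, 100))
pruned, mask = magnitude_prune(w, sparsity=0.9)
```

Because the mask is computed once and then held fixed, downstream fine-tuning only updates the surviving weights, which is what lets a single pruning pass transfer across tasks.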
