Search Results for author: Amittai Axelrod

Found 7 papers, 2 papers with code

MEEP: An Open-Source Platform for Human-Human Dialog Collection and End-to-End Agent Training

1 code implementation • 9 Oct 2020 Arkady Arkhangorodsky, Amittai Axelrod, Christopher Chu, Scot Fang, Yiqi Huang, Ajay Nagesh, Xing Shi, Boliang Zhang, Kevin Knight

We create a new task-oriented dialog platform (MEEP) where agents are given considerable freedom in terms of utterances and API calls, but are constrained to work within a push-button environment.

DiDi Labs' End-to-end System for the IWSLT 2020 Offline Speech Translation Task

no code implementations WS 2020 Arkady Arkhangorodsky, Yiqi Huang, Amittai Axelrod

This paper describes the system that was submitted by DiDi Labs to the offline speech translation task for IWSLT 2020.

Translation

Findings of the IWSLT 2020 Evaluation Campaign

no code implementations WS 2020 Ebrahim Ansari, Amittai Axelrod, Nguyen Bach, Ondřej Bojar, Roldano Cattoni, Fahim Dalvi, Nadir Durrani, Marcello Federico, Christian Federmann, Jiatao Gu, Fei Huang, Kevin Knight, Xutai Ma, Ajay Nagesh, Matteo Negri, Jan Niehues, Juan Pino, Elizabeth Salesky, Xing Shi, Sebastian Stüker, Marco Turchi, Alexander Waibel, Changhan Wang

The evaluation campaign of the International Conference on Spoken Language Translation (IWSLT 2020) featured six challenge tracks this year: (i) Simultaneous speech translation, (ii) Video speech translation, (iii) Offline speech translation, (iv) Conversational speech translation, (v) Open domain translation, and (vi) Non-native speech translation.

Translation

Dual Monolingual Cross-Entropy Delta Filtering of Noisy Parallel Data

no code implementations WS 2019 Amittai Axelrod, Anish Kumar, Steve Sloto

We introduce a purely monolingual approach to filtering for parallel data from a noisy corpus in a low-resource scenario.
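A minimal sketch of the underlying idea (not the authors' released code): assume each side of a sentence pair receives a Moore-Lewis-style cross-entropy difference between an in-domain and a general-domain language model, and that the pair score combines the two deltas while penalizing disagreement between them. The function names, the unigram toy LMs, and the exact combination rule below are illustrative assumptions.

```python
import math

def cross_entropy(sentence, lm):
    """Per-word cross-entropy of a sentence under a unigram language model
    given as a dict of word -> probability. Real systems would use full
    n-gram or neural LMs; a unigram dict keeps the sketch self-contained."""
    tokens = sentence.split()
    logp = sum(math.log(lm.get(tok, 1e-10)) for tok in tokens)
    return -logp / max(len(tokens), 1)

def dual_delta_score(src, tgt, lm_in_src, lm_out_src, lm_in_tgt, lm_out_tgt):
    """Hypothetical pair score: each side gets its own delta (in-domain minus
    general-domain cross-entropy); the two deltas are summed and the absolute
    disagreement between them is added, so pairs that look in-domain on both
    sides *and* agree score best (lower is better)."""
    delta_src = cross_entropy(src, lm_in_src) - cross_entropy(src, lm_out_src)
    delta_tgt = cross_entropy(tgt, lm_in_tgt) - cross_entropy(tgt, lm_out_tgt)
    return delta_src + delta_tgt + abs(delta_src - delta_tgt)

def filter_pairs(pairs, lms, keep_fraction=0.5):
    """Keep the lowest-scoring fraction of (src, tgt) sentence pairs;
    `lms` is a 4-tuple of the per-side in-domain and general-domain LMs."""
    ranked = sorted(pairs, key=lambda p: dual_delta_score(p[0], p[1], *lms))
    return ranked[: int(len(ranked) * keep_fraction)]
```

Because only monolingual language models are needed on each side, no bilingual scoring model has to be trained, which is what makes a "purely monolingual" filter attractive in the low-resource setting the abstract mentions.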

Data Selection with Cluster-Based Language Difference Models and Cynical Selection

no code implementations IWSLT 2017 Lucía Santamaría, Amittai Axelrod

Building on existing work on class-based language difference models, we first introduce a cluster-based method that uses Brown clusters to condense the vocabulary of the corpora.

Machine Translation, POS
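To make the vocabulary-condensing step concrete, here is a small sketch, assuming Brown clusters are available in the common `bitstring<TAB>word<TAB>count` file format (e.g., as produced by Liang's wcluster tool); the helper names are hypothetical.

```python
def load_brown_clusters(path):
    """Load a Brown clustering file where each line is
    `cluster_bitstring<TAB>word[<TAB>count]`; returns word -> cluster id."""
    clusters = {}
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            bits, word = line.rstrip("\n").split("\t")[:2]
            clusters[word] = bits
    return clusters

def condense(sentence, clusters, unk="<unk>"):
    """Replace each token by its Brown cluster id, shrinking the vocabulary
    so that downstream statistics are computed over clusters, not words."""
    return " ".join(clusters.get(tok, unk) for tok in sentence.split())
```

Language difference models (e.g., cross-entropy differences between corpora) would then be estimated over these cluster sequences rather than over surface words, which is the condensation the abstract refers to.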

Cynical Selection of Language Model Training Data

1 code implementation • 7 Sep 2017 Amittai Axelrod

The selected sentences are not guaranteed to be able to model the in-domain data, nor even to cover it.

Language Modelling
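The greedy objective behind cynical selection can be illustrated with a deliberately naive sketch: at each step, pick the candidate sentence whose addition most lowers the cross-entropy of the task's empirical unigram distribution under a unigram model of the data selected so far. The published algorithm computes this delta incrementally and includes further terms; everything below (function names, smoothing, the quadratic loop) is a simplified assumption for illustration only.

```python
import math
from collections import Counter

def task_distribution(task_sentences):
    """Empirical unigram distribution of the in-domain (task) text."""
    counts = Counter(tok for s in task_sentences for tok in s.split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def pool_cross_entropy(task_dist, pool_counts, pool_total, smoothing=1e-9):
    """Cross-entropy of the task distribution under the unigram model
    estimated from the currently selected pool."""
    return -sum(p * math.log((pool_counts.get(w, 0) + smoothing)
                             / (pool_total + smoothing))
                for w, p in task_dist.items())

def greedy_select(task_sentences, candidates, n):
    """Naive O(n^2) greedy loop: at each step add the candidate sentence that
    most lowers the task cross-entropy. Only the objective is faithful here;
    the real method evaluates the delta incrementally and efficiently."""
    task_dist = task_distribution(task_sentences)
    selected, pool_counts, pool_total = [], Counter(), 0
    remaining = list(candidates)
    for _ in range(min(n, len(remaining))):
        best, best_h = None, float("inf")
        for sent in remaining:
            words = sent.split()
            trial = pool_counts + Counter(words)
            h = pool_cross_entropy(task_dist, trial, pool_total + len(words))
            if h < best_h:
                best, best_h = sent, h
        selected.append(best)
        pool_counts += Counter(best.split())
        pool_total += len(best.split())
        remaining.remove(best)
    return selected
```

A coverage-driven objective of this kind directly targets the problem raised in the excerpt above: sentences are chosen for how well the growing selection models the in-domain data, rather than ranked in isolation.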
