Search Results for author: Kazuya Kawakami

Found 9 papers, 2 papers with code

Towards Learning Universal Hyperparameter Optimizers with Transformers

1 code implementation 26 May 2022 Yutian Chen, Xingyou Song, Chansoo Lee, Zi Wang, Qiuyi Zhang, David Dohan, Kazuya Kawakami, Greg Kochanski, Arnaud Doucet, Marc'Aurelio Ranzato, Sagi Perel, Nando de Freitas

Meta-learning hyperparameter optimization (HPO) algorithms from prior experiments is a promising approach to improve optimization efficiency over objective functions from a similar distribution.

Hyperparameter Optimization, Meta-Learning
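
As a rough illustration of the idea described in the snippet above (conditioning a sequence model on prior trials so that it can propose the next hyperparameter configuration), the sketch below serialises a study's history of hyperparameters and objective values into a flat string that a Transformer could be trained on. The serialisation format, field names, and toy study are illustrative assumptions, not the paper's actual interface.

```python
# Hypothetical sketch: flatten an HPO study into a text sequence that a
# sequence model could condition on. Format and field names are assumptions.
def serialize_study(metadata, trials):
    """Turn study metadata and (params, objective) pairs into one string."""
    parts = [f"name:{metadata['name']}", f"goal:{metadata['goal']}"]
    for params, objective in trials:
        kv = ",".join(f"{k}={v}" for k, v in sorted(params.items()))
        parts.append(f"trial:{kv}|y={objective:.4f}")
    return " ".join(parts)

# Toy usage: the resulting string would be tokenised and fed to a sequence
# model trained to emit the next trial's parameters (and a predicted objective).
history = [({"batch": 64, "lr": 0.01}, 0.83),
           ({"batch": 128, "lr": 0.003}, 0.87)]
print(serialize_study({"name": "tune-cnn", "goal": "maximize"}, history))
```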

Learning Robust and Multilingual Speech Representations

no code implementations Findings of the Association for Computational Linguistics 2020 Kazuya Kawakami, Luyu Wang, Chris Dyer, Phil Blunsom, Aaron van den Oord

Unsupervised speech representation learning has shown remarkable success at finding representations that correlate with phonetic structures and improve downstream speech recognition performance.

Representation Learning, Speech Recognition +1

Unsupervised Learning of Efficient and Robust Speech Representations

no code implementations 25 Sep 2019 Kazuya Kawakami, Luyu Wang, Chris Dyer, Phil Blunsom, Aaron van den Oord

We present an unsupervised method for learning speech representations based on bidirectional contrastive predictive coding that implicitly discovers phonetic structure from large-scale corpora of unlabelled raw audio signals.

Speech Recognition
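
For readers unfamiliar with contrastive predictive coding, the sketch below shows a minimal CPC-style (InfoNCE) objective in PyTorch. It is unidirectional and operates on pre-extracted feature frames, whereas the paper trains a bidirectional variant on large corpora of raw audio; the layer sizes, the GRU context network, and the within-utterance negatives are illustrative assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CPCObjective(nn.Module):
    """Toy CPC: predict future latent frames, score them against negatives."""

    def __init__(self, feat_dim=64, hidden=128, ctx_dim=128, steps_ahead=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, hidden))     # z_t
        self.context = nn.GRU(hidden, ctx_dim, batch_first=True)    # c_t
        self.heads = nn.ModuleList(nn.Linear(ctx_dim, hidden)
                                   for _ in range(steps_ahead))

    def forward(self, frames):
        # frames: (batch, time, feat_dim) feature frames from unlabelled audio
        z = self.encoder(frames)              # latent representation per frame
        c, _ = self.context(z)                # summary of the past at each step
        loss = 0.0
        for k, head in enumerate(self.heads, start=1):
            pred = head(c[:, :-k])            # predict z_{t+k} from c_t
            target = z[:, k:]
            # InfoNCE: each prediction must pick out its own future frame
            # among all other frames of the same utterance (the negatives).
            logits = torch.einsum('btd,bsd->bts', pred, target)
            labels = torch.arange(pred.size(1)).expand(pred.size(0), -1)
            loss = loss + F.cross_entropy(logits.transpose(1, 2), labels)
        return loss / len(self.heads)

# Toy usage on random "audio" features.
cpc = CPCObjective()
loss = cpc(torch.randn(2, 100, 64))
loss.backward()
```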

Learning to Discover, Ground and Use Words with Segmental Neural Language Models

no code implementations ACL 2019 Kazuya Kawakami, Chris Dyer, Phil Blunsom

We propose a segmental neural language model that combines the generalization power of neural networks with the ability to discover word-like units that are latent in unsegmented character sequences.

Language Modelling, Segmentation
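
The central computation in a segmental language model of this kind is a dynamic program that marginalises over every possible segmentation of the character sequence. The sketch below shows that forward recursion with a toy, length-based segment score standing in for the neural segment model; the scoring function and the maximum segment length are illustrative assumptions.

```python
import math

def segment_logprob(chars, start, end):
    # Toy stand-in: longer segments are exponentially less likely. In the
    # actual model this would be a neural network conditioned on history.
    return -1.5 * (end - start)

def sequence_logprob(chars, max_segment_len=5):
    """log p(chars), summed over all segmentations (forward algorithm)."""
    n = len(chars)
    alpha = [-math.inf] * (n + 1)
    alpha[0] = 0.0
    for end in range(1, n + 1):
        scores = [alpha[start] + segment_logprob(chars, start, end)
                  for start in range(max(0, end - max_segment_len), end)]
        m = max(scores)
        alpha[end] = m + math.log(sum(math.exp(s - m) for s in scores))
    return alpha[n]

print(sequence_logprob("thecatsat"))
```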

Unsupervised Word Discovery with Segmental Neural Language Models

no code implementations 27 Sep 2018 Kazuya Kawakami, Chris Dyer, Phil Blunsom

We propose a segmental neural language model that combines the representational power of neural networks with the structure-learning mechanism of Bayesian nonparametrics, and show that it learns to discover semantically meaningful units (e.g., morphemes and words) from unsegmented character sequences.

Language Modelling

Learning to Create and Reuse Words in Open-Vocabulary Neural Language Modeling

no code implementations ACL 2017 Kazuya Kawakami, Chris Dyer, Phil Blunsom

Fixed-vocabulary language models fail to account for one of the most characteristic statistical facts of natural language: the frequent creation and reuse of new word types.

Language Modelling

Neural Architectures for Named Entity Recognition

43 code implementations NAACL 2016 Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, Chris Dyer

State-of-the-art named entity recognition systems rely heavily on hand-crafted features and domain-specific knowledge in order to learn effectively from the small, supervised training corpora that are available.

Named Entity Recognition
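
One of the architectures this paper proposes pairs a bidirectional LSTM with a CRF output layer, so that entity tags are decoded jointly for the whole sentence rather than independently per token. The sketch below shows only the Viterbi decoding step of such a linear-chain CRF over given emission and transition scores; the toy dimensions and random scores are illustrative, and the full model also learns character-level word representations.

```python
import torch

def viterbi_decode(emissions, transitions):
    """emissions: (seq_len, num_tags); transitions[i, j]: score of tag i -> tag j."""
    seq_len, num_tags = emissions.shape
    score = emissions[0]                      # best score ending in each tag
    backpointers = []
    for t in range(1, seq_len):
        # score[i] + transitions[i, j] + emissions[t, j] for every (prev, cur) pair
        total = score.unsqueeze(1) + transitions + emissions[t].unsqueeze(0)
        score, best_prev = total.max(dim=0)
        backpointers.append(best_prev)
    # Follow backpointers from the best final tag.
    best_tag = int(score.argmax())
    best_path = [best_tag]
    for best_prev in reversed(backpointers):
        best_tag = int(best_prev[best_tag])
        best_path.append(best_tag)
    return list(reversed(best_path))

# Toy usage: 5 tokens, 4 tags, random scores.
print(viterbi_decode(torch.randn(5, 4), torch.randn(4, 4)))
```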

Learning to Represent Words in Context with Multilingual Supervision

no code implementations 14 Nov 2015 Kazuya Kawakami, Chris Dyer

We present a neural network architecture based on bidirectional LSTMs to compute representations of words in their sentential contexts.

Machine Translation, Translation
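
A minimal sketch of the core idea, assuming PyTorch: run a bidirectional LSTM over a sentence's word embeddings and read off, at each position, the concatenation of the forward state (left context) and backward state (right context) as that word's context-sensitive representation. The sizes and randomly initialised embeddings are illustrative, and the paper's multilingual supervision signal is not reproduced here.

```python
import torch
import torch.nn as nn

class ContextualWordEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                              bidirectional=True)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) word indices for one sentence per row.
        # Each output position concatenates the forward state (left context)
        # and the backward state (right context) for that word.
        outputs, _ = self.bilstm(self.embed(token_ids))
        return outputs                         # (batch, seq_len, 2*hidden_dim)

# Toy usage: the vector at position i represents word i in its sentence.
encoder = ContextualWordEncoder(vocab_size=10000)
reps = encoder(torch.randint(0, 10000, (1, 7)))
print(reps.shape)  # torch.Size([1, 7, 256])
```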
