no code implementations • WAT 2022 • Francis Zheng, Edison Marrese-Taylor, Yutaka Matsuo
Jejueo is a critically endangered language spoken on Jeju Island and is closely related to but mutually unintelligible with Korean.
no code implementations • NAACL (AmericasNLP) 2021 • Francis Zheng, Machel Reid, Edison Marrese-Taylor, Yutaka Matsuo
This paper describes UTokyo’s submission to the AmericasNLP 2021 Shared Task on machine translation systems for indigenous languages of the Americas.
no code implementations • insights (ACL) 2022 • Itsuki Okimura, Machel Reid, Makoto Kawano, Yutaka Matsuo
The reason for this is that within NLP, the impact of proposed data augmentation methods on performance has not been evaluated in a unified manner, and effective data augmentation methods are unclear.
no code implementations • ICLR 2019 • Masahiro Suzuki, Yusuke Iwasawa, Yutaka Matsuo
To solve this, we propose a concept to learn a mapping that embeds both images and attributes into a shared representation space that can generalize even to unseen classes by interpolating from the information of seen classes, which we refer to as shared manifold learning.
1 code implementation • 3 Apr 2024 • Takeshi Kojima, Itsuki Okimura, Yusuke Iwasawa, Hitomi Yanaka, Yutaka Matsuo
Additionally, we tamper with less than 1% of the total neurons in each model during inference and demonstrate that tampering with a few language-specific neurons drastically changes the probability of target language occurrence in text generation.
1 code implementation • 12 Mar 2024 • Yuta Oshima, Shohei Taniguchi, Masahiro Suzuki, Yutaka Matsuo
In the experiments, we first evaluate our SSM-based model with UCF101, a standard benchmark of video generation.
no code implementations • 9 Mar 2024 • Rui Yang, Haoran Liu, Edison Marrese-Taylor, Qingcheng Zeng, Yu He Ke, Wanxin Li, Lechao Cheng, Qingyu Chen, James Caverlee, Yutaka Matsuo, Irene Li
Large Language Models (LLMs) have significantly advanced healthcare innovation through their generation capabilities.
1 code implementation • 26 Feb 2024 • Hiroki Furuta, Gouki Minegishi, Yusuke Iwasawa, Yutaka Matsuo
Grokking has been actively explored to reveal the mystery of delayed generalization.
1 code implementation • 31 Jan 2024 • Toshinori Kitamura, Tadashi Kozuno, Masahiro Kato, Yuki Ichihara, Soichiro Nishimori, Akiyoshi Sannai, Sho Sonoda, Wataru Kumagai, Yutaka Matsuo
We study a primal-dual reinforcement learning (RL) algorithm for the online constrained Markov decision processes (CMDP) problem, wherein the agent explores an optimal policy that maximizes return while satisfying constraints.
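As a toy, self-contained illustration of the primal-dual idea (not the paper's algorithm or analysis): a one-state constrained problem solved by alternating a softmax policy step on the Lagrangian with dual ascent on the multiplier. The action rewards, costs, budget, and step sizes below are illustrative assumptions.

```python
import math

# Primal-dual sketch for a one-state constrained decision problem:
# maximize expected reward subject to expected cost <= budget.
def primal_dual(rewards, costs, budget, lr_pi=0.1, lr_lam=0.1, iters=20000):
    n = len(rewards)
    theta = [0.0] * n   # softmax policy logits
    lam = 0.0           # Lagrange multiplier for the cost constraint
    avg = [0.0] * n
    for t in range(iters):
        z = [math.exp(x) for x in theta]
        s = sum(z)
        pi = [x / s for x in z]
        # primal step: policy-gradient ascent on the Lagrangian r(a) - lam * c(a)
        adv = [rewards[a] - lam * costs[a] for a in range(n)]
        base = sum(p * g for p, g in zip(pi, adv))
        theta = [x + lr_pi * p * (g - base) for x, p, g in zip(theta, pi, adv)]
        # dual step: increase lam while the constraint is violated
        exp_cost = sum(p * c for p, c in zip(pi, costs))
        lam = max(0.0, lam + lr_lam * (exp_cost - budget))
        if t >= iters // 2:  # average the second half of the iterates
            avg = [a + p for a, p in zip(avg, pi)]
    avg = [a / (iters - iters // 2) for a in avg]
    return avg, lam
```

With rewards (1.0, 0.5), costs (1.0, 0.0), and budget 0.5, the averaged policy mixes the two actions so the expected cost hovers near the budget, as a saddle-point argument predicts.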
1 code implementation • 30 Nov 2023 • Qi Cao, Takeshi Kojima, Yutaka Matsuo, Yusuke Iwasawa
While Large Language Models (LLMs) have achieved remarkable performance in many tasks, much about their inner workings remains unclear.
1 code implementation • 30 Nov 2023 • Hiroki Furuta, Yutaka Matsuo, Aleksandra Faust, Izzeddin Gur
We show that while existing prompted LMAs (gpt-3.5-turbo or gpt-4) achieve a 94.0% average success rate on base tasks, their performance degrades to a 24.9% success rate on compositional tasks.
1 code implementation • 30 Oct 2023 • Gouki Minegishi, Yusuke Iwasawa, Yutaka Matsuo
We aim to analyze the mechanism of grokking from the lottery ticket hypothesis, identifying the process to find the lottery tickets (good sparse subnetworks) as the key to describing the transitional phase between memorization and generalization.
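The "lottery tickets" here are good sparse subnetworks. As a minimal illustration of the pruning primitive used to find them, the sketch below implements plain magnitude pruning; this is only the building block, not the paper's full train-prune analysis.

```python
# Magnitude pruning: keep the largest-|w| fraction of weights and mask the
# rest. The mask defines a candidate sparse subnetwork ("ticket").
def magnitude_mask(weights, keep_frac):
    k = max(1, int(len(weights) * keep_frac))
    # threshold = k-th largest absolute weight
    thresh = sorted((abs(w) for w in weights), reverse=True)[k - 1]
    return [1 if abs(w) >= thresh else 0 for w in weights]
```

For example, `magnitude_mask([0.1, -0.9, 0.5, 0.05], 0.5)` keeps the two largest-magnitude weights, returning `[0, 1, 1, 0]`.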
no code implementations • 2 Oct 2023 • Iffat Maab, Edison Marrese-Taylor, Yutaka Matsuo
Sentence-level political bias detection in news is no exception, and has proven to be a challenging task that requires an understanding of bias in context.
1 code implementation • 29 Sep 2023 • Jiaxian Guo, Bo Yang, Paul Yoo, Bill Yuchen Lin, Yusuke Iwasawa, Yutaka Matsuo
Unlike perfect information games, where all elements are known to every player, imperfect information games emulate the real-world complexities of decision-making under uncertain or incomplete information.
no code implementations • 16 Sep 2023 • So Kuroki, Jiaxian Guo, Tatsuya Matsushima, Takuya Okubo, Masato Kobayashi, Yuya Ikeda, Ryosuke Takanami, Paul Yoo, Yutaka Matsuo, Yusuke Iwasawa
Due to the inherent uncertainty in their deformability during motion, previous methods in deformable object manipulation, such as rope and cloth, often required hundreds of real-world demonstrations to train a manipulation policy for each object, which hinders their applications in our ever-changing world.
no code implementations • 24 Jul 2023 • Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, Aleksandra Faust
Pre-trained large language models (LLMs) have recently achieved better generalization and sample efficiency in autonomous web automation.
Ranked #1 on Mind2Web
no code implementations • 14 Jun 2023 • So Kuroki, Jiaxian Guo, Tatsuya Matsushima, Takuya Okubo, Masato Kobayashi, Yuya Ikeda, Ryosuke Takanami, Paul Yoo, Yutaka Matsuo, Yusuke Iwasawa
To achieve this, we augment the policy by conditioning it on deformable rope parameters and training it with a diverse range of simulated deformable ropes so that the policy can adjust actions based on different rope parameters.
no code implementations • 13 Jun 2023 • Xin Zhang, Jiaxian Guo, Paul Yoo, Yutaka Matsuo, Yusuke Iwasawa
To guarantee the visual coherence of the generated or edited image, we introduce an inpainting and harmonizing module to guide the pre-trained diffusion model to seamlessly blend the inserted subject into the scene.
no code implementations • 6 Jun 2023 • Paul Yoo, Jiaxian Guo, Yutaka Matsuo, Shixiang Shane Gu
Leveraging the strong image priors in the pre-trained diffusion models, DreamSparse is capable of synthesizing high-quality novel views for both object and scene-level images and generalising to open-set images.
1 code implementation • 31 May 2023 • Shohei Taniguchi, Masahiro Suzuki, Yusuke Iwasawa, Yutaka Matsuo
We address the problem of biased gradient estimation in deep Boltzmann machines (DBMs).
1 code implementation • 22 May 2023 • Toshinori Kitamura, Tadashi Kozuno, Yunhao Tang, Nino Vieillard, Michal Valko, Wenhao Yang, Jincheng Mei, Pierre Ménard, Mohammad Gheshlaghi Azar, Rémi Munos, Olivier Pietquin, Matthieu Geist, Csaba Szepesvári, Wataru Kumagai, Yutaka Matsuo
Mirror descent value iteration (MDVI), an abstraction of Kullback-Leibler (KL) and entropy-regularized reinforcement learning (RL), has served as the basis for recent high-performing practical RL algorithms.
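As a toy illustration of the MDVI abstraction (not the paper's setting or analysis), the sketch below alternates a KL-regularized mirror descent policy step, pi_{k+1}(a|s) proportional to pi_k(a|s) * exp(eta * Q_k(s, a)), with a one-step Bellman backup, on a hypothetical deterministic 2-state MDP. The step size eta and the MDP are illustrative assumptions.

```python
import math

# MDVI-style iteration: multiplicative-weights (KL mirror descent) policy
# update followed by a Bellman backup of Q under the softened policy.
def mdvi(reward, next_state, gamma=0.9, eta=1.0, iters=200):
    n_s, n_a = len(reward), len(reward[0])
    q = [[0.0] * n_a for _ in range(n_s)]
    pi = [[1.0 / n_a] * n_a for _ in range(n_s)]
    for _ in range(iters):
        # mirror descent step: pi_{k+1} ∝ pi_k * exp(eta * Q_k)
        for s in range(n_s):
            w = [pi[s][a] * math.exp(eta * q[s][a]) for a in range(n_a)]
            z = sum(w)
            pi[s] = [x / z for x in w]
        # evaluation step: one-step backup under the updated policy
        v = [sum(pi[s][a] * q[s][a] for a in range(n_a)) for s in range(n_s)]
        q = [[reward[s][a] + gamma * v[next_state[s][a]] for a in range(n_a)]
             for s in range(n_s)]
    return pi, q
```

On a 2-state MDP where only action 0 in state 0 is rewarded, the policy concentrates on the rewarding action as the accumulated Q-advantages grow.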
no code implementations • 19 May 2023 • Hiroki Furuta, Kuang-Huei Lee, Ofir Nachum, Yutaka Matsuo, Aleksandra Faust, Shixiang Shane Gu, Izzeddin Gur
The progress of autonomous web navigation has been hindered by the dependence on billions of exploratory interactions via online reinforcement learning, and domain-specific model designs that make it difficult to leverage generalization from rich out-of-domain data.
no code implementations • 29 Dec 2022 • Kei Akuzawa, Yusuke Iwasawa, Yutaka Matsuo
This paper proposes using multimodal generative models for semi-supervised learning in the instruction following tasks.
no code implementations • 28 Nov 2022 • Xinrui Wang, Zhuoru Li, Xiao Zhou, Yusuke Iwasawa, Yutaka Matsuo
Previous learning-based stylization methods suffer from the geometric and semantic gaps between the portrait domain and the style domain, which obstruct the correct transfer of style information to the portrait images, leading to poor stylization quality.
1 code implementation • 28 Nov 2022 • So Kuroki, Tatsuya Matsushima, Jumpei Arima, Hiroki Furuta, Yutaka Matsuo, Shixiang Shane Gu, Yujin Tang
While natural systems often present collective intelligence that allows them to self-organize and adapt to changes, the equivalent is missing in most artificial systems.
1 code implementation • 25 Nov 2022 • Hiroki Furuta, Yusuke Iwasawa, Yutaka Matsuo, Shixiang Shane Gu
The rise of generalist large-scale models in natural language and vision has made us expect that a massive data-driven approach could achieve broader generalization in other domains such as continuous control.
1 code implementation • 15 Sep 2022 • Shohei Taniguchi, Yusuke Iwasawa, Wataru Kumagai, Yutaka Matsuo
Based on the ALD, we also present a new deep latent variable model named the Langevin autoencoder (LAE).
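The LAE builds on Langevin dynamics over latent variables. As a generic, self-contained illustration of the underlying sampler (not the paper's amortized scheme), here is unadjusted Langevin dynamics targeting a standard Gaussian, whose score function is score(z) = -z:

```python
import math
import random

# Unadjusted Langevin dynamics: Euler-Maruyama steps that drift along the
# score of the target density plus injected Gaussian noise.
def langevin_chain(score, z0=0.0, step=0.05, n_steps=20000, seed=0):
    rng = random.Random(seed)
    z = z0
    samples = []
    for _ in range(n_steps):
        z = z + step * score(z) + math.sqrt(2.0 * step) * rng.gauss(0.0, 1.0)
        samples.append(z)
    return samples

# Target: standard Gaussian, score(z) = d/dz log p(z) = -z.
samples = langevin_chain(lambda z: -z)
mean = sum(samples) / len(samples)
```

With a small step size, the chain's empirical mean and variance approach those of the target (0 and roughly 1, up to discretization bias).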
no code implementations • 13 Aug 2022 • Hiroshi Yamakawa, Yutaka Matsuo
Human-level AI will have significant impacts on human society.
no code implementations • 8 Aug 2022 • Naruya Kondo, So Kuroki, Ryosuke Hyakuta, Yutaka Matsuo, Shixiang Shane Gu, Yoichi Ochiai
An aspirational goal for virtual reality (VR) is to bring in a rich diversity of real world objects losslessly.
no code implementations • 20 Jul 2022 • Tatsuya Matsushima, Yuki Noguchi, Jumpei Arima, Toshiki Aoki, Yuki Okita, Yuya Ikeda, Koki Ishimoto, Shohei Taniguchi, Yuki Yamashita, Shoichi Seto, Shixiang Shane Gu, Yusuke Iwasawa, Yutaka Matsuo
Tidying up a household environment using a mobile manipulator poses various challenges in robotics, such as adaptation to large real-world environmental variations, and safe and robust deployment in the presence of humans. The Partner Robot Challenge in World Robot Challenge (WRC) 2020, a global competition held in September 2021, benchmarked tidying tasks in real home environments and, importantly, tested full-system performance. For this challenge, we developed an entire household service robot system, which leverages a data-driven approach to adapt to the numerous edge cases that occur during execution, instead of classical manually pre-programmed solutions.
no code implementations • 5 Jul 2022 • Masahiro Suzuki, Yutaka Matsuo
In recent years, deep generative models, i.e., generative models in which distributions are parameterized by deep neural networks, have attracted much attention, especially variational autoencoders, which are well suited to these challenges because they can account for heterogeneity and infer good representations of data.
1 code implementation • 28 Jun 2022 • Takeshi Kojima, Yutaka Matsuo, Yusuke Iwasawa
Vision Transformer (ViT) is becoming more popular in image processing.
3 code implementations • 24 May 2022 • Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa
Pretrained large language models (LLMs) are widely used in many sub-fields of natural language processing (NLP) and generally known as excellent few-shot learners with task-specific exemplars.
Ranked #1 on Arithmetic Reasoning on MultiArith
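The zero-shot reasoning method in this entry is two-stage prompting around the trigger phrase "Let's think step by step". A minimal sketch, where `call_llm` is a hypothetical stand-in for any text-completion API:

```python
# Two-stage zero-shot chain-of-thought prompting.
def zero_shot_cot(question, call_llm):
    # Stage 1: reasoning extraction with the trigger phrase.
    reasoning_prompt = "Q: " + question + "\nA: Let's think step by step."
    reasoning = call_llm(reasoning_prompt)
    # Stage 2: answer extraction conditioned on the generated reasoning.
    answer_prompt = reasoning_prompt + " " + reasoning + "\nTherefore, the answer is"
    return call_llm(answer_prompt)
```

The point of the second stage is that the model answers after seeing its own generated reasoning chain, rather than the bare question.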
no code implementations • NeurIPS 2021 • Yusuke Iwasawa, Yutaka Matsuo
This paper presents a new algorithm for domain generalization (DG), \textit{test-time template adjuster (T3A)}, aiming to robustify a model to unknown distribution shift.
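A rough sketch of the test-time adjustment idea: class templates are augmented online with embeddings of confidently-handled test inputs, and predictions use similarity to the averaged templates. This is simplified from the paper (e.g., no entropy-based filtering of the support set; cosine similarity and the toy vectors are assumptions).

```python
import math

def _cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

class TemplateAdjuster:
    def __init__(self, templates):
        # templates: {class_label: initial template vector from the classifier}
        self.supports = {c: [list(t)] for c, t in templates.items()}

    def predict(self, x, update=True):
        # Average each class's support set into an adjusted template.
        protos = {c: [sum(col) / len(vecs) for col in zip(*vecs)]
                  for c, vecs in self.supports.items()}
        label = max(protos, key=lambda c: _cosine(x, protos[c]))
        if update:
            self.supports[label].append(list(x))  # grow the support set online
        return label
```

No gradient step is needed at test time; only the template averages change, which is what makes this kind of adjustment cheap under distribution shift.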
1 code implementation • 25 Nov 2021 • Xin Zhang, Shixiang Shane Gu, Yutaka Matsuo, Yusuke Iwasawa
We propose Domain Prompt Learning (DPL) as a novel approach for domain inference in the form of conditional prompt generation.
Ranked #4 on Transfer Learning on Office-Home
1 code implementation • 25 Nov 2021 • Naruya Kondo, Yuya Ikeda, Andrea Tagliasacchi, Yutaka Matsuo, Yoichi Ochiai, Shixiang Shane Gu
We hope VaxNeRF -- a careful combination of a classic technique with a deep method (that arguably replaced it) -- can empower and accelerate new NeRF extensions and applications, with its simplicity, portability, and reliable performance gains.
1 code implementation • 19 Nov 2021 • Hiroki Furuta, Yutaka Matsuo, Shixiang Shane Gu
We present Generalized Decision Transformer (GDT) for solving any HIM problem, and show how different choices for the feature function and the anti-causal aggregator not only recover DT as a special case, but also lead to novel Categorical DT (CDT) and Bi-directional DT (BDT) for matching different statistics of the future.
no code implementations • 13 Oct 2021 • Kazutoshi Shinoda, Yuki Takezawa, Masahiro Suzuki, Yusuke Iwasawa, Yutaka Matsuo
An interactive instruction following task has been proposed as a benchmark for learning to map natural language instructions and first-person vision into sequences of actions to interact with objects in 3D environments.
no code implementations • 29 Sep 2021 • Yuya Kobayashi, Masahiro Suzuki, Yutaka Matsuo
Therefore, we introduce several crucial components which help inference and training with the proposed model.
no code implementations • ICLR 2022 • Hiroki Furuta, Yutaka Matsuo, Shixiang Shane Gu
Inspired by distributional and state-marginal matching literatures in RL, we demonstrate that all these approaches are essentially doing hindsight information matching (HIM) -- training policies that can output the rest of trajectory that matches a given future state information statistics.
no code implementations • 29 Sep 2021 • Masahiro Suzuki, Yutaka Matsuo
A state-of-the-art approach to learning this aggregation of experts is to encourage all modalities to be reconstructed and cross-generated from arbitrary subsets.
1 code implementation • EMNLP 2021 • Machel Reid, Junjie Hu, Graham Neubig, Yutaka Matsuo
Reproducible benchmarks are crucial in driving progress of machine translation research.
no code implementations • 28 Jul 2021 • Masahiro Suzuki, Takaaki Kaneko, Yutaka Matsuo
With the recent rapid progress in the study of deep generative models (DGMs), there is a need for a framework that can implement them in a simple and generic way.
no code implementations • 14 May 2021 • Kei Akuzawa, Yusuke Iwasawa, Yutaka Matsuo
Therefore, the meta-RL agent faces the challenge of specifying both the hidden task and the states based on a small amount of experience.
no code implementations • EACL 2021 • Takeshi Kojima, Yusuke Iwasawa, Yutaka Matsuo
In this paper, we propose a GAN model that aims to improve the approach to generating diverse texts conditioned by the latent space.
1 code implementation • NeurIPS 2021 • Hiroki Furuta, Tadashi Kozuno, Tatsuya Matsushima, Yutaka Matsuo, Shixiang Shane Gu
These results show which implementation or code details are co-adapted and co-evolved with algorithms, and which are transferable across algorithms: as examples, we identified that tanh Gaussian policy and network sizes are highly adapted to algorithmic types, while layer normalization and ELU are critical for MPO's performances but also transfer to noticeable gains in SAC.
1 code implementation • 23 Mar 2021 • Hiroki Furuta, Tatsuya Matsushima, Tadashi Kozuno, Yutaka Matsuo, Sergey Levine, Ofir Nachum, Shixiang Shane Gu
Progress in deep reinforcement learning (RL) research is largely enabled by benchmark task environments.
no code implementations • ICLR 2021 • Makoto Kawano, Wataru Kumagai, Akiyoshi Sannai, Yusuke Iwasawa, Yutaka Matsuo
We present the group equivariant conditional neural process (EquivCNP), a meta-learning method with permutation invariance over data sets, as in conventional conditional neural processes (CNPs), which also exhibits transformation equivariance in data space.
no code implementations • 11 Jan 2021 • Koichi Harada, Yutaka Matsuo, Go Noshita, Akimi Watanabe
It gives the free field representation for $q$-deformed $Y_{L, M, N}$, which is obtained as a reduction of the quantum toroidal algebra.
High Energy Physics - Theory • Mathematical Physics • Quantum Algebra
no code implementations • 11 Jan 2021 • Takumi Watanabe, Hiroki Takahashi, Goh Sato, Yusuke Iwasawa, Yutaka Matsuo, Ikuko Eguchi Yairi
This paper introduces our methodology to estimate sidewalk accessibilities from wheelchair behavior via a triaxial accelerometer in a smartphone installed under a wheelchair seat.
1 code implementation • Findings (EMNLP) 2021 • Machel Reid, Edison Marrese-Taylor, Yutaka Matsuo
In light of this, we explore parameter-sharing methods in Transformers with a specific focus on generative models.
no code implementations • 1 Jan 2021 • Machel Reid, Edison Marrese-Taylor, Yutaka Matsuo
We also perform as well as Transformer-big with 40% fewer parameters and outperform the model by 0.7 BLEU with 12M fewer parameters.
Ranked #25 on Machine Translation on WMT2014 English-German
no code implementations • 1 Jan 2021 • Kei Akuzawa, Yusuke Iwasawa, Yutaka Matsuo
However, by analyzing the sequential VAEs from the information theoretic perspective, we can claim that simply maximizing the MI encourages the latent variables to have redundant information and prevents the disentanglement of global and local features.
no code implementations • 1 Jan 2021 • Hitoshi Nakanishi, Masahiro Suzuki, Yutaka Matsuo
Moreover, there is an objective mismatch: models are trained to minimize the total reconstruction error, while we expect small deviations on normal pixels and large deviations on anomalous pixels.
no code implementations • 1 Jan 2021 • Shohei Taniguchi, Yusuke Iwasawa, Yutaka Matsuo
Developing a latent variable model and an inference model with neural networks yields Langevin autoencoders (LAEs), a novel Langevin-based framework for deep generative models.
1 code implementation • EMNLP 2020 • Machel Reid, Edison Marrese-Taylor, Yutaka Matsuo
In this paper, we tackle the task of definition modeling, where the goal is to learn to generate definitions of words and phrases.
no code implementations • 30 Jul 2020 • Kento Doi, Ryuhei Hamaguchi, Shun Iwase, Rio Yokota, Yutaka Matsuo, Ken Sakurada
To cope with the difficulty, we introduce a deep graph matching network that establishes object correspondence between an image pair.
1 code implementation • ICLR 2021 • Tatsuya Matsushima, Hiroki Furuta, Yutaka Matsuo, Ofir Nachum, Shixiang Gu
We propose a novel model-based algorithm, Behavior-Regularized Model-ENsemble (BREMEN) that can effectively optimize a policy offline using 10-20 times fewer data than prior works.
no code implementations • WS 2020 • Edison Marrese-Taylor, Cristian Rodriguez-Opazo, Jorge A. Balazs, Stephen Gould, Yutaka Matsuo
Despite the recent advances in opinion mining for written reviews, few works have tackled the problem on other sources of reviews.
1 code implementation • 20 Apr 2020 • Edison Marrese-Taylor, Machel Reid, Yutaka Matsuo
Document editing has become a pervasive component of the production of information, with version control systems enabling edits to be efficiently stored and applied.
no code implementations • 6 Apr 2020 • Kenya Sakka, Kotaro Nakayama, Nisei Kimura, Taiki Inoue, Yusuke Iwasawa, Ryohei Yamaguchi, Yosimasa Kawazoe, Kazuhiko Ohe, Yutaka Matsuo
From the generated findings, we confirmed that the proposed method was able to account for orthographic variants.
no code implementations • 9 Mar 2020 • Machel Reid, Edison Marrese-Taylor, Yutaka Matsuo
The contrast between the need for large amounts of data for current Natural Language Processing (NLP) techniques, and the lack thereof, is accentuated in the case of African languages, most of which are considered low-resource.
no code implementations • ICLR 2020 • Hirono Okamoto, Masahiro Suzuki, Yutaka Matsuo
However, on difficult datasets or models with low classification ability, these methods incorrectly regard in-distribution samples close to the decision boundary as OOD samples.
1 code implementation • ACM 2019 • Hiromi Nakagawa, Yusuke Iwasawa, Yutaka Matsuo
Inspired by the recent successes of the graph neural network (GNN), we herein propose a GNN-based knowledge tracing method, i.e., graph-based knowledge tracing.
no code implementations • 25 Sep 2019 • Masahiro Suzuki, Yutaka Matsuo
However, this relation-based approach presents a difficulty: many of the test images are predicted as biased to the seen domain, i.e., the \emph{domain bias problem}.
no code implementations • 25 Sep 2019 • Yusuke Iwasawa, Kei Akuzawa, Yutaka Matsuo
Adversarial invariance induction (AII) is powerful for this purpose: it maximizes a proxy of the conditional entropy between representations and attributes via adversarial training between an attribute discriminator and a feature extractor.
no code implementations • WS 2019 • Edison Marrese-Taylor, Pablo Loyola, Yutaka Matsuo
We propose an edit-centric approach to assess Wikipedia article quality as a complementary alternative to current full document-based techniques.
no code implementations • ICLR 2019 • Hirono Okamoto, Shohei Ohsawa, Itto Higuchi, Haruka Murakami, Mizuki Sango, Zhenghang Cui, Masahiro Suzuki, Hiroshi Kajino, Yutaka Matsuo
It reformulates the posterior with a natural pairing $\langle \cdot, \cdot \rangle: \mathcal{Z} \times \mathcal{Z}^* \rightarrow \mathbb{R}$, which can be extended to uncountably infinite domains, such as continuous domains, as well as to interpolation.
no code implementations • 29 Apr 2019 • Kei Akuzawa, Yusuke Iwasawa, Yutaka Matsuo
However, previous domain-invariance-based methods overlooked the underlying dependency of classes on domains, which is responsible for the trade-off between classification accuracy and domain invariance.
1 code implementation • NAACL 2019 • Jorge A. Balazs, Yutaka Matsuo
In this paper we study how different ways of combining character and word-level representations affect the quality of both final word and sentence representations.
no code implementations • ICLR Workshop DeepGenStruct 2019 • Hirono Okamoto, Masahiro Suzuki, Itto Higuchi, Shohei Ohsawa, Yutaka Matsuo
However, when the dimension of multiclass labels is large, these models cannot change images corresponding to labels, because learning multiple distributions of the corresponding class is necessary to transfer an image.
no code implementations • ICLR Workshop LLD 2019 • Yusuke Iwasawa, Kei Akuzawa, Yutaka Matsuo
Adversarial feature learning (AFL) is a powerful framework for learning representations invariant to a nuisance attribute, which uses an adversarial game between a feature extractor and a categorical attribute classifier.
no code implementations • ICLR Workshop LLD 2019 • Kei Akuzawa, Yusuke Iwasawa, Yutaka Matsuo
Learning domain-invariant representation is a dominant approach for domain generalization.
no code implementations • WS 2018 • Pablo Loyola, Edison Marrese-Taylor, Jorge Balazs, Yutaka Matsuo, Fumiko Satoh
We propose to study the generation of descriptions from source code changes by integrating the messages included on code commits and the intra-code documentation inside the source in the form of docstrings.
no code implementations • 27 Sep 2018 • Kei Akuzawa, Yusuke Iwasawa, Yutaka Matsuo
Learning domain-invariant representation is a dominant approach for domain generalization, where we need to build a classifier that is robust toward domain shifts induced by change of users, acoustic or lighting conditions, etc.
1 code implementation • WS 2018 • Suzana Ilić, Edison Marrese-Taylor, Jorge A. Balazs, Yutaka Matsuo
Predicting context-dependent and non-literal utterances like sarcastic and ironic expressions still remains a challenging task in NLP, as it goes beyond linguistic patterns, encompassing common sense and shared knowledge as crucial components.
1 code implementation • WS 2018 • Jorge A. Balazs, Edison Marrese-Taylor, Yutaka Matsuo
In this paper we describe our system designed for the WASSA 2018 Implicit Emotion Shared Task (IEST), which obtained 2$^{\text{nd}}$ place out of 26 teams with a test macro F1 score of $0.710$.
no code implementations • WS 2018 • Edison Marrese-Taylor, Ai Nakajima, Yutaka Matsuo, Ono Yuichi
In this paper we formalize the problem of automatic fill-in-the-blank question generation using two standard NLP machine learning schemes, proposing concrete deep learning models for each.
no code implementations • SEMEVAL 2018 • Edison Marrese-Taylor, Suzana Ilic, Jorge A. Balazs, Yutaka Matsuo, Helmut Prendinger
In this paper we introduce our system for the task of Irony detection in English tweets, a part of SemEval 2018.
no code implementations • 6 Apr 2018 • Kei Akuzawa, Yusuke Iwasawa, Yutaka Matsuo
Recent advances in neural autoregressive models have improved the performance of speech synthesis (SS).
no code implementations • 26 Jan 2018 • Masahiro Suzuki, Kotaro Nakayama, Yutaka Matsuo
However, we found that when this model attempts to generate a large dimensional modality missing at the input, the joint representation collapses and this modality cannot be generated successfully.
no code implementations • ICLR 2018 • Joji Toyama, Yusuke Iwasawa, Kotaro Nakayama, Yutaka Matsuo
The partial reward function is a reward function for a partial sequence of a certain length.
no code implementations • ICLR 2018 • Yusuke Iwasawa, Kotaro Nakayama, Yutaka Matsuo
AFL learns such representations by training the network to deceive an adversary that predicts the sensitive information from the network; therefore, the success of AFL heavily relies on the choice of the adversary.
no code implementations • ICLR 2018 • Shohei Ohsawa, Kei Akuzawa, Tatsuya Matsushima, Gustavo Bezerra, Yusuke Iwasawa, Hiroshi Kajino, Seiya Takenaka, Yutaka Matsuo
Existing multi-agent reinforcement learning (MARL) communication methods have relied on a trusted third party (TTP) to distribute reward to agents, leaving them inapplicable in peer-to-peer environments.
no code implementations • EMNLP 2017 • Masaru Isonuma, Toru Fujino, Junichiro Mori, Yutaka Matsuo, Ichiro Sakata
The need for automatic document summarization that can be used for practical applications is increasing rapidly.
1 code implementation • WS 2017 • Edison Marrese-Taylor, Yutaka Matsuo
In this paper we describe a deep learning system that has been designed and built for the WASSA 2017 Emotion Intensity Shared Task.
1 code implementation • WS 2017 • Edison Marrese-Taylor, Jorge A. Balazs, Yutaka Matsuo
These results, as well as further experiments on domain adaptation for aspect extraction, suggest that differences between speech and written text, which have been discussed extensively in the literature, also extend to the domain of product reviews, where they are relevant for fine-grained opinion mining.
1 code implementation • WS 2017 • Jorge A. Balazs, Edison Marrese-Taylor, Pablo Loyola, Yutaka Matsuo
Finally it combines the refined representations of both sentences into a single vector to be used for classification.
no code implementations • 9 Jun 2017 • Mohammadamin Barekatain, Miquel Martí, Hsueh-Fu Shih, Samuel Murray, Kotaro Nakayama, Yutaka Matsuo, Helmut Prendinger
Despite significant progress in the development of human action detection datasets and algorithms, no current dataset is representative of real-world aerial view scenarios.
1 code implementation • ACL 2017 • Pablo Loyola, Edison Marrese-Taylor, Yutaka Matsuo
We propose a model to automatically describe changes introduced in the source code of a program using natural language.
1 code implementation • EACL 2017 • Edison Marrese-Taylor, Yutaka Matsuo
Reproducing experiments is an important instrument to validate previous work and build upon existing approaches.
no code implementations • 25 Nov 2016 • Joji Toyama, Masanori Misono, Masahiro Suzuki, Kotaro Nakayama, Yutaka Matsuo
Earlier studies introduced a latent variable to capture the entire meaning of a sentence and achieved improvements in attention-based Neural Machine Translation.
2 code implementations • 7 Nov 2016 • Masahiro Suzuki, Kotaro Nakayama, Yutaka Matsuo
As described herein, we propose a joint multimodal variational autoencoder (JMVAE), in which all modalities are independently conditioned on joint representation.
no code implementations • 10 Oct 2016 • Masatoshi Uehara, Issei Sato, Masahiro Suzuki, Kotaro Nakayama, Yutaka Matsuo
Generative adversarial networks (GANs) are successful deep generative models.
no code implementations • PACLIC 2015 • Rahul Kamath, Masanao Ochi, Yutaka Matsuo
While previous approaches to obtaining product ratings require either a large number of user ratings or a few review texts, we show that it is possible to predict ratings with few user ratings and no review text.