Audio Captioning using Gated Recurrent Units

5 Jun 2020  ·  Ayşegül Özkaya Eren, Mustafa Sert

Audio captioning is a recently proposed task for automatically generating a textual description of a given audio clip. In this study, a novel deep network architecture with audio embeddings is presented to predict audio captions. With the aim of extracting audio features in addition to log Mel energies, the VGGish audio embedding model is used to explore the usability of audio embeddings in the audio captioning task. The proposed architecture encodes the audio and text input modalities separately and combines them before the decoding stage. Audio encoding is conducted with a Bi-directional Gated Recurrent Unit (BiGRU), while a GRU is used for the text encoding phase. We evaluate our model on the newly published audio captioning dataset Clotho to compare the experimental results with the literature. Our experimental results show that the proposed BiGRU-based deep model outperforms the state-of-the-art results.
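The abstract describes a dual-encoder design: a BiGRU over audio features (VGGish embeddings plus log Mel energies), a GRU over the partial caption, and a fusion of the two before decoding. Below is a minimal PyTorch sketch of that idea; all dimensions, layer sizes, and the mean-pooling fusion are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class AudioCaptionSketch(nn.Module):
    """Sketch of a BiGRU audio encoder + GRU text encoder fused before decoding.
    Hypothetical sizes: 128-d VGGish frame embeddings, 1000-word vocabulary."""
    def __init__(self, audio_dim=128, embed_dim=128, hidden=256, vocab_size=1000):
        super().__init__()
        # Audio encoder: bidirectional GRU over per-frame audio embeddings
        self.audio_enc = nn.GRU(audio_dim, hidden, batch_first=True, bidirectional=True)
        # Text encoder: unidirectional GRU over word embeddings
        self.word_emb = nn.Embedding(vocab_size, embed_dim)
        self.text_enc = nn.GRU(embed_dim, hidden, batch_first=True)
        # Decoder consumes the concatenated audio context and text states
        self.decoder = nn.GRU(2 * hidden + hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, audio, tokens):
        a, _ = self.audio_enc(audio)                  # (B, T_audio, 2*hidden)
        a_ctx = a.mean(dim=1, keepdim=True)           # pool audio over time (assumption)
        t, _ = self.text_enc(self.word_emb(tokens))   # (B, T_text, hidden)
        fused = torch.cat([a_ctx.expand(-1, t.size(1), -1), t], dim=-1)
        d, _ = self.decoder(fused)
        return self.out(d)                            # (B, T_text, vocab_size) logits

model = AudioCaptionSketch()
audio = torch.randn(2, 10, 128)            # 2 clips, 10 audio frames each
tokens = torch.randint(0, 1000, (2, 7))    # 2 partial captions, 7 tokens each
logits = model(audio, tokens)
```

At each step the decoder output gives a distribution over the vocabulary for the next caption word; the exact fusion and decoding details in the paper may differ from this sketch.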


Datasets

Clotho

Results from the Paper


Ranked #7 on Audio captioning on Clotho (CIDEr metric)

| Task             | Dataset | Model                              | Metric Name | Metric Value | Global Rank |
|------------------|---------|------------------------------------|-------------|--------------|-------------|
| Audio captioning | Clotho  | RNN-GRU-EncDec + VGGish + Word2Vec | CIDEr       | 0.18         | # 7         |

Methods

GRU, BiGRU, VGGish, Word2Vec