Multimodal Deep Learning

66 papers with code • 1 benchmark • 17 datasets

Multimodal deep learning combines information from multiple modalities, such as text, images, audio, and video, to make more accurate and comprehensive predictions. Deep neural networks are trained on data spanning several of these modalities and learn to make predictions from the combined input.

One of the key challenges in multimodal deep learning is how to effectively combine information from multiple modalities. This can be done using a variety of techniques, such as fusing the features extracted from each modality, or using attention mechanisms to weight the contribution of each modality based on its importance for the task at hand.
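As an illustration of these two strategies, the sketch below is a minimal, hypothetical PyTorch example (the module names, dimensions, and class layout are assumptions for demonstration, not drawn from any paper listed on this page). It shows late fusion by concatenating per-modality features, and a learned attention weighting that scores each modality's contribution per input.

```python
import torch
import torch.nn as nn

class ConcatFusion(nn.Module):
    """Late fusion: concatenate per-modality feature vectors, then classify."""
    def __init__(self, dims, num_classes):
        super().__init__()
        self.classifier = nn.Linear(sum(dims), num_classes)

    def forward(self, features):
        # features: list of tensors, each of shape (batch, dim_i)
        fused = torch.cat(features, dim=-1)
        return self.classifier(fused)

class AttentionFusion(nn.Module):
    """Attention fusion: project each modality into a shared space and
    weight its contribution with a learned, input-dependent score."""
    def __init__(self, dims, shared_dim, num_classes):
        super().__init__()
        self.projections = nn.ModuleList([nn.Linear(d, shared_dim) for d in dims])
        self.score = nn.Linear(shared_dim, 1)
        self.classifier = nn.Linear(shared_dim, num_classes)

    def forward(self, features):
        # Project each modality and stack: (batch, num_modalities, shared_dim)
        projected = torch.stack(
            [proj(f) for proj, f in zip(self.projections, features)], dim=1
        )
        # One scalar score per modality, normalized across modalities
        weights = torch.softmax(self.score(projected), dim=1)   # (batch, M, 1)
        fused = (weights * projected).sum(dim=1)                 # (batch, shared_dim)
        return self.classifier(fused)

# Example usage with made-up feature sizes: a 512-d image feature and a 768-d text feature
image_feat = torch.randn(8, 512)
text_feat = torch.randn(8, 768)
model = AttentionFusion(dims=[512, 768], shared_dim=256, num_classes=10)
logits = model([image_feat, text_feat])   # shape: (8, 10)
```

The attention variant lets the model down-weight a modality that is uninformative (or missing) for a given input, which is one reason attention-based fusion is often preferred over plain concatenation.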

Multimodal deep learning has many applications, including image captioning, speech recognition, natural language processing, and autonomous vehicles. By combining information from multiple modalities, multimodal deep learning can improve the accuracy and robustness of models, enabling them to perform better in real-world scenarios where multiple types of information are present.

Most implemented papers

MMEA: Entity Alignment for Multi-Modal Knowledge Graphs

liyichen-cly/MMEA 20 Aug 2020

To that end, in this paper, we propose a novel solution called Multi-Modal Entity Alignment (MMEA) to address the problem of entity alignment in a multi-modal view.

Creation and Validation of a Chest X-Ray Dataset with Eye-tracking and Report Dictation for AI Development

cxr-eye-gaze/eye-gaze-dataset 15 Sep 2020

We report deep learning experiments that utilize the attention maps produced by the eye gaze dataset to show the potential utility of this data.

Multimodal Emotion Recognition with Transformer-Based Self Supervised Feature Fusion

shamanez/Self-Supervised-Embedding-Fusion-Transformer 27 Oct 2020

Emotion Recognition is a challenging research area given its complex nature, and humans express emotional cues across various modalities such as language, facial expressions, and speech.

Multimodal Learning for Hateful Memes Detection

joannezhouyi/Hateful_Memes_Challenge 25 Nov 2020

Memes are used for spreading ideas through social networks.

Detecting Video Game Player Burnout with the Use of Sensor Data and Machine Learning

smerdov/eSports_Sensors_Dataset 29 Nov 2020

In this article, we propose methods based on sensor data analysis for predicting whether a player will win an upcoming encounter.

Piano Skills Assessment

ParitoshParmar/Piano-Skills-Assessment 13 Jan 2021

Can a computer determine a piano player's skill level?

Deep Learning for Android Malware Defenses: a Systematic Literature Review

yueyueL/DL-based-Android-Malware-Defenses-review 9 Mar 2021

In this paper, we conducted a systematic literature review to search and analyze how deep learning approaches have been applied in the context of malware defenses in the Android environment.