Music Information Retrieval

93 papers with code • 0 benchmarks • 23 datasets

Music Information Retrieval (MIR) covers methods for extracting and analyzing musical information from audio and symbolic data, spanning tasks such as transcription, auto-tagging, instrument recognition, structure analysis, and cross-modal retrieval.

Most implemented papers

audioLIME: Listenable Explanations Using Source Separation

CPJKU/audioLIME 2 Aug 2020

Deep neural networks (DNNs) are successfully applied in a wide variety of music information retrieval (MIR) tasks but their predictions are usually not interpretable.
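The core idea is a LIME-style surrogate whose interpretable components are the on/off states of separated sources, so each part of the explanation can be listened to. Below is a minimal sketch, assuming a separator has already produced `sources` and that `predict_fn` wraps the model being explained; both names are placeholders, not part of the audioLIME API.

```python
# Minimal LIME-style sketch over source-separated stems (hypothetical helper,
# not the audioLIME implementation).
import numpy as np
from sklearn.linear_model import Ridge

def explain_with_sources(sources, predict_fn, n_perturbations=500, seed=0):
    """sources: (K, n_audio_samples) separated stems; predict_fn: waveform -> score."""
    rng = np.random.default_rng(seed)
    K = sources.shape[0]
    masks = rng.integers(0, 2, size=(n_perturbations, K))       # random on/off subsets of sources
    preds = np.array([predict_fn(m @ sources) for m in masks])   # remix the kept stems and query the model
    weights = np.exp(-np.sum(1 - masks, axis=1) / K)             # favour perturbations close to the full mix
    surrogate = Ridge(alpha=1.0).fit(masks, preds, sample_weight=weights)
    return surrogate.coef_                                       # importance of each (listenable) source
```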

Tracing Back Music Emotion Predictions to Sound Sources and Intuitive Perceptual Qualities

CPJKU/audioLIME 14 Jun 2021

In previous work, we have shown how to derive explanations of model predictions in terms of spectrogram image segments that connect to the high-level emotion prediction via a layer of easily interpretable perceptual features.
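A hedged sketch of that two-level structure: an audio backbone predicts a small set of interpretable perceptual qualities, and the emotion output is a linear readout of those, so predictions can be traced back through the bottleneck. The backbone, feature count, and emotion dimensions below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PerceptualBottleneckModel(nn.Module):
    def __init__(self, backbone: nn.Module, n_perceptual: int = 7, n_emotions: int = 8):
        super().__init__()
        self.backbone = backbone                                 # spectrogram -> embedding
        self.to_perceptual = nn.LazyLinear(n_perceptual)         # mid-level perceptual qualities
        self.to_emotion = nn.Linear(n_perceptual, n_emotions)    # interpretable linear readout

    def forward(self, spec):
        mid = self.to_perceptual(self.backbone(spec))            # e.g. dissonance, tonal stability, ...
        return mid, self.to_emotion(mid)                         # emotions are traceable to `mid`
```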

Sequence-to-Sequence Piano Transcription with Transformers

rlax59us/MT3-MAESTRO-pytorch 19 Jul 2021

Automatic Music Transcription has seen significant progress in recent years, driven by training custom deep neural networks on large datasets.
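The sequence-to-sequence formulation maps spectrogram frames to a stream of MIDI-like event tokens with a generic encoder-decoder Transformer. A minimal sketch follows; vocabulary size, dimensions, and token semantics are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class Spec2Events(nn.Module):
    def __init__(self, n_mels=229, vocab_size=1000, d_model=256):
        super().__init__()
        self.in_proj = nn.Linear(n_mels, d_model)                # spectrogram frame -> model dimension
        self.tok_emb = nn.Embedding(vocab_size, d_model)         # note/time/velocity event tokens
        self.transformer = nn.Transformer(d_model=d_model, nhead=4,
                                          num_encoder_layers=4, num_decoder_layers=4,
                                          batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, spec, tokens):
        # spec: (batch, frames, n_mels); tokens: (batch, seq) of previously emitted events
        causal = self.transformer.generate_square_subsequent_mask(tokens.size(1))
        h = self.transformer(self.in_proj(spec), self.tok_emb(tokens), tgt_mask=causal)
        return self.out(h)                                       # next-event logits
```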

Learning Sparse Analytic Filters for Piano Transcription

cwitkowitz/sparse-analytic-filters 23 Aug 2021

In this work, several variations of a frontend filterbank learning module are investigated for piano transcription, a challenging low-level music information retrieval task.
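One way to read "frontend filterbank learning module" is a strided 1-D convolution over raw audio whose kernels act as the filters, with an optional sparsity term added to the training loss. The sketch below is a generic stand-in under that reading; sizes and the compression function are illustrative.

```python
import torch
import torch.nn as nn

class LearnableFilterbank(nn.Module):
    def __init__(self, n_filters=256, kernel_size=2048, hop=512):
        super().__init__()
        self.filters = nn.Conv1d(1, n_filters, kernel_size, stride=hop,
                                 padding=kernel_size // 2, bias=False)

    def forward(self, waveform):                  # waveform: (batch, samples)
        x = self.filters(waveform.unsqueeze(1))   # (batch, n_filters, frames)
        return torch.log1p(x.abs())               # simple compression of filter responses

    def sparsity_penalty(self):
        return self.filters.weight.abs().mean()   # add (scaled) to the training loss for sparse filters
```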

Nonnegative Tucker Decomposition with Beta-divergence for Music Structure Analysis of Audio Signals

amarmore/musicntd 27 Oct 2021

Nonnegative Tucker decomposition (NTD), a tensor decomposition model, has received increasing interest in recent years because of its ability to blindly extract meaningful patterns, in particular in Music Information Retrieval.
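A minimal sketch using TensorLy's nonnegative Tucker decomposition on a (frequency x time-in-bar x bar) tensor of a song. Note that TensorLy's routine optimizes the Frobenius (beta = 2) objective, whereas the paper generalizes to arbitrary beta-divergences; the tensor shape and ranks below are illustrative.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_tucker

tensor = tl.tensor(np.random.rand(80, 96, 40))        # stand-in for a barwise time-frequency tensor
core, factors = non_negative_tucker(tensor, rank=[8, 16, 10], n_iter_max=200)
W, H, Q = factors                                     # spectral templates, rhythmic patterns, bar-level activations
```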

A Data-Driven Methodology for Considering Feasibility and Pairwise Likelihood in Deep Learning Based Guitar Tablature Transcription Systems

cwitkowitz/guitar-transcription-with-inhibition 17 Apr 2022

We estimate the pairwise likelihood of string-fret combinations from symbolic tablature data and use it to inhibit unlikely co-activations during transcription. This naturally enforces playability constraints for guitar, and yields tablature which is more consistent with the symbolic data used to estimate pairwise likelihoods.
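A hedged sketch of such a pairwise inhibition penalty: given per-frame activation probabilities over all string/fret combinations and a matrix of pairwise likelihoods estimated from symbolic tablature, co-activating pairs are penalized in proportion to how unlikely they are. Shapes and weighting are assumptions, not the paper's exact loss.

```python
import torch

def inhibition_penalty(activations: torch.Tensor, pairwise_likelihood: torch.Tensor):
    """activations: (frames, C) probabilities over C string/fret classes;
    pairwise_likelihood: (C, C) empirical co-occurrence likelihoods in [0, 1]."""
    co_activation = activations.T @ activations        # (C, C) soft co-activation counts
    weights = 1.0 - pairwise_likelihood                 # rare pairs are inhibited most
    return (weights * co_activation).sum() / activations.shape[0]
```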

CLaMP: Contrastive Language-Music Pre-training for Cross-Modal Symbolic Music Information Retrieval

microsoft/muzic 21 Apr 2023

We introduce CLaMP: Contrastive Language-Music Pre-training, which learns cross-modal representations between natural language and symbolic music using a music encoder and a text encoder trained jointly with a contrastive loss.
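The joint training objective is a symmetric contrastive (InfoNCE-style) loss over the similarity matrix of paired music and text embeddings. The sketch below assumes the two encoders have already produced a batch of embeddings; it illustrates the loss, not CLaMP's actual encoders.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(music_emb, text_emb, temperature=0.07):
    music_emb = F.normalize(music_emb, dim=-1)                   # (batch, d)
    text_emb = F.normalize(text_emb, dim=-1)                     # (batch, d)
    logits = music_emb @ text_emb.T / temperature                # pairwise similarities
    targets = torch.arange(logits.size(0), device=logits.device) # matching pairs lie on the diagonal
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.T, targets)) / 2
```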

A Deep Bag-of-Features Model for Music Auto-Tagging

juhannam/deepbof 20 Aug 2015

Feature learning and deep learning have drawn great attention in recent years as a way of transforming input data into more effective representations using learning algorithms.
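To illustrate the aggregation idea behind a bag-of-features model, here is a classical stand-in: a codebook assigns each frame-level feature vector to a codeword, and the clip is summarized as a histogram fed to a tagger. The paper learns the local features with a deep network rather than k-means; all settings below are illustrative.

```python
import numpy as np
import librosa
from sklearn.cluster import KMeans

def bag_of_features(path, codebook: KMeans):
    """codebook: a KMeans model pre-fitted on frame-level MFCC features."""
    y, sr = librosa.load(path, sr=22050)
    frames = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20).T       # (frames, 20) local features
    codes = codebook.predict(frames)                             # assign each frame to a codeword
    hist = np.bincount(codes, minlength=codebook.n_clusters)
    return hist / max(hist.sum(), 1)                             # clip-level bag-of-features vector
```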

Automatic Instrument Recognition in Polyphonic Music Using Convolutional Neural Networks

glennq/instrument-recognition 17 Nov 2015

Traditional methods to tackle many music information retrieval tasks typically follow a two-step architecture: feature engineering followed by a simple learning algorithm.
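For reference, the traditional two-step pipeline described above looks roughly like the following: hand-engineered features (here, MFCC statistics) followed by a simple classifier, which is the baseline that an end-to-end CNN replaces. Feature and model choices are illustrative assumptions.

```python
import numpy as np
import librosa
from sklearn.svm import SVC

def handcrafted_features(y, sr):
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)           # step 1: feature engineering
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

clf = SVC(kernel="rbf")                                          # step 2: a simple learning algorithm
# clf.fit(np.stack([handcrafted_features(y, sr) for y, sr in training_clips]), labels)
```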

Deep convolutional neural networks for predominant instrument recognition in polyphonic music

iooops/CS221-Audio-Tagging 31 May 2016

We train our network from fixed-length music excerpts with a single-labeled predominant instrument and estimate an arbitrary number of predominant instruments from an audio signal with a variable length.
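A sketch of the variable-length inference scheme implied here: slide fixed-length windows over the clip, average the per-window class probabilities, and report every instrument whose aggregated score clears a threshold. Window length and threshold are assumptions.

```python
import numpy as np

def sliding_windows(y, sr, win_seconds=1.0, hop_seconds=0.5):
    win, hop = int(win_seconds * sr), int(hop_seconds * sr)
    return [y[s:s + win] for s in range(0, max(len(y) - win, 1), hop)]

def predict_predominant(probs_per_window: np.ndarray, threshold=0.5):
    """probs_per_window: (n_windows, n_instruments) sigmoid outputs of the network."""
    clip_scores = probs_per_window.mean(axis=0)                  # aggregate over windows
    return np.flatnonzero(clip_scores >= threshold)              # indices of predicted instruments
```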