Revisiting Distributional Correspondence Indexing: A Python Reimplementation and New Experiments

19 Oct 2018 · Alejandro Moreo, Andrea Esuli, Fabrizio Sebastiani

This paper introduces PyDCI, a new implementation of Distributional Correspondence Indexing (DCI) written in Python. DCI is a transfer learning method for cross-domain and cross-lingual text classification, for which we had previously provided an implementation (here called JaDCI) built on top of JaTeCS, a Java framework for text classification. PyDCI is a stand-alone version of DCI that exploits scikit-learn and the SciPy stack. We report on new experiments carried out to test PyDCI, in which we use as baselines high-performing methods that appeared after DCI was originally proposed. These experiments show that, thanks to a few subtle improvements we have made to DCI, PyDCI outperforms both JaDCI and the above-mentioned baselines, and delivers the best known results on the two popular benchmarks on which we had originally tested DCI, i.e., MultiDomainSentiment (a.k.a. MDS, for cross-domain adaptation) and Webis-CLS-10 (for cross-lingual adaptation). PyDCI, together with the code needed to replicate our experiments, is available at https://github.com/AlexMoreo/pydci .
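The abstract does not spell out how DCI represents documents, so below is a minimal, self-contained sketch of the general idea it builds on, written with scikit-learn and SciPy as the abstract suggests. This is not PyDCI's actual API: the function names, the choice of cosine as the correspondence function, the toy pivots, and the toy data are illustrative assumptions. The idea is that each term is profiled by its distributional correspondence with a small set of pivot terms shared across domains, documents are projected into that pivot space, and a standard classifier trained on the source domain is then applied to the target domain.

    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.preprocessing import normalize
    from sklearn.svm import LinearSVC

    def term_profiles(X, pivot_idx):
        """Profile every term by its cosine correspondence with the pivot terms.

        X is a (documents x terms) binary occurrence matrix; pivot_idx lists
        the column indices of the pivot terms.
        """
        T = normalize(X.T)      # one L2-normalised occurrence vector per term
        P = T[pivot_idx]        # occurrence vectors of the pivot terms
        return T @ P.T          # (terms x pivots) matrix of cosine similarities

    def project(X, profiles):
        """Represent each document as the L2-normalised sum of its terms' profiles."""
        return normalize(X @ profiles)

    # Toy data: the source domain is labelled, the target domain is not.
    vect = CountVectorizer(binary=True)
    X_src = vect.fit_transform(["good plot great acting",
                                "boring plot bad acting"])
    y_src = np.array([1, 0])
    X_tgt = vect.transform(["great battery good price",
                            "bad battery boring design"])

    # Pivots: terms assumed to behave consistently in both domains.
    pivot_idx = [vect.vocabulary_["good"], vect.vocabulary_["bad"]]

    # Train on source-domain documents projected into pivot space,
    # then classify target-domain documents projected the same way.
    clf = LinearSVC().fit(project(X_src, term_profiles(X_src, pivot_idx)), y_src)
    print(clf.predict(project(X_tgt, term_profiles(X_tgt, pivot_idx))))

For the pivot selection strategy, the correspondence functions actually evaluated, and the post-processing used in the experiments, see the paper and the repository linked above.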

Task: Sentiment Analysis
Dataset: Multi-Domain Sentiment Dataset
Model: Distributional Correspondence Indexing

  Metric       Metric Value   Global Rank
  DVD          81.00          # 2
  Books        81.4           # 2
  Electronics  85.06          # 6
  Kitchen      85.9           # 2
  Average      83.30          # 2
