no code implementations • 14 May 2022 • Bishwamittra Ghosh, Dmitry Malioutov, Kuldeep S. Meel
The interpretability of rule-based classifiers is generally tied to the size of the rules: smaller rules are considered more interpretable.
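As a toy illustration of what "rule size" means here (the rule, feature names, and helper functions below are hypothetical, not the paper's formulation), a classification rule in conjunctive normal form can be scored by its total literal count:

```python
# Hypothetical example: a CNF rule is a list of clauses, each clause a
# list of literals; rule size = total literal count, a common proxy
# for interpretability (smaller rules are easier to read).

rule = [["fever", "not vaccinated"],   # clause 1: fever OR (NOT vaccinated)
        ["cough"]]                     # AND clause 2: cough

def rule_size(rule):
    """Number of literals across all clauses; smaller = more interpretable."""
    return sum(len(clause) for clause in rule)

def lit_true(lit, features):
    """A literal is a feature name, optionally prefixed with 'not '."""
    if lit.startswith("not "):
        return lit[len("not "):] not in features
    return lit in features

def predict(rule, features):
    """CNF semantics: every clause must contain at least one true literal."""
    return all(any(lit_true(lit, features) for lit in clause) for clause in rule)

print(rule_size(rule))                         # 3 literals
print(predict(rule, {"fever", "cough"}))       # True
print(predict(rule, {"cough", "vaccinated"}))  # False
```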
1 code implementation • 5 Dec 2018 • Dmitry Malioutov, Kuldeep S. Meel
The wide adoption of machine learning in industry, government, medicine, and science has renewed interest in interpretable machine learning: many decisions are too important to be delegated to black-box techniques such as deep neural networks or kernel SVMs.
no code implementations • 5 Aug 2017 • Dmitry Malioutov, Tianchi Chen, Jacob Jaffe, Edoardo Airoldi, Steven Carr, Bogdan Budnik, Nikolai Slavov
Many proteoforms, which arise from alternative splicing, post-translational modifications (PTMs), or paralogous genes, have distinct biological functions; histone PTM proteoforms are one example.
1 code implementation • 3 Jun 2016 • Insu Han, Dmitry Malioutov, Haim Avron, Jinwoo Shin
Computation of the trace of a matrix function plays an important role in many scientific computing applications, including machine learning, computational physics (e.g., lattice quantum chromodynamics), network analysis, and computational biology (e.g., protein folding), to name a few.
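As background, such methods build on stochastic trace estimation. Here is a minimal sketch of the classical Hutchinson estimator for tr(A^3) using only matrix-vector products (the matrix and sample count are illustrative; this is not the paper's Chebyshev-based estimator):

```python
import numpy as np

def hutchinson_trace_cube(A, num_samples, rng):
    """Estimate tr(A^3) using only matrix-vector products with A.

    Hutchinson's identity: E[z^T M z] = tr(M) when the entries of z are
    i.i.d. Rademacher (+1/-1). With M = A^3, each sample costs three
    matvecs and A^3 is never formed explicitly.
    """
    n = A.shape[0]
    total = 0.0
    for _ in range(num_samples):
        z = rng.choice([-1.0, 1.0], size=n)
        total += z @ (A @ (A @ (A @ z)))
    return total / num_samples

rng = np.random.default_rng(0)
n = 50
C = rng.standard_normal((n, n))
A = C @ C.T / n                      # a random positive semidefinite matrix
exact = np.trace(A @ A @ A)          # feasible here only because n is small
approx = hutchinson_trace_cube(A, num_samples=5000, rng=rng)
print(exact, approx)
```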
1 code implementation • 22 Mar 2015 • Insu Han, Dmitry Malioutov, Jinwoo Shin
Logarithms of determinants of large positive definite matrices appear ubiquitously in machine learning applications including Gaussian graphical and Gaussian process models, partition functions of discrete graphical models, minimum-volume ellipsoids, metric learning and kernel learning.
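For moderately sized matrices the log-determinant can be computed exactly via a Cholesky factorization; this is the quantity that randomized approximations target for matrices too large to factor. A minimal numpy sketch (the SPD matrix below is an arbitrary example):

```python
import numpy as np

# An SPD test matrix: B B^T + n I keeps all eigenvalues well away from zero.
rng = np.random.default_rng(0)
n = 200
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)

# Cholesky route: A = L L^T  =>  log det A = 2 * sum(log(diag(L))),
# far more stable than evaluating log(det(A)) directly.
L = np.linalg.cholesky(A)
logdet_chol = 2.0 * np.sum(np.log(np.diag(L)))

# reference value from numpy's sign/log-det routine
sign, logdet_ref = np.linalg.slogdet(A)
print(logdet_chol, logdet_ref)
```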
2 code implementations • 1 Jun 2014 • Dmitry Malioutov, Nikolai Slavov
The special case when all dependent and independent variables have the same level of uncorrelated Gaussian noise, known as ordinary TLS, can be solved by singular value decomposition (SVD).
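The SVD solution for ordinary TLS fits in a few lines of numpy; the data below are synthetic and illustrative:

```python
import numpy as np

def tls(X, y):
    """Ordinary total least squares via SVD of the augmented matrix [X | y].

    The TLS estimate comes from the right singular vector v associated with
    the smallest singular value of [X | y]: writing v = (v_x, v_y) with
    v_y != 0, the coefficients are beta = -v_x / v_y.
    """
    Z = np.column_stack([X, y])
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    v = Vt[-1]            # right singular vector of the smallest singular value
    return -v[:-1] / v[-1]

# demo: noise of equal variance added to both X and y, as the TLS model assumes
rng = np.random.default_rng(0)
beta_true = np.array([2.0, -1.0, 0.5])
X = rng.standard_normal((500, 3))
y = X @ beta_true
sigma = 0.05
beta_hat = tls(X + sigma * rng.standard_normal(X.shape),
               y + sigma * rng.standard_normal(y.shape))
print(beta_hat)  # close to beta_true
```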
no code implementations • 5 Dec 2013 • Dmitry Malioutov, Aleksandr Aravkin
Sparse reconstruction approaches using the re-weighted l1-penalty have been shown, both empirically and theoretically, to provide a significant improvement in recovering sparse signals in comparison to the l1-relaxation.
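A minimal sketch of the re-weighting idea, using proximal gradient (ISTA) for the inner weighted-lasso solves; the problem sizes, regularization constant, and eps smoothing term are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding, the proximal operator of the weighted l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def reweighted_l1(A, b, lam=0.01, outer=5, inner=200, eps=1e-2):
    """Iteratively re-weighted l1 for sparse recovery (a sketch).

    Each outer round solves the weighted lasso
        min_x 0.5 * ||Ax - b||^2 + lam * sum_i w_i |x_i|
    by ISTA, then sets w_i = 1 / (|x_i| + eps), so large coefficients are
    penalized less on the next round, sharpening the sparsity pattern.
    """
    n = A.shape[1]
    x = np.zeros(n)
    w = np.ones(n)
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of grad
    for _ in range(outer):
        for _ in range(inner):
            x = soft(x - step * A.T @ (A @ x - b), step * lam * w)
        w = 1.0 / (np.abs(x) + eps)
    return x

# demo: recover a 5-sparse signal from 60 random measurements
rng = np.random.default_rng(0)
n, m, k = 200, 60, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
idx = rng.choice(n, size=k, replace=False)
x_true[idx] = rng.standard_normal(k) * 3
b = A @ x_true
x_hat = reweighted_l1(A, b)
print(np.linalg.norm(x_hat - x_true))
```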
no code implementations • NeurIPS 2007 • Sujay Sanghavi, Dmitry Malioutov, Alan S. Willsky
Loopy belief propagation has been employed in a wide variety of applications with great empirical success, but it comes with few theoretical guarantees.
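A minimal sum-product sketch on the smallest loopy graph, a 3-cycle of binary variables, comparing BP beliefs at the fixed point against exact marginals from enumeration; the couplings and local fields are illustrative choices:

```python
import numpy as np
from itertools import product

# Pairwise MRF: three spins x_i in {-1,+1} on a cycle, edge potential
# psi(xi, xj) = exp(J*xi*xj) and node potential phi_i(xi) = exp(h_i*xi).
J = 0.2
h = [0.3, -0.2, 0.1]
states = [-1, 1]
edges = [(0, 1), (1, 2), (2, 0)]
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}

def psi(xi, xj):
    return np.exp(J * xi * xj)

def phi(i, xi):
    return np.exp(h[i] * xi)

# message m_{i->j} is a length-2 array over the states of x_j
msgs = {(i, j): np.ones(2) / 2
        for a, b in edges for (i, j) in [(a, b), (b, a)]}

for _ in range(200):
    new = {}
    for (i, j) in msgs:
        out = np.zeros(2)
        for sj, xj in enumerate(states):
            for si, xi in enumerate(states):
                inc = np.prod([msgs[(k, i)][si]
                               for k in neighbors[i] if k != j])
                out[sj] += psi(xi, xj) * phi(i, xi) * inc
        new[(i, j)] = out / out.sum()
    delta = max(np.abs(new[e] - msgs[e]).max() for e in msgs)
    msgs = new
    if delta < 1e-12:        # messages have converged
        break

# node beliefs: local potential times all incoming messages
beliefs = []
for i in range(3):
    b = np.array([phi(i, xi) for xi in states])
    for k in neighbors[i]:
        b *= msgs[(k, i)]
    beliefs.append(b / b.sum())

# exact marginals by enumerating all 2^3 configurations
marg = np.zeros((3, 2))
for assign in product(range(2), repeat=3):
    w = np.prod([psi(states[assign[a]], states[assign[b]]) for a, b in edges])
    w *= np.prod([phi(i, states[assign[i]]) for i in range(3)])
    for i in range(3):
        marg[i, assign[i]] += w
marg /= marg.sum(axis=1, keepdims=True)

for i in range(3):
    print(beliefs[i], marg[i])
```

With weak couplings like these, the loopy beliefs land close to the exact marginals, though on graphs with cycles BP carries no general exactness guarantee.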