no code implementations • 3 Jul 2020 • Arthur Choi, Andy Shih, Anchal Goyanka, Adnan Darwiche
Recent work has shown that the input-output behavior of some machine learning systems can be captured symbolically using Boolean expressions or tractable Boolean circuits, which facilitates reasoning about the behavior of these systems.
no code implementations • 12 Jun 2020 • Yujia Shen, Arthur Choi, Adnan Darwiche
We propose to first learn a functional and parameterized representation of a conditional probability table (CPT), such as a neural network.
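As a minimal illustration of the idea (a hypothetical sketch; the weights and architecture below are invented for the example, not the paper's actual model), a CPT over a binary child with two binary parents can be represented as a small parameterized function whose output is normalized to a distribution:

```python
import math

# Hypothetical parameterized CPT: maps parent values (u1, u2) to a
# distribution over a binary child X via a linear layer + softmax.
# The weights below are illustrative, not learned from data.
W = [[0.5, -1.0], [1.5, 0.3]]   # one weight row per child state
b = [0.1, -0.2]

def cpt(u1, u2):
    scores = [W[x][0] * u1 + W[x][1] * u2 + b[x] for x in (0, 1)]
    z = max(scores)                       # stabilize the softmax
    exps = [math.exp(s - z) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]      # [P(X=0|u), P(X=1|u)]
```

Because of the softmax, the returned entries sum to one for every parent configuration, so the function behaves like a CPT row while its parameters can be trained like any neural network's.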
no code implementations • 5 Apr 2020 • Weijia Shi, Andy Shih, Adnan Darwiche, Arthur Choi
We consider the compilation of a binary neural network's decision function into tractable representations such as Ordered Binary Decision Diagrams (OBDDs) and Sentential Decision Diagrams (SDDs).
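As a toy illustration of such a compilation (a sketch only; the neuron, its weights, and the variable order are invented for the example), a single binary threshold unit can be compiled into a reduced OBDD by Shannon expansion with node merging:

```python
from itertools import product

def neuron(x):
    # Hypothetical binary threshold unit: fires iff x0 + x1 - x2 >= 1.
    return x[0] + x[1] - x[2] >= 1

TRUE, FALSE = 'T', 'F'
unique = {}  # unique table: shares isomorphic OBDD nodes

def compile_obdd(assign, i, n, f):
    """Shannon-expand f over variables i..n-1 under a partial assignment."""
    if i == n:
        return TRUE if f(assign) else FALSE
    lo = compile_obdd(assign + (0,), i + 1, n, f)
    hi = compile_obdd(assign + (1,), i + 1, n, f)
    if lo == hi:                 # redundant test: drop the node
        return lo
    return unique.setdefault((i, lo, hi), (i, lo, hi))

def evaluate(node, x):
    """Follow the decision path for input x through the diagram."""
    while node not in (TRUE, FALSE):
        i, lo, hi = node
        node = hi if x[i] else lo
    return node == TRUE

root = compile_obdd((), 0, 3, neuron)
```

The enumeration is exponential in the number of inputs, so this only conveys the semantics of the compilation; the resulting diagram, however, agrees with the neuron on all inputs and supports the polytime queries OBDDs are known for.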
no code implementations • 21 Dec 2018 • Arthur Choi, Ruocheng Wang, Adnan Darwiche
A neural network computes a function.
no code implementations • 9 May 2018 • Andy Shih, Arthur Choi, Adnan Darwiche
We propose an approach for explaining Bayesian network classifiers, which is based on compiling such classifiers into decision functions that have a tractable and symbolic form.
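A toy version of this pipeline (with invented numbers, and brute-force enumeration in place of circuit compilation) is to tabulate a naive Bayes classifier's decision function over its binary features and then search for a smallest subset of the instance's feature values that already fixes the decision:

```python
from itertools import product, combinations

# Hypothetical naive Bayes classifier over 3 binary features.
prior = 0.5                          # P(C=1)
theta = [0.8, 0.7, 0.6]              # P(f_i=1 | C=1)
phi   = [0.3, 0.4, 0.5]              # P(f_i=1 | C=0)

def classify(x):
    """Decision function: True iff P(C=1|x) >= P(C=0|x)."""
    p1, p0 = prior, 1 - prior
    for i, v in enumerate(x):
        p1 *= theta[i] if v else 1 - theta[i]
        p0 *= phi[i] if v else 1 - phi[i]
    return p1 >= p0

def sufficient_reason(x):
    """Smallest subset S of features whose values in x force the decision."""
    d = classify(x)
    for k in range(len(x) + 1):
        for S in combinations(range(len(x)), k):
            # S suffices if every completion of x's values on S agrees with d.
            if all(classify(y) == d
                   for y in product((0, 1), repeat=len(x))
                   if all(y[i] == x[i] for i in S)):
                return S
```

With these numbers, the positive instance (1, 1, 1) is explained by feature 0 alone: fixing f_0 = 1 already guarantees a positive classification, regardless of the other features. The compilation-based approach in the paper recovers such explanations without enumerating all completions.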
no code implementations • NeurIPS 2017 • Arthur Choi, Yujia Shen, Adnan Darwiche
Recently, the Probabilistic Sentential Decision Diagram (PSDD) has been proposed as a framework for systematically inducing and learning distributions over structured objects, including combinatorial objects such as permutations, rankings, and paths and matchings on a graph.
no code implementations • ICML 2017 • Arthur Choi, Adnan Darwiche
The past decade has seen a significant interest in learning tractable probabilistic representations.
no code implementations • NeurIPS 2016 • Eunice Yuh-Jie Chen, Yujia Shen, Arthur Choi, Adnan Darwiche
Our approach builds on a recently proposed framework for optimal structure learning with non-decomposable scores, which is general enough to accommodate ancestral constraints.
no code implementations • NeurIPS 2016 • Yujia Shen, Arthur Choi, Adnan Darwiche
We consider tractable representations of probability distributions and the polytime operations they support.
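For instance, on an OBDD, model counting is a polytime operation in the size of the diagram (a generic sketch with a hand-built diagram, not tied to this paper's representations):

```python
from functools import lru_cache

# Hand-built reduced OBDD over variables 0..2 for x0 AND (x1 OR x2).
# A node is (var, low_child, high_child); leaves are True / False.
n2 = (2, False, True)
n1 = (1, n2, True)
root = (0, False, n1)

@lru_cache(maxsize=None)
def model_count(node, var, n):
    """Count satisfying assignments over variables var..n-1."""
    if node is True:
        return 2 ** (n - var)       # remaining variables are free
    if node is False:
        return 0
    v, lo, hi = node
    skipped = 2 ** (v - var)        # variables skipped by reduced edges
    return skipped * (model_count(lo, v + 1, n) + model_count(hi, v + 1, n))
```

Memoization visits each node once, so the count is linear in the diagram's size even though the function has exponentially many models in general; here x0 AND (x1 OR x2) has exactly 3 satisfying assignments.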
no code implementations • NeurIPS 2015 • Jessa Bekker, Jesse Davis, Arthur Choi, Adnan Darwiche, Guy Van Den Broeck
We propose a tractable learner that guarantees efficient inference for a broader class of queries.
no code implementations • 5 Apr 2015 • Arthur Choi, Adnan Darwiche
Relax, Compensate and then Recover (RCR) is a paradigm for approximate inference in probabilistic graphical models that has previously provided theoretical and practical insights into iterative belief propagation and some of its generalizations.
no code implementations • NeurIPS 2014 • Khaled S. Refaat, Arthur Choi, Adnan Darwiche
We propose a technique for decomposing the parameter learning problem in Bayesian networks into independent learning problems.
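For the complete-data case, this decomposition is classical: the log-likelihood of a Bayesian network factorizes by CPT, so each table is estimated independently from counts (a generic illustration of that baseline, not the paper's technique, which targets the harder setting):

```python
from collections import Counter

# Complete-data ML estimation for a two-node network A -> B: the likelihood
# factorizes, so P(A) and P(B|A) are estimated from counts independently.
data = [(0, 0), (0, 1), (1, 1), (1, 1), (0, 0), (1, 0)]  # illustrative (a, b) records

def estimate(data):
    n = len(data)
    a_counts = Counter(a for a, _ in data)
    ab_counts = Counter(data)
    p_a = {a: a_counts[a] / n for a in (0, 1)}
    p_b_given_a = {(b, a): ab_counts[(a, b)] / a_counts[a]
                   for a in (0, 1) for b in (0, 1)}
    return p_a, p_b_given_a

p_a, p_b = estimate(data)
```

Each dictionary is computed from its own sufficient statistics, so the two estimation problems never interact; the paper's contribution is recovering this kind of independence where it does not hold outright.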
no code implementations • 25 Nov 2014 • Guy Van den Broeck, Karthika Mohan, Arthur Choi, Judea Pearl
In contrast to textbook approaches such as EM and the gradient method, our approach is non-iterative, yields closed form parameter estimates, and eliminates the need for inference in a Bayesian network.
no code implementations • NeurIPS 2013 • Khaled S. Refaat, Arthur Choi, Adnan Darwiche
It facilitates the design of EDML algorithms for new graphical models, leading to a new algorithm for learning parameters in Markov networks.
no code implementations • NeurIPS 2009 • Arthur Choi, Adnan Darwiche
We identify a second approach to compensation that is based on a more refined idealized case, resulting in a new approximation with distinct properties.