no code implementations • 20 Sep 2023 • Urszula Chajewska, Harsh Shrivastava
We develop a federated learning (FL) framework which maintains a global NGM model that learns the averaged information from the local NGM models while keeping the training data within each client's environment.
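The global-model update described above can be illustrated with a generic FedAvg-style weighted parameter average (a minimal sketch with hypothetical names; the paper's actual aggregation for NGM models may differ):

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Weighted average of client model parameters (FedAvg-style sketch).

    client_weights: list of dicts mapping parameter name -> np.ndarray
    client_sizes:   number of local training samples per client
    The raw training data never leaves the clients; only parameters
    are aggregated at the server.
    """
    total = sum(client_sizes)
    global_weights = {}
    for name in client_weights[0]:
        global_weights[name] = sum(
            (n / total) * w[name] for w, n in zip(client_weights, client_sizes)
        )
    return global_weights

# Two toy clients, each holding a single parameter tensor "W".
clients = [{"W": np.array([1.0, 2.0])}, {"W": np.array([3.0, 4.0])}]
sizes = [1, 3]
avg = federated_average(clients, sizes)
# avg["W"] -> 0.25*[1, 2] + 0.75*[3, 4] = [2.5, 3.5]
```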
no code implementations • 10 Aug 2023 • Urszula Chajewska, Harsh Shrivastava
A Conditional Independence (CI) graph is a special type of Probabilistic Graphical Model (PGM) in which feature connections are modeled as an undirected graph and the edge weights show the partial correlation strength between features.
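The partial-correlation edge weights of a Gaussian CI graph can be obtained from the precision matrix via the standard identity rho_ij = -Theta_ij / sqrt(Theta_ii * Theta_jj); rho_ij = 0 means features i and j are conditionally independent given the rest. A minimal sketch (function name is illustrative):

```python
import numpy as np

def partial_correlations(X):
    """Partial correlation matrix from the precision matrix Theta.

    Uses the identity rho_ij = -Theta_ij / sqrt(Theta_ii * Theta_jj),
    so a zero entry corresponds to conditional independence of the
    two features given all others (for Gaussian data).
    """
    theta = np.linalg.inv(np.cov(X, rowvar=False))  # precision matrix
    d = np.sqrt(np.diag(theta))
    rho = -theta / np.outer(d, d)
    np.fill_diagonal(rho, 1.0)
    return rho

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))      # toy data: 500 samples, 4 features
rho = partial_correlations(X)      # symmetric, unit diagonal
```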
no code implementations • 23 Apr 2023 • Zhi Chen, Sarah Tan, Urszula Chajewska, Cynthia Rudin, Rich Caruana
Missing values are a fundamental problem in data science.
1 code implementation • 27 Feb 2023 • Harsh Shrivastava, Urszula Chajewska
Sparse graph recovery methods work well when the data follows their assumptions, but they are often not designed to support downstream probabilistic queries.
1 code implementation • 13 Nov 2022 • Harsh Shrivastava, Urszula Chajewska
Conditional Independence (CI) graphs are a type of probabilistic graphical model used primarily to gain insight into feature relationships.
2 code implementations • 2 Oct 2022 • Harsh Shrivastava, Urszula Chajewska
Theoretically, these models can represent very complex dependency functions, but in practice simplifying assumptions are often made because of the computational limitations associated with graph operations.
4 code implementations • 23 May 2022 • Harsh Shrivastava, Urszula Chajewska, Robin Abraham, Xinshi Chen
Our model, uGLAD, builds upon and extends the state-of-the-art model GLAD to the unsupervised setting.
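GLAD-style models unroll an optimizer for a sparse graph recovery objective of the graphical-lasso form; the objective itself can be written down directly (a sketch with an illustrative function name; penalty variants, e.g. excluding the diagonal, are common):

```python
import numpy as np

def glasso_objective(theta, S, lam):
    """Graphical lasso objective (sketch):
        -log det(Theta) + tr(S @ Theta) + lam * ||Theta||_1
    where S is the sample covariance and Theta the candidate
    precision matrix whose sparsity pattern defines the graph.
    """
    sign, logdet = np.linalg.slogdet(theta)  # stable log-determinant
    return -logdet + np.trace(S @ theta) + lam * np.abs(theta).sum()

# Sanity check: at Theta = S = I with no penalty, the value is
# -log det(I) + tr(I) = 0 + 3 for d = 3.
val = glasso_objective(np.eye(3), np.eye(3), lam=0.0)  # -> 3.0
```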
1 code implementation • 4 Feb 2022 • Leo Betthauser, Urszula Chajewska, Maurice Diesendruck, Rohith Pesala
Rapid progress in representation learning has led to a proliferation of embedding models, and to associated challenges of model selection and practical application.
1 code implementation • 22 Oct 2018 • Xuezhou Zhang, Sarah Tan, Paul Koch, Yin Lou, Urszula Chajewska, Rich Caruana
In the first part of this paper, we generalize a state-of-the-art GAM learning algorithm based on boosted trees to the multiclass setting. We show that this multiclass algorithm outperforms existing GAM learning algorithms and sometimes matches the performance of full-complexity models such as gradient boosted trees.
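The core idea of a boosted GAM is to restrict each boosting step to a single feature, so the final model is a sum of per-feature shape functions f(x) = b + sum_j f_j(x_j). A minimal regression-only sketch with piecewise-constant shape functions (hypothetical names; the paper's algorithm is tree-based and multiclass):

```python
import numpy as np

def fit_gam_boosted(X, y, n_rounds=50, lr=0.1, n_bins=8):
    """Cyclic gradient boosting of per-feature, piecewise-constant
    shape functions: each step fits the current residual using only
    one feature, preserving the additive (interpretable) structure."""
    n, d = X.shape
    bias = y.mean()
    # quantile bin edges per feature; shapes hold per-bin offsets
    edges = [np.quantile(X[:, j], np.linspace(0, 1, n_bins + 1)[1:-1])
             for j in range(d)]
    bins = np.stack([np.digitize(X[:, j], edges[j]) for j in range(d)], axis=1)
    shapes = np.zeros((d, n_bins))
    pred = np.full(n, bias)
    for _ in range(n_rounds):
        for j in range(d):
            resid = y - pred
            for b in range(n_bins):
                mask = bins[:, j] == b
                if mask.any():
                    step = lr * resid[mask].mean()  # shrunken bin mean
                    shapes[j, b] += step
                    pred[mask] += step
    return bias, edges, shapes

def predict_gam(bias, edges, shapes, X):
    """Evaluate the additive model: bias plus one shape term per feature."""
    bins = np.stack([np.digitize(X[:, j], edges[j])
                     for j in range(X.shape[1])], axis=1)
    return bias + sum(shapes[j][bins[:, j]] for j in range(X.shape[1]))

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 3))
y = 2.0 * X[:, 0] + 0.5 * np.sin(X[:, 1])  # additive ground truth
model = fit_gam_boosted(X, y)
pred = predict_gam(*model, X)
```

Because each shape function depends on one feature only, the learned f_j can be plotted directly, which is the interpretability property GAMs are valued for.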