no code implementations • 23 Mar 2024 • Florian Chen, Felix Weitkämper, Sagar Malhotra
This behavior emerges from a lack of internal consistency within an MLN when used across different domain sizes.
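The domain-size inconsistency can be seen concretely by computing the same marginal in the same MLN at two domain sizes. Below is a minimal brute-force sketch (not the paper's method) using a toy weighted formula, `w : Smokes(x) ∧ Friends(x,y)`, chosen only for illustration; any formula mixing a unary and a binary predicate exhibits the same effect:

```python
from itertools import product
from math import exp

def smokes_marginal(n, w=1.0):
    """Brute-force marginal P(Smokes(0)) in the toy MLN with the single
    weighted formula  w : Smokes(x) AND Friends(x, y)  on a domain of size n.
    A world's weight is exp(w * number of satisfied groundings)."""
    dom = range(n)
    z = 0.0      # partition function
    num = 0.0    # unnormalized mass of worlds where Smokes(0) holds
    for smokes in product([0, 1], repeat=n):
        for friends in product([0, 1], repeat=n * n):
            F = {(a, b): friends[a * n + b] for a in dom for b in dom}
            sat = sum(smokes[a] and F[(a, b)] for a in dom for b in dom)
            weight = exp(w * sat)
            z += weight
            if smokes[0]:
                num += weight
    return num / z
```

Running this gives `smokes_marginal(1) ≈ 0.650` but `smokes_marginal(2) ≈ 0.776`: the marginal probability of the very same atom shifts as the domain grows, which is the internal inconsistency described above.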
no code implementations • 21 Feb 2024 • Alessandro Daniele, Tommaso Campari, Sagar Malhotra, Luciano Serafini
Then, a NeSy model is trained on the same task via transfer learning, where the weights of the perceptual part are injected from the pretrained network.
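A minimal sketch of the weight-injection step described above, with models represented as plain dicts of parameter lists (a hypothetical representation, not the paper's implementation):

```python
def inject_weights(pretrained, nesy_model, perceptual_layers):
    """Copy the parameters of the perceptual layers from a pretrained
    network into a NeSy model before training on the downstream task.
    `perceptual_layers` names the layers shared by the two models."""
    for name in perceptual_layers:
        # copy so later training of the NeSy model cannot mutate the source
        nesy_model[name] = list(pretrained[name])
    return nesy_model
```

Only the perceptual part is injected; the symbolic part of the NeSy model keeps its own initialization.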
no code implementations • 22 Aug 2023 • Sagar Malhotra, Davide Bizzaro, Luciano Serafini
We extend a wide range of previous results from the discrete-mathematics literature on directed acyclic graphs, phylogenetic networks, and related structures.
no code implementations • 20 Feb 2023 • Sagar Malhotra, Luciano Serafini
However, many properties of real-world data cannot be modelled in $\mathrm{C^2}$.
no code implementations • 24 Aug 2022 • Alessandro Daniele, Tommaso Campari, Sagar Malhotra, Luciano Serafini
In this paper, we propose Deep Symbolic Learning (DSL), a NeSy system that learns NeSy-functions, i.e., the composition of a (set of) perception functions that map continuous data to discrete symbols with a symbolic function over the set of symbols.
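The shape of a NeSy-function can be sketched in a few lines: a perception function maps continuous inputs to discrete symbols, and a symbolic function operates on those symbols. The thresholding perception and the logical-AND symbolic function below are toy stand-ins, not DSL's learned components:

```python
def perception(x):
    """Toy perception function: map a continuous input to a discrete
    symbol in {0, 1} by thresholding (a stand-in for a learned network)."""
    return 1 if x > 0.5 else 0

def symbolic(a, b):
    """Toy symbolic function over the symbols, here logical AND."""
    return int(a and b)

def nesy_function(x1, x2):
    """NeSy-function: the composition symbolic(perception(x1), perception(x2))."""
    return symbolic(perception(x1), perception(x2))
```

In DSL both parts are learned jointly; the sketch only illustrates the compositional structure being learned.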
no code implementations • 8 Apr 2022 • Sagar Malhotra, Luciano Serafini
We show that, in terms of data likelihood maximization, RBM is the best possible projective MLN in the two-variable fragment.
no code implementations • 12 Oct 2021 • Sagar Malhotra, Luciano Serafini
Weighted First-Order Model Counting (WFOMC) computes the weighted sum of the models of a first-order logic theory on a given finite domain.
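The definition can be made concrete with a brute-force sketch (exponential in the number of ground atoms, whereas WFOMC algorithms are lifted): enumerate all interpretations of the predicates on the finite domain, keep the models of the sentence, and sum each model's weight, taken here as a product of per-atom weights. The sentence used, the symmetry axiom ∀x∀y. E(x,y) → E(y,x), is a toy example:

```python
from itertools import product

def wfomc_symmetry(n, w=2.0, wbar=1.0):
    """Brute-force WFOMC of  forall x, y: E(x,y) -> E(y,x)  on a domain of
    size n, with weight w for each true ground atom and wbar for each
    false one.  Sums w^{#true} * wbar^{#false} over all models."""
    atoms = [(a, b) for a in range(n) for b in range(n)]
    total = 0.0
    for bits in product([False, True], repeat=len(atoms)):
        interp = dict(zip(atoms, bits))
        # model check: every true E(x,y) must be matched by a true E(y,x)
        if all((not interp[(a, b)]) or interp[(b, a)] for a, b in atoms):
            t = sum(bits)
            total += w ** t * wbar ** (len(atoms) - t)
    return total
```

For this sentence the answer factorizes as (w + w̄)^n · (w² + w̄²)^{n(n−1)/2}: diagonal atoms are unconstrained, and each off-diagonal pair is either jointly true or jointly false. With w = 2, w̄ = 1 and n = 2 this gives 3² · 5 = 45.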
no code implementations • 25 Sep 2020 • Sagar Malhotra, Luciano Serafini
We introduce the concept of lifted interpretations as a tool for formulating polynomials for WFOMC.