no code implementations • 12 Jun 2023 • Taisuke Sato, Akihiro Takemura, Katsumi Inoue
We propose an end-to-end approach to answer set programming (ASP) that computes, linear algebraically, stable models satisfying given constraints.
no code implementations • 14 Aug 2021 • Taisuke Sato, Ryosuke Kojima
We propose a new approach to SAT solving which solves SAT problems in vector spaces by minimizing a non-negative differentiable cost function J^sat.
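The idea can be sketched with a generic continuous relaxation of SAT — an illustration only, not the paper's actual J^sat. Each Boolean variable is relaxed to [0, 1], each clause gets a cost that vanishes exactly when some literal in it is 1, and the total cost J (sum of squared clause costs) is minimized by gradient descent; zeros of J correspond to satisfying assignments. Clauses are written DIMACS-style as signed integers.

```python
import numpy as np

# Hedged sketch: a continuous relaxation of SAT in the spirit of (but not
# identical to) the paper's J^sat.  A clause's cost is the product of
# (1 - literal value) over its literals, so it is 0 iff some literal is 1.
# Clauses use signed integers: +i means x_i, -i means NOT x_i (1-indexed).

def clause_costs(x, clauses):
    costs = []
    for clause in clauses:
        c = 1.0
        for lit in clause:
            v = x[abs(lit) - 1]
            c *= (1.0 - v) if lit > 0 else v
        costs.append(c)
    return np.array(costs)

def cost_J(x, clauses):
    return float(np.sum(clause_costs(x, clauses) ** 2))

def satisfied(assignment, clauses):
    return all(any((assignment[abs(l) - 1] if l > 0 else
                    not assignment[abs(l) - 1]) for l in c)
               for c in clauses)

def solve_sat(clauses, n_vars, restarts=10, steps=500, lr=0.2, seed=0):
    rng = np.random.default_rng(seed)
    eps = 1e-5
    assignment = np.zeros(n_vars, dtype=bool)
    for _ in range(restarts):
        x = rng.uniform(0.1, 0.9, n_vars)
        for _ in range(steps):
            base = cost_J(x, clauses)
            grad = np.zeros(n_vars)
            for i in range(n_vars):  # numerical gradient; fine for a sketch
                xp = x.copy()
                xp[i] += eps
                grad[i] = (cost_J(xp, clauses) - base) / eps
            x = np.clip(x - lr * grad, 0.0, 1.0)
        assignment = x > 0.5
        if satisfied(assignment, clauses):
            return assignment, True
    return assignment, False

# (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x2 OR NOT x3)
clauses = [[1, 2], [-1, 3], [-2, -3]]
assignment, ok = solve_sat(clauses, 3)
print(ok)
```

Random restarts are used because the relaxed cost is non-convex; a rounded local minimum need not satisfy the formula, so each candidate is checked against the original clauses.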
no code implementations • 20 Jan 2019 • Ryosuke Kojima, Taisuke Sato
To embody this programming language, we also introduce a new semantics, termed tensorized semantics, which combines the traditional least-model semantics of logic programming with tensor embeddings.
no code implementations • 28 Nov 2018 • Chiaki Sakama, Hien D. Nguyen, Taisuke Sato, Katsumi Inoue
In this paper, we introduce methods of encoding propositional logic programs in vector spaces.
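One way such an encoding can work, sketched here in a deliberately simplified form (assuming a definite program with at most one rule per head atom; the paper's encodings are more general), is to represent the immediate-consequence operator T_P as a matrix, so that one deduction step becomes a matrix-vector product followed by thresholding: a rule h <- b1,...,bk contributes a row with 1/k at each body atom, and the row's value reaches 1 exactly when every body atom is true.

```python
import numpy as np

# Hedged sketch: matrix encoding of a definite program's T_P operator.
# 'TRUE' is a dummy always-true atom used to encode facts (r <- TRUE).

atoms = ["p", "q", "r", "TRUE"]
idx = {a: i for i, a in enumerate(atoms)}

# Program:  p <- q, r.   q <- r.   r.  (fact, encoded as r <- TRUE)
rules = {"p": ["q", "r"], "q": ["r"], "r": ["TRUE"]}

M = np.zeros((len(atoms), len(atoms)))
for head, body in rules.items():
    for b in body:
        M[idx[head], idx[b]] = 1.0 / len(body)
M[idx["TRUE"], idx["TRUE"]] = 1.0  # TRUE stays true

def tp_step(v):
    # threshold at 1: a head becomes true iff all its body atoms are true
    return (M @ v >= 1.0 - 1e-9).astype(float)

v = np.zeros(len(atoms))
v[idx["TRUE"]] = 1.0
while True:
    nv = tp_step(v)
    if np.array_equal(nv, v):
        break
    v = nv

least_model = {a for a in atoms if v[idx[a]] == 1.0 and a != "TRUE"}
print(least_model)
```

Iterating the thresholded product from the empty interpretation reproduces the usual bottom-up fixpoint computation; here it reaches the least model {p, q, r} in three steps.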
no code implementations • 9 Mar 2017 • Taisuke Sato
We propose a new linear algebraic approach to the computation of Tarskian semantics in logic.
no code implementations • 30 Jul 2016 • Taisuke Sato
Given a linear Datalog program DB written using N constants and binary predicates, we first translate if-and-only-if completions of clauses in DB into a set Eq(DB) of matrix equations with a non-linear operation where relations in M_DB, the least Herbrand model of DB, are encoded as adjacency matrices.
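For intuition, a minimal instance of this idea (with min(x, 1) standing in as one possible choice of the non-linear operation): the transitive-closure program path(X,Y) <- edge(X,Y); path(X,Y) <- edge(X,Z), path(Z,Y) yields the matrix equation P = min1(E + E·P), where E is the adjacency matrix of edge/2. Iterating from P = 0 reaches the least fixpoint, i.e. the adjacency matrix of path/2.

```python
import numpy as np

# Hedged sketch: solving the matrix form of a recursive Datalog program.
#   path(X,Y) <- edge(X,Y).
#   path(X,Y) <- edge(X,Z), path(Z,Y).
# becomes  P = min1(E + E @ P),  where min1 caps entries at 1 so that
# matrices stay Boolean (the non-linear operation).

def min1(X):
    return np.minimum(X, 1.0)

N = 4
E = np.zeros((N, N))
for (i, j) in [(0, 1), (1, 2), (2, 3)]:  # chain 0 -> 1 -> 2 -> 3
    E[i, j] = 1.0

P = np.zeros((N, N))
while True:
    nP = min1(E + E @ P)
    if np.array_equal(nP, P):
        break
    P = nP

print(P[0, 3])  # 1.0: path(0, 3) holds; the chain is fully closed
```

Each iteration extends paths by one edge, so the loop terminates after at most N steps on an N-node graph.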
no code implementations • 15 Oct 2014 • Taisuke Sato, Keiichi Kubota, Yoshitaka Kameya
Our intention is, first, to provide a unified approach to CRFs for complex modeling through the use of a Turing-complete language, and second, to offer a convenient way of realizing generative-discriminative pairs in machine learning, so that generative and discriminative models can be compared and the best model chosen.
no code implementations • 22 Mar 2013 • Taisuke Sato, Keiichi Kubota
Third, since VT always deals with a single probability of a single explanation, the Viterbi explanation, the exclusiveness condition imposed on PRISM programs is no longer required when parameters are learned by VT. Last but not least, since VT in PRISM is general and applicable to any PRISM program, it largely reduces the need for the user to develop a specific VT algorithm for a specific model.