1 code implementation • 16 Feb 2024 • Joohyung Lee, Heejeong Nam, Kwanhyung Lee, Sangchul Hahn
Using this free annotation, we introduce a semi-supervision signal to de-bias the inter-slide variability and to capture the common factors of variation within normal patches.
no code implementations • 15 Nov 2023 • Joohyung Lee, Mohamed Seif, Jungchan Cho, H. Vincent Poor
However, because the model in SFL is split at a specific layer, known as the cut layer, into client-side and server-side models, the choice of the cut layer can substantially affect clients' energy consumption and privacy, since it determines the training burden and the output of the client-side models.
no code implementations • 15 Jul 2023 • Enrico Giunchiglia, Joohyung Lee, Vladimir Lifschitz, Hudson Turner
This paper continues the line of work on representing properties of actions in nonmonotonic formalisms that stresses the distinction between being "true" and being "caused", as in the system of causal logic introduced by McCain and Turner and in the action language C proposed by Giunchiglia and Lifschitz.
no code implementations • 15 Jul 2023 • Michael Bartholomew, Joohyung Lee
We extend the first-order stable model semantics by Ferraris, Lee, and Lifschitz to allow intensional functions -- functions that are specified by a logic program just as predicates are.
no code implementations • 15 Jul 2023 • Joohyung Lee, Yunsong Meng
Recently, Ferraris, Lee, and Lifschitz proposed a new definition of stable models that does not refer to grounding and applies to the syntax of arbitrary first-order sentences.
1 code implementation • 15 Jul 2023 • Zhun Yang, Adam Ishay, Joohyung Lee
It only needs a few examples to guide the LLM's adaptation to a specific task, along with reusable ASP knowledge modules that can be applied to multiple tasks.
1 code implementation • 15 Jul 2023 • Adam Ishay, Zhun Yang, Joohyung Lee
Specifically, we employ an LLM to transform natural language descriptions of logic puzzles into answer set programs.
1 code implementation • 15 Jul 2023 • Zhun Yang, Adam Ishay, Joohyung Lee
We present NeurASP, a simple extension of answer set programs by embracing neural networks.
no code implementations • 15 Jul 2023 • Joonyoung Kim, Kangwook Lee, Haebin Shin, Hurnjoo Lee, Sechun Kang, Byunguk Choi, Dong Shin, Joohyung Lee
The more new features are added to smartphones, the harder it becomes for users to find them.
no code implementations • 15 Jul 2023 • Martin Gebser, Joohyung Lee, Yuliya Lierler
We propose the notion of an elementary set, which is almost equivalent to the notion of an elementary loop for nondisjunctive programs, but is simpler, and, unlike elementary loops, can be extended to disjunctive programs without producing unintuitive results.
no code implementations • 15 Jul 2023 • Joohyung Lee, Vladimir Lifschitz, Ravi Palla
Safe first-order formulas generalize the concept of a safe rule, which plays an important role in the design of answer set solvers.
1 code implementation • 10 Jul 2023 • Zhun Yang, Joohyung Lee, Chiyoun Park
Injecting discrete logical constraints into neural network learning is one of the main challenges in neuro-symbolic AI.
1 code implementation • 10 Jul 2023 • Zhun Yang, Adam Ishay, Joohyung Lee
Constraint satisfaction problems (CSPs) are about finding values of variables that satisfy the given constraints.
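As a generic illustration of the notion (an expository sketch, not code from the paper; the variables `X`, `Y` and their constraints are made up), a small CSP can be solved by exhaustively enumerating assignments:

```python
from itertools import product

def solve_csp(variables, domains, constraints):
    """Brute-force CSP solver: try every assignment over the domains
    and keep those that satisfy all constraints."""
    names = list(variables)
    for values in product(*(domains[v] for v in names)):
        assignment = dict(zip(names, values))
        if all(c(assignment) for c in constraints):
            yield assignment

# Toy CSP: X, Y in {1, 2, 3} with X < Y and X + Y == 4
solutions = list(solve_csp(
    ["X", "Y"],
    {"X": [1, 2, 3], "Y": [1, 2, 3]},
    [lambda a: a["X"] < a["Y"], lambda a: a["X"] + a["Y"] == 4],
))
# solutions == [{"X": 1, "Y": 3}]
```

Real CSP solvers replace this exponential enumeration with constraint propagation and backtracking search, but the specification of the problem is the same: variables, domains, and constraints.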
no code implementations • 4 May 2023 • Kwanhyung Lee, Soojeong Lee, Sangchul Hahn, Heejung Hyun, Edward Choi, Byungeun Ahn, Joohyung Lee
Electronic Health Records (EHRs) provide abundant information through various modalities.
no code implementations • 29 Oct 2022 • Kwanhyung Lee, John Won, Heejung Hyun, Sangchul Hahn, Edward Choi, Joohyung Lee
Accurate time prediction of patients' critical events is crucial in urgent scenarios where timely decision-making is important.
1 code implementation • 13 Sep 2022 • Joohyung Lee, Jieun Oh, Inkyu Shin, You-sung Kim, Dae Kyung Sohn, Tae-sung Kim, In So Kweon
In this study, we present a volumetric convolutional neural network to accurately discriminate T2 from T3 stage rectal cancer with rectal MR volumes.
no code implementations • 27 Mar 2022 • Sangjun Park, Kihyun Choo, Joohyung Lee, Anton V. Porov, Konstantin Osipov, June Sig Sung
Text-to-Speech (TTS) services that run on edge devices have many advantages compared to cloud TTS, e.g., lower latency and fewer privacy concerns.
1 code implementation • 27 Mar 2020 • Joohyung Lee, Youngmoon Jung, Hoirin Kim
The results show that the focal loss can improve performance in various imbalance situations compared to the cross-entropy loss, a commonly used loss function in VAD.
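For reference, the binary focal loss of Lin et al. (2017) is FL(p_t) = -alpha_t (1 - p_t)^gamma log(p_t), which down-weights easy, well-classified examples. A minimal NumPy sketch (illustrative only, not the paper's implementation; the default gamma and alpha are the commonly used values, not necessarily those used in the paper):

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss: p is the predicted probability of the positive
    class, y is the label in {0, 1}. The (1 - p_t)**gamma factor shrinks
    the contribution of confident correct predictions."""
    p_t = np.where(y == 1, p, 1.0 - p)           # prob. of the true class
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)

# A confident correct prediction contributes far less than a hard one:
easy = focal_loss(np.array([0.95]), np.array([1]))
hard = focal_loss(np.array([0.30]), np.array([1]))
# easy < hard; with gamma = 0 the focal loss reduces to alpha-weighted
# cross entropy.
```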
no code implementations • 18 Sep 2019 • Joohyung Lee, Man Luo
We show that the verification of strong equivalence in LPMLN can be reduced to equivalence checking in classical logic via a reduct and choice rules as well as to equivalence checking under the "soft" logic of here-and-there.
no code implementations • 31 Jul 2019 • Yi Wang, Shiqi Zhang, Joohyung Lee
In this paper, we present a unified framework to integrate iCORPP's reasoning and planning components.
no code implementations • 21 Jun 2019 • Naser Ahmadi, Joohyung Lee, Paolo Papotti, Mohammed Saeed
One challenge in fact checking is improving the transparency of the decision.
no code implementations • 1 Apr 2019 • Yi Wang, Joohyung Lee
Alternatively, the semantics of pBC+ can also be defined in terms of Markov Decision Processes (MDPs), which in turn allows for representing MDPs in a succinct and elaboration-tolerant way and for leveraging an MDP solver to compute pBC+.
no code implementations • 22 Jan 2019 • Joohyung Lee, Ji Eun Oh, Min Ju Kim, Bo Yun Hur, Dae Kyung Sohn
As a result, adding a rectum segmentation task reduced the model variance of the rectal cancer segmentation network within tumor regions by a factor of 0.90; data augmentation further reduced the variance by a factor of 0.89.
no code implementations • 14 Aug 2018 • Joohyung Lee, Yi Wang
Learning in LPMLN accords with the stable model semantics, so it learns parameters for probabilistic extensions of knowledge-rich domains where answer set programming has been shown to be useful but limited to the deterministic case, such as reachability analysis and reasoning about actions in dynamic domains.
no code implementations • 2 May 2018 • Joohyung Lee, Yi Wang
We present a probabilistic extension of action language BC+.
no code implementations • 2 May 2018 • Joohyung Lee, Zhun Yang
Logic Programs with Ordered Disjunction (LPOD) extend standard answer set programs to handle preference via the construct of ordered disjunction; CR-Prolog2 extends standard answer set programs with consistency-restoring rules and LPOD-like ordered disjunction.
no code implementations • 20 Jul 2017 • Joohyung Lee, Nikhil Loney, Yunsong Meng
We first show how to represent linear hybrid automata with convex invariants by an action language modulo theories.
no code implementations • 19 Jul 2017 • Joohyung Lee, Samidh Talsania, Yi Wang
LPMLN is a recent addition to probabilistic logic programming languages.
no code implementations • 28 Jun 2016 • Joohyung Lee, Yi Wang
Markov Logic Networks (MLN) and Probabilistic Soft Logic (PSL) are widely applied formalisms in Statistical Relational Learning, an emerging area in Artificial Intelligence that is concerned with combining logical and statistical AI.
no code implementations • 18 Jan 2014 • Joohyung Lee, Ravi Palla
Based on the discovery that circumscription and the stable model semantics coincide on a class of canonical formulas, we reformulate the situation calculus and the event calculus in the general theory of stable models.
no code implementations • 16 Jan 2014 • Joohyung Lee, Yunsong Meng
Lin and Zhao's theorem on loop formulas states that in the propositional case the stable model semantics of a logic program can be completely characterized by propositional loop formulas, but this result does not fully carry over to the first-order case.
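A minimal illustration of the gap that loop formulas close (a textbook example added for exposition, not taken from the paper): for the program {p :- q. q :- p.}, the Clark completion p <-> q has two classical models, but only the empty set is stable, and the loop formula for the unsupported loop {p, q} rules out the extra model:

```python
from itertools import product

# Program: p :- q.   q :- p.
# Clark completion: p <-> q, satisfied by both {} and {p, q}.
completion_models = [(p, q) for p, q in product([False, True], repeat=2)
                     if p == q]

# The loop {p, q} has no rule supporting it from outside the loop, so its
# loop formula is (p or q) -> False, i.e. not (p or q).
stable_models = [(p, q) for (p, q) in completion_models if not (p or q)]
# Completion alone admits two models; adding the loop formula leaves only
# (False, False), the unique stable model of the program.
```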
no code implementations • 20 Dec 2013 • Michael Bartholomew, Joohyung Lee
The distinction between strong negation and default negation has been useful in answer set programming.