no code implementations • 20 Sep 2021 • Marko Vasic, Cameron Chalk, Austin Luchsinger, Sarfraz Khurshid, David Soloveichik
Embedding computation in biochemical environments incompatible with traditional electronics is expected to have wide-ranging impact in synthetic biology, medicine, nanofabrication and other fields.
no code implementations • ICML 2020 • Marko Vasic, Cameron Chalk, Sarfraz Khurshid, David Soloveichik
Embedding computation in molecular contexts incompatible with traditional electronics is expected to have wide-ranging impact in synthetic biology, medicine, nanofabrication and other fields.
no code implementations • 25 Dec 2019 • Muhammad Usman, Wenxi Wang, Kaiyuan Wang, Marko Vasic, Haris Vikalo, Sarfraz Khurshid
However, MCML metrics based on model counting show that performance can degrade substantially when tested against the entire (bounded) input space, indicating both the high complexity of precisely learning these properties and the usefulness of model counting in quantifying true performance.
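The contrast between test-set accuracy and counting-based metrics can be sketched by exhaustively enumerating a bounded input space and comparing a learned approximation against the ground-truth property on every input. The parity predicate, the 8-bit bound, and the imperfect "learned" model below are illustrative assumptions, not the properties studied in the paper:

```python
from itertools import product

# Hypothetical ground-truth property over 8-bit inputs: "has an even
# number of set bits" (a toy predicate chosen for illustration).
def ground_truth(bits):
    return sum(bits) % 2 == 0

# A hypothetical imperfect "learned" approximation of the property:
# it ignores the last two bits entirely.
def learned_model(bits):
    return sum(bits[:6]) % 2 == 0

# Exhaustively enumerate the entire bounded input space (2^8 inputs)
# and count agreements -- the counting analogue of test accuracy.
total = correct = 0
for bits in product([0, 1], repeat=8):
    total += 1
    if learned_model(bits) == ground_truth(bits):
        correct += 1

exact_accuracy = correct / total
print(f"exact accuracy over all {total} inputs: {exact_accuracy:.3f}")
```

A model that looks strong on a sampled test set can score much lower under this exact census, which is the degradation the MCML metrics expose.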
no code implementations • 25 Sep 2019 • Marko Vasic, Andrija Petrovic, Kaiyuan Wang, Mladen Nikolic, Rishabh Singh, Sarfraz Khurshid
We propose MoET, a more expressive, yet still interpretable model based on Mixture of Experts, consisting of a gating function that partitions the state space, and multiple decision tree experts that specialize on different partitions.
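The architecture described above can be sketched as a gating function that softly partitions the state space plus one small tree per partition. Everything below (the 2-D state, the gating weights, the depth-1 stump experts, the action names) is a hand-set assumption for illustration, not the paper's trained model:

```python
import math

def softmax(zs):
    m = max(zs)
    exps = [math.exp(z - m) for z in zs]
    s = sum(exps)
    return [e / s for e in exps]

# Gating function: one linear score per expert over a 2-D state (x0, x1).
# Expert 0 is favored when x0 > 0, expert 1 when x0 <= 0 (assumed weights).
GATE_WEIGHTS = [(+4.0, 0.0), (-4.0, 0.0)]

def gate(state):
    scores = [w0 * state[0] + w1 * state[1] for (w0, w1) in GATE_WEIGHTS]
    return softmax(scores)

# Two decision-stump "experts", each specializing on its own partition.
def expert0(state):
    return "right" if state[1] > 0.5 else "left"

def expert1(state):
    return "left" if state[1] > 0.5 else "right"

EXPERTS = [expert0, expert1]

def moet_predict(state):
    # Hard gating: route the state to the most probable expert, so the
    # policy in each partition remains a single interpretable tree.
    probs = gate(state)
    k = probs.index(max(probs))
    return EXPERTS[k](state)

print(moet_predict((1.0, 0.9)))   # gated to expert 0
print(moet_predict((-1.0, 0.9)))  # gated to expert 1
```

The hard-gating step is what keeps the model verifiable: the effective policy at any state is one concrete decision tree, not a blended average.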
2 code implementations • 16 Jun 2019 • Marko Vasic, Andrija Petrovic, Kaiyuan Wang, Mladen Nikolic, Rishabh Singh, Sarfraz Khurshid
By training MoËT models using an imitation learning procedure on deep RL agents, we outperform the previous state-of-the-art technique based on decision trees while preserving the verifiability of the models.
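The imitation-learning step can be sketched as behavior cloning: query a trained teacher policy for actions on sampled states, then fit an interpretable student to the demonstrations. The sign-based "teacher" (standing in for a deep RL agent), the 2-D states, and the depth-1 stump student are all assumptions; the paper's procedure is more elaborate:

```python
import random

# Hypothetical teacher standing in for a trained deep RL agent.
def teacher_policy(state):
    return 1 if state[0] > 0.0 else 0

def collect_demonstrations(n, rng):
    # Sample states and record the teacher's action on each.
    data = []
    for _ in range(n):
        state = (rng.uniform(-1, 1), rng.uniform(-1, 1))
        data.append((state, teacher_policy(state)))
    return data

def fit_stump(data):
    # Fit a depth-1 student: choose the threshold on feature 0 that
    # best matches the teacher's actions (a stand-in for tree fitting).
    best_thr, best_acc = 0.0, -1
    for thr in [s[0] for s, _ in data]:
        acc = sum((1 if s[0] > thr else 0) == a for s, a in data)
        if acc > best_acc:
            best_thr, best_acc = thr, acc
    return best_thr

rng = random.Random(0)
demos = collect_demonstrations(200, rng)
threshold = fit_stump(demos)
student = lambda state: 1 if state[0] > threshold else 0

# Agreement between student and teacher on the demonstrations.
agreement = sum(student(s) == a for s, a in demos) / len(demos)
print(f"student/teacher agreement: {agreement:.2f}")
```

Because the student is a single stump here, its decision rule can be read off and verified directly, which is the property the paper preserves with its richer tree experts.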
2 code implementations • ICLR 2019 • Marko Vasic, Aditya Kanade, Petros Maniatis, David Bieber, Rishabh Singh
We show that it is beneficial to train a model that jointly and directly localizes and repairs variable-misuse bugs.
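The joint localize-and-repair output can be sketched as two pointers over the program's tokens: one marking the misused variable, one marking the replacement. The token sequence and the hand-set logits below are assumptions standing in for a trained neural model's outputs, not the paper's architecture:

```python
import math

def softmax(zs):
    m = max(zs)
    exps = [math.exp(z - m) for z in zs]
    s = sum(exps)
    return [e / s for e in exps]

# Buggy snippet: the second "a" (index 11) should have been "b".
tokens = ["def", "f", "(", "a", ",", "b", ")", ":", "return", "a", "+", "a"]

# Hypothetical logits from the two heads: the localization pointer
# peaks at the buggy use, the repair pointer at the correct variable.
localize_logits = [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 6]
repair_logits   = [0, 0, 0, 1, 0, 6, 0, 0, 0, 0, 0, 0]

loc = max(range(len(tokens)), key=lambda i: softmax(localize_logits)[i])
fix = max(range(len(tokens)), key=lambda i: softmax(repair_logits)[i])

# Apply the repair: substitute the pointed-to token at the bug site.
repaired = tokens[:loc] + [tokens[fix]] + tokens[loc + 1:]
print("bug at token", loc, "->", tokens[loc], "; replace with", tokens[fix])
print(" ".join(repaired))
```

Producing both pointers from one model is what "jointly and directly" refers to: localization and repair are predicted together rather than by separate pipeline stages.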