no code implementations • 2 May 2024 • Liron Mor Yosef, Shashanka Ubaru, Lior Horesh, Haim Avron
In this paper, we present a quantum algorithm for approximating multivariate traces, i.e., the traces of matrix products.
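For intuition about the quantity being approximated, here is a minimal classical sketch (not the paper's quantum algorithm) of Hutchinson-style randomized estimation of a multivariate trace $tr(A_1 A_2 \cdots A_k)$; the matrices and probe count below are illustrative assumptions.

```python
import numpy as np

def multivariate_trace_estimate(mats, num_probes=5000, rng=None):
    """Hutchinson-style estimator of tr(A_1 A_2 ... A_k).

    For a Rademacher probe z, E[z^T (A_1 ... A_k) z] equals the
    multivariate trace, and each probe costs k matrix-vector
    products instead of forming the full matrix product.
    """
    rng = np.random.default_rng(rng)
    n = mats[0].shape[0]
    total = 0.0
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe
        v = z.copy()
        for A in reversed(mats):              # v <- A_1 ... A_k z
            v = A @ v
        total += z @ v
    return total / num_probes

# Illustrative check on random matrices (the estimate is stochastic)
rng = np.random.default_rng(0)
A, B = rng.standard_normal((50, 50)), rng.standard_normal((50, 50))
print(multivariate_trace_estimate([A, B]), np.trace(A @ B))
```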
no code implementations • 4 Jan 2024 • Vishal Pallagani, Kaushik Roy, Bharath Muppasani, Francesco Fabiano, Andrea Loreggia, Keerthiram Murugesan, Biplav Srivastava, Francesca Rossi, Lior Horesh, Amit Sheth
Automated Planning and Scheduling is among the growing areas in Artificial Intelligence (AI) in which the use of LLMs has gained popularity.
no code implementations • 18 Aug 2023 • Ryan Cory-Wright, Cristina Cornelio, Sanjeeb Dash, Bachir El Khadir, Lior Horesh
The optimization techniques leveraged in this paper allow our approach to run in polynomial time with fully correct background theory (under the assumption that the complexity of our derivation is bounded), or in non-deterministic polynomial (NP) time with partially correct background theory.
no code implementations • 14 Jul 2023 • Marianna B. Ganapini, Francesco Fabiano, Lior Horesh, Andrea Loreggia, Nicholas Mattei, Keerthiram Murugesan, Vishal Pallagani, Francesca Rossi, Biplav Srivastava, Brent Venable
Values that are relevant to a specific decision scenario are used to decide when and how to use each of these nudging modalities.
no code implementations • 25 May 2023 • Vishal Pallagani, Bharath Muppasani, Keerthiram Murugesan, Francesca Rossi, Biplav Srivastava, Lior Horesh, Francesco Fabiano, Andrea Loreggia
Firstly, we want to understand the extent to which LLMs can be used for plan generation.
no code implementations • 7 Mar 2023 • Francesco Fabiano, Vishal Pallagani, Marianna Bergamaschi Ganapini, Lior Horesh, Andrea Loreggia, Keerthiram Murugesan, Francesca Rossi, Biplav Srivastava
The concept of Artificial Intelligence has gained a lot of attention over the last decade.
no code implementations • 16 Dec 2022 • Vishal Pallagani, Bharath Muppasani, Keerthiram Murugesan, Francesca Rossi, Lior Horesh, Biplav Srivastava, Francesco Fabiano, Andrea Loreggia
Large Language Models (LLMs) have been the subject of active research, significantly advancing the field of Natural Language Processing (NLP).
no code implementations • 29 Nov 2022 • Kenneth L. Clarkson, Cristina Cornelio, Sanjeeb Dash, Joao Goncalves, Lior Horesh, Nimrod Megiddo
This study concerns the formulation and application of Bayesian optimal experimental design to symbolic discovery, which is the inference from observational data of predictive models taking general functional forms.
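As a toy illustration of the underlying idea only (the paper's formulation is more general), one can score each candidate design point by how much a posterior-weighted set of candidate symbolic models disagrees there, and observe where disagreement is largest; the candidate models, noise level, and grid below are assumptions.

```python
import numpy as np

# Hypothetical candidate symbolic models with a uniform prior
candidates = [lambda x: x**2, lambda x: np.exp(x) - 1.0, lambda x: np.sin(2 * x)]
noise_sd = 0.05

def next_design_point(weights, xs):
    """Pick the input where posterior-weighted model predictions
    disagree most (predictive variance as a proxy for expected
    information gain)."""
    preds = np.array([f(xs) for f in candidates])        # models x points
    mean = weights @ preds
    var = weights @ (preds - mean) ** 2
    return xs[np.argmax(var)]

def posterior_update(weights, x, y):
    """Reweight candidates by their Gaussian likelihood of (x, y)."""
    lik = np.array([np.exp(-(f(x) - y) ** 2 / (2 * noise_sd**2)) for f in candidates])
    post = weights * lik
    return post / post.sum()

rng = np.random.default_rng(1)
grid = np.linspace(-1.0, 1.0, 201)
truth = candidates[2]                                    # pretend sin(2x) is nature
weights = np.ones(len(candidates)) / len(candidates)
for _ in range(5):
    x = next_design_point(weights, grid)
    weights = posterior_update(weights, x, truth(x) + rng.normal(0, noise_sd))
print(weights)   # posterior mass should concentrate on the true model
```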
no code implementations • 19 Sep 2022 • Ismail Yunus Akhalwaya, Shashanka Ubaru, Kenneth L. Clarkson, Mark S. Squillante, Vishnu Jejjala, Yang-Hui He, Kugendran Naidoo, Vasileios Kalantzis, Lior Horesh
In this study, we present NISQ-TDA, a fully implemented end-to-end quantum machine learning algorithm that requires only a short circuit depth, is applicable to high-dimensional classical data, and has a provable asymptotic speedup for certain classes of problems.
2 code implementations • 13 Jun 2022 • Gaoyuan Zhang, Songtao Lu, Yihua Zhang, Xiangyi Chen, Pin-Yu Chen, Quanfu Fan, Lee Martie, Lior Horesh, Mingyi Hong, Sijia Liu
Spurred by that, we propose distributed adversarial training (DAT), a large-batch adversarial training framework implemented over multiple machines.
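A minimal single-process sketch of the general pattern (each worker computes gradients on adversarially perturbed batches, and the gradients are averaged as an all-reduce would do), using logistic regression and FGSM perturbations as stand-ins for the paper's deep networks and attack setup.

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(10)                        # logistic-regression weights
eps, lr, n_workers = 0.1, 0.5, 4

def grad(w, X, y):
    """Gradient of the mean logistic loss over a batch."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (p - y) / len(y)

for step in range(100):
    worker_grads = []
    for _ in range(n_workers):          # each worker draws its own shard
        X = rng.standard_normal((32, 10))
        y = (X[:, 0] > 0).astype(float)
        # FGSM: the per-sample input gradient of the loss is (p - y) * w
        p = 1.0 / (1.0 + np.exp(-X @ w))
        X_adv = X + eps * np.sign(np.outer(p - y, w))
        worker_grads.append(grad(w, X_adv, y))
    # "all-reduce": average the adversarial gradients across workers
    w -= lr * np.mean(worker_grads, axis=0)

X_test = rng.standard_normal((1000, 10))
print("clean accuracy:", np.mean(((X_test @ w) > 0) == (X_test[:, 0] > 0)))
```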
no code implementations • 10 Feb 2022 • Paz Fink Shustin, Shashanka Ubaru, Vasileios Kalantzis, Lior Horesh, Haim Avron
In this paper, we present a novel surrogate model for representation learning and uncertainty quantification, which aims to deal with data of moderate to high dimensions.
no code implementations • 18 Jan 2022 • Marianna B. Ganapini, Murray Campbell, Francesco Fabiano, Lior Horesh, Jon Lenchner, Andrea Loreggia, Nicholas Mattei, Taher Rahgooy, Francesca Rossi, Biplav Srivastava, Brent Venable
Current AI systems lack several important human capabilities, such as adaptability, generalizability, self-control, consistency, common sense, and causal reasoning.
no code implementations • 5 Oct 2021 • Marianna Bergamaschi Ganapini, Murray Campbell, Francesco Fabiano, Lior Horesh, Jon Lenchner, Andrea Loreggia, Nicholas Mattei, Francesca Rossi, Biplav Srivastava, Kristen Brent Venable
AI systems have seen dramatic advancement in recent years, bringing many applications that pervade our everyday life.
1 code implementation • 3 Sep 2021 • Cristina Cornelio, Sanjeeb Dash, Vernon Austel, Tyler Josephson, Joao Goncalves, Kenneth Clarkson, Nimrod Megiddo, Bachir El Khadir, Lior Horesh
We develop a method to enable principled derivations of models of natural phenomena from axiomatic knowledge and experimental data by combining logical reasoning with symbolic regression.
no code implementations • 5 Aug 2021 • Shashanka Ubaru, Ismail Yunus Akhalwaya, Mark S. Squillante, Kenneth L. Clarkson, Lior Horesh
In this paper, we completely overhaul the QTDA algorithm to achieve an improved exponential speedup and depth complexity of $O(n\log(1/(\delta\epsilon)))$.
no code implementations • 19 Jul 2021 • Francesco Fabiano, Biplav Srivastava, Jonathan Lenchner, Lior Horesh, Francesca Rossi, Marianna Bergamaschi Ganapini
Epistemic Planning (EP) refers to an automated planning setting where the agent reasons in the space of knowledge states and tries to find a plan to reach a desirable state from the current state.
1 code implementation • 29 Dec 2020 • Tom Achache, Lior Horesh, John Smolin
We implement a Quantum Autoencoder (QAE) as a quantum circuit capable of correcting Greenberger-Horne-Zeilinger (GHZ) states subject to various noisy quantum channels: the bit-flip channel and the more general quantum depolarizing channel.
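For reference, the bit-flip noise model mentioned here can be simulated directly in NumPy by applying the channel's Kraus operators to a 3-qubit GHZ density matrix; the QAE correction circuit itself is not reproduced, and the noise probability below is an arbitrary choice.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def kron_all(ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def bit_flip_channel(rho, p, n_qubits):
    """Apply the bit-flip channel independently to each qubit:
    rho -> (1 - p) rho + p X rho X, qubit by qubit."""
    for q in range(n_qubits):
        Xq = kron_all([X if i == q else I2 for i in range(n_qubits)])
        rho = (1 - p) * rho + p * Xq @ rho @ Xq
    return rho

# 3-qubit GHZ state (|000> + |111>) / sqrt(2)
psi = np.zeros(8)
psi[0] = psi[7] = 1.0 / np.sqrt(2)
rho = np.outer(psi, psi)

noisy = bit_flip_channel(rho, p=0.1, n_qubits=3)
print("fidelity with GHZ:", psi @ noisy @ psi)
```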
1 code implementation • 25 Nov 2020 • Yunfei Teng, Anna Choromanska, Murray Campbell, Songtao Lu, Parikshit Ram, Lior Horesh
We study the principal directions of the trajectory of the optimizer after convergence and show that traveling along a few top principal directions can quickly bring the parameters outside the cone, but this is not the case for the remaining directions.
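The trajectory analysis described here reduces, in its simplest form, to a PCA of recorded parameter iterates; a minimal sketch under that reading (the synthetic trajectory is an assumption):

```python
import numpy as np

def trajectory_principal_directions(snapshots, k=3):
    """PCA of optimizer iterates: rows of `snapshots` are parameter
    vectors recorded at successive steps; returns the top-k principal
    directions and the fraction of variance each explains."""
    W = np.asarray(snapshots)
    W = W - W.mean(axis=0)                  # center the trajectory
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    explained = s**2 / np.sum(s**2)
    return Vt[:k], explained[:k]

# Illustrative trajectory: slow drift in one direction plus noise
rng = np.random.default_rng(0)
drift = rng.standard_normal(100)
snaps = [t * drift + 0.1 * rng.standard_normal(100) for t in np.linspace(0, 1, 50)]
dirs, frac = trajectory_principal_directions(snaps)
print(frac)   # most variance concentrates in a few top directions
```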
no code implementations • 9 Nov 2020 • Nadiia Chepurko, Kenneth L. Clarkson, Lior Horesh, Honghao Lin, David P. Woodruff
We create classical (non-quantum) dynamic data structures supporting queries for recommender systems and least-squares regression that are comparable to their quantum analogues.
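A rough sketch of the sampling-based flavor typical of such classical structures (an illustration of the genre, not the paper's construction): approximate least-squares regression by solving over rows drawn with probability proportional to their squared norms.

```python
import numpy as np

def sampled_least_squares(A, b, num_samples, rng=None):
    """Approximate argmin_x ||Ax - b|| by row sampling.

    Rows are drawn with probability proportional to their squared
    norms and rescaled so the subproblem is an unbiased sketch of
    the full least-squares problem.
    """
    rng = np.random.default_rng(rng)
    probs = np.sum(A**2, axis=1)
    probs = probs / probs.sum()
    idx = rng.choice(len(A), size=num_samples, p=probs)
    scale = 1.0 / np.sqrt(num_samples * probs[idx])
    SA, Sb = A[idx] * scale[:, None], b[idx] * scale
    return np.linalg.lstsq(SA, Sb, rcond=None)[0]

rng = np.random.default_rng(0)
A = rng.standard_normal((5000, 20))
x_true = rng.standard_normal(20)
b = A @ x_true + 0.01 * rng.standard_normal(5000)
print(np.linalg.norm(sampled_least_squares(A, b, 500) - x_true))
```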
no code implementations • 13 Oct 2020 • Vassilis Kalantzis, Georgios Kollias, Shashanka Ubaru, Athanasios N. Nikolakopoulos, Lior Horesh, Kenneth L. Clarkson
This paper considers the problem of updating the rank-k truncated Singular Value Decomposition (SVD) of matrices subject to the addition of new rows and/or columns over time.
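A minimal projection-based row update in the spirit of Brand's method illustrates the problem setting; the paper's own algorithm and analysis differ, so treat this purely as a sketch.

```python
import numpy as np

def add_rows_to_truncated_svd(U, s, V, B, k):
    """Given A ~ U @ diag(s) @ V.T, return a rank-k truncated SVD of
    [A; B] without re-factorizing from scratch (Brand-style update)."""
    C = B @ V                                  # part of B inside span(V)
    P, R = np.linalg.qr((B - C @ V.T).T)       # basis for the orthogonal part
    m, p = B.shape[0], P.shape[1]
    K = np.block([[np.diag(s), np.zeros((len(s), p))],
                  [C,          R.T]])          # small (k+m) x (k+p) core
    Uk, sk, Vkt = np.linalg.svd(K, full_matrices=False)
    L = np.block([[U, np.zeros((U.shape[0], m))],
                  [np.zeros((m, U.shape[1])), np.eye(m)]])
    return L @ Uk[:, :k], sk[:k], np.hstack([V, P]) @ Vkt[:k].T

# Check on a case where the update is exact (k equals the full rank)
rng = np.random.default_rng(0)
A, B = rng.standard_normal((100, 40)), rng.standard_normal((10, 40))
U, s, Vt = np.linalg.svd(A, full_matrices=False)
U2, s2, V2 = add_rows_to_truncated_svd(U, s, Vt.T, B, k=40)
print(np.allclose(s2, np.linalg.svd(np.vstack([A, B]), compute_uv=False)[:40]))
```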
no code implementations • 12 Oct 2020 • Grady Booch, Francesco Fabiano, Lior Horesh, Kiran Kate, Jon Lenchner, Nick Linck, Andrea Loreggia, Keerthiram Murugesan, Nicholas Mattei, Francesca Rossi, Biplav Srivastava
This paper proposes a research direction to advance AI which draws inspiration from cognitive theories of human decision making.
no code implementations • 10 Sep 2020 • Shashanka Ubaru, Lior Horesh, Guy Cohen
Thus, estimation of state uncertainty is paramount both for imminent risk assessment and for closing the tracing-testing loop via optimal testing prescription.
no code implementations • 11 Jun 2020 • Vernon Austel, Cristina Cornelio, Sanjeeb Dash, Joao Goncalves, Lior Horesh, Tyler Josephson, Nimrod Megiddo
The Symbolic Regression (SR) problem, in which the goal is to find a regression function that has no pre-specified form but can be any composition of functions from a given list of operators, is hard both theoretically and computationally.
1 code implementation • ICLR 2020 • Osman Asif Malik, Shashanka Ubaru, Lior Horesh, Misha E. Kilmer, Haim Avron
In recent years, a variety of graph neural networks (GNNs) have been successfully applied for representation learning and prediction on such graphs.
no code implementations • 29 Apr 2019 • Murphy Yuezhen Niu, Lior Horesh, Isaac Chuang
To understand the fundamental trade-offs between training stability, temporal dynamics and architectural complexity of recurrent neural networks (RNNs), we directly analyze RNN architectures using numerical methods of ordinary differential equations (ODEs).
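The ODE viewpoint can be made concrete in a few lines: a residual-style RNN update is a forward-Euler step of a continuous-time system $h'(t) = f(h, x)$, with the step size and the Jacobian of $f$ governing stability. A minimal sketch under that assumption (weights and dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
W_h = rng.standard_normal((d, d)) / np.sqrt(d)
W_x = rng.standard_normal((d, d)) / np.sqrt(d)

def f(h, x):
    """Continuous-time dynamics underlying the recurrence."""
    return np.tanh(W_h @ h + W_x @ x)

def rnn_forward(xs, dt=0.5):
    """Residual recurrence h_{t+1} = h_t + dt * f(h_t, x_t), i.e. a
    forward-Euler discretization of h'(t) = f(h, x); shrinking dt
    improves numerical stability at the cost of slower dynamics."""
    h = np.zeros(d)
    for x in xs:
        h = h + dt * f(h, x)
    return h

print(np.linalg.norm(rnn_forward(rng.standard_normal((20, d)))))
```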
no code implementations • 15 Nov 2018 • Elizabeth Newman, Lior Horesh, Haim Avron, Misha Kilmer
To exemplify the elegant, matrix-mimetic algebraic structure of our $t$-NNs, we expand on recent work (Haber and Ruthotto, 2017), which interprets deep neural networks as discretizations of non-linear differential equations and introduces stable neural networks that promote superior generalization.
no code implementations • 27 Sep 2018 • Murphy Yuezhen Niu, Lior Horesh, Michael O'Keeffe, Isaac Chuang
We show that most of the existing proposals of RNN architectures belong to different orders of $n$-$t$-ORNNs.
no code implementations • 12 Nov 2017 • Remi R. Lam, Lior Horesh, Haim Avron, Karen E. Willcox
This work takes a different perspective and targets the construction of a correction model operator with implicit attributes.
no code implementations • 29 Oct 2017 • Vernon Austel, Sanjeeb Dash, Oktay Gunluk, Lior Horesh, Leo Liberti, Giacomo Nannicini, Baruch Schieber
In this study we introduce a new technique for symbolic regression that guarantees global optimality.
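The paper achieves its guarantee with mathematical programming; as a toy stand-in that shares the global-optimality-by-construction property, one can exhaustively enumerate a tiny expression grammar and keep the best-fitting expression. The grammar and data below are assumptions.

```python
import itertools
import numpy as np

# Tiny expression grammar: unary terms in x, combined pairwise
unary = {"x": lambda x: x, "x^2": lambda x: x**2,
         "sin(x)": np.sin, "exp(x)": np.exp}
binary = {"+": np.add, "*": np.multiply}

def best_expression(x, y):
    """Enumerate every expression in the grammar and return the one
    with minimal squared error; exhaustive search makes the optimum
    global over this (small) hypothesis space."""
    best = (np.inf, None)
    for (n1, f1), (n2, f2) in itertools.product(unary.items(), repeat=2):
        for op_name, op in binary.items():
            err = np.sum((op(f1(x), f2(x)) - y) ** 2)
            if err < best[0]:
                best = (err, f"{n1} {op_name} {n2}")
    return best

x = np.linspace(-2, 2, 100)
y = x**2 + np.sin(x)
print(best_expression(x, y))   # recovers 'x^2 + sin(x)' with zero error
```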
4 code implementations • 16 Oct 2017 • Edwin Pednault, John A. Gunnels, Giacomo Nannicini, Lior Horesh, Thomas Magerlein, Edgar Solomonik, Robert Wisnieff
With the current rate of progress in quantum computing technologies, 50-qubit systems will soon become a reality.
no code implementations • 29 Jun 2017 • Elizabeth Newman, Misha Kilmer, Lior Horesh
From linear classifiers to neural networks, image classification has been a widely explored topic in mathematics, and many algorithms have proven to be effective classifiers.
no code implementations • 2 May 2017 • Gal Shulkind, Lior Horesh, Haim Avron
We consider a class of misspecified dynamical models where the governing term is only approximately known.
no code implementations • 5 Sep 2013 • Tara N. Sainath, Lior Horesh, Brian Kingsbury, Aleksandr Y. Aravkin, Bhuvana Ramabhadran
This study aims at speeding up Hessian-free training, both by decreasing the amount of data used for training and by reducing the number of Krylov subspace solver iterations used for implicit estimation of the Hessian.
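For context on the Krylov solver mentioned here, a minimal conjugate-gradient loop that accesses the Hessian only through Hessian-vector products, the core primitive of Hessian-free training; the explicit quadratic used for the demonstration is an illustrative assumption.

```python
import numpy as np

def conjugate_gradient(hvp, b, max_iters=50, tol=1e-8):
    """Solve H x = b using only Hessian-vector products `hvp`,
    as in Hessian-free optimization; fewer iterations means a
    cheaper (coarser) implicit Newton step."""
    x = np.zeros_like(b)
    r = b - hvp(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iters):
        Hp = hvp(p)
        alpha = rs / (p @ Hp)
        x += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Illustrative SPD "Hessian"; in practice hvp comes from backprop
rng = np.random.default_rng(0)
M = rng.standard_normal((30, 30))
H = M @ M.T + 30 * np.eye(30)
b = rng.standard_normal(30)
x = conjugate_gradient(lambda v: H @ v, b)
print(np.linalg.norm(H @ x - b))
```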