Search Results for author: Alessandro Luongo

Found 6 papers, 2 papers with code

Do you know what q-means?

no code implementations18 Aug 2023 João F. Doriguello, Alessandro Luongo, Ewin Tang

The time complexity is $O\big(\frac{k^{2}}{\varepsilon^2}(\sqrt{k}d + \log(Nd))\big)$, which maintains the polylogarithmic dependence on $N$ while improving the dependence on most of the other parameters.

Clustering
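
This entry concerns a quantum analogue of $k$-means clustering. As a classical point of reference only, here is a minimal $k$-means (Lloyd's) iteration, the baseline that these quantum algorithms approximate; the data and parameter values below are illustrative, not taken from the paper.

```python
# Minimal classical k-means (Lloyd's algorithm); a reference point for the
# quantum q-means variants, not an implementation of them.
import numpy as np

def kmeans(X, k, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment step: label each point with its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: move each centroid to the mean of its cluster.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

X = np.random.default_rng(1).normal(size=(500, 8))
centroids, labels = kmeans(X, k=3)
```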

Quantum algorithms for SVD-based data representation and analysis

1 code implementation19 Apr 2021 Armando Bellante, Alessandro Luongo, Stefano Zanero

This paper narrows the gap between previous literature on quantum linear algebra and practical data analysis on a quantum computer, formalizing quantum procedures that speed-up the solution of eigenproblems for data representations in machine learning.

Dimensionality Reduction, General Classification +1
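
The quantum procedures in this entry speed up the eigenproblems behind SVD-based data representations. As a classical point of reference only, a rank-$p$ representation computed with the SVD in NumPy; the synthetic data and the choice $p = 5$ are illustrative assumptions.

```python
# Classical SVD-based data representation (best rank-p approximation and a
# PCA-style projection); the classical counterpart of what the quantum
# procedures accelerate, shown for illustration only.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 30))        # data matrix, rows are samples
A = A - A.mean(axis=0)                # center the columns

U, s, Vt = np.linalg.svd(A, full_matrices=False)

p = 5                                 # number of retained components (illustrative)
A_p = U[:, :p] @ np.diag(s[:p]) @ Vt[:p, :]      # best rank-p approximation
scores = A @ Vt[:p, :].T              # projection onto the top-p right singular vectors
explained = (s[:p] ** 2).sum() / (s ** 2).sum()  # fraction of variance retained
```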

Quantum algorithms for spectral sums

no code implementations12 Nov 2020 Alessandro Luongo, Changpeng Shao

We propose and analyze new quantum algorithms for estimating the most common spectral sums of symmetric positive definite (SPD) matrices.
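
The excerpt does not list which spectral sums are covered; classically, common examples such as the log-determinant, the trace of the inverse, and Schatten $p$-norms can all be read off the eigenvalues of an SPD matrix. A minimal NumPy sketch of those classical quantities follows (the particular sums shown are my assumption about which ones are meant).

```python
# Classical evaluation of some common spectral sums of an SPD matrix from its
# eigenvalues; the paper's quantum algorithms estimate quantities of this kind.
import numpy as np

rng = np.random.default_rng(0)
B = rng.normal(size=(50, 50))
A = B @ B.T + 50 * np.eye(50)        # symmetric positive definite by construction

eigvals = np.linalg.eigvalsh(A)      # all strictly positive for an SPD matrix

log_det = np.sum(np.log(eigvals))               # log-determinant
trace_inv = np.sum(1.0 / eigvals)               # trace of the inverse
p = 3
schatten_p = np.sum(eigvals ** p) ** (1.0 / p)  # Schatten p-norm
```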

Quantum Expectation-Maximization for Gaussian Mixture Models

no code implementations19 Aug 2019 Iordanis Kerenidis, Alessandro Luongo, Anupam Prakash

In this work we define and use a quantum version of EM to fit a Gaussian Mixture Model.
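
For context, the classical algorithm being quantized here is expectation-maximization (EM) for Gaussian Mixture Models. A minimal classical EM sketch in NumPy/SciPy; the initialization, regularization, and synthetic data are illustrative choices, not the paper's.

```python
# Minimal classical EM for a Gaussian Mixture Model; the classical baseline
# whose quantum analogue the paper defines. Illustrative sketch only.
import numpy as np
from scipy.stats import multivariate_normal

def em_gmm(X, k, n_iter=50, seed=0):
    n, d = X.shape
    rng = np.random.default_rng(seed)
    means = X[rng.choice(n, size=k, replace=False)]
    covs = np.array([np.cov(X.T) + 1e-6 * np.eye(d) for _ in range(k)])
    weights = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities r[i, j] = P(component j | x_i).
        r = np.column_stack([
            weights[j] * multivariate_normal.pdf(X, means[j], covs[j])
            for j in range(k)
        ])
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture weights, means, and covariances.
        nk = r.sum(axis=0)
        weights = nk / n
        means = (r.T @ X) / nk[:, None]
        for j in range(k):
            diff = X - means[j]
            covs[j] = (r[:, j, None] * diff).T @ diff / nk[j] + 1e-6 * np.eye(d)
    return weights, means, covs

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=m, size=(100, 2)) for m in (-3.0, 0.0, 3.0)])
weights, means, covs = em_gmm(X, k=3)
```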

q-means: A quantum algorithm for unsupervised machine learning

2 code implementations NeurIPS 2019 Iordanis Kerenidis, Jonas Landman, Alessandro Luongo, Anupam Prakash

For a natural notion of well-clusterable datasets, the running time becomes $\widetilde{O}\left( k^2 d \frac{\eta^{2.5}}{\delta^3} + k^{2.5} \frac{\eta^2}{\delta^3} \right)$ per iteration, which is linear in the number of features $d$, and polynomial in the rank $k$, the maximum square norm $\eta$ and the error parameter $\delta$.

BIG-bench Machine Learning, Clustering +1
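
On my reading of the abstract, q-means is analysed against a noise-tolerant classical variant of $k$-means ($\delta$-$k$-means) in which a point may be assigned to any centroid whose squared distance lies within $\delta$ of the minimum. A sketch of that assignment rule; the uniform random tie-breaking and the synthetic data are illustrative assumptions.

```python
# Sketch of the noisy assignment rule of a delta-k-means-style variant, the
# kind of robust classical baseline q-means is compared against. The
# tie-breaking and data below are illustrative assumptions.
import numpy as np

def delta_assign(X, centroids, delta, rng):
    # Squared distances from every point to every centroid.
    d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    labels = np.empty(len(X), dtype=int)
    for i in range(len(X)):
        # Any centroid within delta of the minimal squared distance is admissible.
        admissible = np.flatnonzero(d2[i] <= d2[i].min() + delta)
        labels[i] = rng.choice(admissible)
    return labels

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
centroids = X[rng.choice(300, size=3, replace=False)]
labels = delta_assign(X, centroids, delta=0.5, rng=rng)
```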

Quantum classification of the MNIST dataset with Slow Feature Analysis

no code implementations22 May 2018 Iordanis Kerenidis, Alessandro Luongo

We simulate the quantum classifier (including errors) and show that it classifies the MNIST handwritten digit dataset, a widely used benchmark for classification algorithms, with $98.5\%$ accuracy, similar to the classical case.

Benchmarking, Classification +3
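
Linear Slow Feature Analysis, which the quantum classifier builds on, can be computed classically as a generalized eigenproblem between the covariance of within-class differences and the data covariance (equivalently, whitening followed by a standard eigenproblem). A minimal classical sketch under those assumptions; how the difference pairs are formed here is a simplification, and the data are synthetic rather than MNIST.

```python
# Minimal linear Slow Feature Analysis in a supervised setting: find the
# directions along which within-class differences vary least. Illustrative
# classical sketch, not the paper's quantum pipeline.
import numpy as np
from scipy.linalg import eigh

def linear_sfa(X, y, n_features):
    X = X - X.mean(axis=0)
    # Within-class consecutive differences play the role of the time derivative
    # used in the original (time-series) formulation of SFA.
    dX = np.vstack([np.diff(X[y == c], axis=0) for c in np.unique(y)])
    cov = X.T @ X / len(X)
    dcov = dX.T @ dX / len(dX)
    # Slow directions = smallest generalized eigenvalues of (dcov, cov).
    eigvals, eigvecs = eigh(dcov, cov)
    return eigvecs[:, :n_features]       # columns project onto the slowest features

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 20))           # synthetic stand-in, not MNIST
y = rng.integers(0, 10, size=600)
W = linear_sfa(X, y, n_features=9)
slow_features = X @ W
```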
