no code implementations • 17 Jan 2024 • Loay Mualem, Murad Tukan, Moran Feldman
In this work, we suggest novel offline and online algorithms that provably provide such an interpolation based on a natural decomposition of the convex body constraint into two distinct convex bodies: a down-closed convex body and a general convex body.
no code implementations • 20 Dec 2023 • Murad Tukan, Fares Fares, Yotam Grufinkle, Ido Talmor, Loay Mualem, Vladimir Braverman, Dan Feldman
In response to this formidable challenge, we introduce a real-time autonomous indoor exploration system tailored for drones equipped with a monocular \emph{RGB} camera.
no code implementations • 16 Jul 2023 • Murad Tukan, Alaa Maalouf, Margarita Osadchy
Deep learning has grown tremendously over recent years, yielding state-of-the-art results in various fields.
no code implementations • 23 May 2023 • Alaa Maalouf, Murad Tukan, Noel Loo, Ramin Hasani, Mathias Lechner, Daniela Rus
Despite significant empirical progress in recent years, there is little understanding of the theoretical limitations/guarantees of dataset distillation, specifically, what excess risk is achieved by distillation compared to the original dataset, and how large are distilled datasets?
1 code implementation • 19 May 2023 • Alaa Maalouf, Murad Tukan, Vladimir Braverman, Daniela Rus
A coreset is a tiny weighted subset of an input set that closely approximates the loss function with respect to a certain set of queries.
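The coreset property above can be illustrated with a minimal NumPy sketch. Here a uniform sample with rescaled weights stands in for a coreset, purely for illustration; the data, sample size, and query point are all arbitrary choices, and real coresets use importance (sensitivity) sampling to obtain worst-case guarantees over every query simultaneously:

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.normal(size=(1000, 2))           # input set of n = 1000 points in R^2
idx = rng.choice(len(P), size=100, replace=False)
C = P[idx]                               # illustrative "coreset": a uniform sample
w = np.full(100, len(P) / 100.0)         # weights rescaled so totals match in expectation

def cost(points, q, weights=None):
    """(Weighted) sum of squared distances to a query center q."""
    d = ((points - q) ** 2).sum(axis=1)
    return d.sum() if weights is None else (weights * d).sum()

q = np.array([0.5, -0.25])               # one arbitrary query
full = cost(P, q)
approx = cost(C, q, w)
rel_err = abs(full - approx) / full      # small: the weighted subset mimics the loss
```

The weighted sample approximates the loss for this one query; the point of a coreset construction is that the same small error bound holds for all queries in the query set.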
1 code implementation • 9 Mar 2023 • Murad Tukan, Samson Zhou, Alaa Maalouf, Daniela Rus, Vladimir Braverman, Dan Feldman
In this paper, we introduce the first algorithm to construct coresets for \emph{RBFNNs}, i. e., small weighted subsets that approximate the loss of the input data on any radial basis function network and thus approximate any function defined by an \emph{RBFNN} on the larger input data.
1 code implementation • 10 Jan 2023 • Murad Tukan, Eli Biton, Roee Diamant
In this paper, we consider a common approach to water current prediction that uses Lagrangian floaters, interpolating the floaters' trajectories to reflect the velocity field.
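The Lagrangian idea can be sketched in a few lines: the floater's finite-difference velocity along its trajectory is taken as a sample of the local current, which is then interpolated. The timestamps and positions below are made up for illustration:

```python
import numpy as np

# Hypothetical floater trajectory: timestamps (s) and 2-D positions (m).
t = np.array([0.0, 60.0, 120.0, 180.0])
xy = np.array([[0.0, 0.0], [30.0, 6.0], [61.0, 11.0], [90.0, 18.0]])

# Finite-difference velocities along the trajectory (m/s): the Lagrangian
# view identifies the floater's motion with the local water current.
v = np.diff(xy, axis=0) / np.diff(t)[:, None]

# Interpolate the x-component of the velocity field at an intermediate time,
# attributing each segment velocity to the segment's midpoint in time.
t_mid = 0.5 * (t[:-1] + t[1:])
vx_at_90s = np.interp(90.0, t_mid, v[:, 0])
```

Real systems interpolate in space as well as time and must handle noisy GPS fixes; this only shows the trajectory-to-velocity step.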
no code implementations • 18 Sep 2022 • Murad Tukan, Loay Mualem, Alaa Maalouf
Lately, coresets (provable data summarizations) have been leveraged for pruning DNNs, adding the advantage of theoretical guarantees on the trade-off between the compression rate and the approximation error.
no code implementations • 8 Mar 2022 • Murad Tukan, Alaa Maalouf, Dan Feldman, Roi Poranne
While this approach is very simple, it can become costly when the obstacles are unknown, since samples hitting these obstacles are wasted.
1 code implementation • 8 Mar 2022 • Murad Tukan, Xuan Wu, Samson Zhou, Vladimir Braverman, Dan Feldman
$(j, k)$-projective clustering is the natural generalization of the family of $k$-clustering and $j$-subspace clustering problems.
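A minimal sketch of the $(j, k)$-projective clustering cost, restricted for simplicity to linear (through-origin) subspaces spanned by orthonormal rows; the general problem allows affine flats, and the points and subspaces here are arbitrary examples:

```python
import numpy as np

def dist2_to_subspace(p, B):
    """Squared distance from point p to the span of the orthonormal rows of B
    (a j-dimensional linear subspace)."""
    proj = B.T @ (B @ p)
    return float(((p - proj) ** 2).sum())

def jk_projective_cost(P, subspaces):
    """Sum over points of the squared distance to the nearest of the k subspaces.
    j = 0 degenerates toward k-clustering; k = 1 is j-subspace approximation."""
    return sum(min(dist2_to_subspace(p, B) for B in subspaces) for p in P)

# k = 2 one-dimensional subspaces (two lines through the origin) in R^2.
e1 = np.array([[1.0, 0.0]])
e2 = np.array([[0.0, 1.0]])
P = np.array([[2.0, 0.0], [0.0, 3.0], [1.0, 1.0]])
total = jk_projective_cost(P, [e1, e2])   # only (1, 1) is off both lines
```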
no code implementations • 6 Mar 2022 • Alaa Maalouf, Murad Tukan, Eric Price, Daniel Kane, Dan Feldman
The goal (e.g., for anomaly detection) is to approximate the $n$ points received so far in $P$ by a single sine of frequency $c$, i.e., $\min_{c\in C}\mathrm{cost}(P, c)+\lambda(c)$, where $\mathrm{cost}(P, c)=\sum_{i=1}^n \sin^2(\frac{2\pi}{N} p_i c)$, $C\subseteq [N]$ is a feasible set of solutions, and $\lambda$ is a given regularization function.
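The objective above translates directly into code. This sketch uses a hypothetical period $N$ and point set; it only evaluates the cost, whereas the paper's contribution concerns solving the minimization efficiently:

```python
import numpy as np

def sin_cost(P, c, N, lam=lambda c: 0.0):
    """cost(P, c) + lambda(c) for the sine-fitting objective: each point p_i
    contributes sin^2(2*pi*p_i*c / N)."""
    P = np.asarray(P, dtype=float)
    return float(np.sum(np.sin(2.0 * np.pi * P * c / N) ** 2) + lam(c))

# Points that are exact multiples of N / c incur (numerically) zero cost at
# frequency c, since every term is sin^2 of a multiple of 2*pi.
N = 100
P = [0, 25, 50, 75]              # multiples of 25 = N / 4
zero_like = sin_cost(P, 4, N)    # ~0: the sine of frequency 4 fits P exactly
off = sin_cost(P, 3, N)          # clearly positive for a mismatched frequency
```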
no code implementations • 11 Sep 2020 • Murad Tukan, Alaa Maalouf, Matan Weksler, Dan Feldman
Here, $d$ is the number of neurons in the layer, $n$ is the number of neurons in the next one, and $A_{k, 2}$ can be stored in $O((n+d)k)$ memory instead of $O(nd)$.
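The $O((n+d)k)$ versus $O(nd)$ saving comes from keeping a rank-$k$ matrix as two factors. A sketch using a truncated SVD, with made-up sizes and a synthetic matrix of rank at most $k$ so the truncation is exact here:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, k = 64, 32, 4

# Hypothetical dense layer matrix, built to have rank <= k for this demo.
A = rng.normal(size=(n, k)) @ rng.normal(size=(k, d))

# Truncated SVD gives the best rank-k approximation A_k = U_k @ V_k,
# stored as two factors with (n + d) * k entries instead of n * d.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
U_k = U[:, :k] * s[:k]          # shape (n, k), singular values folded in
V_k = Vt[:k]                    # shape (k, d)

stored = U_k.size + V_k.size    # (n + d) * k = 384 entries
dense = A.size                  # n * d = 2048 entries
err = np.linalg.norm(A - U_k @ V_k)   # ~0, since rank(A) <= k
```

At inference time the layer applies `V_k` then `U_k`, so the compute also drops from $O(nd)$ to $O((n+d)k)$ per input.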
no code implementations • NeurIPS 2020 • Murad Tukan, Alaa Maalouf, Dan Feldman
A coreset is usually a small weighted subset of $n$ input points in $\mathbb{R}^d$ that provably approximates their loss function for a given set of queries (models, classifiers, etc.).
no code implementations • 9 Jun 2020 • Alaa Maalouf, Ibrahim Jubran, Murad Tukan, Dan Feldman
PAC-learning usually aims to compute a small subset ($\varepsilon$-sample/net) from $n$ items, that provably approximates a given loss function for every query (model, classifier, hypothesis) from a given set of queries, up to an additive error $\varepsilon\in(0, 1)$.
no code implementations • ICML 2020 • Ibrahim Jubran, Murad Tukan, Alaa Maalouf, Dan Feldman
The input to the \emph{sets-$k$-means} problem is an integer $k\geq 1$ and a set $\mathcal{P}=\{P_1,\cdots, P_n\}$ of sets in $\mathbb{R}^d$.
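One natural way to read the sets-$k$-means objective (an assumption here, since the snippet only states the input): each set $P_i$ contributes the squared distance of its best point to the nearest of the $k$ centers. A small sketch with hypothetical sets and centers:

```python
import numpy as np

def sets_kmeans_cost(sets_of_points, centers):
    """Assumed sets-k-means cost: each set P_i contributes
    min over (p in P_i, c in centers) of ||p - c||^2."""
    total = 0.0
    for Pi in sets_of_points:
        # Pairwise squared distances between the set's points and the centers.
        d2 = ((Pi[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        total += float(d2.min())   # min over both the set's points and the centers
    return total

# k = 2 centers and n = 3 sets in R^2 (made-up example).
centers = np.array([[0.0, 0.0], [10.0, 0.0]])
sets_ = [np.array([[1.0, 0.0], [9.0, 9.0]]),   # served by its point (1, 0)
         np.array([[10.0, 1.0]]),
         np.array([[5.0, 0.0]])]                # equidistant from both centers
total = sets_kmeans_cost(sets_, centers)        # 1 + 1 + 25
```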
no code implementations • 15 Feb 2020 • Murad Tukan, Cenk Baykal, Dan Feldman, Daniela Rus
A coreset is a small, representative subset of the original data points such that models trained on the coreset are provably competitive with those trained on the original data set.
no code implementations • ICLR 2018 • Cenk Baykal, Murad Tukan, Dan Feldman, Daniela Rus
Support Vector Machines (SVMs) are one of the most popular algorithms for classification and regression analysis.