no code implementations • 11 Feb 2024 • Itay Safran, Daniel Reichman, Paul Valiant
We prove an exponential separation between depth 2 and depth 3 neural networks, when approximating an $\mathcal{O}(1)$-Lipschitz target function to constant accuracy, with respect to a distribution with support in $[0, 1]^{d}$, assuming exponentially bounded weights.
no code implementations • 18 Jul 2023 • Itay Safran, Daniel Reichman, Paul Valiant
Our depth separation results are facilitated by a new lower bound for depth 2 networks approximating the maximum function over the uniform distribution, assuming an exponential upper bound on the size of the weights.
1 code implementation • 12 Jul 2022 • Dan Mikulincer, Daniel Reichman
Our first result establishes that every monotone function over $[0, 1]^d$ can be approximated within arbitrarily small additive error by a depth-4 monotone network.
no code implementations • 30 Jan 2021 • Gal Vardi, Daniel Reichman, Toniann Pitassi, Ohad Shamir
We show a complexity-theoretic barrier to proving such results beyond size $O(d\log^2(d))$, but also exhibit an explicit benign function that can be approximated with networks of size $O(d)$ but not with networks of size $o(d/\log d)$.
no code implementations • 27 Nov 2020 • Surbhi Goel, Adam Klivans, Pasin Manurangsi, Daniel Reichman
We are also able to obtain lower bounds on the running time in terms of the desired additive error $\epsilon$.
no code implementations • 22 May 2019 • David D. Bourgin, Joshua C. Peterson, Daniel Reichman, Thomas L. Griffiths, Stuart J. Russell
To solve this problem, what is needed are machine learning models with appropriate inductive biases for capturing human behavior, and larger datasets.
no code implementations • 15 Apr 2019 • Ori Plonsky, Reut Apel, Eyal Ert, Moshe Tennenholtz, David Bourgin, Joshua C. Peterson, Daniel Reichman, Thomas L. Griffiths, Stuart J. Russell, Evan C. Carter, James F. Cavanagh, Ido Erev
Here, we introduce BEAST Gradient Boosting (BEAST-GB), a novel hybrid model that synergizes behavioral theories, specifically the model BEAST, with machine learning techniques.
no code implementations • 4 Jun 2018 • Daniel Reichman, Leslie M. Collins, Jordan M. Malof
Substantial research has been devoted to the development of algorithms that automate buried threat detection (BTD) with ground penetrating radar (GPR) data, resulting in a large number of proposed algorithms.
no code implementations • 30 May 2018 • Bohao Huang, Daniel Reichman, Leslie M. Collins, Kyle Bradbury, Jordan M. Malof
In this work we consider the application of convolutional neural networks (CNNs) for pixel-wise labeling (a.k.a. semantic segmentation) of remote sensing imagery (e.g., aerial color or hyperspectral imagery).
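Pixel-wise labeling in this sense can be illustrated with a toy sketch (the shapes, class count, and random scores below are assumptions for illustration, not the paper's setup): a CNN produces a per-class score map over the image, and the segmentation output assigns each pixel the class with the highest score.

```python
import numpy as np

# Toy per-pixel class scores with shape (num_classes, height, width).
# In practice these would come from a CNN's final layer.
num_classes, h, w = 3, 4, 4
rng = np.random.default_rng(0)
scores = rng.standard_normal((num_classes, h, w))

# Semantic segmentation output: one class label per pixel,
# chosen as the argmax over the class dimension.
labels = scores.argmax(axis=0)
assert labels.shape == (h, w)
```

This is only the labeling step; the papers' contribution concerns how the CNN producing `scores` is designed and trained for remote sensing imagery.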
no code implementations • 10 Mar 2018 • Jordan M. Malof, Daniel Reichman, Andrew Karem, Hichem Frigui, Dominic K. C. Ho, Joseph N. Wilson, Wen-Hsiung Lee, William Cummings, Leslie M. Collins
In this work we report the results of a multi-institutional effort to develop advanced buried threat detection algorithms for a real-world GPR BTD system.
no code implementations • NeurIPS 2017 • Noga Alon, Daniel Reichman, Igor Shinkar, Tal Wagner, Sebastian Musslick, Jonathan D. Cohen, Tom Griffiths, Biswadip Dey, Kayhan Ozcimder
A key feature of neural network architectures is their ability to support the simultaneous interaction among large numbers of units in the learning and processing of representations.
no code implementations • 8 Mar 2017 • Dylan J. Foster, Daniel Reichman, Karthik Sridharan
For two-dimensional grids, our results improve over Globerson et al. (2015) by obtaining optimal recovery in the constant-height regime.
no code implementations • 13 Jul 2016 • Omri Ben-Eliezer, Simon Korman, Daniel Reichman
For any $\epsilon \in [0, 1]$ and any large enough pattern $P$ over any alphabet, other than a very small set of exceptional patterns, we design a tolerant tester that distinguishes between the case that the distance is at least $\epsilon$ and the case that it is at most $a_d \epsilon$, with query complexity and running time $c_d \epsilon^{-1}$, where $a_d < 1$ and $c_d$ depend only on $d$.
no code implementations • NeurIPS 2015 • Andrea Montanari, Daniel Reichman, Ofer Zeitouni
We consider the following detection problem: given a realization of asymmetric matrix $X$ of dimension $n$, distinguish between the hypothesisthat all upper triangular variables are i. i. d.
no code implementations • CVPR 2013 • Simon Korman, Daniel Reichman, Gilad Tsur, Shai Avidan
Fast-Match is a fast algorithm for approximate template matching under 2D affine transformations that minimizes the Sum-of-Absolute-Differences (SAD) error measure.
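The Sum-of-Absolute-Differences error that Fast-Match minimizes can be sketched as follows (a minimal illustration under a translation-only search; the function names `sad` and `best_translation` are assumptions, and Fast-Match itself searches the much larger space of 2D affine transformations rather than exhaustively scanning translations):

```python
import numpy as np

def sad(template, window):
    """Sum-of-Absolute-Differences between a template and an
    equally sized image window (lower means a better match)."""
    return np.abs(template.astype(float) - window.astype(float)).sum()

def best_translation(image, template):
    """Exhaustively scan all translations of the template over the
    image and return the position with the smallest SAD error."""
    th, tw = template.shape
    best_err, best_pos = np.inf, None
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            err = sad(template, image[y:y + th, x:x + tw])
            if err < best_err:
                best_err, best_pos = err, (y, x)
    return best_pos, best_err
```

For example, embedding the template verbatim inside a larger image yields SAD 0 at the embedding position; Fast-Match's contribution is finding near-minimal SAD efficiently over affine warps, where brute-force search is infeasible.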