Search Results for author: Daniel Reichman

Found 15 papers, 1 paper with code

Depth Separations in Neural Networks: Separating the Dimension from the Accuracy

no code implementations • 11 Feb 2024 • Itay Safran, Daniel Reichman, Paul Valiant

We prove an exponential separation between depth 2 and depth 3 neural networks, when approximating an $\mathcal{O}(1)$-Lipschitz target function to constant accuracy, with respect to a distribution with support in $[0, 1]^{d}$, assuming exponentially bounded weights.

How Many Neurons Does it Take to Approximate the Maximum?

no code implementations • 18 Jul 2023 • Itay Safran, Daniel Reichman, Paul Valiant

Our depth separation results are facilitated by a new lower bound for depth 2 networks approximating the maximum function over the uniform distribution, assuming an exponential upper bound on the size of the weights.
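
For intuition, the maximum of two numbers is exactly computable by a depth-2 ReLU network via the identities max(a, b) = (a + b + |a - b|)/2, |x| = relu(x) + relu(-x), and x = relu(x) - relu(-x); the question studied above is how the required size and depth scale for the maximum of d inputs. A minimal NumPy sketch of the two-input construction (illustrative, not taken from the paper):

    import numpy as np

    def relu(x):
        return np.maximum(x, 0.0)

    def max2_depth2(a, b):
        # Exact max(a, b) with one hidden ReLU layer and a linear output,
        # using max(a, b) = (a + b + |a - b|) / 2.
        h = relu(np.array([a - b, b - a, a + b, -a - b]))  # 4 hidden units
        return 0.5 * (h[0] + h[1]) + 0.5 * (h[2] - h[3])

    assert np.isclose(max2_depth2(0.3, 0.8), 0.8)
    assert np.isclose(max2_depth2(-1.5, -2.0), -1.5)

Nesting this construction computes the maximum of d inputs at depth O(log d); the lower bound quoted above concerns what depth-2 networks with bounded weights cannot achieve.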

Size and depth of monotone neural networks: interpolation and approximation

1 code implementation • 12 Jul 2022 • Dan Mikulincer, Daniel Reichman

Our first result establishes that every monotone function over $[0, 1]^d$ can be approximated within arbitrarily small additive error by a depth-4 monotone network.

Inductive Bias
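
In this literature a monotone network typically means one with nonnegative weights (biases unconstrained) and monotone activations, which forces the computed function to be nondecreasing in every input coordinate. A toy sketch of that closure property, with arbitrary sizes and values:

    import numpy as np

    rng = np.random.default_rng(0)

    # Nonnegative weights + monotone (ReLU) activations => the network's
    # output is nondecreasing in each input coordinate. Biases are free.
    W1, b1 = rng.uniform(0, 1, size=(8, 4)), rng.normal(size=8)
    W2, b2 = rng.uniform(0, 1, size=(1, 8)), rng.normal(size=1)

    def monotone_net(x):
        h = np.maximum(W1 @ x + b1, 0.0)
        return (W2 @ h + b2).item()

    x = rng.uniform(0, 1, size=4)
    x_up = x.copy()
    x_up[2] += 0.1                      # increase one coordinate
    assert monotone_net(x_up) >= monotone_net(x)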

Size and Depth Separation in Approximating Benign Functions with Neural Networks

no code implementations • 30 Jan 2021 • Gal Vardi, Daniel Reichman, Toniann Pitassi, Ohad Shamir

We show a complexity-theoretic barrier to proving such results beyond size $O(d\log^2(d))$, but also show an explicit benign function that can be approximated with networks of size $O(d)$ and not with networks of size $o(d/\log d)$.

Tight Hardness Results for Training Depth-2 ReLU Networks

no code implementations • 27 Nov 2020 • Surbhi Goel, Adam Klivans, Pasin Manurangsi, Daniel Reichman

We are also able to obtain lower bounds on the running time in terms of the desired additive error $\epsilon$.

Cognitive Model Priors for Predicting Human Decisions

no code implementations • 22 May 2019 • David D. Bourgin, Joshua C. Peterson, Daniel Reichman, Thomas L. Griffiths, Stuart J. Russell

To solve this problem, what is needed are machine learning models with appropriate inductive biases for capturing human behavior, and larger datasets.

Benchmarking • BIG-bench Machine Learning • +2

Predicting human decisions with behavioral theories and machine learning

no code implementations • 15 Apr 2019 • Ori Plonsky, Reut Apel, Eyal Ert, Moshe Tennenholtz, David Bourgin, Joshua C. Peterson, Daniel Reichman, Thomas L. Griffiths, Stuart J. Russell, Evan C. Carter, James F. Cavanagh, Ido Erev

Here, we introduce BEAST Gradient Boosting (BEAST-GB), a novel hybrid model that synergizes behavioral theories, specifically the model BEAST, with machine learning techniques.

BIG-bench Machine Learning • Decision Making • +2
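
The hybrid recipe can be sketched simply: use the behavioral theory's prediction as an additional input feature for a gradient-boosting model trained on human data. The sketch below is a hypothetical illustration with scikit-learn; `beast_predict`, the feature layout, and the synthetic data are stand-ins, not the paper's actual BEAST implementation or dataset:

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)

    # Synthetic stand-ins: rows describe choice problems, targets are
    # observed human choice rates (purely illustrative data).
    X_problems = rng.normal(size=(500, 6))
    y_human = rng.uniform(0, 1, size=500)

    def beast_predict(problems):
        # Hypothetical placeholder for the BEAST behavioral model's
        # predicted choice rates; the real model is defined in the paper.
        return 1.0 / (1.0 + np.exp(-problems[:, 0]))

    # The hybrid step: append the theory's prediction as a feature, then
    # let gradient boosting learn the remaining structure from data.
    X_hybrid = np.column_stack([X_problems, beast_predict(X_problems)])
    model = GradientBoostingRegressor().fit(X_hybrid, y_human)
    preds = model.predict(X_hybrid)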

gprHOG and the popularity of Histogram of Oriented Gradients (HOG) for Buried Threat Detection in Ground-Penetrating Radar

no code implementations • 4 Jun 2018 • Daniel Reichman, Leslie M. Collins, Jordan M. Malof

Substantial research has been devoted to the development of algorithms that automate buried threat detection (BTD) with ground penetrating radar (GPR) data, resulting in a large number of proposed algorithms.

GPR
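
HOG reduces an image patch around a candidate alarm to a descriptor of local gradient orientations, which a downstream classifier then scores. A minimal scikit-image sketch on a synthetic patch; the cell and block settings here are generic defaults, not the paper's gprHOG configuration:

    import numpy as np
    from skimage.feature import hog

    rng = np.random.default_rng(0)

    # Synthetic stand-in for a GPR B-scan patch around a candidate alarm.
    patch = rng.uniform(size=(64, 64))

    # Histogram of Oriented Gradients descriptor: a 1-D feature vector of
    # local gradient-orientation histograms, fed to a classifier.
    features = hog(patch, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2))
    print(features.shape)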

Tiling and Stitching Segmentation Output for Remote Sensing: Basic Challenges and Recommendations

no code implementations • 30 May 2018 • Bohao Huang, Daniel Reichman, Leslie M. Collins, Kyle Bradbury, Jordan M. Malof

In this work we consider the application of convolutional neural networks (CNNs) for pixel-wise labeling (a.k.a. semantic segmentation) of remote sensing imagery (e.g., aerial color or hyperspectral imagery).

Segmentation Of Remote Sensing Imagery • Semantic Segmentation
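
The basic recipe the paper analyzes: label the large image in overlapping tiles and, when stitching, keep only each tile's interior so that predictions made with truncated spatial context near tile borders are discarded. A minimal sketch where `segment` is a placeholder for a CNN forward pass and the tile and padding sizes are arbitrary:

    import numpy as np

    def segment(tile):
        # Placeholder for a CNN forward pass returning per-pixel labels.
        return (tile > tile.mean()).astype(np.uint8)

    def tile_and_stitch(image, tile=256, pad=32):
        # Label `image` in overlapping tiles, keeping only each tile's
        # interior (cropping `pad` pixels of border context per side).
        H, W = image.shape
        out = np.zeros((H, W), dtype=np.uint8)
        step = tile - 2 * pad
        padded = np.pad(image, pad, mode='reflect')
        for y in range(0, H, step):
            for x in range(0, W, step):
                pred = segment(padded[y:y + tile, x:x + tile])
                core = pred[pad:pad + step, pad:pad + step]
                h, w = out[y:y + step, x:x + step].shape
                out[y:y + step, x:x + step] = core[:h, :w]
        return out

    labels = tile_and_stitch(np.random.default_rng(0).normal(size=(600, 800)))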

A graph-theoretic approach to multitasking

no code implementations • NeurIPS 2017 • Noga Alon, Daniel Reichman, Igor Shinkar, Tal Wagner, Sebastian Musslick, Jonathan D. Cohen, Tom Griffiths, Biswadip Dey, Kayhan Ozcimder

A key feature of neural network architectures is their ability to support the simultaneous interaction among large numbers of units in the learning and processing of representations.

Inference in Sparse Graphs with Pairwise Measurements and Side Information

no code implementations • 8 Mar 2017 • Dylan J. Foster, Daniel Reichman, Karthik Sridharan

For two-dimensional grids, our results improve over Globerson et al. (2015) by obtaining optimal recovery in the constant-height regime.

Learning Theory • Tree Decomposition

Deleting and Testing Forbidden Patterns in Multi-Dimensional Arrays

no code implementations • 13 Jul 2016 • Omri Ben-Eliezer, Simon Korman, Daniel Reichman

For any $\epsilon \in [0, 1]$ and any large enough pattern $P$ over any alphabet, other than a very small set of exceptional patterns, we design a tolerant tester that distinguishes between the case that the distance is at least $\epsilon$ and the case that it is at most $a_d \epsilon$, with query complexity and running time $c_d \epsilon^{-1}$, where $a_d < 1$ and $c_d$ depend only on $d$.

LEMMA • Open-Ended Question Answering

On the Limitation of Spectral Methods: From the Gaussian Hidden Clique Problem to Rank-One Perturbations of Gaussian Tensors

no code implementations • NeurIPS 2015 • Andrea Montanari, Daniel Reichman, Ofer Zeitouni

We consider the following detection problem: given a realization of a symmetric matrix $X$ of dimension $n$, distinguish between the hypothesis that all upper triangular variables are i.i.d.
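
The spectral method whose limits this paper studies can be illustrated numerically: compare the largest eigenvalue of the observed matrix under pure Gaussian noise with its value when a principal submatrix with elevated mean is planted. A toy sketch, where the dimension, block size, and signal level are arbitrary choices rather than the paper's regime:

    import numpy as np

    rng = np.random.default_rng(0)
    n, k = 400, 60                      # dimension and planted block size

    def symmetric_gaussian(n):
        A = rng.normal(size=(n, n))
        return (A + A.T) / np.sqrt(2)   # symmetric Gaussian noise matrix

    X_null = symmetric_gaussian(n)

    X_alt = symmetric_gaussian(n)
    idx = rng.choice(n, size=k, replace=False)
    X_alt[np.ix_(idx, idx)] += 1.0      # planted principal submatrix

    # Spectral test statistic: the top eigenvalue, near 2*sqrt(n) under
    # the null; a strong enough planted block pushes it above that edge.
    print(np.linalg.eigvalsh(X_null)[-1], np.linalg.eigvalsh(X_alt)[-1])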

FasT-Match: Fast Affine Template Matching

no code implementations • CVPR 2013 • Simon Korman, Daniel Reichman, Gilad Tsur, Shai Avidan

Fast-Match is a fast algorithm for approximate template matching under 2D affine transformations that minimizes the Sum-of-Absolute-Differences (SAD) error measure.

Template Matching
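
For reference, the Sum-of-Absolute-Differences objective is easy to state in the translation-only case: slide the template across the image and sum absolute pixel differences at each offset. Fast-Match's contribution is searching the much larger space of 2D affine transformations with approximation guarantees; the brute-force translation sketch below is only a baseline illustration:

    import numpy as np

    def sad_match(image, template):
        # Brute-force translation-only matching that minimizes the
        # Sum-of-Absolute-Differences (SAD) over all placements.
        H, W = image.shape
        h, w = template.shape
        best, best_xy = np.inf, None
        for y in range(H - h + 1):
            for x in range(W - w + 1):
                sad = np.abs(image[y:y + h, x:x + w] - template).sum()
                if sad < best:
                    best, best_xy = sad, (y, x)
        return best_xy, best

    rng = np.random.default_rng(0)
    img = rng.uniform(size=(80, 80))
    loc, err = sad_match(img, img[30:42, 50:62])
    assert loc == (30, 50) and err == 0.0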
