no code implementations • NeurIPS 2023 • Daniel Bertschinger, Christoph Hertrich, Paul Jungeblut, Tillmann Miltzow, Simon Weber
We consider the problem of finding weights and biases for a two-layer fully connected neural network to fit a given set of data points as well as possible, also known as Empirical Risk Minimization.
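The objective described above can be sketched concretely. The following is a minimal illustration, not the paper's construction: the network architecture (one ReLU hidden layer), the squared-error loss, and all shapes are assumptions for the sake of the example.

```python
import numpy as np

def empirical_risk(W1, b1, W2, b2, X, y):
    """Total squared error of the two-layer network x -> W2 @ relu(W1 @ x + b1) + b2.

    Empirical Risk Minimization asks for the weights/biases minimizing this value.
    """
    hidden = np.maximum(X @ W1.T + b1, 0.0)   # hidden layer with ReLU activation
    preds = hidden @ W2.T + b2                 # linear output layer
    return float(np.sum((preds - y) ** 2))

# Hypothetical toy instance: 8 data points in R^3, 4 hidden units, scalar output.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))
y = rng.normal(size=(8, 1))
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)
print(empirical_risk(W1, b1, W2, b2, X, y))
```

Minimizing this quantity over `(W1, b1, W2, b2)` is the optimization problem the paper studies.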
no code implementations • 4 Jun 2021 • Tillmann Miltzow, Reinier F. Schmiermann
In an instance of this problem, we are given a sentence of the form $\exists x_1, \ldots, x_n \in \mathbb{R} : \Phi(x_1, \ldots, x_n)$, where $\Phi$ is a well-formed quantifier-free formula consisting of the symbols $\{0, 1, +, \cdot, \geq, >, \wedge, \vee, \neg\}$. The goal is to decide whether this sentence is true.
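Deciding such sentences is hard in general, but given a candidate witness, evaluating the quantifier-free part is straightforward. A small sketch with a hypothetical toy formula $\Phi(x) := x \cdot x > 1 \wedge x > 0$ (not an instance from the paper):

```python
# Phi uses only the allowed symbols: multiplication, >, and conjunction.
def phi(x):
    return (x * x > 1) and (x > 0)

# x = 1.5 is a witness, so the sentence  exists x in R : Phi(x)  is true.
print(phi(1.5))   # True
print(phi(0.5))   # False: 0.25 > 1 fails
```

The difficulty of the existential theory of the reals lies in searching for a witness over all of $\mathbb{R}^n$, not in this evaluation step.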
no code implementations • NeurIPS 2021 • Mikkel Abrahamsen, Linda Kleist, Tillmann Miltzow
Given a neural network, training data, and a threshold, finding weights for the neural network such that the total error is below the threshold is known to be NP-hard.
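By contrast with the search problem, the threshold check itself is easy: given concrete weights, the total error can be computed and compared to the threshold in polynomial time. A minimal sketch, assuming (as a hypothetical illustration, not the paper's setup) a one-hidden-layer ReLU network with squared error:

```python
import numpy as np

def error_below_threshold(W1, b1, W2, b2, X, y, threshold):
    """Verify a candidate weight assignment: is the total error <= threshold?"""
    hidden = np.maximum(X @ W1.T + b1, 0.0)            # ReLU hidden layer
    total_error = float(np.sum((hidden @ W2.T + b2 - y) ** 2))
    return total_error <= threshold

# Hypothetical 1-D data that the network fits exactly with identity-like weights.
X = np.array([[0.0], [1.0]])
y = np.array([[0.0], [1.0]])
W1, b1 = np.array([[1.0]]), np.zeros(1)
W2, b2 = np.array([[1.0]]), np.zeros(1)
print(error_below_threshold(W1, b1, W2, b2, X, y, 1e-9))  # True: exact fit
```

The hardness result concerns *finding* such weights, not verifying them.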
no code implementations • 2 Jul 2019 • Nicolas Grelier, Saeed Gh. Ilchi, Tillmann Miltzow, Shakhar Smorodinsky
A family S of convex sets in the plane defines a hypergraph H = (S, E) as follows.
Computational Geometry • Discrete Mathematics • Data Structures and Algorithms • Combinatorics