1 code implementation • 26 Oct 2023 • Dániel Rácz, Mihály Petreczky, András Csertán, Bálint Daróczy
Recent advances in deep learning have yielded promising results on the generalization ability of deep neural networks; however, the literature still lacks a comprehensive theory explaining why heavily over-parametrized models are able to generalize well while fitting the training data.
no code implementations • 7 Jul 2023 • Dániel Rácz, Mihály Petreczky, Bálint Daróczy
We consider the problem of learning Neural Ordinary Differential Equations (neural ODEs) within the context of Linear Parameter-Varying (LPV) systems in continuous-time.
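A brief illustrative sketch of the LPV viewpoint mentioned above (not the paper's actual method): an LPV system has dynamics dx/dt = A(p(t)) x, where the system matrix depends on a scheduling signal p. A neural ODE can be viewed in this form when its vector field factors as a state-dependent linear map. The matrices, the scheduling rule, and the Euler integrator below are all hypothetical choices for illustration.

```python
import numpy as np

def A(p):
    # Hypothetical scheduling: interpolate between two stable matrices.
    A0 = np.array([[-1.0, 0.5], [-0.5, -1.0]])
    A1 = np.array([[-0.2, 1.0], [-1.0, -0.2]])
    return (1 - p) * A0 + p * A1

def simulate(x0, T=1.0, dt=0.01):
    # Integrate the LPV dynamics dx/dt = A(p) x with forward Euler,
    # using a state-dependent scheduling signal p in [0, 1].
    x, t = np.array(x0, dtype=float), 0.0
    while t < T:
        p = 0.5 * (1 + np.tanh(x[0]))
        x = x + dt * (A(p) @ x)
        t += dt
    return x

x_final = simulate([1.0, 0.0])
```

Since both matrices here have negative-definite symmetric parts, every convex combination is contractive, so the state norm shrinks along the trajectory.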
1 code implementation • 26 Oct 2021 • Dániel Rácz, Bálint Daróczy
Feed-forward networks can be interpreted as mappings with linear decision surfaces at the level of the last layer.
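A minimal sketch of the interpretation above, with a hypothetical one-hidden-layer ReLU network: the output is a linear function of the last-layer features phi(x), so the decision surface is a hyperplane in that feature space.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(5, 3)), rng.normal(size=5)  # hidden layer
w2, b2 = rng.normal(size=5), 0.0                      # linear readout

def features(x):
    # Last-layer representation phi(x).
    return np.maximum(W1 @ x + b1, 0.0)

def logit(x):
    # The network output is linear in phi(x): w2 . phi(x) + b2,
    # so {phi : w2 . phi + b2 = 0} is a hyperplane in feature space.
    return w2 @ features(x) + b2

x = rng.normal(size=3)
out = logit(x)
```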
no code implementations • 1 Feb 2021 • Bálint Daróczy, Katalin Friedl, László Kabódi, Attila Pereszlényi, Dániel Szabó
Building on the quantum ensemble based classifier algorithm of Schuld and Petruccione [arXiv:1704.02146v1], we devise equivalent classical algorithms which show that this quantum ensemble method does not have an advantage over classical algorithms.
no code implementations • 11 Jun 2020 • Bálint Daróczy
We derive several easily computable bounds and empirical measures for feed-forward fully connected ReLU (Rectified Linear Unit) networks and connect tangent sensitivity to the distribution of the activation regions in the input space realized by the network.
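An illustrative sketch of the activation regions mentioned above (not the paper's bounds): each hidden ReLU unit defines a hyperplane, and an activation region is a set of inputs sharing the same on/off pattern. The toy network and sampling scheme below are hypothetical; we simply count distinct patterns hit by random samples.

```python
import numpy as np

rng = np.random.default_rng(1)
W, b = rng.normal(size=(8, 2)), rng.normal(size=8)  # 8 ReLU units in 2-D

def activation_pattern(x):
    # Binary pattern: which units have positive pre-activation.
    return tuple((W @ x + b > 0).astype(int))

samples = rng.uniform(-3, 3, size=(5000, 2))
patterns = {activation_pattern(x) for x in samples}
print(f"distinct activation regions sampled: {len(patterns)}")
```

By the classical hyperplane-arrangement bound, 8 hyperplanes partition the plane into at most 1 + 8 + C(8, 2) = 37 regions, so the sampled count cannot exceed 37.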
1 code implementation • 18 Dec 2019 • Bálint Daróczy, Rita Aleksziev, András Benczúr
Hierarchical neural networks are exponentially more efficient than their corresponding "shallow" counterparts with the same expressive power, but involve a huge number of parameters and require extensive training.
no code implementations • 17 Jul 2018 • Bálint Daróczy, Rita Aleksziev, András Benczúr
Hierarchical neural networks are exponentially more efficient than their corresponding "shallow" counterparts with the same expressive power, but involve a huge number of parameters and require extensive training.