no code implementations • 7 Oct 2023 • Joel Niklaus, Robin Mamié, Matthias Stürmer, Daniel Brunner, Marcel Gygli
Releasing court decisions to the public relies on proper anonymization to protect all involved parties, where necessary.
no code implementations • 13 May 2023 • Louis Andreoli, Stéphane Chrétien, Xavier Porte, Daniel Brunner
Hardware implementations of neural networks are an essential step towards next-generation, efficient and powerful artificial intelligence solutions.
no code implementations • 20 Apr 2022 • Nadezhda Semenova, Daniel Brunner
They are therefore prone to noise with a variety of statistical and architectural properties, and effective strategies that leverage network-inherent assets to mitigate noise in a hardware-efficient manner are important in the pursuit of next-generation neural network hardware.
no code implementations • 12 Mar 2021 • Nadezhda Semenova, Laurent Larger, Daniel Brunner
Here, we determine for the first time the propagation of noise in deep neural networks comprising noisy nonlinear neurons in trained fully connected layers.
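The idea can be illustrated with a toy sketch: propagate the same input through a stack of fully connected tanh layers once without and once with additive Gaussian noise at each neuron, and compare the outputs. The layer sizes, noise level and random (untrained) weights below are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_layer(x, W, sigma=0.01):
    """One fully connected layer of nonlinear neurons with additive
    Gaussian noise at each neuron (assumed illustrative noise model)."""
    pre = W @ x
    return np.tanh(pre) + rng.normal(0.0, sigma, size=pre.shape)

# Random stand-ins for trained weight matrices of five 50-neuron layers.
layers = [rng.normal(0, 1 / np.sqrt(50), size=(50, 50)) for _ in range(5)]

x_clean = x_noisy = rng.normal(size=50)
for W in layers:
    x_clean = np.tanh(W @ x_clean)      # noise-free reference path
    x_noisy = noisy_layer(x_noisy, W)   # noise injected at every layer

# Relative deviation of the noisy output from the clean output.
deviation = np.linalg.norm(x_noisy - x_clean) / np.linalg.norm(x_clean)
print(f"relative output deviation after 5 layers: {deviation:.3f}")
```

Varying the depth, noise level `sigma` or weight statistics in this sketch shows how per-neuron noise accumulates or is damped as it propagates through the network.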
no code implementations • 21 Dec 2020 • Xavier Porte, Anas Skalli, Nasibeh Haghighi, Stephan Reitzenstein, James A. Lott, Daniel Brunner
Finally, the digital-to-analog conversion can be realized with a standard deviation of only 5.4 × 10^-2.
no code implementations • 6 Apr 2020 • Piotr Antonik, Nicolas Marsal, Daniel Brunner, Damien Rontani
The recognition of human actions in video streams is a challenging task in computer vision, with cardinal applications in e.g. brain-computer interfaces and surveillance.
no code implementations • 6 Apr 2020 • Piotr Antonik, Nicolas Marsal, Daniel Brunner, Damien Rontani
We test this approach on a previously reported large-scale experimental system, compare it to the commonly used grid search, and report notable improvements in performance and the number of experimental iterations required to optimise the hyper-parameters.
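The trade-off against grid search can be sketched on a toy problem: with a fixed evaluation budget, a grid's resolution is limited by its spacing, while sequential or sampling-based optimisers spend the same budget without committing to a grid. The loss function, parameter names and random search (used here as a simple stand-in optimiser, not the paper's method) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def loss(gain, bias):
    """Toy stand-in for an experimental figure of merit (hypothetical)."""
    return (gain - 0.62) ** 2 + (bias + 0.17) ** 2

# Grid search: 10 x 10 grid = 100 evaluations, resolution fixed by spacing.
grid = np.linspace(-1, 1, 10)
grid_best = min(loss(g, b) for g in grid for b in grid)

# Random search with the same 100-evaluation budget, as a generic stand-in
# for smarter sequential optimisers; it is not tied to a fixed spacing.
samples = rng.uniform(-1, 1, size=(100, 2))
rand_best = min(loss(g, b) for g, b in samples)

print(f"grid best: {grid_best:.4f}, random best: {rand_best:.4f}")
```

On a real experiment each `loss` evaluation is a slow physical measurement, which is why reducing the number of iterations matters as much as the final optimum.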
no code implementations • 27 Mar 2020 • Louis Andreoli, Xavier Porte, Stéphane Chrétien, Maxime Jacquot, Laurent Larger, Daniel Brunner
Highly efficient hardware integration of neural networks benefits from realizing nonlinearity, network connectivity and learning fully in a physical substrate.
no code implementations • 17 Dec 2019 • Johnny Moughames, Xavier Porte, Michael Thiel, Gwenn Ulliac, Maxime Jacquot, Laurent Larger, Muamer Kadic, Daniel Brunner
Photonic waveguides are prime candidates for integrated and parallel photonic interconnects.
no code implementations • 23 Jul 2019 • Xavier Porte, Louis Andreoli, Maxime Jacquot, Laurent Larger, Daniel Brunner
However, important questions regarding the impact of reservoir size and learning routines on the convergence speed during learning remain unaddressed.
no code implementations • 21 Jul 2019 • Nadezhda Semenova, Xavier Porte, Louis Andreoli, Maxime Jacquot, Laurent Larger, Daniel Brunner
The system under study consists of noisy linear nodes, and we investigate the signal-to-noise ratio at the network's outputs, which sets the upper limit to such a system's computing accuracy.
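A minimal sketch of the effect, under assumed parameters (node count, noise level, uniform readout) that are not taken from the paper: when every linear node carries the same signal plus its own uncorrelated additive noise, averaging the nodes at the output adds the signal coherently while the noise averages down, so the output SNR exceeds that of any single node.

```python
import numpy as np

rng = np.random.default_rng(1)
N, sigma = 100, 0.05          # nodes and per-node noise std (assumptions)

t = np.linspace(0, 4 * np.pi, 1000)
signal = np.sin(t)

# Each linear node carries the signal plus its own uncorrelated noise.
states = signal + rng.normal(0, sigma, size=(N, t.size))
single_node_snr = 10 * np.log10(signal.var() / sigma**2)

# A uniform readout averages the nodes: the signal adds coherently,
# while the uncorrelated noise variance drops roughly as 1/N.
output = states.mean(axis=0)
output_snr = 10 * np.log10(signal.var() / (output - signal).var())

print(f"single node: {single_node_snr:.1f} dB, "
      f"network output: {output_snr:.1f} dB")
```

Correlated noise sources, which a real hardware network also contains, would not average down this way; that distinction is exactly why the noise's statistical properties matter for the achievable accuracy.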
no code implementations • 4 May 2018 • Bogdan Penkovsky, Laurent Larger, Daniel Brunner
In this work, we propose a new approach towards the efficient optimization and implementation of reservoir computing hardware, reducing the required domain-expert knowledge and optimization effort.
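The reservoir computing scheme underlying several of the entries above can be sketched in software as a minimal echo state network: a fixed random recurrent layer provides the nonlinearity and connectivity, and only a linear readout is trained. The task (one-step prediction of a sine wave) and all parameters below are illustrative choices, not taken from any of these papers.

```python
import numpy as np

rng = np.random.default_rng(2)
N, steps = 200, 1000

# Toy task: predict a sine wave one step ahead (assumed for illustration).
u = np.sin(np.linspace(0, 20 * np.pi, steps + 1))
target = u[1:]

W_in = rng.uniform(-0.5, 0.5, size=N)
W = rng.normal(0, 1, size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

# Drive the fixed random reservoir and record its states.
x = np.zeros(N)
states = np.empty((steps, N))
for k in range(steps):
    x = np.tanh(W @ x + W_in * u[k])
    states[k] = x

# Only the linear readout is trained, via least squares after a washout.
washout = 100
w_out = np.linalg.lstsq(states[washout:], target[washout:], rcond=None)[0]
pred = states[washout:] @ w_out
nmse = np.mean((pred - target[washout:]) ** 2) / np.var(target[washout:])
print(f"one-step prediction NMSE: {nmse:.2e}")
```

Because only `w_out` is learned while the reservoir itself stays fixed, the scheme maps naturally onto physical substrates where the recurrent dynamics are given by the hardware and cannot be trained directly.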
no code implementations • 14 Nov 2017 • Julian Bueno, Sheler Maktoobi, Luc Froehly, Ingo Fischer, Maxime Jacquot, Laurent Larger, Daniel Brunner
Realizing photonic neural networks with numerous nonlinear nodes in fully parallel and efficient learning hardware has so far been lacking.