Search Results for author: Hugo Silva

Found 5 papers, 3 papers with code

What to Do When Your Discrete Optimization Is the Size of a Neural Network?

1 code implementation • 15 Feb 2024 • Hugo Silva, Martha White

Oftentimes, machine learning applications using neural networks involve solving discrete optimization problems, such as in pruning, parameter-isolation-based continual learning and training of binary networks.
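One classic instance of such a discrete problem is pruning: deciding which weights to keep is a binary choice per parameter, i.e. a search over 2^n possible masks. As a minimal illustration (a standard magnitude heuristic, not the method proposed in this paper), the discrete mask can be chosen greedily by keeping the largest-magnitude weights:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude weights.

    Selecting which weights survive is a discrete optimization over
    2**n binary masks; the magnitude heuristic solves it greedily by
    dropping the `sparsity` fraction with the smallest |w|.
    """
    flat = np.abs(weights).ravel()
    k = int(round(sparsity * flat.size))          # number of weights to drop
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    mask = np.abs(weights) > threshold            # ties at the threshold are dropped
    return weights * mask

w = np.array([[0.1, -2.0],
              [0.5,  3.0]])
print(magnitude_prune(w, 0.5))
```

At 50% sparsity the two smallest entries (0.1 and 0.5) are zeroed while -2.0 and 3.0 survive.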

Continual Learning · Image Classification +1

Floralens: a Deep Learning Model for the Portuguese Native Flora

no code implementations • 13 Feb 2024 • António Filgueiras, Eduardo R. B. Marques, Luís M. B. Lopes, Miguel Marques, Hugo Silva

Machine-learning techniques, namely deep convolutional neural networks, are pivotal for image-based identification of biological species in many Citizen Science platforms.

AutoML

The MONET dataset: Multimodal drone thermal dataset recorded in rural scenarios

1 code implementation • 11 Apr 2023 • Luigi Riz, Andrea Caraffa, Matteo Bortolon, Mohamed Lamine Mekhalfi, Davide Boscaini, André Moura, José Antunes, André Dias, Hugo Silva, Andreas Leonidou, Christos Constantinides, Christos Keleshis, Dante Abate, Fabio Poiesi

MONET is different from previous thermal drone datasets because it features multimodal data, including rural scenes captured with thermal cameras containing both person and vehicle targets, along with trajectory information and metadata.

Object Detection +1

Greedification Operators for Policy Optimization: Investigating Forward and Reverse KL Divergences

no code implementations • 17 Jul 2021 • Alan Chan, Hugo Silva, Sungsu Lim, Tadashi Kozuno, A. Rupam Mahmood, Martha White

Approximate Policy Iteration (API) algorithms alternate between (approximate) policy evaluation and (approximate) greedification.
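To make the alternation concrete, here is a minimal tabular sketch (a hypothetical 2-state, 2-action MDP, not an example from the paper): evaluation computes Q under the current policy, and greedification replaces the policy with a Boltzmann distribution over Q, which for a fully expressive tabular policy is the exact minimizer of the reverse-KL greedification objective (forward KL yields the same solution in this unrestricted case).

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP: P[s, a, s'] transitions, R[s, a] rewards.
P = np.array([[[1.0, 0.0],   # s=0, a=0: stay in s=0 (reward 1.0)
               [0.0, 1.0]],  # s=0, a=1: go to s=1
              [[0.0, 1.0],   # s=1, a=0: stay in s=1
               [1.0, 0.0]]]) # s=1, a=1: go to s=0 (reward 0.5)
R = np.array([[1.0, 0.0],
              [0.0, 0.5]])
gamma, tau = 0.9, 0.1

def evaluate(pi, iters=500):
    """(Approximate) policy evaluation: iterate the Bellman equation for Q^pi."""
    Q = np.zeros((2, 2))
    for _ in range(iters):
        V = (pi * Q).sum(axis=1)   # V(s) = E_{a ~ pi}[Q(s, a)]
        Q = R + gamma * (P @ V)    # batched matrix-vector product over states
    return Q

def greedify(Q):
    """Greedification: Boltzmann policy over Q (reverse-KL minimizer, tabular case)."""
    z = np.exp((Q - Q.max(axis=1, keepdims=True)) / tau)
    return z / z.sum(axis=1, keepdims=True)

pi = np.full((2, 2), 0.5)          # start from the uniform policy
for _ in range(10):                # API loop: evaluate, then greedify
    pi = greedify(evaluate(pi))
print(pi)
```

The loop converges to a near-deterministic policy that stays in state 0 to collect the larger reward; with a restricted (e.g. parameterized) policy class, forward and reverse KL generally produce different greedified policies, which is the comparison the paper investigates.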

Policy Gradient Methods

Winning the Lottery with Continuous Sparsification

2 code implementations • NeurIPS 2020 • Pedro Savarese, Hugo Silva, Michael Maire

Additionally, the recent Lottery Ticket Hypothesis conjectures that, for a typically-sized neural network, it is possible to find small sub-networks which, when trained from scratch on a comparable budget, match the performance of the original dense counterpart.
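Searching for such a sub-network means optimizing over binary keep/drop masks. The core idea suggested by the paper's title, as a rough sketch (illustrative only, not the authors' implementation), is to relax each binary mask entry into a sigmoid gate over a learnable score, sharpened by a temperature so the gate approaches a hard 0/1 decision:

```python
import numpy as np

def soft_mask(s, beta):
    """Continuous relaxation of a binary pruning mask.

    s    : real-valued per-weight scores (learnable in practice)
    beta : temperature; as beta grows, sigmoid(beta * s) approaches the
           hard step function 1[s > 0], recovering a discrete keep/drop mask.
    """
    return 1.0 / (1.0 + np.exp(-beta * s))

s = np.array([-1.0, -0.1, 0.1, 1.0])
for beta in (1.0, 10.0, 100.0):
    print(beta, np.round(soft_mask(s, beta), 3))
```

Because the gate is differentiable at finite temperature, the mask can be trained jointly with the weights by gradient descent, and annealing the temperature gradually converts the soft selection into a discrete sub-network.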

Network Pruning · Ticket Search +1
