Search Results for author: Marco Cannici

Found 14 papers, 8 with code

Mitigating Motion Blur in Neural Radiance Fields with Events and Frames

1 code implementation • 28 Mar 2024 • Marco Cannici, Davide Scaramuzza

Neural Radiance Fields (NeRFs) have shown great potential in novel view synthesis.

Novel View Synthesis

Low-power event-based face detection with asynchronous neuromorphic hardware

no code implementations • 21 Dec 2023 • Caterina Caccavella, Federico Paredes-Vallés, Marco Cannici, Lyes Khacef

We show that the power consumption of the chip is directly proportional to the number of synaptic operations in the spiking neural network, and we explore the trade-off between power consumption and detection precision under different firing-rate regularization strengths, achieving an on-chip face detection mAP[0.5] of ~0.6 while consuming only ~20 mW (see the sketch below).

Face Detection • Object Detection +1
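The proportionality between power and synaptic operations lends itself to a back-of-the-envelope estimate. The sketch below is not from the paper: the 1 nJ-per-SynOp constant is a hypothetical placeholder, chosen only to illustrate how reducing the firing rate (e.g., via regularization) lowers power.

```python
# Back-of-the-envelope power model for a spiking network, assuming power
# scales linearly with the synaptic operation (SynOp) rate.
ENERGY_PER_SYNOP_J = 1e-9  # hypothetical: ~1 nJ per SynOp, purely illustrative

def estimate_power_watts(synops_per_second: float,
                         static_power_w: float = 0.0) -> float:
    """Dynamic power = SynOp rate * energy per SynOp, plus static draw."""
    return static_power_w + synops_per_second * ENERGY_PER_SYNOP_J

# Stronger firing-rate regularization -> fewer SynOps/s -> lower power,
# at some cost in detection precision.
for synop_rate in (2e7, 1e7, 5e6):
    mw = estimate_power_watts(synop_rate) * 1e3
    print(f"{synop_rate:.1e} SynOps/s -> {mw:.1f} mW")
```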

End-to-end Learned Visual Odometry with Events and Frames

no code implementations • 18 Sep 2023 • Roberto Pellerito, Marco Cannici, Daniel Gehrig, Joris Belhadj, Olivier Dubois-Matra, Massimo Casasco, Davide Scaramuzza

Visual Odometry (VO) is crucial for autonomous robotic navigation, especially in GPS-denied environments like planetary terrains.

Visual Odometry

Revisiting Token Pruning for Object Detection and Instance Segmentation

1 code implementation • 12 Jun 2023 • Yifei Liu, Mathias Gehrig, Nico Messikommer, Marco Cannici, Davide Scaramuzza

Compared to the dense counterpart that uses all tokens, our method increases inference speed by up to 34% for the entire network and 46% for the backbone (a generic pruning sketch follows below).

Image Classification • Instance Segmentation +4
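To make the idea concrete, here is a generic top-k token-pruning sketch in PyTorch. It is not the paper's method: the importance score below is a stand-in (scores derived from attention are common in practice), and dense tasks like detection and segmentation additionally require keeping the pruned indices so tokens can be re-placed spatially.

```python
import torch

def prune_tokens(tokens: torch.Tensor, scores: torch.Tensor,
                 keep_ratio: float = 0.66):
    """Keep the highest-scoring tokens; a generic top-k pruning sketch.

    tokens: (B, N, D) token embeddings, scores: (B, N) importance scores.
    Returns pruned tokens (B, K, D) and the kept indices (B, K).
    """
    B, N, D = tokens.shape
    k = max(1, int(N * keep_ratio))
    idx = scores.topk(k, dim=1).indices                        # (B, K)
    kept = tokens.gather(1, idx.unsqueeze(-1).expand(-1, -1, D))
    return kept, idx

# Example: drop a third of 196 patch tokens before the next block.
x = torch.randn(2, 196, 384)
s = x.norm(dim=-1)   # stand-in importance score, not the paper's criterion
pruned, kept_idx = prune_tokens(x, s, keep_ratio=0.66)
print(pruned.shape)  # torch.Size([2, 129, 384])
```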

Neural Weighted A*: Learning Graph Costs and Heuristics with Differentiable Anytime A*

2 code implementations • 4 May 2021 • Alberto Archetti, Marco Cannici, Matteo Matteucci

Recently, machine learning research has seen a trend of incorporating differentiable algorithms into deep learning architectures, as fusing neural and algorithmic layers has proven beneficial for handling combinatorial data, such as shortest paths on graphs.
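For context, a minimal weighted A* sketch: f(n) = g(n) + eps * h(n), where eps >= 1 inflates the heuristic to trade optimality for speed. In Neural Weighted A*, the per-cell costs and the heuristic balance are predicted by a network and trained through a differentiable A*; here both are plain, hand-specified inputs.

```python
import heapq
from itertools import count
import numpy as np

def weighted_astar(costs: np.ndarray, start, goal, eps: float = 1.0):
    """Weighted A* on a 4-connected grid with per-cell traversal costs."""
    H, W = costs.shape
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    tie = count()                        # tiebreaker for heap ordering
    open_heap = [(eps * h(start), next(tie), start)]
    g_best, came_from, closed = {start: 0.0}, {start: None}, set()
    while open_heap:
        _, _, node = heapq.heappop(open_heap)
        if node in closed:
            continue
        closed.add(node)
        if node == goal:                 # reconstruct path back to start
            path = [node]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        r, c = node
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nb[0] < H and 0 <= nb[1] < W and nb not in closed:
                ng = g_best[node] + costs[nb]
                if ng < g_best.get(nb, float("inf")):
                    g_best[nb] = ng
                    came_from[nb] = node
                    heapq.heappush(open_heap,
                                   (ng + eps * h(nb), next(tie), nb))
    return None

# Example: uniform costs on a 5x5 grid, mildly inflated heuristic.
print(weighted_astar(np.ones((5, 5)), (0, 0), (4, 4), eps=1.5))
```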

Spatial Temporal Transformer Network for Skeleton-based Action Recognition

1 code implementation • 11 Dec 2020 • Chiara Plizzari, Marco Cannici, Matteo Matteucci

Skeleton-based human action recognition has attracted great interest in recent years, as skeleton data has been shown to be robust to illumination changes, body scales, dynamic camera views, and complex backgrounds.

Action Recognition • Skeleton Based Action Recognition +1

Skeleton-based Action Recognition via Spatial and Temporal Transformer Networks

1 code implementation • 17 Aug 2020 • Chiara Plizzari, Marco Cannici, Matteo Matteucci

Skeleton-based Human Activity Recognition has attracted great interest in recent years, as skeleton data has been shown to be robust to illumination changes, body scales, dynamic camera views, and complex backgrounds.

Action Recognition In Videos • Human Activity Recognition +1

A Differentiable Recurrent Surface for Asynchronous Event-Based Data

1 code implementation • ECCV 2020 • Marco Cannici, Marco Ciccone, Andrea Romanoni, Matteo Matteucci

Dynamic Vision Sensors (DVSs) asynchronously stream events from pixels that undergo brightness changes (a simple event-accumulation sketch follows below).

Optical Flow Estimation
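As a point of reference, a hand-crafted event "surface" can be built by exponentially decaying each pixel's trace between events. The sketch below is a simple stand-in, not the paper's approach, which instead learns the accumulation with a recurrent network rather than fixing a decay.

```python
import numpy as np

def events_to_time_surface(events, height, width, tau=0.05):
    """Hand-crafted decayed event surface from a DVS stream.

    events: iterable of (t, x, y, polarity), t in seconds, polarity in {-1, +1}.
    Each pixel keeps an exponentially decayed trace of its past events.
    """
    surface = np.zeros((height, width), dtype=np.float32)
    last_t = np.zeros((height, width), dtype=np.float32)
    for t, x, y, p in events:
        # Decay the pixel's trace by the time elapsed since its last event,
        # then add the new event's polarity.
        surface[y, x] = surface[y, x] * np.exp(-(t - last_t[y, x]) / tau) + p
        last_t[y, x] = t
    return surface

# Example: three synthetic events on a 4x4 sensor.
evs = [(0.00, 1, 2, +1), (0.01, 1, 2, +1), (0.02, 3, 0, -1)]
print(events_to_time_surface(evs, 4, 4))
```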

Attention Mechanisms for Object Recognition with Event-Based Cameras

no code implementations • 25 Jul 2018 • Marco Cannici, Marco Ciccone, Andrea Romanoni, Matteo Matteucci

Event-based cameras are neuromorphic sensors capable of efficiently encoding visual information in the form of sparse sequences of events.

Event-based vision • Object Recognition +1

Asynchronous Convolutional Networks for Object Detection in Neuromorphic Cameras

no code implementations • 21 May 2018 • Marco Cannici, Marco Ciccone, Andrea Romanoni, Matteo Matteucci

Event-based cameras, also known as neuromorphic cameras, are bio-inspired sensors able to perceive changes in the scene at high frequency with low power consumption.

Object Detection +1
