Event-based vision

44 papers with code • 1 benchmark • 9 datasets

An event camera, also known as a neuromorphic camera, silicon retina, or dynamic vision sensor, is an imaging sensor that responds to local changes in brightness. Event cameras do not capture images using a shutter as conventional cameras do. Instead, each pixel inside an event camera operates independently and asynchronously, reporting changes in brightness as they occur and staying silent otherwise. Modern event cameras have microsecond temporal resolution, a dynamic range of 120 dB, and suffer less from under/overexposure and motion blur than frame cameras.
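
For readers new to the data format, the sketch below shows how such an asynchronous stream of (timestamp, x, y, polarity) events can be accumulated into a dense two-channel histogram that frame-based models can consume. It is a minimal illustration; the function name events_to_histogram and the array layout are assumptions for this example, not code from any of the repositories listed here.

```python
import numpy as np

def events_to_histogram(t, x, y, p, height, width):
    """Accumulate a sparse, asynchronous event stream into a dense
    2-channel count image (one channel per polarity).

    t: timestamps (e.g. microseconds), x/y: pixel coordinates,
    p: polarity in {-1, +1} for brightness decrease/increase."""
    hist = np.zeros((2, height, width), dtype=np.float32)
    np.add.at(hist[0], (y[p > 0], x[p > 0]), 1.0)  # ON (positive) events
    np.add.at(hist[1], (y[p < 0], x[p < 0]), 1.0)  # OFF (negative) events
    return hist

# Four synthetic events on a 4x4 sensor.
t = np.array([10, 25, 40, 55])        # microsecond timestamps
x = np.array([0, 1, 1, 3])
y = np.array([0, 0, 2, 3])
p = np.array([1, -1, 1, 1])
print(events_to_histogram(t, x, y, p, height=4, width=4).shape)  # (2, 4, 4)
```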

Most implemented papers

Event Collapse in Contrast Maximization Frameworks

tub-rip/event_based_optical_flow 8 Jul 2022

Contrast maximization (CMax) is a framework that provides state-of-the-art results on several event-based computer vision tasks, such as ego-motion or optical flow estimation.
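
As a rough illustration of that idea (a simplified sketch under a constant-flow assumption, not the tub-rip implementation), the snippet below warps events to a reference time with a candidate 2D flow, accumulates them into an image of warped events (IWE), and scores the candidate by the image's variance; better-aligned candidates yield sharper, higher-contrast IWEs.

```python
import numpy as np

def iwe_variance(t, x, y, flow, t_ref, height, width):
    """Score a candidate flow (vx, vy) in px/s by the variance ("contrast")
    of the image of warped events it produces."""
    xw = np.round(x - flow[0] * (t - t_ref)).astype(int)
    yw = np.round(y - flow[1] * (t - t_ref)).astype(int)
    ok = (xw >= 0) & (xw < width) & (yw >= 0) & (yw < height)
    iwe = np.zeros((height, width), dtype=np.float32)
    np.add.at(iwe, (yw[ok], xw[ok]), 1.0)
    return iwe.var()

# Synthetic events generated by a point moving at 10 px/s along x.
t = np.array([0.0, 0.1, 0.2])
x = np.array([0.0, 1.0, 2.0])
y = np.zeros(3)
# The correct candidate aligns all events onto one pixel and scores highest.
print(iwe_variance(t, x, y, flow=(10.0, 0.0), t_ref=0.0, height=5, width=5))
print(iwe_variance(t, x, y, flow=(0.0, 0.0), t_ref=0.0, height=5, width=5))
```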

Secrets of Event-Based Optical Flow

tub-rip/event_based_optical_flow 20 Jul 2022

Event cameras respond to scene dynamics and offer advantages for motion estimation.

ECSNet: Spatio-temporal Feature Learning for Event Camera

happychenpipi/ECSNet IEEE Transactions on Circuits and Systems for Video Technology 2022

To fully exploit their inherent sparsity while reconciling the spatio-temporal information, we introduce a compact event representation, namely 2D-1T event cloud sequence (2D-1T ECS).

Recurrent Vision Transformers for Object Detection with Event Cameras

uzh-rpg/rvt CVPR 2023

By revisiting the high-level design of recurrent vision backbones, we reduce inference time by a factor of 6 while retaining similar performance.

A Fast Geometric Regularizer to Mitigate Event Collapse in the Contrast Maximization Framework

tub-rip/event_collapse 14 Dec 2022

We hope our work opens the door for future applications that unlock the advantages of event cameras.

Masked Event Modeling: Self-Supervised Pretraining for Event Cameras

tum-vision/mem 20 Dec 2022

The models pretrained with MEM are also label-efficient and generalize well to the dense task of semantic image segmentation.

Adaptive Global Decay Process for Event Cameras

neuromorphic-paris/event_batch CVPR 2023

To achieve this, at least one of three main strategies is applied, namely: 1) constant temporal decay or fixed time window, 2) constant number of events, and 3) flow-based lifetime of events.
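
The first two of these strategies are simple enough to sketch directly; the snippet below is illustrative only (the helper names are invented here, it is not the paper's adaptive decay process, and the flow-based lifetime strategy is omitted).

```python
import numpy as np

def slice_by_time_window(t, window_us):
    """Strategy 1: keep only events within a fixed time window of the newest event."""
    return t >= (t[-1] - window_us)            # boolean mask over the (sorted) stream

def slice_by_count(t, n_events):
    """Strategy 2: keep only a constant number of the most recent events."""
    mask = np.zeros(t.shape, dtype=bool)
    mask[-n_events:] = True
    return mask

t = np.array([0, 100, 250, 900, 950, 1000])    # microsecond timestamps, sorted
print(slice_by_time_window(t, window_us=200))  # [False False False  True  True  True]
print(slice_by_count(t, n_events=2))           # [False False False False  True  True]
```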

Sneaky Spikes: Uncovering Stealthy Backdoor Attacks in Spiking Neural Networks with Neuromorphic Data

GorkaAbad/Sneaky-Spikes 13 Feb 2023

Deep neural networks (DNNs) have demonstrated remarkable performance across various tasks, including image and speech recognition.

Deep Learning for Event-based Vision: A Comprehensive Survey and Benchmarks

vlislab22/Deep-Learning-for-Event-based-Vision 17 Feb 2023

Event cameras are bio-inspired sensors that capture the per-pixel intensity changes asynchronously and produce event streams encoding the time, pixel position, and polarity (sign) of the intensity changes.
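
One common way to turn such a stream into a tensor that deep networks can process is a polarity-weighted voxel grid with a fixed number of temporal bins. The sketch below is an illustrative version (bin assignment and normalization details differ across papers), not reference code from the survey.

```python
import numpy as np

def events_to_voxel_grid(t, x, y, p, num_bins, height, width):
    """Bin polarity-weighted events into num_bins temporal slices."""
    grid = np.zeros((num_bins, height, width), dtype=np.float32)
    # Normalize timestamps to [0, num_bins - 1] and round to the nearest bin.
    tn = (t - t[0]) / max(t[-1] - t[0], 1) * (num_bins - 1)
    b = np.clip(np.round(tn).astype(int), 0, num_bins - 1)
    np.add.at(grid, (b, y, x), p.astype(np.float32))
    return grid

t = np.array([0, 50, 100])
x = np.array([0, 1, 2])
y = np.array([0, 0, 1])
p = np.array([1, -1, 1])
print(events_to_voxel_grid(t, x, y, p, num_bins=3, height=2, width=3).shape)  # (3, 2, 3)
```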

From Chaos Comes Order: Ordering Event Representations for Object Recognition and Detection

uzh-rpg/event_representation_study ICCV 2023

However, selecting the appropriate representation for the task traditionally requires training a neural network for each representation and selecting the best one based on the validation score, which is very time-consuming.