Event-based vision
44 papers with code • 1 benchmark • 9 datasets
An event camera, also known as a neuromorphic camera, silicon retina or dynamic vision sensor, is an imaging sensor that responds to local changes in brightness. Event cameras do not capture images using a shutter as conventional cameras do. Instead, each pixel inside an event camera operates independently and asynchronously, reporting changes in brightness as they occur and staying silent otherwise. Modern event cameras offer microsecond temporal resolution and a 120 dB dynamic range, and suffer far less from under/overexposure and motion blur than frame cameras.
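The per-pixel behavior described above is commonly modeled as follows: a pixel emits an event `(t, x, y, polarity)` whenever its log-brightness changes by more than a contrast threshold. The sketch below simulates this model from a stack of intensity frames; the function name, threshold value, and reset behavior are illustrative assumptions, not any specific sensor's implementation.

```python
import numpy as np

def generate_events(frames, timestamps, C=0.2, eps=1e-6):
    """Simulate an event stream from intensity frames.

    frames: (N, H, W) array of intensity images
    timestamps: (N,) array of frame times
    C: contrast threshold on log-brightness change (illustrative value)
    Returns a list of (t, x, y, polarity) tuples.
    """
    log_ref = np.log(frames[0] + eps)  # per-pixel reference log-brightness
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_i = np.log(frame + eps)
        diff = log_i - log_ref
        ys, xs = np.nonzero(np.abs(diff) >= C)  # pixels that crossed the threshold
        for y, x in zip(ys, xs):
            pol = 1 if diff[y, x] > 0 else -1
            events.append((t, x, y, pol))
            log_ref[y, x] = log_i[y, x]  # reset the reference after firing
    return events
```

Pixels whose brightness stays constant never appear in the output, which is why event streams are sparse and data rate scales with scene dynamics.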
Libraries
Use these libraries to find Event-based vision models and implementations
Most implemented papers
Event Collapse in Contrast Maximization Frameworks
Contrast maximization (CMax) is a framework that provides state-of-the-art results on several event-based computer vision tasks, such as ego-motion or optical flow estimation.
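The core idea of CMax can be sketched in a few lines: warp events along a candidate motion, accumulate them into an image of warped events (IWE), and search for the motion that maximizes the image's contrast (here, its variance). This is a minimal illustration assuming simple 2D translational flow and a grid search; the function names are made up for the example and real implementations use gradient-based optimization and richer warp models.

```python
import numpy as np

def iwe_variance(events, v, shape):
    """Warp events (t, x, y) to t=0 with velocity v and return the
    variance (contrast) of the resulting image of warped events."""
    t, x, y = events[:, 0], events[:, 1], events[:, 2]
    xw = np.round(x - v[0] * t).astype(int)
    yw = np.round(y - v[1] * t).astype(int)
    valid = (0 <= xw) & (xw < shape[1]) & (0 <= yw) & (yw < shape[0])
    iwe = np.zeros(shape)
    np.add.at(iwe, (yw[valid], xw[valid]), 1.0)  # accumulate event counts
    return iwe.var()

def estimate_flow(events, shape, v_range=np.linspace(-50, 50, 21)):
    """Grid-search the 2D velocity that maximizes IWE contrast."""
    return max(((vx, vy) for vx in v_range for vy in v_range),
               key=lambda v: iwe_variance(events, v, shape))
```

The correct motion aligns events generated by the same edge onto the same pixels, sharpening the IWE; wrong motions smear them out, lowering the contrast.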
Secrets of Event-Based Optical Flow
Event cameras respond to scene dynamics and offer advantages for motion estimation.
ECSNet: Spatio-Temporal Feature Learning for Event Camera
To fully exploit their inherent sparsity while reconciling the spatio-temporal information, we introduce a compact event representation, namely 2D-1T event cloud sequence (2D-1T ECS).
Recurrent Vision Transformers for Object Detection with Event Cameras
By revisiting the high-level design of recurrent vision backbones, we reduce inference time by a factor of 6 while retaining similar performance.
A Fast Geometric Regularizer to Mitigate Event Collapse in the Contrast Maximization Framework
We hope our work opens the door for future applications that unlock the advantages of event cameras.
Masked Event Modeling: Self-Supervised Pretraining for Event Cameras
The models pretrained with MEM are also label-efficient and generalize well to the dense task of semantic image segmentation.
Adaptive Global Decay Process for Event Cameras
To achieve this, at least one of three main strategies is applied, namely: 1) constant temporal decay or fixed time window, 2) constant number of events, and 3) flow-based lifetime of events.
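The first two of the strategies listed above are easy to contrast in code: slicing the stream by a fixed time window (constant duration, variable event count) versus slicing by a constant number of events (variable duration). This is a generic illustration with hypothetical helper names, not the paper's method; the flow-based lifetime strategy requires an optical-flow estimate and is omitted.

```python
import numpy as np

def slice_by_time(events, window):
    """Group events (sorted by timestamp, column 0) into fixed-duration windows."""
    t0 = events[0, 0]
    idx = ((events[:, 0] - t0) // window).astype(int)
    return [events[idx == i] for i in range(idx.max() + 1)]

def slice_by_count(events, n):
    """Group events into slices of a constant number of events."""
    return [events[i:i + n] for i in range(0, len(events), n)]
```

Time-based slicing keeps latency bounded but yields slices of wildly varying density, while count-based slicing normalizes density at the cost of unbounded slice duration during quiet periods; that trade-off is what motivates adaptive decay schemes.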
Sneaky Spikes: Uncovering Stealthy Backdoor Attacks in Spiking Neural Networks with Neuromorphic Data
Deep neural networks (DNNs) have demonstrated remarkable performance across various tasks, including image and speech recognition.
Deep Learning for Event-based Vision: A Comprehensive Survey and Benchmarks
Event cameras are bio-inspired sensors that capture per-pixel intensity changes asynchronously and produce event streams encoding the time, pixel position, and polarity (sign) of the intensity changes.
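Before such a stream can be fed to a standard deep network, it is usually converted into a dense tensor. One common choice is a voxel grid: events are binned into B temporal slices, with polarity accumulated as a signed count. The sketch below is a generic version of this idea with an illustrative function name, not the exact representation used by any one paper listed here.

```python
import numpy as np

def events_to_voxel_grid(events, shape, bins=5):
    """events: (N, 4) array of (t, x, y, polarity); returns a (bins, H, W) tensor."""
    t, x, y, p = events.T
    # Normalize timestamps to [0, bins) and assign each event a temporal slice.
    tn = (t - t.min()) / max(t.max() - t.min(), 1e-9) * (bins - 1e-9)
    b = tn.astype(int)
    grid = np.zeros((bins, *shape))
    np.add.at(grid, (b, y.astype(int), x.astype(int)), p)  # signed polarity counts
    return grid
```

The number of bins trades temporal resolution against tensor size, which is one reason representation choice matters for downstream accuracy.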
From Chaos Comes Order: Ordering Event Representations for Object Recognition and Detection
However, selecting the appropriate representation for the task traditionally requires training a neural network for each representation and selecting the best one based on the validation score, which is very time-consuming.