MDOE: A Spatiotemporal Event Representation Considering the Magnitude and Density of Events

Event-based sensors (e.g., DVS cameras) offer higher dynamic range, higher temporal resolution, lower latency, and better power efficiency than conventional devices (e.g., RGB cameras). However, learning from these sensors remains challenging: they output a stream of asynchronous events that cannot be consumed directly by state-of-the-art convolutional neural networks (CNNs). In this paper, we present a novel event-based representation called MDOE that considers both the magnitude and density of events. Compared to existing representations, which discard one or more of event polarity, temporal information, and event density, MDOE retains richer information about the events. It has two benefits: (i) it is a conceptually simple, generic representation that is task-independent; (ii) it achieves superior performance relative to existing representations on a variety of event-based datasets.
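The abstract does not spell out MDOE's exact formulation, but the general problem it addresses, turning an asynchronous event stream into a dense, frame-like tensor that a CNN can consume while keeping polarity, timing, and per-pixel event density, can be sketched. The snippet below is an illustrative, hypothetical event-to-tensor conversion under assumed conventions (field names `x, y, t, p`, a three-channel layout, and min-max normalization); it is not the paper's MDOE method.

```python
# Illustrative sketch (NOT the authors' MDOE formulation): accumulate an
# asynchronous event stream into a dense H x W x 3 tensor that keeps
# polarity, temporal information, and per-pixel event density.
import numpy as np

def events_to_tensor(events, height, width):
    """events: structured array with fields x, y, t (seconds), p (+1 / -1)."""
    tensor = np.zeros((height, width, 3), dtype=np.float32)
    if len(events) == 0:
        return tensor

    t0, t1 = events["t"].min(), events["t"].max()
    span = max(t1 - t0, 1e-9)  # avoid division by zero for a single timestamp

    for ev in events:
        x, y = int(ev["x"]), int(ev["y"])
        # Channel 0: signed polarity accumulation (a magnitude-like signal).
        tensor[y, x, 0] += float(ev["p"])
        # Channel 1: most recent normalized timestamp (temporal information).
        tensor[y, x, 1] = (ev["t"] - t0) / span
        # Channel 2: raw event count per pixel (density).
        tensor[y, x, 2] += 1.0

    # Normalize the density channel to [0, 1] for stable CNN input.
    max_count = tensor[..., 2].max()
    if max_count > 0:
        tensor[..., 2] /= max_count
    return tensor

# Toy usage example.
dtype = [("x", np.int32), ("y", np.int32), ("t", np.float64), ("p", np.int8)]
toy = np.array([(3, 5, 0.001, 1), (3, 5, 0.004, -1), (7, 2, 0.002, 1)], dtype=dtype)
frame = events_to_tensor(toy, height=10, width=10)
print(frame.shape)  # (10, 10, 3)
```

Any representation in this family trades off how much of the polarity, timing, and density information survives the conversion; the paper's claim is that MDOE keeps more of it than existing alternatives.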
