Feature Compression

21 papers with code • 0 benchmarks • 0 datasets

Compress data so that machines can still perform downstream tasks, rather than for human perception.
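
As a concrete illustration of the idea (a minimal sketch assuming a pretrained torchvision ResNet-50 and a hypothetical split point, not the method of any paper listed here): run the front of the network on the device, crudely quantize the intermediate feature to 8 bits as the transmitted payload, and finish inference on the server from the de-quantized feature.

```python
# Minimal feature-compression sketch (illustrative only): the "device" runs the
# front of a pretrained ResNet-50, quantizes the intermediate feature to 8 bits,
# and the "server" completes inference from the de-quantized feature.
import torch
from torchvision.models import resnet50, ResNet50_Weights

model = resnet50(weights=ResNet50_Weights.DEFAULT).eval()

def device_side(x):
    # Front of the network: stem + layer1 + layer2 (hypothetical split point).
    x = model.maxpool(model.relu(model.bn1(model.conv1(x))))
    return model.layer2(model.layer1(x))

def server_side(feat):
    # Back of the network: layer3 + layer4 + classifier head.
    feat = model.layer4(model.layer3(feat))
    feat = torch.flatten(model.avgpool(feat), 1)
    return model.fc(feat)

x = torch.randn(1, 3, 224, 224)            # stand-in for a real image
with torch.no_grad():
    feat = device_side(x)
    # Uniform 8-bit quantization of the feature (the "compressed" payload).
    lo, hi = feat.min(), feat.max()
    q = torch.round((feat - lo) / (hi - lo) * 255).to(torch.uint8)
    feat_hat = q.float() / 255 * (hi - lo) + lo
    logits = server_side(feat_hat)

print("payload bytes:", q.numel())          # 1 byte per feature element
print("top-1 class:", logits.argmax(1).item())
```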

Most implemented papers

Supervised Compression for Resource-Constrained Edge Computing Systems

yoshitomo-matsubara/supervised-compression 21 Aug 2021

There has been much interest in deploying deep learning algorithms on low-powered devices, including smartphones, drones, and medical sensors.

VIMI: Vehicle-Infrastructure Multi-view Intermediate Fusion for Camera-based 3D Object Detection

bosszhe/vimi 20 Mar 2023

In autonomous driving, Vehicle-Infrastructure Cooperative 3D Object Detection (VIC3D) makes use of multi-view cameras from both vehicles and traffic infrastructure, providing a global vantage point with rich semantic context of road conditions beyond a single vehicle viewpoint.

Flexible Variable-Rate Image Feature Compression for Edge-Cloud Systems

adnan-hossain/var-feat-comp 30 Mar 2024

By compressing different intermediate features of a pre-trained vision task model, the proposed method can scale the encoding complexity without changing the overall size of the model.
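The underlying trade-off can be sketched generically (this is not the paper's implementation; the ResNet-18 split points below are assumptions): taking the feature from an earlier block means fewer layers run on the device (lower encoding complexity) but a larger tensor to compress, while a deeper split does the opposite.

```python
# Illustrative sketch of how the choice of split point scales encoder
# complexity against the size of the feature that must be compressed.
import torch
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
x = torch.randn(1, 3, 224, 224)

splits = ["layer1", "layer2", "layer3", "layer4"]   # hypothetical split points
with torch.no_grad():
    feat = model.maxpool(model.relu(model.bn1(model.conv1(x))))
    for name in splits:
        feat = getattr(model, name)(feat)
        # Raw float32 feature size if transmitted at this split point.
        kib = feat.numel() * 4 / 1024
        print(f"split after {name}: feature {tuple(feat.shape)} ≈ {kib:.0f} KiB")
```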

Context-aware Deep Feature Compression for High-speed Visual Tracking

jongwon20000/TRACA CVPR 2018

We propose a new context-aware correlation filter based tracking framework to achieve both high computational speed and state-of-the-art performance among real-time trackers.

BottleNet++: An End-to-End Approach for Feature Compression in Device-Edge Co-Inference Systems

shaojiawei07/BottleNetPlusPlus 31 Oct 2019

By exploiting the strong sparsity and fault tolerance of intermediate features in a deep neural network (DNN), BottleNet++ achieves a much higher compression ratio than existing methods.
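
A back-of-the-envelope sketch (generic, not the BottleNet++ codec) of why post-ReLU intermediate features are so compressible: using a synthetic feature whose sparsity is in the range typical of trained networks, even a naive non-zero-values-plus-indices code beats storing the dense float32 tensor.

```python
# Generic illustration: post-ReLU features contain many exact zeros, so a
# naive sparse code (values + indices of non-zeros) already shrinks them.
import torch

# Synthetic stand-in for an intermediate feature; shifting before ReLU gives
# roughly 84% zeros here, mimicking the heavy sparsity of trained activations.
feat = torch.relu(torch.randn(1, 256, 28, 28) - 1.0)
nz = feat != 0
sparsity = 1 - nz.float().mean().item()

dense_bytes = feat.numel() * 4                   # dense float32 tensor
sparse_bytes = nz.sum().item() * (4 + 4)         # float32 value + int32 index

print(f"sparsity: {sparsity:.1%}")
print(f"dense: {dense_bytes/1024:.0f} KiB, naive sparse: {sparse_bytes/1024:.0f} KiB")
```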

Lossy Compression for Lossless Prediction

YannDubs/lossyless NeurIPS 2021

Most data is automatically collected and only ever "seen" by algorithms.

Context-Aware Compilation of DNN Training Pipelines across Edge and Cloud

dixiyao/Context-Aware-Compilation-of-DNN-Training-Pipelines-across-Edge-and-Cloud Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 2021

Experimental results show that our system not only adapts well to, but also draws on, the varying contexts, delivering a practical and efficient solution to edge-cloud model training.

SC2 Benchmark: Supervised Compression for Split Computing

yoshitomo-matsubara/sc2-benchmark 16 Mar 2022

With the increasing demand for deep learning models on mobile devices, splitting neural network computation between the device and a more powerful edge server has become an attractive solution.

Multi-Agent Collaborative Inference via DNN Decoupling: Intermediate Feature Compression and Edge Learning

Hao840/MAHPPO 24 May 2022

In this paper, we study the multi-agent collaborative inference scenario, where a single edge server coordinates the inference of multiple user devices (UEs).

Compressing Features for Learning with Noisy Labels

yingyichen-cyy/Nested-Co-teaching 27 Jun 2022

This decomposition provides three insights: (i) it shows that over-fitting is indeed an issue for learning with noisy labels; (ii) through an information bottleneck formulation, it explains why the proposed feature compression helps in combating label noise; (iii) it gives explanations on the performance boost brought by incorporating compression regularization into Co-teaching.
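
As a generic illustration of compression regularization (a hypothetical bottleneck-plus-penalty sketch, not the Nested-Co-teaching implementation from the repository), one can restrict the feature representation and penalize it alongside the classification loss so that the network has less capacity to memorize noisy labels.

```python
# Generic compression-regularization sketch: a narrow feature bottleneck plus
# an L1 penalty on the features, added to the usual cross-entropy loss.
import torch
import torch.nn as nn

class BottleneckClassifier(nn.Module):
    def __init__(self, in_dim=512, bottleneck=16, num_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, bottleneck), nn.ReLU())
        self.head = nn.Linear(bottleneck, num_classes)

    def forward(self, x):
        z = self.encoder(x)            # compressed feature
        return self.head(z), z

model = BottleneckClassifier()
x, y = torch.randn(32, 512), torch.randint(0, 10, (32,))
logits, z = model(x)
# Cross-entropy plus an L1 feature penalty as a crude stand-in for an
# information-bottleneck-style compression term.
loss = nn.functional.cross_entropy(logits, y) + 1e-3 * z.abs().mean()
loss.backward()
print(f"loss: {loss.item():.3f}")
```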