Feature Upsampling
9 papers with code • 1 benchmark • 1 dataset
Deep features are a cornerstone of computer vision research, capturing image semantics and enabling the community to solve downstream tasks even in the zero- or few-shot regime. However, these features often lack the spatial resolution needed to directly perform dense prediction tasks such as segmentation and depth estimation, because models aggressively pool information over large areas. Feature upsampling aims to recover this lost spatial resolution while preserving the semantics of the original deep features.
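To make the resolution gap concrete, here is a minimal sketch (in PyTorch, with illustrative tensor sizes typical of a ResNet-50 at stride 32) of the simplest feature upsampler: bilinear interpolation. It restores the spatial grid but not the lost detail, which is the gap the learned methods below try to close.

```python
import torch
import torch.nn.functional as F

# Toy stand-in for a backbone's output: a 2048-channel feature map
# pooled down to 7x7 from a 224x224 input (illustrative sizes).
features = torch.randn(1, 2048, 7, 7)

# Baseline feature upsampling: bilinear interpolation back to input
# resolution. The spatial grid is recovered, but high-frequency
# detail discarded by pooling is not.
upsampled = F.interpolate(
    features, size=(224, 224), mode="bilinear", align_corners=False
)

print(upsampled.shape)  # torch.Size([1, 2048, 224, 224])
```

Learned upsamplers such as CARAFE, SAPA, and FeatUp replace this fixed interpolation kernel with content-dependent ones.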
Most implemented papers
Deep Image Prior
In this paper, we show that, on the contrary, the structure of a generator network is sufficient to capture a great deal of low-level image statistics prior to any learning.
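The core idea can be sketched in a few lines: a randomly initialized ConvNet is fit, from a fixed noise input, to a single target image, with no training data involved. The toy network and sizes below are illustrative stand-ins, not the paper's hourglass architecture.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
target = torch.rand(1, 3, 32, 32)   # stand-in for a (corrupted) image
z = torch.randn(1, 8, 32, 32)       # fixed noise input

# Hypothetical small ConvNet; the structure itself acts as the prior.
net = nn.Sequential(
    nn.Conv2d(8, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

with torch.no_grad():
    init_loss = ((net(z) - target) ** 2).mean().item()

# Fit the network's output to the single target image.
for step in range(200):
    opt.zero_grad()
    loss = ((net(z) - target) ** 2).mean()
    loss.backward()
    opt.step()

final_loss = ((net(z) - target) ** 2).mean().item()
print(init_loss, final_loss)  # the reconstruction loss decreases
```

In the paper, early stopping of this fit is what denoises or inpaints: the network reproduces natural image structure before it reproduces noise.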
CARAFE: Content-Aware ReAssembly of FEatures
CARAFE introduces little computational overhead and can be readily integrated into modern network architectures.
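A minimal sketch of the content-aware reassembly idea, under simplifying assumptions: a single conv stands in for the paper's full kernel-prediction module, predicting one softmax-normalized k×k kernel per upsampled location, which is then applied to the corresponding input neighborhood.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CARAFESketch(nn.Module):
    """Simplified CARAFE-style upsampler (illustrative, not the paper's
    exact module): predict per-pixel reassembly kernels, then take a
    weighted sum over each source neighborhood."""

    def __init__(self, channels, scale=2, k=5):
        super().__init__()
        self.scale, self.k = scale, k
        # One conv predicts scale^2 * k^2 kernel weights per location.
        self.kernel_pred = nn.Conv2d(channels, scale**2 * k**2, 3, padding=1)

    def forward(self, x):
        b, c, h, w = x.shape
        s, k = self.scale, self.k
        # Pixel-shuffle to one k*k kernel per output pixel, normalized.
        kernels = F.pixel_shuffle(self.kernel_pred(x), s)  # (b, k*k, s*h, s*w)
        kernels = F.softmax(kernels, dim=1)
        # Gather k x k input neighborhoods, then nearest-upsample so each
        # output pixel indexes the neighborhood of its source pixel.
        patches = F.unfold(x, k, padding=k // 2).view(b, c, k * k, h, w)
        patches = F.interpolate(
            patches.view(b, c * k * k, h, w), scale_factor=s, mode="nearest"
        ).view(b, c, k * k, s * h, s * w)
        # Reassembly: content-aware weighted sum over each neighborhood.
        return (patches * kernels.unsqueeze(1)).sum(dim=2)

feats = torch.randn(1, 16, 8, 8)
out = CARAFESketch(16)(feats)
print(out.shape)  # torch.Size([1, 16, 16, 16])
```

Because the kernels come from a single conv over the features, the overhead stays small, which is the property the excerpt above highlights.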
SAPA: Similarity-Aware Point Affiliation for Feature Upsampling
We introduce point affiliation into feature upsampling, a notion that describes the affiliation of each upsampled point to a semantic cluster formed by local decoder feature points with semantic similarity.
On Point Affiliation in Feature Upsampling
We introduce the notion of point affiliation into feature upsampling.
A Unified Multi-scale Deep Convolutional Neural Network for Fast Object Detection
A unified deep neural network, denoted the multi-scale CNN (MS-CNN), is proposed for fast multi-scale object detection.
Joint Denoising and Demosaicking with Green Channel Prior for Real-world Burst Images
Considering the fact that the green channel has twice the sampling rate and better quality than the red and blue channels in CFA raw data, we propose to use this green channel prior (GCP) to build a GCP-Net for the JDD-B task.
Deep ViT Features as Dense Visual Descriptors
To distill the power of ViT features from convoluted design choices, we restrict ourselves to lightweight zero-shot methodologies (e.g., binning and clustering) applied directly to the features.
Local and Global GANs with Semantic-Aware Upsampling for Image Generation
To learn more discriminative class-specific feature representations for the local generation, we also propose a novel classification module.
FeatUp: A Model-Agnostic Framework for Features at Any Resolution
We introduce FeatUp, a task- and model-agnostic framework to restore lost spatial information in deep features.