Generalized Mean Pooling

Generalized Mean Pooling (GeM) computes the generalized mean of each channel in a tensor. Formally:

$$ \textbf{e} = \left[\left(\frac{1}{|\Omega|}\sum_{u\in{\Omega}}x^{p}_{cu}\right)^{\frac{1}{p}}\right]_{c=1,\cdots,C} $$

where $p > 0$ is a parameter. Setting the exponent $p > 1$ increases the contrast of the pooled feature map and focuses on the salient features of the image. GeM is a generalization of the average pooling commonly used in classification networks ($p = 1$) and of the spatial max-pooling layer ($p \to \infty$).
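The formula can be implemented directly as a pooling layer. Below is a minimal sketch in PyTorch; the class name `GeM`, the default `p = 3`, the clamp epsilon, and the learnable-parameter option are illustrative assumptions, not details taken from the source.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeM(nn.Module):
    """Generalized Mean (GeM) pooling over the spatial grid of a (N, C, H, W) feature map."""
    def __init__(self, p: float = 3.0, eps: float = 1e-6, learnable: bool = True):
        super().__init__()
        # p > 1 sharpens pooling towards the maximum; p = 1 recovers average pooling.
        self.p = nn.Parameter(torch.ones(1) * p) if learnable else torch.tensor(p)
        self.eps = eps  # clamp keeps x^p well defined for (near-)zero activations

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Per-channel generalized mean over the spatial positions u in Omega.
        x = x.clamp(min=self.eps).pow(self.p)
        x = F.avg_pool2d(x, kernel_size=(x.size(-2), x.size(-1)))  # mean over Omega
        return x.pow(1.0 / self.p).flatten(1)  # (N, C) embedding e

# Usage: pool a CNN feature map into a global image descriptor.
features = torch.relu(torch.randn(2, 512, 7, 7))  # dummy non-negative activations
embedding = GeM(p=3.0)(features)                  # shape (2, 512)
```

Making `p` a learnable parameter lets the network interpolate between average- and max-like pooling during training.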

Source: MultiGrain


Latest Papers

PAPER | AUTHORS | DATE
Unifying Deep Local and Global Features for Image Search | Bingyi Cao, Andre Araujo, Jack Sim | 2020-01-14
MultiGrain: a unified image embedding for classes and instances | Maxim Berman, Hervé Jégou, Andrea Vedaldi, Iasonas Kokkinos, Matthijs Douze | 2019-02-14

Tasks

TASK | PAPERS | SHARE
Image Retrieval | 2 | 50.00%
Dimensionality Reduction | 1 | 25.00%
Image Classification | 1 | 25.00%

