Adaptive Dropout

Introduced by Ba et al. in Adaptive dropout for training deep neural networks

Adaptive Dropout is a regularization technique that extends dropout by allowing the dropout probability to differ between units. The intuition is that some hidden units can individually make confident predictions about the presence or absence of an important feature or combination of features. Standard dropout ignores this confidence and drops the unit out 50% of the time.

Denote the activity of unit $j$ in a deep neural network by $a_{j}$ and assume that its inputs are $\{a_{i}: i < j\}$. In dropout, $a_{j}$ is randomly set to zero with probability 0.5. Let $m_{j}$ be a binary variable used to mask the activity $a_{j}$, so that its value is:

$$ a_{j} = m_{j}g \left( \sum_{i: i<j}w_{j, i}a_{i} \right)$$

where $w_{j,i}$ is the weight from unit $i$ to unit $j$, $g\left(\cdot\right)$ is the activation function, and $a_{0} = 1$ accounts for biases. Whereas in standard dropout $m_{j}$ is Bernoulli with probability $0.5$, adaptive dropout uses a dropout probability that depends on the input activities:

$$ P\left(m_{j} = 1 \mid \{a_{i}: i < j\}\right) = f \left( \sum_{i: i<j}\pi_{j, i}a_{i} \right) $$

where $\pi_{j, i}$ is the weight from unit $i$ to unit $j$ in the standout network, or adaptive dropout network, and $f(\cdot)$ is a sigmoidal function. Here 'standout' refers to a binary belief network that is overlaid on a neural network as part of the overall regularization technique.
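As a rough sketch (not the authors' code), the forward pass of one layer with standout-style adaptive dropout can be written with NumPy. The standout network computes a per-unit keep probability from the same inputs via its own weights `Pi`; at training time a mask is sampled from that probability, and at test time the expected mask is used instead. The function and variable names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def adaptive_dropout_layer(a_in, W, Pi, g=np.tanh, train=True):
    """One layer with standout-style adaptive dropout (illustrative sketch).

    a_in : input activities a_i, shape (batch, n_in)
    W    : main-network weights w_{j,i}, shape (n_in, n_out)
    Pi   : standout-network weights pi_{j,i}, shape (n_in, n_out)
    """
    pre = a_in @ W                    # sum_i w_{j,i} a_i
    keep_prob = sigmoid(a_in @ Pi)    # P(m_j = 1 | a_i) = f(sum_i pi_{j,i} a_i)
    if train:
        # sample the binary mask m_j from the adaptive keep probability
        m = (rng.random(keep_prob.shape) < keep_prob).astype(pre.dtype)
        return m * g(pre)
    # at test time, scale by the expected mask instead of sampling
    return keep_prob * g(pre)

# usage: a small batch through one hidden layer
x = rng.standard_normal((4, 8))
W = rng.standard_normal((8, 16)) * 0.1
Pi = rng.standard_normal((8, 16)) * 0.1
h = adaptive_dropout_layer(x, W, Pi, train=True)
```

In the paper the standout weights need not be learned independently; one effective variant ties them to the main weights via an affine transformation, but the fully separate `Pi` above keeps the correspondence to the equations explicit.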

Source: Adaptive dropout for training deep neural networks

Latest Papers

Adaptive Low-Rank Factorization to regularize shallow and deep neural networks
Mohammad Mahdi Bejani, Mehdi Ghatee
Improved Dropout for Shallow and Deep Learning
Zhe Li, Boqing Gong, Tianbao Yang
Adaptive dropout for training deep neural networks
Jimmy Ba, Brendan Frey
