Adaptive Dropout is a regularization technique that extends dropout by allowing the dropout probability to differ across units. The intuition is that some hidden units can individually make confident predictions about the presence or absence of an important feature or combination of features. Standard dropout ignores this confidence and drops the unit out 50% of the time.
Denote the activity of unit $j$ in a deep neural network by $a_{j}$ and assume that its inputs are $\{a_{i}: i < j\}$. In dropout, $a_{j}$ is randomly set to zero with probability 0.5. Let $m_{j}$ be a binary variable used to mask the activity $a_{j}$, so that:
$$ a_{j} = m_{j}g \left( \sum_{i: i<j}w_{j, i}a_{i} \right)$$
where $w_{j,i}$ is the weight from unit $i$ to unit $j$, $g\left(\cdot\right)$ is the activation function, and $a_{0} = 1$ accounts for biases. Whereas in standard dropout $m_{j}$ is Bernoulli with probability $0.5$, adaptive dropout uses dropout probabilities that depend on the input activities:
$$ P\left(m_{j} = 1 \mid \{a_{i}: i < j\}\right) = f\left( \sum_{i: i<j}\pi_{j, i}a_{i} \right) $$
where $\pi_{j, i}$ is the weight from unit $i$ to unit $j$ in the standout network (the adaptive dropout network), and $f(\cdot)$ is a sigmoidal function. Here 'standout' refers to a binary belief network that is overlaid on the neural network as part of the overall regularization technique.
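The per-layer computation follows directly from the two equations above. Below is a minimal NumPy sketch of one standout layer; the choice of tanh for $g(\cdot)$, the logistic sigmoid for $f(\cdot)$, and the test-time rescaling by the expected mask are illustrative assumptions, not details fixed by the definition above.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def standout_layer(a_in, W, Pi, rng, train=True):
    """One layer with adaptive dropout (standout).

    a_in : (batch, n_in) input activities; the caller appends a column of
           ones so that the last row of W plays the role of a_0 = 1 (bias).
    W    : (n_in, n_out) main-network weights w_{j,i}.
    Pi   : (n_in, n_out) standout-network weights pi_{j,i}.
    """
    a = np.tanh(a_in @ W)            # g(sum_i w_{j,i} a_i), with g = tanh (assumption)
    keep_prob = sigmoid(a_in @ Pi)   # P(m_j = 1 | {a_i : i < j}) = f(sum_i pi_{j,i} a_i)
    if train:
        # Sample the binary mask m_j and apply it: a_j = m_j * g(...)
        m = (rng.random(keep_prob.shape) < keep_prob).astype(a.dtype)
        return m * a
    # At test time, replace the sampled mask with its expectation,
    # analogous to the usual dropout rescaling.
    return keep_prob * a

# Usage sketch on random data (all shapes here are hypothetical):
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
x = np.hstack([x, np.ones((4, 1))])   # append the bias unit a_0 = 1
W = 0.1 * rng.standard_normal((9, 16))
Pi = 0.1 * rng.standard_normal((9, 16))
h = standout_layer(x, W, Pi, rng)     # (4, 16) masked activities
```

In practice the standout weights need not be learned separately: the original paper also considers tying them to the main network, e.g. setting $\pi_{j,i}$ to a scaled and shifted copy of $w_{j,i}$, which avoids doubling the parameter count.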
Papers

- Adaptive Low-Rank Factorization to regularize shallow and deep neural networks (2020-05-05)
- Improved Dropout for Shallow and Deep Learning (2016-02-06)
- Adaptive dropout for training deep neural networks (2013-12-01)
Tasks

- Denoising: 1 paper (100.00% share)