Activation Functions

Randomized Leaky Rectified Linear Units

Introduced by Xu et al. in Empirical Evaluation of Rectified Activations in Convolutional Network

Randomized Leaky Rectified Linear Units, or RReLU, is an activation function that randomly samples the slope applied to negative activation values. It was first proposed and used in the Kaggle NDSB Competition. During training, $a_{ji}$ is a random number sampled from a uniform distribution $U\left(l, u\right)$. Formally:

$$ y_{ji} = \begin{cases} x_{ji} & \text{if } x_{ji} \geq 0 \\ a_{ji}x_{ji} & \text{if } x_{ji} < 0 \end{cases} $$

where

$$a_{ji} \sim U\left(l, u\right), \quad l < u \text{ and } l, u \in \left[0,1\right)$$
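For concreteness, here is a minimal NumPy sketch of this multiplicative form of the training-time forward pass; the function name and the default range $[1/8, 1/3]$ are illustrative choices, not prescribed by the paper:

```python
import numpy as np

def rrelu_train(x, l=1/8, u=1/3, rng=None):
    """Training-time RReLU: each negative activation x_ji is scaled by its
    own slope a_ji drawn from the uniform distribution U(l, u)."""
    rng = np.random.default_rng() if rng is None else rng
    a = rng.uniform(l, u, size=x.shape)  # a_ji ~ U(l, u), one sample per element
    return np.where(x >= 0, x, a * x)    # y_ji = x_ji if x_ji >= 0 else a_ji * x_ji

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(rrelu_train(x))  # negative entries scaled by random slopes in [l, u)
```

PyTorch ships this multiplicative form as `torch.nn.RReLU`, whose default range is `lower=1/8`, `upper=1/3`.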

In the test phase, we take the average of all the $a_{ji}$ used during training, similar to dropout, and thus set $a_{ji}$ to $\frac{l+u}{2}$ to get a deterministic result. As suggested by the NDSB competition winner, $a_{ji}$ is sampled from $U\left(3, 8\right)$; in that parametrization the sampled value divides the negative input (the effective slope is $\frac{1}{a_{ji}}$), which is why the test-time rule below divides by $\frac{l+u}{2}$.

At test time, we use:

$$ y_{ji} = \frac{x_{ji}}{\frac{l+u}{2}} $$
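Under the NDSB parametrization quoted above, where the sampled value acts as a divisor of the negative input rather than as the slope itself, a matching sketch (names and defaults are assumptions for illustration) would be:

```python
import numpy as np

def rrelu_ndsb_train(x, l=3.0, u=8.0, rng=None):
    """NDSB-style RReLU: negative inputs are divided by a_ji ~ U(l, u),
    so the effective slope is 1 / a_ji."""
    rng = np.random.default_rng() if rng is None else rng
    a = rng.uniform(l, u, size=x.shape)
    return np.where(x >= 0, x, x / a)

def rrelu_ndsb_eval(x, l=3.0, u=8.0):
    """Deterministic test-time version: a_ji is fixed to its mean, giving
    y_ji = x_ji / ((l + u) / 2) for negative inputs."""
    return np.where(x >= 0, x, x / ((l + u) / 2))
```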

Source: Empirical Evaluation of Rectified Activations in Convolutional Network

Tasks

| Task | Papers | Share |
| --- | --- | --- |
| Micro-Expression Recognition | 1 | 33.33% |
| General Classification | 1 | 33.33% |
| Image Classification | 1 | 33.33% |
