Swish is an activation function, $f(x) = x \cdot \text{sigmoid}(\beta x)$, where $\beta$ is a learnable parameter. Nearly all implementations omit the learnable parameter $\beta$, fixing it at $\beta = 1$, in which case the activation function reduces to $x\sigma(x)$ ("Swish-1").
The function $x\sigma(x)$ is exactly the SiLU (Sigmoid Linear Unit), which was introduced by other authors before Swish. The SiLU was originally coined in Gaussian Error Linear Units (GELUs); the same activation function was later experimented with in Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning and in Swish: a Self-Gated Activation Function.
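The definition above translates directly into code. A minimal NumPy sketch (the function names are illustrative, not from any particular library):

```python
import numpy as np

def swish(x, beta=1.0):
    """Swish activation: x * sigmoid(beta * x).

    With the default beta = 1 this is Swish-1, identical to the SiLU,
    x * sigmoid(x). With beta = 0 it reduces to the linear function x / 2,
    and as beta grows large it approaches ReLU.
    """
    return x * (1.0 / (1.0 + np.exp(-beta * x)))

# Swish-1 / SiLU at a few sample points
print(swish(np.array([-1.0, 0.0, 1.0])))
```

Unlike ReLU, Swish is smooth and non-monotonic: it dips slightly below zero for negative inputs before approaching zero.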
Source: Searching for Activation Functions
| Task | Papers | Share |
| --- | --- | --- |
| Image Classification | 74 | 13.86% |
| Object Detection | 34 | 6.37% |
| General Classification | 27 | 5.06% |
| Classification | 25 | 4.68% |
| Semantic Segmentation | 24 | 4.49% |
| Instance Segmentation | 11 | 2.06% |
| Multi-Task Learning | 9 | 1.69% |
| Quantization | 7 | 1.31% |
| COVID-19 Diagnosis | 6 | 1.12% |