Activation Functions

General • 72 methods

Activation functions are non-linear functions applied in neural networks, typically after an affine transformation that combines weights and input features. The rectified linear unit (ReLU) has been the most popular choice over the past decade, although the best choice is architecture-dependent and many alternatives have emerged in recent years. This section contains a continually updated list of activation functions.
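The pipeline described above, an affine transformation followed by a non-linear activation, can be sketched in NumPy. The weights, bias, and input below are arbitrary illustrative values, and the activation shown is ReLU:

```python
import numpy as np

# Arbitrary illustrative weights, bias, and input features.
W = np.array([[0.5, -1.0],
              [2.0,  0.25]])
b = np.array([0.1, -0.2])
x = np.array([1.0, 2.0])

# Affine transformation: combine weights and input features, add bias.
z = W @ x + b           # pre-activations: [-1.4, 2.3]

# ReLU keeps positive pre-activations and zeroes out negative ones.
a = np.maximum(z, 0.0)  # activations: [0.0, 2.3]
```

Without the non-linearity, stacking several such layers would collapse into a single affine map, which is why activation functions are essential for deep networks.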
