Activation
This node applies an activation function \(y = f(x)\) to the provided input value. Activation functions are non-linear, which is what allows a neural network to model non-linear relationships. They can also be used before an output to limit or clamp the values. For example, Softmax is often used in classifiers because it makes the output represent probabilities.
Supported input data types are Float16, Float32 and Float64, of any shape. The output data type and shape will be the same as the input.
The available activation functions are:
ELU
Exponential linear unit, or ELU, is defined as:
\(f(x) = \begin{cases} \begin{aligned} & x \quad & \text{if} \quad x > 0 \\ & \alpha (e^x - 1) \quad & \text{if} \quad x \le 0 \end{aligned} \end{cases}\)

where \(\alpha\) is a number that you can choose by clicking the node and configuring it on the right panel.
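A minimal NumPy sketch of the same formula, for illustration only (the function name and the default \(\alpha = 1.0\) are assumptions, not settings taken from the node):

```python
import numpy as np

def elu(x, alpha=1.0):
    # x where x > 0, alpha * (exp(x) - 1) otherwise.
    # exp() is evaluated on min(x, 0) to avoid overflow for large positive x.
    return np.where(x > 0, x, alpha * (np.exp(np.minimum(x, 0.0)) - 1.0))
```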
Sigmoid
Outputs values in the range 0 to 1, with a smooth S-curve in between.
\(f(x) = \cfrac{1}{1 + e^{-x}}\)
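A minimal NumPy sketch of the sigmoid formula, for illustration only (the function name is an assumption):

```python
import numpy as np

def sigmoid(x):
    # 1 / (1 + exp(-x)), applied elementwise.
    return 1.0 / (1.0 + np.exp(-x))
```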
ReLU
Rectified linear unit, or ReLU, sets negative values to zero and passes positive values through unchanged. It is defined as:
\(f(x) = \begin{cases} \begin{aligned} & x \quad & \text{if} \quad x > 0 \\ & 0 \quad & \text{if} \quad x \le 0 \end{aligned} \end{cases}\)
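A minimal NumPy sketch of ReLU, for illustration only (the function name is an assumption):

```python
import numpy as np

def relu(x):
    # Elementwise maximum of x and 0: negative values become 0.
    return np.maximum(x, 0.0)
```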
Tanh
Outputs values in the range -1 to 1, with a smooth S-curve in between.
\(f(x) = \tanh (x) = \cfrac{e^x - e^{-x}}{e^x + e^{-x}}\)
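A minimal NumPy sketch of Tanh, for illustration only; note the identity \(\tanh(x) = 2 \cdot \text{sigmoid}(2x) - 1\), which relates it to the Sigmoid above:

```python
import numpy as np

def tanh(x):
    # NumPy implements tanh directly; equivalently, 2 * sigmoid(2x) - 1.
    return np.tanh(x)
```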
Softmax
The Softmax activation function outputs elements in the range 0 to 1, and ensures that the sum of all output elements is 1. This makes it possible to interpret the output as probabilities. Larger input values correspond to larger output probabilities, and smaller or more negative input values correspond to smaller output probabilities.
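Unlike the other activations, Softmax is computed over all elements of the input rather than elementwise. For reference, the standard definition for an input vector \(x\) with elements \(x_i\) is:

\(f(x)_i = \cfrac{e^{x_i}}{\sum_j e^{x_j}}\)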
LeakyReLU
Similar to ReLU, but with a (usually small) slope in the negative region. It is defined as:
\(f(x) = \begin{cases} \begin{aligned} & x \quad & \text{if} \quad x > 0 \\ & \alpha x \quad & \text{if} \quad x \le 0 \end{aligned} \end{cases}\)

where \(\alpha\) is a number that you can choose by clicking the node and configuring it on the right panel.
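A minimal NumPy sketch of LeakyReLU, for illustration only (the function name and the default \(\alpha = 0.01\) are assumptions, not settings taken from the node):

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    # x where x > 0, alpha * x otherwise.
    return np.where(x > 0, x, alpha * x)
```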