### ReLU Activation Function [with Python code]

The rectified linear activation function (ReLU) is a piecewise linear function that outputs the input directly if it is positive (say x), and outputs zero otherwise.

The mathematical representation of the ReLU function is f(x) = max(0, x).

The coding logic for the ReLU function is simple:

```python
if input_value > 0:
    return input_value
else:
    return 0
```

A simple Python function to mimic the ReLU function is as follows:

```python
def ReLU(x):
    data = [max(0, value) for value in x]
    return np.array(data, dtype=float)
```
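As a side note, the list-comprehension version above can also be written in fully vectorized form with NumPy's `np.maximum`, which applies `max(0, ·)` element-wise without an explicit Python loop. The name `ReLU_vec` below is my own; this is just an equivalent sketch, not part of the original code:

```python
import numpy as np

def ReLU_vec(x):
    # np.maximum broadcasts 0 against every element of x,
    # so this computes max(0, value) for the whole array at once
    return np.maximum(0, x).astype(float)

print(ReLU_vec(np.array([-2.0, -1.0, 0.0, 1.0, 2.0])))  # [0. 0. 0. 1. 2.]
```

For large arrays the vectorized form is considerably faster, since the loop runs in compiled NumPy code rather than in Python.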

The derivative of ReLU is f'(x) = 1 for x > 0 and f'(x) = 0 for x < 0 (at x = 0 the derivative is undefined; in practice it is conventionally taken as 0). A simple Python function to mimic the derivative of the ReLU function is as follows:

```python
def der_ReLU(x):
    data = [1 if value > 0 else 0 for value in x]
    return np.array(data, dtype=float)
```
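The derivative can likewise be vectorized: a boolean comparison produces the 0/1 pattern directly. Again, `der_ReLU_vec` is my own name for this equivalent sketch:

```python
import numpy as np

def der_ReLU_vec(x):
    # (x > 0) gives a boolean array; casting to float yields 1.0 where
    # x is positive and 0.0 elsewhere (including at x = 0, by convention)
    return (x > 0).astype(float)

print(der_ReLU_vec(np.array([-2.0, 0.0, 3.0])))  # [0. 0. 1.]
```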

ReLU is used extensively nowadays, but it has some problems. If a neuron's input is less than 0, the output is zero and the gradient is zero as well, so that neuron receives no updates during backpropagation and can get stuck. This problem is commonly known as the Dying ReLU problem. To get rid of it, we use an improvised version of ReLU called Leaky ReLU.
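To make the fix concrete, here is a minimal sketch of Leaky ReLU (the function names and the default slope `alpha=0.01` are my own choices; other small values of alpha are also common). Instead of clamping negative inputs to zero, it scales them by a small slope, so the gradient for negative inputs is alpha rather than zero:

```python
import numpy as np

def Leaky_ReLU(x, alpha=0.01):
    # Pass positive values through unchanged; scale negative values by alpha
    return np.where(x > 0, x, alpha * x)

def der_Leaky_ReLU(x, alpha=0.01):
    # Gradient is 1 for positive inputs and alpha (not 0) for negative ones,
    # which is what keeps the neuron from "dying"
    return np.where(x > 0, 1.0, alpha)

print(Leaky_ReLU(np.array([-10.0, 5.0])))      # [-0.1  5. ]
print(der_Leaky_ReLU(np.array([-10.0, 5.0])))  # [0.01 1.  ]
```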

### Python Code

```python
import numpy as np
import matplotlib.pyplot as plt

# Rectified Linear Unit (ReLU)
def ReLU(x):
    data = [max(0, value) for value in x]
    return np.array(data, dtype=float)

# Derivative of ReLU
def der_ReLU(x):
    data = [1 if value > 0 else 0 for value in x]
    return np.array(data, dtype=float)

# Generating data for the graph
x_data = np.linspace(-10, 10, 100)
y_data = ReLU(x_data)
dy_data = der_ReLU(x_data)

# Graph
plt.plot(x_data, y_data, x_data, dy_data)
plt.title('ReLU Activation Function & Derivative')
plt.legend(['ReLU', 'der_ReLU'])
plt.grid()
plt.show()
```