Keras is an open-source neural network library written in Python that runs on top of Theano, CNTK, or TensorFlow. It is designed to be fast and easy to use. It was developed by Francois Chollet, a Google engineer. Keras does not handle low-level computation itself; instead, it delegates that work to a backend library.
It is a high-level Application Program Interface (API) wrapper for the low-level APIs, capable of running on top of TensorFlow, Theano, or CNTK (Microsoft Cognitive Toolkit). The Keras high-level API handles the way we build models, define layers, and set up multiple input-output models.
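As a quick illustration of that high-level style, here is a minimal sketch of defining and compiling a small model with the Keras API (the layer sizes, input shape and loss below are arbitrary choices for illustration, not prescribed by Keras):
import tensorflow as tf

# Minimal Sequential model; the sizes and input shape are illustrative assumptions
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(20,)),
    tf.keras.layers.Dense(1, activation='sigmoid')
])

# Keras hands the actual numerical work to the backend (TensorFlow here)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.summary()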
Keras Activation Functions
- relu function
- sigmoid function
- softmax function
- softplus function
- softsign function
- tanh function
- selu function
- elu function
- exponential function
Relu function
Relu stands for Rectified Linear Activation function (Rectified Linear Unit). It is the most common choice of activation function: it provides state-of-the-art results while being computationally very efficient.
Syntax
tf.keras.activations.relu(x, alpha=0.0, max_value=None, threshold=0.0)
The pseudo code for the Relu function is as follows:
if input > 0:
    return input
else:
    return 0
Implementing Relu function in Python
def relu(x):
    return max(0.0, x)

x = 1.0
print('Applying Relu on (%.1f) gives %.1f' % (x, relu(x)))
x = -10.0
print('Applying Relu on (%.1f) gives %.1f' % (x, relu(x)))
x = 0.0
print('Applying Relu on (%.1f) gives %.1f' % (x, relu(x)))
x = 15.0
print('Applying Relu on (%.1f) gives %.1f' % (x, relu(x)))
x = -20.0
print('Applying Relu on (%.1f) gives %.1f' % (x, relu(x)))
Output
Applying Relu on (1.0) gives 1.0
Applying Relu on (-10.0) gives 0.0
Applying Relu on (0.0) gives 0.0
Applying Relu on (15.0) gives 15.0
Applying Relu on (-20.0) gives 0.0
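The syntax above also exposes alpha, max_value and threshold, which let you leak, cap, or cut off the activation. A short sketch of how these arguments change the result of tf.keras.activations.relu (the arrays in the comments are the expected outputs; treat them as approximate):
import tensorflow as tf

foo = tf.constant([-10, -5, 0.0, 5, 10], dtype=tf.float32)
print(tf.keras.activations.relu(foo).numpy())                 # [ 0.  0.  0.  5. 10.]
print(tf.keras.activations.relu(foo, alpha=0.5).numpy())      # leaky negatives: [-5.  -2.5  0.   5.  10. ]
print(tf.keras.activations.relu(foo, max_value=5.0).numpy())  # capped at 5: [0. 0. 0. 5. 5.]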
Sigmoid function
The sigmoid function always returns an output between 0 and 1. The Sigmoid activation function is a mathematical function that controls the output of a neuron, squashing it into this (0, 1) range.
Syntax
tf.keras.activations.sigmoid(x)
Implementing Sigmoid function in Python
import numpy as np

def sig(x):
    return 1 / (1 + np.exp(-x))

x = 1.0
print('Applying Sigmoid Activation on (%.1f) gives %.1f' % (x, sig(x)))
x = -10.0
print('Applying Sigmoid Activation on (%.1f) gives %.1f' % (x, sig(x)))
x = 0.0
print('Applying Sigmoid Activation on (%.1f) gives %.1f' % (x, sig(x)))
x = 15.0
print('Applying Sigmoid Activation on (%.1f) gives %.1f' % (x, sig(x)))
x = -2.0
print('Applying Sigmoid Activation on (%.1f) gives %.1f' % (x, sig(x)))
Output
Applying Sigmoid Activation on (1.0) gives 0.7
Applying Sigmoid Activation on (-10.0) gives 0.0
Applying Sigmoid Activation on (0.0) gives 0.5
Applying Sigmoid Activation on (15.0) gives 1.0
Applying Sigmoid Activation on (-2.0) gives 0.1
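For comparison with the NumPy version above, the same activation is available directly in Keras using the syntax shown earlier; the commented array is the approximate expected output:
import tensorflow as tf

a = tf.constant([-20, -1.0, 0.0, 1.0, 20], dtype=tf.float32)
b = tf.keras.activations.sigmoid(a)
print(b.numpy())  # approximately [2.06e-09, 0.2689, 0.5, 0.7311, 1.0]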
Softmax function
The Softmax function converts a vector of values into a probability distribution. The elements of the output vector lie in the range (0, 1) and sum to 1.
Syntax
tf.keras.activations.softmax(x, axis=-1)
Example of Softmax Function
a = tf.constant([[1.0, 2.0, 3.0]], dtype = tf.float32)
b = tf.keras.activations.softmax(a)
b.numpy()
array([[0.09003057, 0.24472848, 0.66524094]], dtype=float32)
Arguments
x : Input tensor.
axis : Integer, axis along which the softmax normalization is applied.
Returns
Tensor, the output of the softmax transformation: exp(x) / sum(exp(x)), with all values in the range (0, 1) and summing to 1.
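In practice, softmax is most often used as the activation of the last layer of a multi-class classifier, so that the network outputs a probability distribution over the classes. A minimal sketch (the layer sizes, input shape and number of classes are assumptions for illustration):
import tensorflow as tf

num_classes = 10  # assumed number of classes
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(100,)),
    tf.keras.layers.Dense(num_classes, activation='softmax')  # probabilities that sum to 1
])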
Softplus function
The Softplus function computes the softplus, log(exp(x) + 1), of the given input tensor.
Syntax
tf.keras.activations.softplus(x)
Example of Softplus function
a = tf.constant([-20, -1.0, 0.0, 1.0, 20], dtype = tf.float32)
b = tf.keras.activations.softplus(a)
b.numpy()
array([2.0611537e-09, 3.1326166e-01, 6.9314718e-01, 1.3132616e+00, 2.0000000e+01], dtype=float32)
Arguments
x: Input tensor.
Returns
The softplus activation: log(exp(x) + 1).
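Following the pattern of the Relu and Sigmoid sections, softplus can also be sketched in plain NumPy. This is only an illustration of the formula log(exp(x) + 1); the Keras version should be preferred in real code:
import numpy as np

def softplus(x):
    # log(exp(x) + 1); this naive form can overflow for very large x
    return np.log(np.exp(x) + 1.0)

for x in (-20.0, -1.0, 0.0, 1.0, 20.0):
    print('Applying Softplus on (%.1f) gives %.4f' % (x, softplus(x)))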
Softsign function
The Softsign function computes x / (abs(x) + 1) for the given input tensor.
Syntax
tf.keras.activations.softsign(x)
Example of Softsign function
a = tf.constant([-1.0, 0.0, 1.0], dtype = tf.float32)
b = tf.keras.activations.softsign(a)
b.numpy()
array([-0.5, 0. , 0.5], dtype=float32)
Arguments
x: Input tensor.
Returns
The softsign activation: x / (abs(x) + 1).
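As with the earlier functions, a plain-Python sketch makes the softsign formula x / (abs(x) + 1) concrete (illustrative only):
def softsign(x):
    # x / (|x| + 1); the output always lies in the open interval (-1, 1)
    return x / (abs(x) + 1.0)

for x in (-1.0, 0.0, 1.0):
    print('Applying Softsign on (%.1f) gives %.1f' % (x, softsign(x)))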
Tanh function
The tanh function is the hyperbolic tangent activation function.
Syntax
tf.keras.activations.tanh(x)
Example of tanh function
a = tf.constant([-3.0, -1.0, 0.0, 1.0, 3.0], dtype = tf.float32)
b = tf.keras.activations.tanh(a)
b.numpy()
array([-0.9950547, -0.7615942, 0., 0.7615942, 0.9950547], dtype=float32)
Arguments
x: Input tensor.
Returns
Tensor of the same shape and dtype as the input x, with tanh activation: tanh(x) = sinh(x)/cosh(x) = (exp(x) - exp(-x))/(exp(x) + exp(-x)).
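Because tanh squashes values into (-1, 1), it is a common choice for hidden layers. A minimal usage sketch (the layer sizes and input shape are assumptions):
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation='tanh', input_shape=(8,)),  # hidden outputs in (-1, 1)
    tf.keras.layers.Dense(1, activation='sigmoid')
])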
Selu function
Selu stands for Scaled Exponential Linear Unit.
Syntax
tf.keras.activations.selu(x)
Example of selu function
num_classes = 10  # 10-class problem
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(64, kernel_initializer='lecun_normal', activation='selu'))
model.add(tf.keras.layers.Dense(32, kernel_initializer='lecun_normal', activation='selu'))
model.add(tf.keras.layers.Dense(16, kernel_initializer='lecun_normal', activation='selu'))
model.add(tf.keras.layers.Dense(num_classes, activation='softmax'))
Arguments
x : A tensor or variable to compute the activation function for.
Returns
The scaled exponential unit activation: scale * elu(x, alpha).
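The example above uses selu inside a model; like the other activations it can also be applied directly to a tensor. No exact output values are claimed here, since they depend on the fixed scale and alpha constants used by selu:
import tensorflow as tf

a = tf.constant([-3.0, -1.0, 0.0, 1.0, 3.0], dtype=tf.float32)
b = tf.keras.activations.selu(a)
print(b.numpy())  # positive inputs are multiplied by scale; negative inputs saturate towards -scale * alpha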
Elu function
Elu stands for Exponential Linear Unit.
Syntax
tf.keras.activations.elu(x, alpha=1.0)
Example of elu function
import tensorflow as tf

model = tf.keras.Sequential()
model.add(tf.keras.layers.Conv2D(32, (3, 3), activation='elu', input_shape=(28, 28, 1)))
model.add(tf.keras.layers.MaxPooling2D((2, 2)))
model.add(tf.keras.layers.Conv2D(64, (3, 3), activation='elu'))
model.add(tf.keras.layers.MaxPooling2D((2, 2)))
model.add(tf.keras.layers.Conv2D(64, (3, 3), activation='elu'))
Arguments
- x: Input tensor.
- alpha: A scalar, the slope of the negative section. alpha controls the value to which an ELU saturates for negative net inputs.
Returns
The exponential linear unit activation: x if x > 0, and alpha * (exp(x) - 1) if x < 0.
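Mirroring the other sections, elu can also be applied directly to a tensor. With the default alpha = 1.0 the negative inputs saturate towards -1 (the values in the comment are approximate):
import tensorflow as tf

a = tf.constant([-3.0, -1.0, 0.0, 1.0, 3.0], dtype=tf.float32)
b = tf.keras.activations.elu(a)
print(b.numpy())  # approximately [-0.9502, -0.6321, 0., 1., 3.]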
Exponential function
The exponential activation simply applies exp(x) element-wise to the input tensor.
Syntax
tf.keras.activations.exponential(x)
Example of Exponential function
a = tf.constant([-3.0, -1.0, 0.0, 1.0, 3.0], dtype = tf.float32)
b = tf.keras.activations.exponential(a)
b.numpy()
array([0.04978707, 0.36787945, 1., 2.7182817, 20.085537], dtype=float32)
Arguments
- x: Input tensor.
Returns
Tensor with exponential activation: exp(x).
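Finally, note that each of the activations covered above can be attached to a layer either by its string name or by passing the function object itself; both forms below are equivalent:
import tensorflow as tf

layer_by_name = tf.keras.layers.Dense(16, activation='exponential')
layer_by_function = tf.keras.layers.Dense(16, activation=tf.keras.activations.exponential)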
If you have any queries regarding this article, or if I have missed something on this topic, please feel free to add it in the comments below for the audience. See you in the next article.
To know more about Keras, see its Wikipedia page.