Two Marks Questions with Answers
Q.1 Explain the multilayer perceptron.
Ans.: The Multilayer Perceptron (MLP) model features multiple layers
that are interconnected in such a way that they form a feed-forward neural
network. Each neuron in one layer has directed connections to the neurons of the subsequent layer. It consists of three types of layers: the input layer, the hidden layer(s) and the output layer.
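The feed-forward structure described above can be sketched in code. This is an illustrative sketch only: the layer sizes, sigmoid activation, and random weights below are assumptions for demonstration, not part of the answer.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mlp_forward(x, W1, b1, W2, b2):
    # Input flows forward through one hidden layer to the output layer.
    h = sigmoid(x @ W1 + b1)   # hidden layer activations
    y = sigmoid(h @ W2 + b2)   # output layer activations
    return y

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 3))                       # input layer: 3 features
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)     # hidden layer: 4 neurons
W2, b2 = rng.normal(size=(4, 2)), np.zeros(2)     # output layer: 2 neurons
y = mlp_forward(x, W1, b1, W2, b2)
print(y.shape)  # (1, 2)
```

Each matrix multiplication implements the directed connections from one layer to the next; there are no backward or lateral connections, which is what makes the network feed-forward.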
Q.2 What is the vanishing gradient problem?
Ans.: When back-propagation is used, the earlier layers will receive
very small updates compared to the later layers. This problem is referred to as
the vanishing gradient problem. The vanishing gradient problem is essentially a
situation in which a deep multilayer feed-forward network or a recurrent neural
network (RNN) does not have the ability to propagate useful gradient
information from the output end of the model back to the layers near the input
end of the model.
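The effect can be illustrated numerically. A sigmoid's derivative is at most 0.25, so back-propagating through many sigmoid layers multiplies the gradient by a factor of at most 0.25 per layer (an idealized sketch; real networks also involve weight terms):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def chained_gradient(n_layers, x=0.0):
    # Each sigmoid layer contributes a local derivative s * (1 - s),
    # which is 0.25 at x = 0 and smaller everywhere else.
    s = sigmoid(x)
    local_grad = s * (1 - s)
    return local_grad ** n_layers

print(chained_gradient(2))   # 0.0625
print(chained_gradient(10))  # ~9.5e-07 -- the gradient has effectively vanished
```

After only ten layers the gradient reaching the earliest layer is around a millionth of its original size, which is why those layers receive very small updates.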
Q.3 Explain the advantages of deep learning.
Ans.: Advantages of deep learning:
• No need for manual feature engineering.
• DL solves the problem on an end-to-end basis.
• Deep learning gives higher accuracy.
Q.4 Explain backpropagation.
Ans.: Backpropagation is a training method for multilayer neural networks. It is also called the generalized delta rule. It is a gradient descent method that minimizes the total squared error of the output computed by the network.
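The generalized delta rule can be sketched for a single sigmoid neuron (the learning rate, target, and input below are illustrative assumptions): the weight update moves down the gradient of the squared output error.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def delta_rule_step(w, x, target, lr=0.5):
    y = sigmoid(x @ w)
    error = target - y
    # delta = error times the sigmoid derivative y * (1 - y);
    # the update w += lr * delta * x descends the squared-error gradient.
    delta = error * y * (1 - y)
    return w + lr * delta * x

w = np.zeros(2)
x = np.array([1.0, 1.0])
for _ in range(200):
    w = delta_rule_step(w, x, target=1.0)
print(sigmoid(x @ w))  # approaches the target 1.0
```

In a full multilayer network the same delta is propagated backwards layer by layer, which gives the method its name.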
Q.5 What are hyperparameters?
Ans.: Hyperparameters are parameters whose values control the learning
process and determine the values of model parameters that a learning algorithm
ends up learning.
Q.6 Define ReLU.
Ans.: The Rectified Linear Unit (ReLU) helps mitigate the vanishing gradient problem. ReLU is a piecewise linear (and hence nonlinear) function that outputs the input directly if it is positive and outputs zero otherwise.
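The definition translates directly into one line of code (a minimal NumPy sketch):

```python
import numpy as np

def relu(x):
    # f(x) = max(0, x): pass positive inputs through, zero out the rest.
    return np.maximum(0, x)

print(relu(np.array([-2.0, 0.0, 3.0])))  # [0. 0. 3.]
```

Because the gradient is exactly 1 for every positive input, ReLU does not shrink gradients the way sigmoid-like functions do.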
Q.7 What is the vanishing gradient problem?
Ans.: The vanishing gradient problem is a problem faced when training neural networks with gradient-based methods such as backpropagation. It makes it difficult to learn and tune the parameters of the earlier layers in the network.
Q.8 Define normalization.
Ans.: Normalization is a data pre-processing tool used to bring the numerical data to a common scale without distorting its shape.
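One common form is min-max normalization, which rescales values into [0, 1]. A minimal sketch (the sample data is an assumption for illustration):

```python
import numpy as np

def min_max_normalize(x):
    # Shift and scale so the minimum maps to 0 and the maximum to 1;
    # relative spacing (the "shape" of the data) is preserved.
    return (x - x.min()) / (x.max() - x.min())

data = np.array([10.0, 20.0, 30.0, 50.0])
print(min_max_normalize(data))  # scaled to [0, 1]: 0, 0.25, 0.5, 1
```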
Q.9 What is batch normalization?
Ans.: Batch normalization is a method of adaptive reparameterization, motivated by the difficulty of training very deep models. In deep networks the weights are updated for each layer, so the output of a layer will no longer be on the same scale as its input.
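The core computation can be sketched as follows: standardize each feature over the mini-batch, then rescale with a learnable gamma and shift with a learnable beta (the batch values below are assumptions; a full implementation would also track running statistics for inference).

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    # Standardize each feature (column) across the mini-batch,
    # then apply the learnable scale (gamma) and shift (beta).
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

batch = np.array([[1.0, 100.0],
                  [2.0, 200.0],
                  [3.0, 300.0]])
out = batch_norm(batch)
print(out.mean(axis=0))  # ~[0. 0.] -- both features are back on a common scale
```

Note that the two input features differ in scale by a factor of 100, yet after batch normalization both have zero mean and unit variance.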
Q.10 Explain the advantages of the ReLU function.
Ans.: Advantages of the ReLU function:
a) ReLU is simple to compute and has a predictable gradient for the backpropagation of the error.
b) It is easy to implement and very fast.
c) It can be used for deep network training.
Q.11 Explain Ridge regression.
Ans.: Ridge regression, also known as L2 regularization, is a regularization technique used to avoid overfitting the training data set. It introduces a small bias into the training model, in exchange for which one obtains better long-term predictions on new inputs.
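Ridge regression has a closed-form solution, w = (XᵀX + αI)⁻¹Xᵀy, where α controls the strength of the L2 penalty. A minimal sketch (the toy data and α value are assumptions for illustration):

```python
import numpy as np

def ridge_fit(X, y, alpha=1.0):
    # Closed-form ridge solution: adding alpha * I to X^T X
    # penalizes large weights and shrinks them toward zero.
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ y)

X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])
w = ridge_fit(X, y, alpha=0.1)
print(w)  # weights slightly shrunk toward zero versus ordinary least squares
```

The shrinkage is the "small bias" mentioned above: ridge trades a little training-set fit for lower variance on unseen data.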
Q.12 Explain dropout.
Ans.: Dropout was introduced by Hinton et al. and is now a very popular method. It consists of setting the output of each hidden neuron in a chosen layer to zero with some probability, and it has proven very effective in reducing overfitting.
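The mechanism can be sketched with the common "inverted dropout" variant (the drop probability and activations below are illustrative assumptions): each unit is zeroed with probability p during training, and the survivors are scaled up so the expected activation is unchanged.

```python
import numpy as np

def dropout(h, p=0.5, rng=None):
    # Zero each activation with probability p; divide survivors by (1 - p)
    # so the expected value of the layer output stays the same.
    rng = rng or np.random.default_rng()
    mask = rng.random(h.shape) >= p   # keep each unit with probability 1 - p
    return h * mask / (1.0 - p)

rng = np.random.default_rng(0)
h = np.ones(10)
out = dropout(h, p=0.5, rng=rng)
print(out)  # a random subset of units is zeroed, the rest are scaled up
```

At test time dropout is disabled and the full layer is used, which is why the training-time rescaling is needed.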
Q.13 Explain the disadvantages of deep learning.
Ans.: Disadvantages of deep learning:
• DL needs high-performance hardware.
• DL needs much more time to train.
• It is very difficult to assess its performance in real-world applications.
• It is very hard to understand (interpret).
Q.14 Explain the need for hidden layers.
Ans.:
1. A network with only two layers (input and output) can only represent the input with whatever representation already exists in the input data.
2. If the data is discontinuous or non-linearly separable, the innate representation is inconsistent, and the mapping cannot be learned using two layers (input and output).
3. Therefore, hidden layer(s) are used between the input and output layers.
Q.15 Explain activation functions.
Artificial Intelligence and Machine Learning (CS3491) | Unit V: Neural Networks | Two Marks Questions with Answers | 2021 Regulation, 4th Semester CSE/ECE