Artificial Intelligence and Machine Learning: Unit IV: Ensemble Techniques and Unsupervised Learning

Two marks Questions with Answers

Q.1 What is unsupervised learning?

Ans.: In unsupervised learning, the network adapts purely in response to its inputs. Such networks can learn to pick out structure in their inputs.

Q.2 What is semi-supervised learning?

Ans.: Semi-supervised learning uses both labeled and unlabeled data to improve supervised learning.
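A minimal sketch of semi-supervised self-training, assuming scikit-learn is available; the dataset, the 70 % label-hiding ratio, and the SVC base model are illustrative choices only.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Hide most of the labels: scikit-learn marks unlabeled samples with -1.
rng = np.random.RandomState(42)
y_partial = y.copy()
y_partial[rng.rand(len(y)) < 0.7] = -1

# The base classifier must expose predict_proba, hence probability=True.
model = SelfTrainingClassifier(SVC(probability=True, gamma="auto"))
model.fit(X, y_partial)

print("Accuracy against the true labels:", model.score(X, y))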

Q.3 What is ensemble method?

Ans.: An ensemble method is a machine learning technique that combines several base models to produce one optimal predictive model. It combines the insights obtained from multiple learning models to make accurate and improved decisions.
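A minimal sketch of an ensemble that combines several base models by majority vote, assuming scikit-learn is available; the three base models and the Iris data are illustrative.

from sklearn.datasets import load_iris
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Three heterogeneous base models; the ensemble takes a majority vote.
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("dt", DecisionTreeClassifier()),
        ("nb", GaussianNB()),
    ],
    voting="hard",
)

print("Ensemble CV accuracy:", cross_val_score(ensemble, X, y, cv=5).mean())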

Q.4 What is cluster?

Ans.: A cluster is a group of objects that belong to the same class. In other words, similar objects are grouped in one cluster and dissimilar objects are grouped in another cluster.

Q.5 Explain clustering.

Ans.: Clustering is the process of partitioning a set of data into a set of meaningful subclasses. Every data point in a subclass shares a common trait. It helps a user understand the natural grouping or structure in a data set.

Q.6 What is Bagging?

Ans.: Bagging, also known as bootstrap aggregation, is an ensemble method that works by training multiple models independently on bootstrap samples of the data and combining them later to produce a strong model.
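A minimal bagging sketch, assuming scikit-learn is available; the choice of 50 decision trees and the breast-cancer dataset are illustrative.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# 50 trees, each trained independently on a bootstrap sample of the data;
# their predictions are aggregated by voting.
bagging = BaggingClassifier(
    DecisionTreeClassifier(),
    n_estimators=50,
    bootstrap=True,
    random_state=0,
)

print("Bagging CV accuracy:", cross_val_score(bagging, X, y, cv=5).mean())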

Q.7 Define boosting.

Ans.: Boosting refers to a group of algorithms that utilize weighted averages to turn weak learning algorithms into stronger learning algorithms.
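A minimal boosting sketch using AdaBoost, assuming scikit-learn is available; the number of estimators and the dataset are illustrative.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# AdaBoost re-weights the training samples so that later weak learners
# focus on earlier mistakes; each learner's vote is weighted by its accuracy.
boosting = AdaBoostClassifier(n_estimators=100, random_state=0)

print("AdaBoost CV accuracy:", cross_val_score(boosting, X, y, cv=5).mean())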

Q.8 What is the K-Nearest Neighbour method?

Ans.: The K-Nearest Neighbor (KNN) method is a classical classification method that requires no training effort and critically depends on the quality of the distance measure among examples.

The KNN classifier uses a distance function such as the Euclidean or Mahalanobis distance. A sample is classified according to the majority vote of its K nearest training samples in the feature space. The distance of a sample to its neighbours is defined using this distance function.
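A minimal KNN sketch, assuming scikit-learn is available; the Euclidean metric and K = 5 are illustrative choices.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# No real "training": fit simply stores the samples; a prediction is the
# majority vote of the K nearest training points under the chosen metric.
knn = KNeighborsClassifier(n_neighbors=5, metric="euclidean")
knn.fit(X_train, y_train)

print("KNN test accuracy:", knn.score(X_test, y_test))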

Q.9 Which performance factors influence the KNN algorithm?

Ans.: The performance of the KNN algorithm is influenced by three main factors:

1. The distance function or distance metric used to determine the nearest neighbors.

2. The decision rule used to derive a classification from the K-nearest neighbors.

3. The number of neighbors used to classify the new example.

Q.10 What is K-means clustering?

Ans.: K-means clustering is a heuristic method in which each cluster is represented by the center of the cluster. The k-means algorithm takes an input parameter, k, and partitions a set of n objects into k clusters so that the resulting intra-cluster similarity is high but the inter-cluster similarity is low.
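A minimal K-means sketch, assuming scikit-learn is available; k = 3 is an illustrative choice for the Iris data.

from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

X, _ = load_iris(return_X_y=True)

# Each cluster is represented by its center; points are assigned to the
# nearest center and the centers are re-estimated until convergence.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)

print("Cluster centers:\n", kmeans.cluster_centers_)
print("First ten assignments:", labels[:10])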

Q.11 List the properties of K-Means algorithm.

Ans.: 1. There are always k clusters.

2. There is always at least one item in each cluster.

3. The clusters are non-hierarchical and they do not overlap.

Q.12 What is stacking?

Ans.: Stacking, sometimes called stacked generalization, is an ensemble machine learning method that combines multiple heterogeneous base or component models via a meta-model.
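A minimal stacking sketch, assuming scikit-learn is available; the two base models and the logistic-regression meta-model are illustrative.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Heterogeneous base models feed their predictions into a meta-model,
# which learns how to combine them.
stack = StackingClassifier(
    estimators=[
        ("dt", DecisionTreeClassifier()),
        ("knn", KNeighborsClassifier()),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
)

print("Stacking CV accuracy:", cross_val_score(stack, X, y, cv=5).mean())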

Q.13 How do GMMs differentiate from K-means clustering?

Ans.: GMMs and K-means are both clustering algorithms used for unsupervised learning tasks. However, the basic difference between them is that K-means is a distance-based clustering method, while GMMs are a distribution-based clustering method.
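A minimal sketch contrasting the two, assuming scikit-learn is available: K-means assigns each point to the nearest center, while a Gaussian mixture model gives soft, probabilistic assignments based on the fitted distributions. The choice of three clusters/components on the Iris data is illustrative.

from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.mixture import GaussianMixture

X, _ = load_iris(return_X_y=True)

# Distance-based clustering: hard assignment to the nearest center.
kmeans_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Distribution-based clustering: each point gets a membership probability
# for every fitted Gaussian component.
gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
gmm_labels = gmm.predict(X)
gmm_probs = gmm.predict_proba(X)

print("K-means labels:", kmeans_labels[:5])
print("GMM labels:    ", gmm_labels[:5])
print("GMM membership probabilities (first sample):", gmm_probs[0].round(3))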
