Two Marks Questions with Answers
Q.1 Define Dempster-Shafer theory. AU: May-11
Ans.: The Dempster-Shafer theory is designed to deal with the distinction between uncertainty and ignorance. Rather than computing the probability of a proposition, it computes the probability that the evidence supports the proposition.
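As an illustration (not part of the two-mark answer), the sketch below combines two mass functions with Dempster's rule of combination; the Cavity frame of discernment and the mass values are assumed for the example.

def combine(m1, m2):
    """Combine two mass functions (dicts from frozenset focal elements
    to belief mass) using Dempster's rule of combination."""
    combined = {}
    conflict = 0.0
    for b, mb in m1.items():
        for c, mc in m2.items():
            a = b & c                       # intersection of focal elements
            if a:
                combined[a] = combined.get(a, 0.0) + mb * mc
            else:                           # empty intersection: conflicting mass
                conflict += mb * mc
    # Normalize by the non-conflicting mass (assumes conflict < 1)
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

# Two pieces of evidence about a Cavity; unassigned mass expresses ignorance.
frame = frozenset({"cavity", "no_cavity"})
m1 = {frozenset({"cavity"}): 0.6, frame: 0.4}
m2 = {frozenset({"cavity"}): 0.7, frame: 0.3}
print(combine(m1, m2))   # {cavity}: 0.88, frame (ignorance): 0.12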
Q.2 Define Bayes' theorem. AU: May-11, 13, 17, Dec.-12
Ans.: In probability theory and its applications, Bayes' theorem (alternatively called Bayes' law or Bayes' rule) links a conditional probability to its inverse.
P(b | a) = P(a | b) P(b) / P(a)
This equation is called Bayes' rule or Bayes' theorem.
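A minimal Python sketch applying Bayes' rule to a cavity/toothache example follows; the conditional probabilities used are illustrative assumptions, not values from the text.

# Illustrative values: a = toothache, b = cavity
p_a_given_b = 0.6      # P(toothache | cavity), assumed
p_b = 0.1              # P(cavity), the prior, assumed
p_a_given_not_b = 0.1  # P(toothache | no cavity), assumed

# P(a) by total probability, then Bayes' rule: P(b | a) = P(a | b) P(b) / P(a)
p_a = p_a_given_b * p_b + p_a_given_not_b * (1 - p_b)
p_b_given_a = p_a_given_b * p_b / p_a
print(round(p_b_given_a, 3))   # posterior P(cavity | toothache) = 0.4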
Q.3 What is reasoning by default? AU: May-11
Ans.: We can do qualitative reasoning using techniques such as default reasoning. Default reasoning treats conclusions not as "believed to a certain degree" but as "believed until a better reason is found to believe something else".
Q.4 What are the logics used in reasoning with uncertain information? AU: Dec.-11
Ans.: There are two approaches to reasoning with uncertain information in which logic is used.
Non-monotonic logic is used in the default reasoning process. Default reasoning also uses another type of logic, called default logic.
The second approach deals with vagueness and uses fuzzy logic. Fuzzy logic is a method for reasoning with logical expressions describing membership in fuzzy sets.
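A minimal Python sketch of fuzzy-set membership follows; the fuzzy set "tall" and its linear ramp between 150 cm and 190 cm are assumptions made only for the example.

def tall(height_cm):
    """Degree of membership of a height in the fuzzy set 'tall' (0 to 1)."""
    if height_cm <= 150:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 150) / 40.0   # linear ramp between the two cut-offs

# Standard fuzzy-logic operators work on membership degrees:
a, b = tall(170), tall(185)   # 0.5 and 0.875
print(min(a, b))              # fuzzy AND
print(max(a, b))              # fuzzy OR
print(1 - a)                  # fuzzy NOT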
Q.5 Define prior probability. AU: May-12
Ans.: The prior (unconditional) probability is the probability associated with a proposition 'a' in the absence of any other information; it is the degree of belief accorded to the proposition before any evidence is observed.
It is written as P(a). For example, if the probability that Ram has a cavity is 0.1, the prior probability is written as
P(Cavity = true) = 0.1 or P(cavity) = 0.1
Q.6 State the types of approximation methods. AU: May-12
Ans.: For approximate inference, randomized sampling algorithms (Monte Carlo algorithms) are used. Two approximation methods are used in randomized sampling: 1) direct sampling algorithms and 2) Markov chain sampling algorithms.
In direct sampling, samples are generated from a known probability distribution. In Markov chain sampling, each event is generated by making a random change to the preceding event.
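The sketch below illustrates direct sampling on an assumed two-node network (Cloudy -> Rain) with made-up probabilities; each sample is generated from the known distributions in topological order. In Markov chain sampling one would instead produce each new event by randomly changing one variable of the preceding event.

import random

P_CLOUDY = 0.5
P_RAIN_GIVEN_CLOUDY = {True: 0.8, False: 0.2}   # P(Rain | Cloudy), assumed

def direct_sample():
    """Generate one complete event by sampling each variable given its parents."""
    cloudy = random.random() < P_CLOUDY
    rain = random.random() < P_RAIN_GIVEN_CLOUDY[cloudy]
    return cloudy, rain

# Estimate P(Rain) from N samples; the true value here is 0.5.
N = 10_000
samples = [direct_sample() for _ in range(N)]
print(sum(rain for _, rain in samples) / N)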
Q.7 What do you mean by hybrid Bayesian network? AU: Dec.-12
Ans.: A network with both discrete and continuous variables is called a hybrid Bayesian network. In a hybrid Bayesian network, a continuous variable can take infinitely many values, so it is typically represented by discretizing it into intervals.
To specify a hybrid network, two kinds of distribution must be specified: the conditional distribution for a continuous variable given discrete or continuous parents, and the conditional distribution for a discrete variable given continuous parents.
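A minimal sketch of the discretization idea, assuming a continuous Temperature variable and illustrative cut points; each interval becomes one value of a discrete variable that can then be used alongside the other discrete nodes.

from bisect import bisect_left

cut_points = [15.0, 25.0]              # intervals (-inf, 15], (15, 25], (25, inf)
labels = ["low", "medium", "high"]

def discretize(temperature):
    """Map a continuous reading to the interval (discrete value) it falls in."""
    return labels[bisect_left(cut_points, temperature)]

print(discretize(10.2))   # low
print(discretize(18.0))   # medium
print(discretize(31.5))   # high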
Q.8 Define computational learning theory. AU: Dec.-12
Ans.: Computational learning theory is a mathematical field concerned with the analysis of machine learning algorithms.
It is used to evaluate sample complexity and computational complexity. Sample complexity addresses the question of how many training examples are needed to learn a successful hypothesis, while computational complexity evaluates how much computational effort is needed to learn a successful hypothesis.
In addition to performance bounds, computational learning theory also deals with the time complexity and feasibility of learning.
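As an illustration of sample complexity, the sketch below computes the standard PAC bound for a consistent learner over a finite hypothesis space H, m >= (1/epsilon)(ln|H| + ln(1/delta)); this bound and the example values are not from the text and are included only as an assumed example.

from math import ceil, log

def sample_complexity(h_size, epsilon, delta):
    """Training examples needed so that any consistent hypothesis has error
    at most epsilon with probability at least 1 - delta (finite H)."""
    return ceil((log(h_size) + log(1.0 / delta)) / epsilon)

# Example: |H| = 2**10 hypotheses, 10 % error allowed, 95 % confidence.
print(sample_complexity(2 ** 10, epsilon=0.10, delta=0.05))   # 100 examples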
Q.9 Give the full specification of a Bayesian network. AU: May-13
Ans.: Bayesian network: It is a data structure representing a graph in which each node is annotated with quantitative probability information.
The nodes and edges in the graph are specified as follows:
1) A set of random variables makes up the nodes of the network. Variables may be discrete or continuous.
2) A set of directed links or arrows connects pairs of nodes. If there is an arrow from node X to node Y, then X is said to be a parent of Y.
3) Each node Xi has a conditional probability distribution P(Xi | Parents(Xi)) that quantifies the effect of the parents on the node.
4) The graph has no directed cycles (and hence is a directed acyclic graph, or DAG).
The set of nodes and links is called the topology of the network.
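A minimal Python sketch of this specification: each node stores its parents and a conditional probability table P(Xi | Parents(Xi)). The burglary/alarm fragment and its numbers are illustrative assumptions, not values from the text.

network = {
    "Burglary":   {"parents": [], "cpt": {(): 0.001}},
    "Earthquake": {"parents": [], "cpt": {(): 0.002}},
    "Alarm": {
        "parents": ["Burglary", "Earthquake"],
        # P(Alarm = true | Burglary, Earthquake)
        "cpt": {(True, True): 0.95, (True, False): 0.94,
                (False, True): 0.29, (False, False): 0.001},
    },
}

def p_true(var, assignment):
    """P(var = true) given an assignment of values to the node's parents."""
    node = network[var]
    key = tuple(assignment[p] for p in node["parents"])
    return node["cpt"][key]

print(p_true("Alarm", {"Burglary": True, "Earthquake": False}))   # 0.94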
Q.10 Define uncertainty. AU: May-14
Ans.: An agent working in a real-world environment almost never has access to the whole truth about its environment. Therefore, the agent needs to work under uncertainty.
The agents we have seen earlier make the epistemological commitment that facts (expressed as propositions) are either true, false, or unknown. When an agent knows enough facts about its environment, the logical approach enables it to derive plans that are guaranteed to work.
But when an agent works with uncertain knowledge, it might be impossible to construct a complete and correct description of how its actions will work. If a logical agent cannot conclude that any particular course of action achieves its goal, then it will be unable to act.
Q.11 State Bayes' rule. (Refer Q.2) AU: Dec.-14
Ans.: In probability theory and its applications, Bayes' theorem (alternatively called Bayes' law or Bayes' rule) links a conditional probability to its inverse.
P(b | a) = P(a | b) P(b) / P(a)
This equation is called Bayes' rule or Bayes' theorem.
Q.12 Define uncertainty. How is it solved? (Refer Q.10) AU: May-15
Ans.: An agent working in a real-world environment almost never has access to the whole truth about its environment. Therefore, the agent needs to work under uncertainty.
The agents we have seen earlier make the epistemological commitment that facts (expressed as propositions) are either true, false, or unknown. When an agent knows enough facts about its environment, the logical approach enables it to derive plans that are guaranteed to work.
But when an agent works with uncertain knowledge, it might be impossible to construct a complete and correct description of how its actions will work. If a logical agent cannot conclude that any particular course of action achieves its goal, then it will be unable to act.