History of AI
The early work that is now generally recognized as AI was done in the period from 1943 to 1955. The first formal proposal of AI was made by Warren McCulloch and Walter Pitts (1943). Their model drew on three sources: first, knowledge of the basic physiology and function of neurons in the brain; second, a formal analysis of propositional logic; and third, Turing's theory of computation.
Later, in 1949, Donald Hebb demonstrated a simple updating rule for modifying the connection strengths between neurons. His rule, now called Hebbian learning, remains an influential model in AI.
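Hebb's rule is often summarized as "neurons that fire together wire together": the strength of a connection grows in proportion to the product of the activities of the two units it links. A minimal sketch of this idea follows; the function name, variable names and learning rate are illustrative assumptions, not part of Hebb's original formulation.

```python
# Minimal sketch of Hebbian learning: the connection strength between
# two units grows in proportion to the product of their activations.
# Names and the learning rate are illustrative assumptions.

def hebbian_update(weight, pre_activity, post_activity, learning_rate=0.1):
    """Return the updated weight: w <- w + eta * x_pre * x_post."""
    return weight + learning_rate * pre_activity * post_activity

# Two units that are repeatedly active together strengthen their link.
w = 0.0
for _ in range(5):
    w = hebbian_update(w, pre_activity=1.0, post_activity=1.0)
print(w)  # the weight has grown with each co-activation
```

Note that the rule is purely local: each connection is updated using only the activities of the two units it joins, which is part of why it has been so influential as a model of learning.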
There was a good deal of other early work that can be recognized as AI, but it was Alan Turing who first articulated a complete vision of AI, in his 1950 article "Computing Machinery and Intelligence".
The real birth year of AI is 1956, when John McCarthy organized a workshop on automata theory, neural nets and the study of intelligence. Other researchers presented their papers there, and the participants came away with a new field of computer science called AI.
From 1952 to 1969 a large amount of work was done, with great success.
Newell and Simon presented the General Problem Solver (GPS). Within the limited class of puzzles it could handle, it turned out that the order in which the program considered subgoals and possible actions was similar to the order in which humans approached the same problems. GPS was probably the first program to embody the "thinking humanly" approach.
Herbert Gelernter (1959) constructed the Geometry Theorem Prover, which was capable of proving quite tricky mathematical theorems.
At MIT in 1958, John McCarthy made a major contribution to the AI field: the development of the high-level language Lisp, which became the dominant AI programming language.
In 1958, McCarthy published a paper entitled "Programs with Common Sense", in which he described the Advice Taker, a hypothetical program that can be seen as the first complete AI system. Like the Logic Theorist and the Geometry Theorem Prover, McCarthy's program was designed to use knowledge to search for solutions to problems.
The program was also designed so that it could accept new axioms in the normal course of operation, thereby allowing it to achieve competence in new areas without being reprogrammed. The Advice Taker thus embodied the central principles of knowledge representation and reasoning.
Early work building on the neural networks of McCulloch and Pitts also flourished. The work of Winograd and Cowan (1963) showed how a large number of elements could collectively represent an individual concept, with a corresponding increase in robustness and parallelism. Hebb's learning methods were enhanced by Bernie Widrow (Widrow and Hoff, 1960; Widrow, 1962), who called his networks adalines, and by Frank Rosenblatt (1962) with his perceptrons. Rosenblatt proved the perceptron convergence theorem, showing that his learning algorithm could adjust the connection strengths of a perceptron to match any input data, provided such a match existed.
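The perceptron learning rule nudges the weights toward each misclassified example, and the convergence theorem guarantees that this process terminates whenever such a separating set of weights exists. The following sketch illustrates the rule on a small linearly separable dataset; the dataset, function names and parameters are illustrative assumptions, not taken from Rosenblatt's work.

```python
# Minimal sketch of Rosenblatt's perceptron learning rule.
# The AND-style dataset and parameter names are illustrative assumptions;
# the convergence theorem guarantees termination only when the data are
# linearly separable, as this example is.

def predict(weights, bias, x):
    """Threshold unit: fire (+1) if the weighted sum exceeds zero."""
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else -1

def train_perceptron(data, n_inputs, epochs=100, lr=1.0):
    """Repeatedly nudge weights toward each misclassified example."""
    weights, bias = [0.0] * n_inputs, 0.0
    for _ in range(epochs):
        errors = 0
        for x, target in data:
            if predict(weights, bias, x) != target:
                # Update rule: w <- w + lr * target * x, b <- b + lr * target
                weights = [w + lr * target * xi for w, xi in zip(weights, x)]
                bias += lr * target
                errors += 1
        if errors == 0:  # converged: every example is classified correctly
            break
    return weights, bias

# Linearly separable data: logical AND with +1/-1 labels.
data = [((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), 1)]
weights, bias = train_perceptron(data, n_inputs=2)
```

On data that are not linearly separable (such as XOR), the loop above would simply exhaust its epochs without converging, which is exactly the limitation later highlighted by Minsky and Papert.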
In 1965, Weizenbaum's ELIZA program appeared to conduct a serious conversation on any topic, essentially by borrowing and manipulating the sentences given to it by a human. None of the programs developed so far had complex domain knowledge; they relied on what were called "weak" methods. Researchers realized that more knowledge was necessary for more complicated, larger reasoning tasks.
The DENDRAL program, developed by Buchanan in 1969, was based on these principles. It was a unique program that effectively used domain-specific knowledge in problem solving. In the mid-1970s, MYCIN was developed to diagnose blood infections. It used expert knowledge to diagnose illnesses and prescribe treatments, and it is also known as the first program to address the problem of reasoning with uncertain or incomplete information.
Within a very short time a number of knowledge representation languages were developed, such as predicate calculus, semantic networks, frames and objects. Some of them, such as PROLOG, are based on mathematical logic. Although PROLOG goes back to 1972, it did not attract widespread attention until a more efficient version was introduced in 1979.
As researchers put forward real, useful, strong work on AI, AI emerged as a big industry.
In 1981, Japan announced the Fifth Generation project, a 10-year plan to build intelligent computers running PROLOG. The US in turn formed the Microelectronics and Computer Technology Corporation (MCC) for research in AI.
Overall, the AI industry boomed from a few million dollars in 1980 to billions of dollars in 1988. But soon after that the AI industry suffered a huge setback, as many companies failed to deliver on extravagant promises.
In the late 1970s psychologists carried out further research on neural networks, and this work continued into the 1980s.
In the 1990s AI emerged as a science; in terms of methodology, AI finally came firmly under the scientific method. In speech recognition, for example, approaches based on Hidden Markov Models (HMMs) came to dominate the field. This approach rests on two features: first, a rigorous mathematical theory, and second, the models are generated by a process of training on a large corpus of real speech data.
Judea Pearl's (1988) Probabilistic Reasoning in Intelligent Systems led to a new acceptance of probability theory in AI. Later, the Bayesian network formalism was invented, which can represent uncertain knowledge along with reasoning support.
Judea Pearl, Eric Horvitz and David Heckerman in 1986 promoted the idea of normative expert systems that act rationally according to the laws of decision theory. Similar but slower revolutions have occurred in robotics, computer vision and knowledge representation.
In 1987 a complete agent architecture called SOAR was worked out by Allen Newell, John Laird and Paul Rosenbloom. Many such agents were later developed to work in the biggest environment of all, the Internet. AI systems have become so common in web-based applications that the "-bot" suffix has entered everyday language.
AI technologies underlie many Internet tools, such as search engines, recommender systems and website aggregators.
While developing complete agents, it was realized that the previously isolated subfields of AI need to be reorganized when their results are to be tied together.
Artificial Intelligence and Machine Learning (CS3491), 4th Semester CSE/ECE Dept, 2021 Regulation