ARTIFICIAL INTELLIGENCE

- Approaches & Problems

SEMINAR REPORT

BACHELOR OF TECHNOLOGY

IN

COMPUTER SCIENCE AND ENGINEERING

Submitted by

Saichandra Srivatsav Goturu

Reg-No: 09501A0589

Department of Computer Science and Engineering

PRASAD V POTLURI SIDDHARTHA INSTITUTE OF TECHNOLOGY (Affiliated to JNTU Kakinada, Approved by AICTE)

(Autonomous, an ISO-certified and NBA-accredited institution)

Kanuru, Vijayawada - 520007

Month, Year

Marks Awarded: / 50

Signature of coordinator Signature of HOD

Table Of Contents

1. Abstract

2. Introduction

3. GOFAI (or) Strong AI

4. Synthetic Intelligence

5. Intelligence

5.1. Definitions

5.2. Types of Intelligence

5.3. Left vs Right

6. Approaches to AI

6.1. Cybernetics

6.2. Symbolic

6.3. Sub-symbolic

6.4. Statistical

7. Problems of AI

8. Are We Going In The Right Direction (Conclusion)

9. References

1.ABSTRACT:

Artificial intelligence (AI) is the intelligence of machines and robots, and the branch of computer science that aims to create it. Over time, however, there have been many clashes of thought among highly intelligent scientists on various aspects of this field. The differences begin with the definition of the term "artificial intelligence" itself, and from there they crept naturally into the approaches these scientists followed. Thus AI is a science still in the very basic, primary stages of even being recognized, and still far away from real, substantial development.

No matter what approach one assumes and what definition one accepts, there are several problems in this field that are not only common to all the existing schools of thought but are also proving highly difficult to solve. They start right from our understanding of intelligence and of the way it works in humans (the most apt point of comparison).

Hence, in this seminar I would like to cover these various differences of thought, and the problems, in an attempt to bring together the significant ideas that have gone under the name of artificial intelligence, so that it might help us better understand where we are actually going wrong.

2.Introduction:

If one were to rank a list of civilization's greatest and most elusive intellectual challenges, the problem of "decoding" ourselves (understanding the inner workings of our minds and our brains, and how the architecture of these elements is encoded in our genome) would surely be at the top. Yet the diverse fields that took on this challenge, from philosophy and psychology to computer science and neuroscience, have been fraught with disagreement about the right approach.

The term "artificial intelligence" was coined by John McCarthy for a conference held at Dartmouth College in 1956. The field was founded on the claim that a central property of humans, intelligence (the sapience of Homo sapiens), can be so precisely described that it can be simulated by a machine. No formal definition is yet available as to what artificial intelligence actually is.

AI textbooks define the field as "the study and design of intelligent agents" where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success.

John McCarthy defines it as "the science and engineering of making intelligent machines." Artificial intelligence is the search for a way to map intelligence into mechanical hardware and enable a structure in that system to formalize thought. Instantiating an intelligent system using man-made hardware, rather than our own "biological hardware" of cells and tissues, would demonstrate ultimate understanding, and would have obvious practical applications in the creation of intelligent devices or even robots.

Some of McCarthy's colleagues in neighboring departments, however, were more interested in how intelligence is implemented in humans (and other animals) first. Noam Chomsky and others worked on what became cognitive science, a field aimed at uncovering the mental representations and rules that underlie our perceptual and cognitive abilities. Artificial Intelligence is the study of human intelligence such that it can be replicated artificially.

In their book Artificial Intelligence: A Modern Approach, Russell and Norvig classify definitions of the field into distinct categories, based on working definitions from other authors commenting on AI. Their classification covers systems that:

think like humans, think rationally, act like humans, or act rationally.

Artificial Intelligence is the study of human intelligence and actions, replicated artificially, such that the result bears a reasonable level of rationality with respect to its design.

We end up with four possible goals:

Systems that think like humans (focus on reasoning and human framework)

"The art of creating machines that perform functions that require intelligence when performed by humans" (Kurzweil). Involves cognitive modeling - we have to determine how humans think in a literal sense (explain the inner workings of the human mind, which requires experimental inspection or psychological testing)

Systems that think rationally (focus on reasoning and a general concept of intelligence)

"GPS - General Problem Solver" (Newell and Simon). Deals with "right thinking" and dives into the field of logic. Uses logic to represent the world and relationships between objects in it and come to conclusions about it. Problems: hard to encode informal knowledge into a formal logic system and theorem provers have limitations (if there's no solution to a given logical notation).

Systems that act like humans (focus on behavior and human framework)

Turing defined intelligent behavior as the ability to achieve human-level performance in all cognitive tasks, sufficient to fool a human interrogator (the Turing Test). Physical contact with the machine has to be avoided, because physical appearance is not relevant to exhibiting intelligence. However, the "Total Turing Test" includes appearance by encompassing visual input and robotics as well.

Systems that act rationally (focus on behavior and a general concept of intelligence)

The rational agent - achieving one's goals given one's beliefs. Instead of focusing on humans, this approach is more general, focusing on agents (which perceive and act). It is also more general than the strictly logical approach (i.e. thinking rationally).
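
To make the idea concrete, here is a minimal Python sketch of such a perceive-act cycle: the agent simply picks the action with the highest utility for its current percept. The toy environment, percepts and utility numbers are invented purely for this illustration.

    # Minimal sketch of a rational agent: perceive, then pick the action
    # that maximizes utility. Environment and utilities are toy examples.

    def perceive(environment):
        """Return the agent's view of the world (possibly partial)."""
        return {"position": environment["agent_pos"], "dirt": environment["dirt"]}

    def utility(percept, action):
        """Score an action for the current percept (illustrative numbers)."""
        if action == "suck" and percept["position"] in percept["dirt"]:
            return 10   # cleaning a dirty square is highly valuable
        if action == "move":
            return 1    # moving on has a small exploratory value
        return 0        # doing nothing is worthless

    def rational_agent(environment, actions=("suck", "move", "noop")):
        """Choose the action with the highest utility given the percept."""
        percept = perceive(environment)
        return max(actions, key=lambda a: utility(percept, a))

    env = {"agent_pos": "A", "dirt": {"A", "C"}}
    print(rational_agent(env))   # -> suck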

3.GOFAI (or) STRONG AI:

In artificial intelligence research, GOFAI ("Good Old-Fashioned Artificial Intelligence") describes the oldest, original approach to achieving artificial intelligence, based on logic and problem solving. GOFAI was the dominant paradigm of AI research from the mid-1950s until the late 1980s. AI's first generation of researchers firmly believed their techniques would lead to real, human-like intelligence in machines.

The term "GOFAI" was coined by John Haugeland in his 1985 book Artificial Intelligence: The Very Idea, which explored the philosophical implications of artificial intelligence research; he used it to describe the AI research done up to that point, calling it "good old fashioned artificial intelligence", or "GOFAI".

The approach is based on the assumption that many aspects of intelligence can be achieved by the manipulation of symbols, an assumption defined as the "physical symbol system hypothesis" by Allen Newell and Herbert A. Simon in the middle 1960s.

Strong AI is artificial intelligence that matches or exceeds human intelligence — the intelligence of a machine that can successfully perform any intellectual task that a human being can. It is a primary goal of artificial intelligence research and an important topic for science fiction writers and futurists. Strong AI is also referred to as "artificial general intelligence" or as the ability to perform "general intelligent action." Strong AI is associated with traits such as consciousness, sentience, sapience and self-awareness observed in living beings.

4.SYNTHETIC INTELLIGENCE:

Synthetic intelligence (SI) is an alternative term for artificial intelligence which emphasizes that the intelligence of machines need not be an imitation or in any way "artificial"; it can be a genuine form of intelligence. The term was used by Haugeland in 1985. "Synthetic intelligence" would therefore be genuine intelligence that happens to be man-made, much as a synthetic diamond is a real diamond rather than a simulated one.

After the AI winter, many AI researchers chose to focus on finding solutions for specific individual problems, such as machine learning, rather than artificial general intelligence.

By the 1980s, many researchers began to doubt that high-level symbol manipulation alone could account for all intelligent behaviors. Opponents of the symbolic approach include roboticists such as Rodney Brooks, who aims to produce autonomous robots without symbolic representation (or with only minimal representation), and computational intelligence researchers, who apply techniques such as neural networks and optimization to solve problems in machine learning and control engineering. Now, both approaches are in common use, often applied to different problems.

This approach to AI is referred to by some popular sources as "weak AI" or "applied AI".

Sources disagree about exactly what constitutes "real" intelligence as opposed to "simulated" intelligence and therefore whether there is a meaningful distinction between artificial intelligence and synthetic intelligence. Russell and Norvig present this example:

"Can machines fly?" The answer is yes, because airplanes fly.

"Can machines swim?" The answer is no, because submarines don't swim.

"Can machines think?" Is this question like the first, or like the second?

GOFAI:

1. Intelligence like that of humans.
2. Used the symbolic approach.
3. Tried to solve the problem as a whole.

SYNTHETIC INTELLIGENCE:

1. Intelligence that is not human-like, but genuine in its own right.
2. Did not believe in the symbolic approach.
3. Tried to solve sub-problems leading to a final solution.

5.Intelligence:

Intelligence has been defined in many different ways, including, but not limited to: abstract thought, understanding, self-awareness, communication, reasoning, learning, emotional knowledge, memory, planning, and problem solving.

5.1.Definitions

The definition of intelligence is controversial. Groups of scientists have stated the following:

1. from "Mainstream Science on Intelligence" (1994), an editorial statement by fifty-two researchers:

A very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings—"catching on," "making sense" of things, or "figuring out" what to do.[5]

2. from "Intelligence: Knowns and Unknowns" (1995), a report published by the Board of Scientific Affairs of the American Psychological Association:

Individuals differ from one another in their ability to understand complex ideas, to adapt effectively to the environment, to learn from experience, to engage in various forms of reasoning, to overcome obstacles by taking thought. Although these individual differences can be substantial, they are never entirely consistent: a given person's intellectual performance will vary on different occasions, in different domains, as judged by different criteria. Concepts of "intelligence" are attempts to clarify and organize this complex set of phenomena. Although considerable clarity has been achieved in some areas, no such conceptualization has yet answered all the important questions, and none commands universal assent. Indeed, when two dozen prominent theorists were recently asked to define intelligence, they gave two dozen, somewhat different, definitions.[6][7]

Besides the foregoing definitions, these psychology and learning researchers also have defined intelligence as:

Alfred Binet: "Judgment, otherwise called 'good sense,' 'practical sense,' 'initiative,' the faculty of adapting one's self to circumstances ... auto-critique."[8]

David Wechsler: "The aggregate or global capacity of the individual to act purposefully, to think rationally, and to deal effectively with his environment."[9]

Lloyd Humphreys: "...the resultant of the process of acquiring, storing in memory, retrieving, combining, comparing, and using in new contexts information and conceptual skills."[10]

Cyril Burt: "Innate general cognitive ability."[11]

Howard Gardner: "To my mind, a human intellectual competence must entail a set of skills of problem solving — enabling the individual to resolve genuine problems or difficulties that he or she encounters and, when appropriate, to create an effective product — and must also entail the potential for finding or creating problems — and thereby laying the groundwork for the acquisition of new knowledge."[12]

Linda Gottfredson: "The ability to deal with cognitive complexity."[13]

Sternberg & Salter: "Goal-directed adaptive behavior."[14]

Reuven Feuerstein: The theory of Structural Cognitive Modifiability describes intelligence as "the unique propensity of human beings to change or modify the structure of their cognitive functioning to adapt to the changing demands of a life situation."

5.2.Types of intelligence:

1. Verbal – the ability to use words

2. Visual – the ability to imagine things in your mind

3. Physical – the ability to use your body in various situations

4. Musical - the ability to use and understand music

5. Mathematical – the ability to apply logic to systems and numbers

6. Introspective – the ability to understand your inner thoughts

7. Interpersonal – the ability to understand other people, and relate well to them

8. Naturalist Intelligence (“Nature Smart”) – sensitivity to living things (Gardner added this to his original list of seven some years later).

9. Existential Intelligence – the ability to tackle deep questions about human existence, such as the meaning of life, how we got here, and what happens when we die.

5.3.Left vs Right Brain:

Left: logical, sequential, rational, analytical, objective, looks at parts, systematic, symbolic, linear, factual, abstract, digital

Right: random, intuitive, holistic, synthesizing, subjective, looks at wholes, non-verbal, casual, concrete, visual, sensory, spatial, emotional

6.Approaches to AI:

There is no established unifying theory or paradigm that guides AI research. Researchers disagree about many issues. A few of the most long-standing questions that have remained unanswered are these: should artificial intelligence simulate natural intelligence by studying psychology or neurology? Or is human biology as irrelevant to AI research as bird biology is to aeronautical engineering? Can intelligent behavior be described using simple, elegant principles (such as logic or optimization)? Or does it necessarily require solving a large number of completely unrelated problems? Can intelligence be reproduced using high-level symbols, similar to words and ideas? Or does it require "sub-symbolic" processing?

6.1.Cybernetics and brain simulation

In the 1940s and 1950s, a number of researchers explored the connection between neurology, information theory, and cybernetics. Some of them built machines that used electronic networks to exhibit rudimentary intelligence, such as W. Grey Walter's turtles and the Johns Hopkins Beast. By 1960, this approach was largely abandoned, although elements of it would be revived in the 1980s.
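
A flavour of what those early electronic networks computed can be given with a single threshold unit, the basic building block of such networks. This is only an illustrative sketch; the weights and threshold below are invented for the example.

    # One threshold unit: fire when the weighted input sum reaches a threshold.

    def threshold_unit(inputs, weights, threshold):
        total = sum(i * w for i, w in zip(inputs, weights))
        return 1 if total >= threshold else 0

    # Wired as an AND gate: fires only when both inputs are on.
    print(threshold_unit([1, 1], weights=[1, 1], threshold=2))   # -> 1
    print(threshold_unit([1, 0], weights=[1, 1], threshold=2))   # -> 0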

6.2.Symbolic

(GOFAI)

When access to digital computers became possible in the middle 1950s, AI research began to explore the possibility that human intelligence could be reduced to symbol manipulation. The research was centered in three institutions: CMU, Stanford and MIT, and each one developed its own style of research. During the 1960s, symbolic approaches had achieved great success at simulating high-level thinking in small demonstration programs. Approaches based on cybernetics or neural networks were abandoned or pushed into the background. Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with artificial general intelligence and considered this the goal of their field.

Cognitive simulation

Economist Herbert Simon and Allen Newell studied human problem-solving skills and attempted to formalize them, and their work laid the foundations of the field of artificial intelligence, as well as cognitive science, operations research and management science. Their research team used the results of psychological experiments to develop programs that simulated the techniques that people used to solve problems. This tradition, centered at Carnegie Mellon University, would eventually culminate in the development of the Soar architecture in the mid-1980s.

Logic-based

Unlike Newell and Simon, John McCarthy felt that machines did not need to simulate human thought, but should instead try to find the essence of abstract reasoning and problem solving, regardless of whether people used the same algorithms. His laboratory at Stanford (SAIL) focused on using formal logic to solve a wide variety of problems, including knowledge representation, planning and learning. Logic was also the focus of the work at the University of Edinburgh and elsewhere in Europe, which led to the development of the programming language Prolog and the science of logic programming.

"Anti-logic" or "scruffy"

Researchers at MIT (such as Marvin Minsky and Seymour Papert) found that solving difficult problems in vision and natural language processing required ad-hoc solutions – they argued that there was no simple and general principle (like logic) that would capture all the aspects of intelligent behavior. Roger Schank described their "anti-logic" approaches as "scruffy" (as opposed to the "neat" paradigms at CMU and Stanford). Commonsense knowledge bases (such as Doug Lenat's Cyc) are an example of "scruffy" AI, since they must be built by hand, one complicated concept at a time.

Knowledge-based

When computers with large memories became available around 1970, researchers from all three traditions began to build knowledge into AI applications. This "knowledge revolution" led to the development and deployment of expert systems (introduced by Edward Feigenbaum), the first truly successful form of AI software. The knowledge revolution was also driven by the realization that enormous amounts of knowledge would be required by many simple AI applications.

6.3.Sub-symbolic

By the 1980s progress in symbolic AI seemed to stall and many believed that symbolic systems would never be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition. A number of researchers began to look into "sub-symbolic" approaches to specific AI problems.

Bottom-up, embodied, situated, behavior-based or nouvelle AI

Researchers from the related field of robotics, such as Rodney Brooks, rejected symbolic AI and focused on the basic engineering problems that would allow robots to move and survive. Their work revived the non-symbolic viewpoint of the early cybernetics researchers of the 50s and reintroduced the use of control theory in AI. This coincided with the development of the embodied mind thesis in the related field of cognitive science: the idea that aspects of the body (such as movement, perception and visualization) are required for higher intelligence.

Computational Intelligence

Interest in neural networks and "connectionism" was revived by David Rumelhart and others in the middle 1980s. These and other sub-symbolic approaches, such as fuzzy systems and evolutionary computation, are now studied collectively by the emerging discipline of computational intelligence.
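
The connectionist idea can be sketched with the simplest trainable network, a perceptron, which adjusts its weights whenever it misclassifies an example. The training data (the truth table of logical OR) and the learning rate below are chosen purely for illustration.

    # Minimal connectionist sketch: perceptron learning by error correction.

    def train_perceptron(samples, epochs=10, lr=0.1):
        """Learn weights and bias separating two classes of 2-D inputs."""
        w, b = [0.0, 0.0], 0.0
        for _ in range(epochs):
            for (x1, x2), target in samples:
                predicted = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
                error = target - predicted
                w[0] += lr * error * x1   # nudge weights toward the target
                w[1] += lr * error * x2
                b += lr * error
        return w, b

    # Learn the logical OR function from its truth table.
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
    print(train_perceptron(data))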

6.4.Statistical

A statistical model is a mathematical model which is modified or trained by the input of data points. Statistical models are often, but not always, probabilistic.
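
As a concrete example of a model "trained by the input of data points", the sketch below fits a straight line y = a*x + b to data by least squares. The data points are invented for illustration.

    # A tiny statistical model: least-squares fit of y = a*x + b.

    def fit_line(points):
        n = len(points)
        sx = sum(x for x, _ in points)
        sy = sum(y for _, y in points)
        sxx = sum(x * x for x, _ in points)
        sxy = sum(x * y for x, y in points)
        a = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope
        b = (sy - a * sx) / n                           # intercept
        return a, b

    print(fit_line([(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.0)]))  # approx. (2.0, 0.05)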

In the 1990s, AI researchers developed sophisticated mathematical tools to solve specific subproblems. These tools are truly scientific, in the sense that their results are both measurable and verifiable, and they have been responsible for many of AI's recent successes. The shared mathematical language has also permitted a high level of collaboration with more established fields (like mathematics, economics or operations research). Stuart Russell and Peter Norvig describe this movement as nothing less than a "revolution" and "the victory of the neats." Critics argue that these techniques are too focused on particular problems and have failed to address the long term goal of general intelligence. There is an ongoing debate about the relevance and validity of statistical approaches in AI, exemplified in part by exchanges between Peter Norvig and Noam Chomsky.

7.PROBLEMS OF AI:

The general problem of simulating (or creating) intelligence has been broken down into a number of specific sub-problems. These consist of particular traits or capabilities that researchers would like an intelligent system to display. The traits described below have received the most attention.

Deduction, reasoning, problem solving:

Early AI researchers developed algorithms that imitated the step-by-step reasoning that humans use when they solve puzzles or make logical deductions. By the late 1980s and '90s, AI research had also developed highly successful methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.

For difficult problems, most of these algorithms can require enormous computational resources – most experience a "combinatorial explosion": the amount of memory or computer time required becomes astronomical when the problem goes beyond a certain size. The search for more efficient problem-solving algorithms is a high priority for AI research.
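
The combinatorial explosion is easy to demonstrate: a search tree with branching factor b has 1 + b + b^2 + ... + b^d nodes down to depth d, so node counts grow exponentially. The branching factor of 10 below is an arbitrary illustrative choice.

    # How fast a uniform search tree grows with depth.

    def tree_nodes(branching, depth):
        return sum(branching ** level for level in range(depth + 1))

    for d in (5, 10, 20):
        print(d, tree_nodes(10, d))   # at depth 20, already over 10**20 nodes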

Human beings solve most of their problems using fast, intuitive judgements rather than the conscious, step-by-step deduction that early AI research was able to model. AI has made some progress at imitating this kind of "sub-symbolic" problem solving: embodied agent approaches emphasize the importance of sensorimotor skills to higher reasoning; neural net research attempts to simulate the structures inside the brain that give rise to this skill; statistical approaches to AI mimic the probabilistic nature of the human ability to guess.

Knowledge Representation:

(COMMON SENSE KNOWLEDGE)

Knowledge representation and knowledge engineering are central to AI research. Many of the problems machines are expected to solve will require extensive knowledge about the world. Among the things that AI needs to represent are: objects, properties, categories and relations between objects; situations, events, states and time; causes and effects; knowledge about knowledge (what we know about what other people know); and many other, less well researched domains. A representation of "what exists" is an ontology (borrowing a word from traditional philosophy), of which the most general are called upper ontologies.
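
A very common (and very simplified) way to store such knowledge is as (subject, relation, object) triples, roughly the style used in ontologies. The facts below are invented examples.

    # Knowledge as subject-relation-object triples, with a trivial query.

    facts = {
        ("Tweety", "is_a", "bird"),
        ("bird", "subclass_of", "animal"),
        ("Tweety", "can", "fly"),
    }

    def query(subject, relation):
        """All objects related to the subject by the given relation."""
        return [o for s, r, o in facts if s == subject and r == relation]

    print(query("Tweety", "is_a"))   # -> ['bird']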

Among the most difficult problems in knowledge representation are:

Default reasoning and the qualification problem

Many of the things people know take the form of "working assumptions." For example, if a bird comes up in conversation, people typically picture an animal that is fist-sized, sings, and flies. None of these things are true about all birds. John McCarthy identified this problem in 1969 as the qualification problem: for any commonsense rule that AI researchers care to represent, there tend to be a huge number of exceptions. Almost nothing is simply true or false in the way that abstract logic requires. AI research has explored a number of solutions to this problem.
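
One naive way to encode such a working assumption is a default rule with an explicit exception list, as in the sketch below (the species lists are invented). The qualification problem is precisely that the exception list never ends.

    # Default reasoning: "birds fly" unless a known exception applies.

    FLIGHTLESS = {"penguin", "ostrich", "kiwi"}   # known exceptions, by hand

    def can_fly(species, is_bird=True):
        if not is_bird:
            return False
        return species not in FLIGHTLESS          # the default: birds fly

    print(can_fly("sparrow"))   # -> True, by default
    print(can_fly("penguin"))   # -> False, exception overrides the default
    # Every further exception (injured wing, wet feathers, ...) must also be
    # added by hand - which is exactly McCarthy's point.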

The breadth of commonsense knowledge

The number of atomic facts that the average person knows is astronomical. Research projects that attempt to build a complete knowledge base of commonsense knowledge (e.g., Cyc) require enormous amounts of laborious ontological engineering — they must be built, by hand, one complicated concept at a time. A major goal is to have the computer understand enough concepts to be able to learn by reading from sources like the internet, and thus be able to add to its own ontology.

The subsymbolic form of some commonsense knowledge

Much of what people know is not represented as "facts" or "statements" that they could express verbally. For example, a chess master will avoid a particular chess position because it "feels too exposed" or an art critic can take one look at a statue and instantly realize that it is a fake. These are intuitions or tendencies that are represented in the brain non-consciously and sub-symbolically. Knowledge like this informs, supports and provides a context for symbolic, conscious knowledge. As with the related problem of sub-symbolic reasoning, it is hoped that situated AI, computational intelligence, or statistical AI will provide ways to represent this kind of knowledge.

Planning:

Intelligent agents must be able to set goals and achieve them. They need a way to visualize the future (they must have a representation of the state of the world and be able to make predictions about how their actions will change it) and be able to make choices that maximize the utility (or "value") of the available choices.

In classical planning problems, the agent can assume that it is the only thing acting on the world and it can be certain what the consequences of its actions may be. However, if the agent is not the only actor, it must periodically ascertain whether the world matches its predictions and it must change its plan as this becomes necessary, requiring the agent to reason under uncertainty.
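
Under exactly these classical assumptions (a single actor and deterministic actions), planning can be done by plain state-space search. The sketch below finds a plan by breadth-first search in an invented two-square vacuum world; real planners use far more compact representations.

    # A minimal classical planner: BFS from the start state to a goal state.

    from collections import deque

    def successors(state):
        pos, dirt = state
        result = []
        if pos in dirt:                                # action: suck
            result.append(("suck", (pos, dirt - {pos})))
        other = "B" if pos == "A" else "A"
        result.append(("move", (other, dirt)))         # action: move
        return result

    def plan(start, goal_test):
        frontier, seen = deque([(start, [])]), {start}
        while frontier:
            state, actions = frontier.popleft()
            if goal_test(state):
                return actions
            for action, nxt in successors(state):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, actions + [action]))

    start = ("A", frozenset({"A", "B"}))               # robot at A, both squares dirty
    print(plan(start, lambda s: not s[1]))             # -> ['suck', 'move', 'suck']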

Multi-agent planning uses the cooperation and competition of many agents to achieve a given goal. Emergent behavior such as this is used by evolutionary algorithms and swarm intelligence.

Learning:

The main goal of machine learning is to acquire knowledge from users, input data and so on, improving the system's ability to solve more problems, make fewer mistakes, and solve problems more efficiently. Machine learning has been central to AI research from the beginning. In 1956, at the original Dartmouth AI summer conference, Ray Solomonoff wrote a report on unsupervised probabilistic machine learning: "An Inductive Inference Machine". Because learning is very complicated, much of the research focuses on concept learning, which consists in finding a classification function that distinguishes the entities that are instances of a concept from those that are not. Unsupervised learning is the ability to find patterns in a stream of input. Supervised learning includes both classification and numerical regression. Classification is used to determine what category something belongs in, after seeing a number of examples of things from several categories. Regression is the attempt to produce a function that describes the relationship between inputs and outputs and predicts how the outputs should change as the inputs change. In reinforcement learning the agent is rewarded for good responses and punished for bad ones. These can be analyzed in terms of decision theory, using concepts like utility. The mathematical analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory.
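
Classification, the simplest of these settings, can be illustrated with a one-nearest-neighbour classifier: label a new point with the class of its closest training example. The data points below are invented.

    # Supervised learning sketch: 1-nearest-neighbour classification.

    def nearest_neighbor(examples, query):
        def distance(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        point, label = min(examples, key=lambda e: distance(e[0], query))
        return label

    examples = [((1.0, 1.0), "cat"), ((1.2, 0.8), "cat"),
                ((5.0, 5.0), "dog"), ((4.8, 5.3), "dog")]
    print(nearest_neighbor(examples, (1.1, 0.9)))   # -> cat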

Natural Language Processing:

Natural language processing gives machines the ability to read and understand the languages that humans speak. A sufficiently powerful natural language processing system would enable natural language user interfaces and the acquisition of knowledge directly from human-written sources, such as Internet texts. Some straightforward applications of natural language processing include information retrieval (or text mining) and machine translation.

A common method of processing and extracting meaning from natural language is semantic indexing. Increases in processing speed and the falling cost of data storage make indexing large volumes of abstractions of the user's input much more efficient.
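
The data structure behind most text retrieval is the inverted index, which maps each word to the documents containing it; semantic indexing builds on the same idea using abstractions instead of raw words. A minimal sketch, with invented documents:

    # An inverted index: word -> set of documents containing it.

    from collections import defaultdict

    docs = {
        1: "machines that think like humans",
        2: "machines that act rationally",
        3: "humans act on intuition",
    }

    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.lower().split():
            index[word].add(doc_id)

    # Retrieve documents containing every query word.
    query = ["machines", "act"]
    print(set.intersection(*(index[w] for w in query)))   # -> {2}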

Motion and manipulation:

(Robotics)

The field of robotics is closely related to AI. Intelligence is required for robots to be able to handle such tasks as object manipulation and navigation, with sub-problems of localization (knowing where you are, or finding out where other things are), mapping (learning what is around you, building a map of the environment), and motion planning (figuring out how to get there) or path planning (going from one point in space to another point, which may involve compliant motion - where the robot moves while maintaining physical contact with an object).
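
Path planning in its simplest form is search on a grid. The sketch below finds a route around obstacles by breadth-first search; a real motion planner works in continuous space with kinematic constraints, and the grid here is invented.

    # Grid path planning by BFS; '#' marks an obstacle cell.

    from collections import deque

    def find_path(grid, start, goal):
        rows, cols = len(grid), len(grid[0])
        frontier, seen = deque([[start]]), {start}
        while frontier:
            path = frontier.popleft()
            r, c = path[-1]
            if (r, c) == goal:
                return path
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and grid[nr][nc] != "#" and (nr, nc) not in seen):
                    seen.add((nr, nc))
                    frontier.append(path + [(nr, nc)])

    grid = [".#.",
            ".#.",
            "..."]
    print(find_path(grid, (0, 0), (0, 2)))   # route down, across and back up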

Perception:

Machine perception is the ability to use input from sensors (such as cameras, microphones, sonar and others more exotic) to deduce aspects of the world. Computer vision is the ability to analyze visual input. A few selected subproblems are speech recognition, facial recognition and object recognition.

Social intelligence:

(Affective computing)

Affective computing is the study and development of systems and devices that can recognize, interpret, process, and simulate human affects. It is an interdisciplinary field spanning computer sciences, psychology, and cognitive science. While the origins of the field may be traced as far back as to early philosophical enquiries into emotion, the more modern branch of computer science originated with Rosalind Picard's 1995 paper on affective computing. A motivation for the research is the ability to simulate empathy. The machine should interpret the emotional state of humans and adapt its behaviour to them, giving an appropriate response for those emotions.

Emotion and social skills play two roles for an intelligent agent. First, it must be able to predict the actions of others, by understanding their motives and emotional states. (This involves elements of game theory, decision theory, as well as the ability to model human emotions and the perceptual skills to detect emotions.) Also, in an effort to facilitate human-computer interaction, an intelligent machine might want to be able to display emotions—even if it does not actually experience them itself—in order to appear sensitive to the emotional dynamics of human interaction.

Creativity:

A sub-field of AI addresses creativity both theoretically (from a philosophical and psychological perspective) and practically (via specific implementations of systems that generate outputs that can be considered creative, or systems that identify and assess creativity). Related areas of computational research are Artificial intuition and Artificial imagination.

General intelligence:

(Strong AI and AI-complete)

Most researchers think that their work will eventually be incorporated into a machine with general intelligence (known as strong AI), combining all the skills above and exceeding human abilities at most or all of them. A few believe that anthropomorphic features like artificial consciousness or an artificial brain may be required for such a project.

Many of the problems above are considered AI-complete: to solve one problem, you must solve them all. For example, even a straightforward, specific task like machine translation requires that the machine follow the author's argument (reason), know what is being talked about (knowledge), and faithfully reproduce the author's intention (social intelligence). Machine translation, therefore, is believed to be AI-complete: it may require strong AI to be done as well as humans can do it.

Are we going in the right direction (Conclusion):

Although AI is making steady progress, at this rate it would take centuries before AI is fully developed. The reason lies in the way everyone is approaching AI development. There has been no proper definition of, or agreement on, important concepts such as what counts as intelligence and what exactly we are aiming at. Without a well-defined objective, this always restricts the flow of work.

It can be said clearly now that, before trying any further to develop the technology itself, we need to develop our understanding of the technology. The concept of developing an agent that acts intelligently on its own belongs, without doubt, to computer science. But the study and understanding required to get there cannot be provided by that field alone. So it is time to refresh our approach, and the very methods we use for learning about these things.

The basic element in nature which gave us the idea of exploring AI in the first place, human intelligence, has to be explored first, to the deepest level, and this is not being done now. The vision should also be more one of deciphering the programmed intelligence behind that intelligence, and the various body organs involved in those complex functions, rather than just another biological study. Comparisons must be made, and theories should be formed about the possibilities.

This kind of process with a proper vision and objective clear ahead may yield better results than the present approaches.

In conclusion, it can be said that AI has made great progress in its short history, but the final sentence of Alan Turing's essay "Computing Machinery and Intelligence" is still valid today:

“We can see only a short distance ahead, but we can see that much remains to be done.”

References:

Noam Chomsky, "On Where Artificial Intelligence Went Wrong". http://www.theatlantic.com/technology/archive/2012/11/noam-chomsky-on-where-artificial-intelligence-went-wrong/261637/?single_page=true

Wikipedia:
http://en.wikipedia.org/wiki/Artificial_Intelligence
http://en.wikipedia.org/wiki/Artificial_Intelligence#History
http://en.wikipedia.org/wiki/Artificial_Intelligence#Approaches
http://en.wikipedia.org/wiki/Artificial_Intelligence#Problems
http://en.wikipedia.org/wiki/Neats_and_scruffies
http://en.wikipedia.org/wiki/Portal:Artificial_intelligence
http://en.wikipedia.org/wiki/GOFAI
http://en.wikipedia.org/wiki/Synthetic_intelligence
http://en.wikipedia.org/wiki/Strong_AI
http://en.wikipedia.org/wiki/Weak_AI
http://en.wikipedia.org/wiki/Intelligence
http://en.wikipedia.org/wiki/Intelligence#Definitions

Stuart J. Russell & Peter Norvig, Artificial Intelligence: A Modern Approach.

Howard Gardner, Frames of Mind.

Types of Intelligence. http://www.macalester.edu/academics/psychology/whathap/ubnrp/intelligence05/mtypes.html