Introduction

Artificial Intelligence is a field with more than half a century of history. It began in earnest in the late 1940s with the emergence of the first electronic computers: machines that could store large amounts of data and process it into information at high speed. Although the field was born in the 1940s, it attracted little attention at the time; it was only in the 1980s that Artificial Intelligence gained broad economic and managerial acclaim. During this period the field underwent a major transition, from a primarily academic research area to one with potential commercial applications. After that transition, Artificial Intelligence was accepted as an emerging technology and was enthusiastically adopted by many kinds of users. A major reason for its acceptance was that Artificial Intelligence does not replace people; rather, it frees experts from routine, simple problems, leaving them to tackle the complex ones. Among its chief advantages, Artificial Intelligence helps avoid mistakes and makes it possible to respond quickly to any problem that may arise.

Meaning and Definition

George Luger and William Stubblefield define Artificial Intelligence as the branch of computer science concerned with the automation of intelligent behavior. Dan Patterson defines it as the branch of computer science concerned with the study and creation of computer systems that exhibit some form of intelligence: systems that learn new concepts and tasks, systems that reason and draw useful conclusions about the world around us, systems that understand natural language or perceive and comprehend a visual scene, and systems that perform other feats that would ordinarily require human intelligence. Artificial Intelligence can thus be understood as the technology that enables computers to be applied to fields requiring knowledge, perception, reasoning, understanding and other cognitive abilities. It is the science and engineering of making intelligent machines, and it is closely related to the similar task of using computers to understand human intelligence. Human intelligence is also referred to as natural intelligence, and the comparison below between Natural Intelligence and Artificial Intelligence helps in understanding both concepts and the basic differences between them.

(Source: www.mbaofficial.com/mba-courses/.)
History

Main articles: History of artificial intelligence and Timeline of artificial intelligence

Thinking machines and artificial beings appear in Greek myths, such as Talos of Crete, the bronze robot of Hephaestus, and Pygmalion's Galatea.[13] Human likenesses believed to have intelligence were built in every major civilization: animated cult images were worshiped in Egypt and Greece[14] and humanoid automatons were built by Yan Shi, Hero of Alexandria and Al-Jazari.[15] It was also widely believed that artificial beings had been created by Jābir ibn Hayyān, Judah Loew and Paracelsus.[16] By the 19th and 20th centuries, artificial beings had become a common feature in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R. (Rossum's Universal Robots).[17] Pamela McCorduck argues that all of these are examples of an ancient urge, as she describes it, "to forge the gods".[9] Stories of these creatures and their fates discuss many of the same hopes, fears and ethical concerns that are presented by artificial intelligence.

Mechanical or "formal" reasoning has been developed by philosophers and mathematicians since antiquity. The study of logic led directly to the invention of the programmable digital electronic computer, based on the work of mathematician Alan Turing and others. Turing's theory of computation suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable act of mathematical deduction.[18][19] This, along with concurrent discoveries in neurology, information theory and cybernetics, inspired a small group of researchers to begin to seriously consider the possibility of building an electronic brain.[20]

The field of AI research was founded at a conference on the campus of Dartmouth College in the summer of 1956.[21] The attendees, including John McCarthy, Marvin Minsky, Allen Newell and Herbert Simon, became the leaders of AI research for many decades.[22] They and their students wrote programs that were, to most people, simply astonishing:[23] computers were solving word problems in algebra, proving logical theorems and speaking English.[24] By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense[25] and laboratories had been established around the world.[26] AI's founders were profoundly optimistic about the future of the new field: Herbert Simon predicted that "machines will be capable, within twenty years, of doing any work a man can do" and Marvin Minsky agreed, writing that "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved".[27]

They had failed to recognize the difficulty of some of the problems they faced.[28] In 1974, in response to the criticism of Sir James Lighthill and ongoing pressure from the US Congress to fund more productive projects, both the U.S. and British governments cut off all undirected exploratory research in AI. The next few years would later be called an "AI winter",[29] a period when funding for AI projects was hard to find.

In the early 1980s, AI research was revived by the commercial success of expert systems,[30] a form of AI program that simulated the knowledge and analytical skills of one or more human experts. By 1985 the market for AI had reached over a billion dollars. At the same time, Japan's fifth generation computer project inspired the U.S. and British governments to restore funding for academic research in the field.[31] However, beginning with the collapse of the Lisp machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting AI winter began.[32]

In the 1990s and early 21st century, AI achieved its greatest successes, albeit somewhat behind the scenes. Artificial intelligence is used for logistics, data mining, medical diagnosis and many other areas throughout the technology industry.[12] The success was due to several factors: the increasing computational power of computers (see Moore's law), a greater emphasis on solving specific subproblems, the creation of new ties between AI and other fields working on similar problems, and a new commitment by researchers to solid mathematical methods and rigorous scientific standards.[33]

On 11 May 1997, Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov.[34] In 2005, a Stanford robot won the DARPA Grand Challenge by driving autonomously for 131 miles along an unrehearsed desert trail.[35] Two years later, a team from CMU won the DARPA Urban Challenge when their vehicle autonomously navigated 55 miles in an urban environment while responding to traffic hazards and adhering to all traffic laws.[36] In 2010, a definition of intelligence based on primitive semantics was proposed, suggesting that the intelligence usually meant by the term is human-level intelligence.[37] In February 2011, in a Jeopardy! quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin.[38] The Kinect, which provides a 3D body-motion interface for the Xbox 360 and the Xbox One, uses algorithms that emerged from lengthy AI research,[39] as does the iPhone's Siri.
The Advantages of Artificial Intelligence (AI)

Jobs - depending on the level and type of intelligence these machines receive in the future, it will obviously affect the type of work they can do, and how well they can do it (they can become more efficient). As the level of AI increases, so will their competency to deal with difficult, complex, even dangerous tasks that are currently done by humans - a form of applied artificial intelligence.

They don't stop - as they are machines, there is no need for sleep; they don't get ill; there is no need for breaks or Facebook; they are able to go, go, go! They may obviously need to be charged or refuelled, but the point is that they are going to get a lot more work done than we can. Take the finance industry, for example: there are constant stories about artificial intelligence in finance suggesting that stock traders may soon be a thing of the past.

No risk of harm - when we are exploring new undiscovered land or even planets, if a machine gets broken or "dies", there is no harm done, as machines don't feel and don't have emotions; whereas for humans, going on the same kind of expedition a machine does may simply not be possible, or would mean exposure to high-risk situations.

Act as aids - they can act as 24/7 aids to children with disabilities or the elderly; they could even act as a source for learning and teaching. They could even be part of security, alerting you to possible fires or fending off crime.

Their function is almost limitless - as the machines will be able to do almost everything (but just better), their uses have essentially no boundaries. They will make fewer mistakes, they are emotionless, they are more efficient, and they are basically giving us more free time to do as we please.

The Disadvantages of Artificial Intelligence (AI)

Over-reliance on AI - as you may have seen in many films, such as The Matrix, I, Robot or even kids' films such as WALL-E, if we rely on machines to do almost everything for us, we become very dependent - so much so that they have the potential to ruin our lives if something were to go wrong. Although the films are essentially just fiction, it wouldn't be too smart not to have some sort of backup plan for potential issues on our part.

Human feel - as they are machines, they obviously can't provide you with that "human touch and quality": the feeling of togetherness and emotional understanding. Machines will lack the ability to sympathise and empathise with your situation, and may act irrationally as a consequence.

Inferiority - if machines can perform almost every task better than us in practically all respects, they will take over many of our jobs, which could leave masses of people jobless and feeling essentially useless. This could in turn lead to problems such as mental illness and obesity.

Misuse - there is no doubt that this level of technology in the wrong hands could cause mass destruction: robot armies could be formed, or machines could malfunction or be corrupted, and then we could be facing a scene similar to that of Terminator (hey, you never know).

Ethically wrong? - people say that the gift of intuition and intelligence was God's gift to mankind, and that to replicate it would be to 'play God' - and therefore that it is not right even to attempt to clone our intelligence.
Applications of artificial intelligence

Artificial intelligence has been used in a wide range of fields including medical diagnosis, stock trading, robot control, law, remote sensing, scientific discovery and toys. However, many AI applications are not perceived as AI: "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore," Nick Bostrom reports.[1] "Many thousands of AI applications are deeply embedded in the infrastructure of every industry."[2] In the late 1990s and early 21st century, AI technology became widely used as elements of larger systems,[2][3] but the field is rarely credited for these successes.

Finance

Banks use artificial intelligence systems to organize operations, invest in stocks, and manage properties. In August 2001, robots beat humans in a simulated financial trading competition.[4] Financial institutions have long used artificial neural network systems to detect charges or claims outside of the norm, flagging these for human investigation.

Hospitals and medicine

A medical clinic can use artificial intelligence systems to organize bed schedules, plan staff rotations, and provide medical information. Artificial neural networks are used as clinical decision support systems for medical diagnosis, such as in Concept Processing technology in EMR software. Other tasks in medicine that can potentially be performed by artificial intelligence include computer-aided interpretation of medical images. Such systems help scan digital images, e.g. from computed tomography, for typical appearances and highlight conspicuous sections, such as possible diseases; a typical application is the detection of a tumor. Another example is heart sound analysis.[5]

Heavy industry

Robots have become common in many industries. They are often given jobs that are considered dangerous to humans. Robots have proven effective in jobs that are very repetitive, where a lapse in concentration may lead to mistakes or accidents, and in jobs which humans may find degrading. Japan is the world leader in using and producing robots. In 1999, 1,700,000 robots were in use worldwide. For more information, see the survey[6] on artificial intelligence in business.

Online and telephone customer service

Artificial intelligence is implemented in automated online assistants, which appear as avatars on web pages providing customer service.[7] They can help enterprises reduce their operation and training costs.[7] A major underlying technology of such systems is natural language processing.[7] Similar techniques may be used in the answering machines of call centres, such as speech recognition software that allows computers to handle the first level of customer support, text mining and natural language processing for better customer handling, agent training by automatic mining of best practices from past interactions, support automation, and many other technologies that improve agent productivity and customer satisfaction.[8]

Transportation

Fuzzy logic controllers have been developed for automatic gearboxes in automobiles (the 2006 Audi TT, VW Touareg[citation needed] and VW Caravelle feature the DSP transmission, which utilizes fuzzy logic; a number of Škoda variants, such as the Škoda Fabia, also currently include a fuzzy-logic-based controller).

Telecommunications

Many telecommunications companies make use of heuristic search in the management of their workforces; for example, BT Group has deployed heuristic search[9] in a scheduling application that provides the work schedules of 20,000 engineers.

Toys and games

The 1990s saw some of the first attempts to mass-produce domestically aimed types of basic artificial intelligence for education or leisure. This prospered greatly with the Digital Revolution, and helped introduce people, especially children, to a life of dealing with various types of artificial intelligence, specifically in the form of Tamagotchis and Giga Pets, the iPod Touch, the Internet (basic search engine interfaces are one simple example), and the first widely released robot, Furby. A mere year later, an improved type of domestic robot was released in the form of Aibo, a robotic dog with intelligent features and autonomy. AI has also been applied to video games.

Music

The evolution of music has always been affected by technology. With AI, scientists are trying to make the computer emulate the activities of the skillful musician. Composition, performance, music theory, and sound processing are some of the major areas on which research in music and artificial intelligence is focusing.

Aviation

The Air Operations Division (AOD) uses AI for rule-based expert systems. The AOD uses artificial intelligence for surrogate operators in combat and training simulators, mission management aids, support systems for tactical decision making, and post-processing of simulator data into symbolic summaries. The use of artificial intelligence in simulators is proving to be very useful for the AOD. Airplane simulators use artificial intelligence to process data taken from simulated flights. Other than simulated flying, there is also simulated aircraft warfare. The computers are able to come up with the best success scenarios in these situations, and can also create strategies based on the placement, size, speed and strength of the forces and counter-forces. Pilots may be given assistance in the air during combat by computers. Artificial intelligence programs can sort the information and provide the pilot with the best possible maneuvers, as well as ruling out maneuvers that would be impossible for a human being to perform. Multiple aircraft are needed to get good approximations for some calculations, so computer-simulated pilots are used to gather data. These computer-simulated pilots are also used to train future air traffic controllers.

The system used by the AOD to measure performance was the Interactive Fault Diagnosis and Isolation System, or IFDIS. It is a rule-based expert system built by collecting information from TF-30 documents and the expert advice of mechanics who work on the TF-30. This system was designed to be used for the development of the TF-30 for the RAAF F-111C. The performance system was also used to replace specialized workers: it allowed the regular workers to communicate with the system and avoid mistakes, miscalculations, or having to speak to one of the specialized workers.

The AOD also uses artificial intelligence in speech recognition software. The air traffic controllers give directions to the artificial pilots, and the AOD wants the pilots to respond to the ATC with simple responses. The programs that incorporate the speech software must be trained, which means they use neural networks. The program used, the Verbex 7000, is still a very early program that has plenty of room for improvement. The improvements are imperative because ATCs use very specific dialog and the software needs to be able to communicate correctly and promptly every time.

The Artificial Intelligence supported Design of Aircraft,[10] or AIDA, is used to help designers in the process of creating conceptual designs of aircraft. This program allows the designers to focus more on the design itself and less on the design process. The software also allows the user to focus less on the software tools. AIDA uses rule-based systems to compute its data. Although simple, the program is proving effective.

In 2003, NASA's Dryden Flight Research Center, and many other companies, created software that could enable a damaged aircraft to continue flight until a safe landing zone can be reached. The software compensates for the damaged components by relying on the undamaged ones. The neural network used in the software proved to be effective and marked a triumph for artificial intelligence. The Integrated Vehicle Health Management system, also used by NASA, must process and interpret data taken from the various sensors on board an aircraft. The system needs to be able to determine the structural integrity of the aircraft and to implement protocols in case of any damage taken by the vehicle.

News, Publishing & Writing

The company Narrative Science makes computer-generated news and reports commercially available, including summaries of team sporting events in English based on statistical data from the game. It also creates financial reports and real estate analyses.[11] Another company, called Yseop, uses artificial intelligence to turn structured data into intelligent comments and recommendations in natural language. Yseop is able to write financial reports, executive summaries, personalized sales or marketing documents and more at a speed of thousands of pages per second and in multiple languages including English, Spanish, French and German.[12]

Other

Various tools of artificial intelligence are also being widely deployed in homeland security, speech and text recognition, data mining, and e-mail spam filtering. Applications are also being developed for gesture recognition (understanding of sign language by machines), individual voice recognition, global voice recognition (from a variety of people in a noisy room), and facial expression recognition for interpretation of emotion and non-verbal cues. Other applications are robot navigation, obstacle avoidance, and object recognition.[citation needed]
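One of the applications listed above, e-mail spam filtering, is commonly built on a naive Bayes text classifier. The sketch below is a toy illustration only: the tiny training corpus and word-level model are assumptions for this example, not how any particular production filter works.

```python
import math
from collections import Counter

# Toy training data (invented for illustration; real filters train on large corpora).
spam = ["win money now", "cheap pills win", "money money cheap"]
ham = ["meeting at noon", "project status report", "lunch at noon"]

def train(docs):
    words = Counter()
    for d in docs:
        words.update(d.split())
    return words

spam_words, ham_words = train(spam), train(ham)
spam_total, ham_total = sum(spam_words.values()), sum(ham_words.values())
vocab = set(spam_words) | set(ham_words)

def score(message, words, total):
    # Sum of log-probabilities, with Laplace smoothing to avoid zero counts.
    s = 0.0
    for w in message.split():
        s += math.log((words[w] + 1) / (total + len(vocab)))
    return s

def classify(message):
    spam_score = score(message, spam_words, spam_total)
    ham_score = score(message, ham_words, ham_total)
    return "spam" if spam_score > ham_score else "ham"

print(classify("win cheap money"))        # spam
print(classify("status meeting at noon")) # ham
```

The same scheme scales to real corpora by adding class priors and much larger vocabularies; the core idea of comparing smoothed log-likelihoods is unchanged.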
Artificial Intelligence Is the Most Important Technology of the Future...
Artificial Intelligence is a set of tools that are driving forward key parts of the futurist agenda, sometimes at a rapid clip. The last few years have seen a slew of surprising advances: the IBM supercomputer Watson, which beat two champions of Jeopardy!; self-driving cars that have logged over 300,000 accident-free miles and are officially legal in three states; and statistical learning techniques conducting pattern recognition on complex data sets, from consumer interests to trillions of images. In this post, I'll bring you up to speed on what is happening in AI today, and talk about potential future applications. Any brief overview of AI will necessarily be incomplete, but I'll describe a few of the most exciting items.

The key applications of Artificial Intelligence are in any area that involves more data than humans can handle on our own, but decisions simple enough that an AI can get somewhere with them: big data, lots of little rote operations that add up to something useful. An example is image recognition; by doing rigorous, repetitive, low-level calculations on image features, we now have services like Google Goggles, where you take an image of something, say a landmark, and Google tries to recognize what it is. Services like these are the first stirrings of Augmented Reality (AR).

It's easy to see how this kind of image recognition can be applied to repetitive tasks in biological research. One such difficult task is brain mapping, an area that underlies dozens of transhumanist goals. The leader in this area is Sebastian Seung at MIT, who develops software to automatically determine the shape of neurons and locate synapses. Seung developed a fundamentally new kind of computer vision for automating work towards building connectomes, which detail the connections between all neurons. These are a key step towards building computers that simulate the human brain.

As an example of how difficult it is to build a connectome without AI, consider the case of the roundworm C. elegans, the only completed connectome to date. Although electron microscopy was used to exhaustively map the brain of this worm in the 1970s and 80s, it took more than a decade of work to piece this data into a full map of the worm's brain. This is despite that brain containing just 7,000 connections between 300 neurons. By comparison, the human brain contains 100 trillion connections between 100 billion neurons. Without sophisticated AI, mapping it will be hopeless.

There's another closely related area that depends on AI to make progress: cognitive prostheses. These are brain implants that can perform the role of a part of the brain that has been damaged. Imagine a prosthesis that restores crucial memories to Alzheimer's patients. The feasibility of a prosthesis of the hippocampus, the part of the brain responsible for memory, was proven recently by Theodore Berger at the University of Southern California: a rat with its hippocampus chemically disabled was able to form new memories with the aid of an implant.

The way these implants are built is by carefully recording the neural signals of the brain and making a device that mimics the way they work. The device itself uses an artificial neural network, which Berger calls a High-density Hippocampal Neuron Network Processor. Painstaking observation of the brain region in question is needed to build a model detailed enough to stand in for the original. Without neural network techniques (a subcategory of AI) and abundant computing power, this approach would never work.

Bringing the overview back to more everyday tech, consider all the AI that will be required to make the vision of Augmented Reality mature. AR, as exemplified by Google Glass, uses computer glasses to overlay graphics on the real world. For the tech to work, it needs to quickly analyze what the viewer is seeing and generate graphics that provide useful information. To be useful, the glasses have to be able to identify complex objects from any direction, under any lighting conditions, no matter the weather. To be useful to a driver, for instance, the glasses would need to identify roads and landmarks faster and more effectively than any current technology allows. AR is not there yet, but probably will be within the next ten years. All of this falls into the category of advances in computer vision, part of AI.

Finally, let's consider some of the recent advances in building AI scientists. In 2009, Adam became the first robot to discover new scientific knowledge, having to do with the genetics of yeast. The robot, which consists of a small room filled with experimental equipment connected to a computer, came up with its own hypothesis and tested it. Though the context and the experiment were simple, this milestone points to a new world of robotic possibilities. This is where the intersection between AI and other transhumanist areas, such as life extension research, could become profound.

Many experiments in life science and biochemistry require a great deal of trial and error. Certain experiments are already automated with robotics, but what about computers that formulate and test their own hypotheses? Making this feasible would require the computer to understand a great deal of common-sense knowledge, as well as specialized knowledge about the subject area. Consider a robot scientist like Adam with the object-level knowledge of the Jeopardy!-winning Watson supercomputer. This could be built today in theory, but it will probably be a few years before anything like it is built in practice. Once it is, it's difficult to say what the scientific returns could be, but they could be substantial. We'll just have to build it and find out.

That concludes this brief overview. There are many other interesting trends in AI, but machine vision, cognitive prostheses, and robotic scientists are among the most interesting and relevant to futurist goals.

I would like to thank Michael Anissimov, a fellow transhumanist and author of the Accelerating Future blog, for contributing this piece.
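The artificial neural networks mentioned throughout this piece are built from very simple units. As a hedged, minimal sketch (a single perceptron learning the logical AND function, nothing like the scale of the systems described above), the core train-and-adjust loop looks like this:

```python
# A minimal artificial neuron (perceptron). Toy illustration only: real neural
# networks use many layers of such units and more sophisticated training.

def train_perceptron(samples, epochs=20, lr=0.1):
    # samples: list of ((x1, x2), label) pairs with label 0 or 1
    w = [0.0, 0.0]  # connection weights
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), y in samples:
            # Fire if the weighted sum of inputs exceeds the threshold.
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred
            # Nudge weights toward reducing the error.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn the logical AND function from examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
for (x1, x2), y in data:
    pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
    print((x1, x2), pred)
```

The point of the sketch is that the network is never told the rule for AND; it converges on it by repeatedly correcting its own mistakes, which is the same principle behind the much larger networks used in connectome analysis and cognitive prostheses.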
Branches of AI
logical AIWhat a program knows about the world in general the
facts of the specific situation in which it must act, and its goals
are all represented by sentences of some mathematical logical
language. The program decides what to do by inferring that certain
actions are appropriate for achieving its goals. The first article
proposing this was [McC59]. [McC89] is a more recent summary.
[McC96b] lists some of the concepts involved in logical aI. [Sha97]
is an important text.searchAI programs often examine large numbers
of possibilities, e.g. moves in a chess game or inferences by a
theorem proving program. Discoveries are continually made about how
to do this more efficiently in various domains.pattern
recognitionWhen a program makes observations of some kind, it is
often programmed to compare what it sees with a pattern. For
example, a vision program may try to match a pattern of eyes and a
nose in a scene in order to find a face. More complex patterns,
e.g. in a natural language text, in a chess position, or in the
history of some event are also studied. These more complex patterns
require quite different methods than do the simple patterns that
have been studied the most.representationFacts about the world have
to be represented in some way. Usually languages of mathematical
logic are used.inferenceFrom some facts, others can be inferred.
Mathematical logical deduction is adequate for some purposes, but
new methods ofnon-monotonicinference have been added to logic since
the 1970s. The simplest kind of non-monotonic reasoning is default
reasoning in which a conclusion is to be inferred by default, but
the conclusion can be withdrawn if there is evidence to the
contrary. For example, when we hear of a bird, we man infer that it
can fly, but this conclusion can be reversed when we hear that it
is a penguin. It is the possibility that a conclusion may have to
be withdrawn that constitutes the non-monotonic character of the
reasoning. Ordinary logical reasoning is monotonic in that the set
of conclusions that can the drawn from a set of premises is a
monotonic increasing function of the premises. Circumscription is
another form of non-monotonic reasoning.common sense knowledge and
reasoningThis is the area in which AI is farthest from human-level,
in spite of the fact that it has been an active research area since
the 1950s. While there has been considerable progress, e.g. in
developing systems of non-monotonic reasoning and theories of action,
yet more new ideas are needed. The Cyc system contains a large but
spotty collection of common sense facts.

Learning from experience
Programs do that. The approaches to AI based on connectionism and
neural nets specialize in that. There is also learning of laws
expressed in logic. [Mit97] is a comprehensive undergraduate text on
machine learning. Programs can only learn what facts or behaviors
their formalisms can represent, and unfortunately learning systems
are almost all based on very limited abilities to represent
information.

Planning
Planning programs start with general facts about the world
(especially facts about the effects of actions), facts about the
particular situation and a statement of a goal. From these, they
generate a strategy for achieving the goal. In the most common
cases, the strategy is just a sequence of actions.

Epistemology
This is a study of the kinds of knowledge that are required for
solving problems in the world.

Ontology
Ontology is the study of the kinds of things that exist. In AI, the
programs and sentences deal with various kinds of objects, and we
study what these kinds are and what their basic properties are.
Emphasis on ontology begins in the 1990s.

Heuristics
A heuristic is a way of trying to discover something, or an idea
embedded in a program. The term is used variously in AI. Heuristic
functions are used in some approaches to search to measure how far a
node in a search tree seems to be from a goal. Heuristic predicates
that compare two nodes in a search tree to see if one is better than
the other, i.e. constitutes an advance toward the goal, may be more
useful. [My opinion]

Genetic programming
Genetic programming is a technique for getting programs to solve a
task by mating random Lisp programs and selecting the fittest over
millions of generations. It is being developed by John Koza's group,
and here's a tutorial.
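Koza's approach mates Lisp programs, but the underlying
select-crossover-mutate loop is easier to see on bitstrings. Below is
a minimal genetic-algorithm sketch, not Koza's genetic programming:
the target string, mutation rate, and population size are all
illustrative choices.

```python
import random

random.seed(0)                 # for reproducibility of this sketch
TARGET = [1] * 20              # illustrative goal: an all-ones bitstring

def fitness(genome):
    # Count bits that match the target; higher is fitter.
    return sum(g == t for g, t in zip(genome, TARGET))

def mate(a, b):
    # Single-point crossover followed by an occasional point mutation.
    cut = random.randrange(len(a))
    child = a[:cut] + b[cut:]
    if random.random() < 0.1:
        i = random.randrange(len(child))
        child[i] ^= 1
    return child

def evolve(pop_size=30, generations=200):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]   # selection: keep the fittest half
        children = [mate(random.choice(parents), random.choice(parents))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Over the generations, selection concentrates the population on ever
fitter genomes, which is the "selecting the fittest" idea described
above in its simplest form.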
Features of AI Work:
- Use of symbolic reasoning.
- Focus on problems that do not respond to algorithmic solution (heuristic).
- Work on problems with inexact, missing, or poorly defined information.
- Provide answers that are sufficient but not exact.
- Deal with semantics as well as syntax.
- Work with qualitative rather than quantitative knowledge.
- Use large amounts of domain-specific knowledge.
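The heuristic emphasis listed above can be made concrete with a toy
example: a greedy best-first search that orders candidate grid cells
by a Manhattan-distance heuristic function, i.e. a "best guess" of
how far each node is from the goal. The grid size, walls and
coordinates are invented for illustration.

```python
import heapq

def best_first(start, goal, walls, size=5):
    """Greedy best-first search on a small grid, expanding the frontier
    cell that a Manhattan-distance heuristic rates closest to the goal."""
    def h(cell):
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), start, [start])]
    seen = {start}
    while frontier:
        _, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in walls and nxt not in seen):
                seen.add(nxt)
                heapq.heappush(frontier, (h(nxt), nxt, path + [nxt]))
    return None  # goal unreachable

path = best_first((0, 0), (4, 4), walls={(2, 0), (2, 1), (2, 2)})
```

The heuristic does not guarantee the shortest route; it merely biases
the search toward promising cells, which is exactly the "sufficient
but not exact" trade-off listed above.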
Comparison between intelligent computing and conventional computing:

Intelligent Computing | Conventional Computing
1. Does not guarantee a solution to a given problem. | 1. Guarantees a solution to a given problem.
2. Results may not be reliable and consistent. | 2. Results are consistent and reliable.
3. Programmer does not tell the system how to solve the given problem. | 3. Programmer tells the system exactly how to solve the problem.
4. Can solve a range of problems in a given domain. | 4. Can solve only one problem at a time in a given domain.
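Row 1 of the comparison can be illustrated with hill climbing, a
typical intelligent-computing method: it improves a guess step by
step but may settle on a local optimum, so a (globally best) solution
is not guaranteed. The two-peak function below is a made-up example.

```python
def hill_climb(f, x, step=1, iters=100):
    """Greedy hill climbing over the integers: move to a neighbour
    whenever it improves f, stop when no neighbour is better."""
    for _ in range(iters):
        best = max((x - step, x, x + step), key=f)
        if best == x:
            break  # local optimum: no neighbour improves
        x = best
    return x

# A function with two peaks: f(3) = 9 (local) and f(10) = 20 (global).
def f(x):
    return -(x - 3) ** 2 + 9 if x < 7 else -(x - 10) ** 2 + 20

print(hill_climb(f, 0))   # climbs to the nearby peak at x = 3
print(hill_climb(f, 8))   # a different start finds the global peak, x = 10
```

A conventional exhaustive scan over the same range would always find
x = 10, but at the cost of evaluating every candidate.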
Goals
The general problem of simulating (or creating) intelligence has
been broken down into a number of specific sub-problems. These
consist of particular traits or capabilities that researchers would
like an intelligent system to display. The traits described below
have received the most attention.[6]

Deduction, reasoning, problem solving
Early AI researchers developed algorithms that imitated the
step-by-step reasoning that humans use when they solve puzzles or
make logical deductions.[40] By the late 1980s and 1990s, AI research
had also developed highly successful methods for dealing with
uncertain or incomplete information, employing concepts from
probability and economics.[41] For difficult problems, most of these
algorithms can require enormous computational resources; most
experience a "combinatorial explosion": the amount of memory or
computer time required becomes astronomical when the problem goes
beyond a certain size. The search for more efficient problem-solving
algorithms is a high priority for AI research.[42]

Human beings solve most of their problems using fast, intuitive
judgements rather than the conscious, step-by-step deduction that
early AI research was able to model.[43] AI has made some progress at
imitating this kind of "sub-symbolic" problem solving: embodied agent
approaches emphasize the importance of sensorimotor skills to higher
reasoning; neural net research attempts to simulate the structures
inside the brain that give rise to this skill; statistical approaches
to AI mimic the probabilistic nature of the human ability to guess.

Knowledge representation
An ontology represents knowledge as a set of concepts within a
domain and the relationships between those concepts. Main articles:
Knowledge representation and Commonsense knowledge

Knowledge representation[44] and knowledge engineering[45] are
central to AI research. Many of the problems machines are expected to
solve will require extensive knowledge about the world. Among the
things that AI needs to represent are: objects, properties,
categories and relations between objects;[46] situations, events,
states and time;[47] causes and effects;[48] knowledge about
knowledge (what we know about what other people know);[49] and many
other, less well researched domains. A representation of "what
exists" is an ontology: the set of objects, relations, concepts and
so on that the machine knows about. The most general are called upper
ontologies, which attempt to provide a foundation for all other
knowledge.[50]

Among the most difficult problems in knowledge representation are:

Default reasoning and the qualification problem
Many of the things people know take the form of "working
assumptions." For example, if a bird comes up in conversation, people
typically picture an animal that is fist sized, sings, and flies.
None of these things are true about all birds. John McCarthy
identified this problem in 1969[51] as the qualification problem: for
any commonsense rule that AI researchers care to represent, there
tend to be a huge number of exceptions. Almost nothing is simply true
or false in the way that abstract logic requires. AI research has
explored a number of solutions to this problem.[52]

The breadth of commonsense knowledge
The number of atomic facts that the average person knows is
astronomical. Research projects that attempt to build a complete
knowledge base of commonsense knowledge (e.g., Cyc) require enormous
amounts of laborious ontological engineering; they must be built, by
hand, one complicated concept at a time.[53] A major goal is to have
the computer understand enough concepts to be able to learn by
reading from sources like the internet, and thus be able to add to
its own ontology.[citation needed]

The subsymbolic form of some commonsense knowledge
Much of what people know is not represented as "facts" or
"statements" that they could express verbally. For example, a chess
master will avoid a particular chess position because it "feels too
exposed"[54] or an art critic can take one look at a statue and
instantly realize that it is a fake.[55] These are intuitions or
tendencies that are represented in the brain non-consciously and
sub-symbolically.[56] Knowledge like this informs, supports and
provides a context for symbolic, conscious knowledge. As with the
related problem of sub-symbolic reasoning, it is hoped that situated
AI, computational intelligence, or statistical AI will provide ways
to represent this kind of knowledge.[56]

Planning
A hierarchical control system is a form of control system in which a
set of devices and governing software is arranged in a hierarchy.
Main article: Automated planning and scheduling

Intelligent agents must be able to set goals and achieve them.[57]
They need a way to visualize the future (they must have a
representation of the state of the world and be able to make
predictions about how their actions will change it) and be able to
make choices that maximize the utility (or "value") of the available
choices.[58]

In classical planning problems, the agent can assume that it is the
only thing acting on the world and it can be certain what the
consequences of its actions may be.[59] However, if the agent is not
the only actor, it must periodically ascertain whether the world
matches its predictions and it must change its plan as this becomes
necessary, requiring the agent to reason under uncertainty.[60]
Multi-agent planning uses the cooperation and competition of many
agents to achieve a given goal. Emergent behavior such as this is
used by evolutionary algorithms and swarm intelligence.[61]

Learning
Main article: Machine learning

Machine learning is the study of computer algorithms that improve
automatically through experience[62][63] and has been central to AI
research since the field's inception.[64]

Unsupervised learning is the ability to find patterns in a stream of
input. Supervised learning includes both classification and numerical
regression. Classification is used to determine what category
something belongs in, after seeing a number of examples of things
from several categories. Regression is the attempt to produce a
function that describes the relationship between inputs and outputs
and predicts how the outputs should change as the inputs change. In
reinforcement learning[65] the agent is rewarded for good responses
and punished for bad ones. These can be analyzed in terms of decision
theory, using concepts like utility.
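A minimal sketch of the supervised classification just described: a
1-nearest-neighbour rule labels a new observation with the class of
the closest labelled example. The two-feature data set is invented
for illustration.

```python
def classify(observation, training_set):
    """1-nearest-neighbour: return the class label of the labelled
    example closest to the observation (squared Euclidean distance)."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    nearest = min(training_set, key=lambda ex: dist(ex[0], observation))
    return nearest[1]

# Made-up data set: (features, class label) observations.
data = [((1.0, 1.0), "cat"), ((1.2, 0.8), "cat"),
        ((4.0, 4.2), "dog"), ((4.5, 3.9), "dog")]

print(classify((1.1, 0.9), data))   # → cat
print(classify((4.2, 4.0), data))   # → dog
```

Each (features, label) pair is one observation with its class, and
the whole list is the data set in the sense used above; classifying a
new observation is "previous experience" distilled into a distance
comparison.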
The mathematical analysis of machine learning algorithms and their
performance is a branch of theoretical computer science known as
computational learning theory.[66] Within developmental robotics,
developmental learning approaches were elaborated for lifelong
cumulative acquisition of repertoires of novel skills by a robot,
through autonomous self-exploration and social interaction with human
teachers, and using guidance mechanisms such as active learning,
maturation, motor synergies, and imitation.[67][68][69][70]

Natural language processing
A parse tree represents the syntactic structure of a sentence
according to some formal grammar. Main article: Natural language
processing

Natural language processing[71] gives machines the ability to read
and understand the languages that humans speak. A sufficiently
powerful natural language processing system would enable natural
language user interfaces and the acquisition of knowledge directly
from human-written sources, such as Internet texts. Some
straightforward applications of natural language processing include
information retrieval (or text mining) and machine translation.[72]
A common method of processing and extracting meaning from natural
language is through semantic indexing. Increases in processing speeds
and the drop in the cost of data storage make indexing large volumes
of abstractions of the user's input much more efficient.

Motion and manipulation
Main article: Robotics

The field of robotics[73] is closely related to AI. Intelligence is
required for robots to be able to handle such tasks as object
manipulation[74] and navigation, with sub-problems of localization
(knowing where you are, or finding out where other things are),
mapping (learning what is around you, building a map of the
environment), and motion planning (figuring out how to get there) or
path planning (going from one point in space to another point, which
may involve compliant motion, where the robot moves while maintaining
physical contact with an object).[75][76]

Perception
Main articles: Machine perception, Computer vision, and Speech
recognition

Machine perception[77] is the ability to use input from sensors
(such as cameras, microphones, tactile sensors, sonar and others more
exotic) to deduce aspects of the world. Computer vision[78] is the
ability to analyze visual input. A few selected subproblems are
speech recognition,[79] facial recognition and object
recognition.[80]

Social intelligence
Main article: Affective computing
Kismet, a robot with rudimentary social skills[81]

Affective computing is the study and development of systems and
devices that can recognize, interpret, process, and simulate human
affects.[82][83] It is an interdisciplinary field spanning computer
science, psychology, and cognitive science.[84] While the origins of
the field may be traced as far back as early philosophical inquiries
into emotion,[85] the more modern branch of computer science
originated with Rosalind Picard's 1995 paper[86] on affective
computing.[87][88] A motivation for the research is the ability to
simulate empathy. The machine should interpret the emotional state of
humans and adapt its behaviour to them, giving an appropriate
response to those emotions.

Emotion and social skills[89] play two roles for an intelligent
agent. First, it must be able to predict the actions of others, by
understanding their motives and emotional states. (This involves
elements of game theory and decision theory, as well as the ability
to model human emotions and the perceptual skills to detect
emotions.) Also, in an effort to facilitate human-computer
interaction, an intelligent machine might want to be able to display
emotions, even if it does not actually experience them itself, in
order to appear sensitive to the emotional dynamics of human
interaction.

Creativity
Main article: Computational creativity

A sub-field of AI addresses creativity both theoretically (from a
philosophical and psychological perspective) and practically (via
specific implementations of systems that generate outputs that can be
considered creative, or systems that identify and assess creativity).
Related areas of computational research are artificial intuition and
artificial thinking.

General intelligence
Main articles: Artificial general intelligence and AI-complete

Many researchers think that their work will eventually be
incorporated into a machine with general intelligence (known as
strong AI), combining all the skills above and exceeding human
abilities at most or all of them.[7] A few believe that
anthropomorphic features like artificial consciousness or an
artificial brain may be required for such a project.[90][91]

Many of the problems above may require general intelligence to be
considered solved. For example, even a straightforward, specific task
like machine translation requires that the machine read and write in
both languages (NLP), follow the author's argument (reason), know
what is being talked about (knowledge), and faithfully reproduce the
author's intention (social intelligence). A problem like machine
translation is therefore considered "AI-complete": in order to solve
this particular problem, you must solve all the problems.[92]

Approaches
There
is no established unifying theory or paradigm that guides AI
research. Researchers disagree about many issues.[93] A few of the
most long-standing questions that have remained unanswered are these:
should artificial intelligence simulate natural intelligence by
studying psychology or neurology? Or is human biology as irrelevant
to AI research as bird biology is to aeronautical engineering?[94]
Can intelligent behavior be described using simple, elegant
principles (such as logic or optimization)? Or does it necessarily
require solving a large number of completely unrelated problems?[95]
Can intelligence be reproduced using high-level symbols, similar to
words and ideas? Or does it require "sub-symbolic" processing?[96]
John Haugeland, who coined the term GOFAI (Good Old-Fashioned
Artificial Intelligence), also proposed that AI should more properly
be referred to as synthetic intelligence,[97] a term which has since
been adopted by some non-GOFAI researchers.[98][99]

Cybernetics and brain simulation
Main articles: Cybernetics and Computational neuroscience

In the 1940s and 1950s, a number of researchers explored the
connection between neurology, information theory, and cybernetics.
Some of them built machines that used electronic networks to exhibit
rudimentary intelligence, such as W. Grey Walter's turtles and the
Johns Hopkins Beast. Many of these researchers gathered for meetings
of the Teleological Society at Princeton University and the Ratio
Club in England.[20] By 1960, this approach was largely abandoned,
although elements of it would be revived in the 1980s.

Symbolic
Main article: GOFAI

When access to
digital computers became possible in the middle 1950s, AI research
began to explore the possibility that human intelligence could be
reduced to symbol manipulation. The research was centered in three
institutions: Carnegie Mellon University, Stanford and MIT, and each
one developed its own style of research. John Haugeland named these
approaches to AI "good old-fashioned AI" or "GOFAI".[100] During the
1960s, symbolic approaches had achieved great success at simulating
high-level thinking in small demonstration programs. Approaches based
on cybernetics or neural networks were abandoned or pushed into the
background.[101] Researchers in the 1960s and the 1970s were
convinced that symbolic approaches would eventually succeed in
creating a machine with artificial general intelligence and
considered this the goal of their field.

Cognitive simulation
Economist Herbert Simon and Allen Newell studied human
problem-solving skills and attempted to formalize them, and their
work laid the foundations of the field of artificial intelligence, as
well as cognitive science, operations research and management
science. Their research team used the results of psychological
experiments to develop programs that simulated the techniques that
people used to solve problems. This tradition, centered at Carnegie
Mellon University, would eventually culminate in the development of
the Soar architecture in the middle 1980s.[102][103]

Logic-based
Unlike Newell and Simon, John McCarthy felt that machines did not
need to simulate human thought, but should instead try to find the
essence of abstract reasoning and problem solving, regardless of
whether people used the same algorithms.[94] His laboratory at
Stanford (SAIL) focused on using formal logic to solve a wide variety
of problems, including knowledge representation, planning and
learning.[104] Logic was also the focus of the work at the University
of Edinburgh and elsewhere in Europe, which led to the development of
the programming language Prolog and the science of logic
programming.[105]

"Anti-logic" or "scruffy"
Researchers at MIT (such as Marvin Minsky and Seymour Papert)[106]
found that solving difficult problems in vision and natural language
processing required ad-hoc solutions; they argued that there was no
simple and general principle (like logic) that would capture all the
aspects of intelligent behavior. Roger Schank described their
"anti-logic" approaches as "scruffy" (as opposed to the "neat"
paradigms at CMU and Stanford).[95] Commonsense knowledge bases (such
as Doug Lenat's Cyc) are an example of "scruffy" AI, since they must
be built by hand, one complicated concept at a time.[107]

Knowledge-based
When computers with large memories became available around 1970,
researchers from all three traditions began to build knowledge into
AI applications.[108] This "knowledge revolution" led to the
development and deployment of expert systems (introduced by Edward
Feigenbaum), the first truly successful form of AI software.[30] The
knowledge revolution was also driven by the realization that enormous
amounts of knowledge would be required by many simple AI
applications.

Sub-symbolic
Progress in symbolic AI seemed to stall and many believed that
symbolic systems would never be able to imitate all the processes of
human cognition, especially perception, robotics, learning and
pattern recognition. A number of researchers began to look into
"sub-symbolic" approaches to specific AI problems.[96]

Bottom-up, embodied, situated, behavior-based or nouvelle AI
Researchers from the related field of robotics, such as Rodney
Brooks, rejected symbolic AI and focused on the basic engineering
problems that would allow robots to move and survive.[109] Their work
revived the non-symbolic viewpoint of the early cybernetics
researchers of the 1950s and reintroduced the use of control theory
in AI. This coincided with the development of the embodied mind
thesis in the related field of cognitive science: the idea that
aspects of the body (such as movement, perception and visualization)
are required for higher intelligence.

Computational intelligence
Interest in neural networks and "connectionism" was revived by David
Rumelhart and others in the middle 1980s.[110] These and other
sub-symbolic approaches, such as fuzzy systems and evolutionary
computation, are now studied collectively by the emerging discipline
of computational intelligence.[111]

Statistical
In the 1990s, AI researchers developed sophisticated mathematical
tools to solve specific subproblems. These tools are truly
scientific, in the sense that their results are both measurable and
verifiable, and they have been responsible for many of AI's recent
successes. The shared mathematical language has also permitted a high
level of collaboration with more established fields (like
mathematics, economics or operations research). Stuart Russell and
Peter Norvig describe this movement as nothing less than a
"revolution" and "the victory of the neats."[33] Critics argue that
these techniques are too focused on particular problems and have
failed to address the long-term goal of general intelligence.[112]
There is an ongoing debate about the relevance and validity of
statistical approaches in AI, exemplified in part by exchanges
between Peter Norvig and Noam Chomsky.[113][114]
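Much of this statistical machinery rests on Bayes' theorem, which
updates a prior belief in a hypothesis given new evidence. A worked
example with assumed numbers (the spam rates below are invented
purely for illustration):

```python
def bayes(prior, likelihood, false_positive_rate):
    """Posterior P(H | E) via Bayes' theorem for a binary hypothesis:
    P(H|E) = P(E|H) P(H) / [P(E|H) P(H) + P(E|not H) P(not H)]."""
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# Assumed numbers: 1% of messages are spam; the word "offer" appears
# in 60% of spam and 5% of legitimate mail.
posterior = bayes(prior=0.01, likelihood=0.60, false_positive_rate=0.05)
print(round(posterior, 3))   # → 0.108
```

The result is measurable and verifiable in exactly the sense claimed
above: anyone can recompute the posterior from the same inputs.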
Integrating the approaches
Intelligent agent paradigm
An intelligent agent is a system that perceives its environment and
takes actions which maximize its chances of success. The simplest
intelligent agents are programs that solve specific problems. More
complicated agents include human beings and organizations of human
beings (such as firms). The paradigm gives researchers license to
study isolated problems and find solutions that are both verifiable
and useful, without agreeing on one single approach. An agent that
solves a specific problem can use any approach that works: some
agents are symbolic and logical, some are sub-symbolic neural
networks and others may use new approaches. The paradigm also gives
researchers a common language to communicate with other fields, such
as decision theory and economics, that also use concepts of abstract
agents. The intelligent agent paradigm became widely accepted during
the 1990s.[2]

Agent architectures and cognitive architectures
Researchers have designed systems to build intelligent systems out
of interacting intelligent agents in a multi-agent system.[115] A
system with both symbolic and sub-symbolic components is a hybrid
intelligent system, and the study of such systems is artificial
intelligence systems integration. A hierarchical control system
provides a bridge between sub-symbolic AI at its lowest, reactive
levels and traditional symbolic AI at its highest levels, where
relaxed time constraints permit planning and world modelling.[116]
Rodney Brooks' subsumption architecture was an early proposal for
such a hierarchical system.[117]

Tools
In the course of 50 years of research, AI
has developed a large number of tools to solve the most difficult
problems in computer science. A few of the most general of these
methods are discussed below.

Search and optimization
Main articles: Search algorithm, Mathematical optimization, and
Evolutionary computation

Many problems in AI can be solved in theory by intelligently
searching through many possible solutions.[118] Reasoning can be
reduced to performing a search. For example, logical proof can be
viewed as searching for a path that leads from premises to
conclusions, where each step is the application of an inference
rule.[119] Planning algorithms search through trees of goals and
subgoals, attempting to find a path to a target goal, a process
called means-ends analysis.[120] Robotics algorithms for moving limbs
and grasping objects use local searches in configuration space.[74]
Many learning algorithms use search algorithms based on optimization.

Simple exhaustive searches[121] are rarely sufficient for most
real-world problems: the search space (the number of places to
search) quickly grows to astronomical numbers. The result is a search
that is too slow or never completes. The solution, for many problems,
is to use "heuristics" or "rules of thumb" that eliminate choices
that are unlikely to lead to the goal (called "pruning the search
tree"). Heuristics supply the program with a "best guess" for the
path on which the solution lies.[122] Heuristics limit the search for
solutions to a smaller sample size.[75]

A very different kind of search came to prominence in the 1990s,
based on the mathematical theory of optimization. For many problems,
it is possible to begin the search with some form of a guess and then
refine the guess incrementally until no more refinements can be made.
These algorithms can be visualized as blind hill climbing: we begin
the search at a random point on the landscape, and then, by jumps or
steps, we keep moving our guess uphill, until we reach the top. Other
optimization algorithms are simulated annealing, beam search and
random optimization.[123]

Evolutionary computation uses a form of optimization search. For
example, it may begin with a population of organisms (the guesses)
and then allow them to mutate and recombine, selecting only the
fittest to survive each generation (refining the guesses). Forms of
evolutionary computation include swarm intelligence algorithms (such
as ant colony or particle swarm optimization)[124] and evolutionary
algorithms (such as genetic algorithms, gene expression programming,
and genetic programming).[125]

Logic
Main articles: Logic
programming and Automated reasoning

Logic[126] is used for knowledge representation and problem solving,
but it can be applied to other problems as well. For example, the
satplan algorithm uses logic for planning[127] and inductive logic
programming is a method for learning.[128]

Several different forms of logic are used in AI research.
Propositional or sentential logic[129] is the logic of statements
which can be true or false. First-order logic[130] also allows the
use of quantifiers and predicates, and can express facts about
objects, their properties, and their relations with each other. Fuzzy
logic[131] is a version of first-order logic which allows the truth
of a statement to be represented as a value between 0 and 1, rather
than simply True (1) or False (0). Fuzzy systems can be used for
uncertain reasoning and have been widely used in modern industrial
and consumer product control systems. Subjective logic[132] models
uncertainty in a different and more explicit manner than fuzzy logic:
a given binomial opinion satisfies belief + disbelief + uncertainty
= 1 within a Beta distribution. By this method, ignorance can be
distinguished from probabilistic statements that an agent makes with
high confidence. Default logics, non-monotonic logics and
circumscription[52] are forms of logic designed to help with default
reasoning and the qualification problem. Several extensions of logic
have been designed to handle specific domains of knowledge, such as:
description logics;[46] situation calculus, event calculus and fluent
calculus (for representing events and time);[47] causal calculus;[48]
belief calculus; and modal logics.[49]

Probabilistic methods for uncertain reasoning
Main
articles: Bayesian network, Hidden Markov model, Kalman filter,
Decision theory, and Utility theory

Many problems in AI (in reasoning, planning, learning, perception
and robotics) require the agent to operate with incomplete or
uncertain information. AI researchers have devised a number of
powerful tools to solve these problems using methods from probability
theory and economics.[133] Bayesian networks[134] are a very general
tool that can be used for a large number of problems: reasoning
(using the Bayesian inference algorithm),[135] learning (using the
expectation-maximization algorithm),[136] planning (using decision
networks)[137] and perception (using dynamic Bayesian
networks).[138] Probabilistic algorithms can also be used for
filtering, prediction, smoothing and finding explanations for streams
of data, helping perception systems to analyze processes that occur
over time (e.g., hidden Markov models or Kalman filters).[138]

A key concept from the science of economics is "utility": a measure
of how valuable something is to an intelligent agent. Precise
mathematical tools have been developed that analyze how an agent can
make choices and plan, using decision theory, decision analysis,[139]
and information value theory.[58] These tools include models such as
Markov decision processes,[140] dynamic decision networks,[138] game
theory and mechanism design.[141]

Classifiers and statistical learning methods
Main articles: Classifier (mathematics), Statistical classification,
and Machine learning

The simplest AI applications can be divided into two types:
classifiers ("if shiny then diamond") and controllers ("if shiny then
pick up"). Controllers do, however, also classify conditions before
inferring actions, and therefore classification forms a central part
of many AI systems. Classifiers are functions that use pattern
matching to determine a closest match. They can be tuned according to
examples, making them very attractive for use in AI. These examples
are known as observations or patterns. In supervised learning, each
pattern belongs to a certain predefined class. A class can be seen as
a decision that has to be made. All the observations combined with
their class labels are known as a data set. When a new observation is
received, that observation is classified based on previous
experience.[142]

A classifier can be trained in various ways; there are many
statistical and machine learning approaches. The most widely used
classifiers are the neural network,[143] kernel methods such as the
support vector machine,[144] k-nearest neighbor algorithm,[145]
Gaussian mixture model,[146] naive Bayes classifier,[147] and
decision tree.[148] The performance of these classifiers has been
compared over a wide range of tasks. Classifier performance depends
greatly on the characteristics of the data to be classified. There is
no single classifier that works best on all given problems; this is
also referred to as the "no free lunch" theorem. Determining a
suitable classifier for a given problem is still more an art than a
science.[149]

Neural networks
Main articles: Neural
network and Connectionism

A neural network is an interconnected group of nodes, akin to the
vast network of neurons in the human brain.

The study of artificial neural networks[143] began in the decade
before the field of AI research was founded, in the work of Walter
Pitts and Warren McCullough. Other important early researchers were
Frank Rosenblatt, who invented the perceptron, and Paul Werbos, who
developed the backpropagation algorithm.[150]

The main categories of networks are acyclic or feedforward neural
networks (where the signal passes in only one direction) and
recurrent neural networks (which allow feedback). Among the most
popular feedforward networks are perceptrons, multi-layer perceptrons
and radial basis networks.[151] Among recurrent networks, the most
famous is the Hopfield net, a form of attractor network, which was
first described by John Hopfield in 1982.[152] Neural networks can be
applied to the problem of intelligent control (for robotics) or
learning, using such techniques as Hebbian learning and competitive
learning.[153] Hierarchical temporal memory is an approach that
models some of the structural and algorithmic properties of the
neocortex.[154]

Control theory
Main article: Intelligent control

Control theory, the grandchild of cybernetics, has many important
applications, especially in robotics.[155]

Languages
Main article: List of programming languages for artificial
intelligence

AI researchers have developed several specialized languages for AI
research, including Lisp[156] and Prolog.[157]

Evaluating progress
Main article: Progress in artificial intelligence

In 1950, Alan Turing proposed a general procedure to test the
intelligence of an agent, now known as the Turing test. This
procedure allows almost all the major problems of artificial
intelligence to be tested. However, it is a very difficult challenge
and at present all agents fail.[158]

Artificial intelligence can also be evaluated on specific problems
such as small problems in chemistry, handwriting recognition and
game-playing. Such tests have been termed subject matter expert
Turing tests. Smaller problems provide more achievable goals and
there are an ever-increasing number of positive results.[159]

One classification for outcomes of an AI test is:[160]
Optimal: it is not possible to perform better.
Strong super-human: performs better than all humans.
Super-human: performs better than most humans.
Sub-human: performs worse than most humans.

For example, performance at draughts (i.e. checkers) is
optimal,[161] performance at chess is super-human and nearing strong
super-human (see computer chess: computers versus humans) and
performance at many everyday tasks (such as recognizing a face or
crossing a room without bumping into something) is sub-human.

A
quite different approach measures machine intelligence through
tests which are developed frommathematicaldefinitions of
intelligence. Examples of these kinds of tests start in the late
nineties devising intelligence tests using notions fromKolmogorov
complexityanddata compression.[162]Two major advantages of
mathematical definitions are their applicability to nonhuman
intelligences and their absence of a requirement for human
testers.An area that artificial intelligence had contributed
greatly to is Intrusion detection.[163]A derivative of the Turing
test is the Completely Automated Public Turing test to tell
Computers and Humans Apart (CAPTCHA). as the name implies, this
helps to determine that a user is an actual person and not a
computer posing as a human. In contrast to the standard Turing
test, CAPTCHA administered by a machine and targeted to a human as
opposed to being administered by a human and targeted to a machine.
A computer asks a user to complete a simple test then generates a
grade for that test. Computers are unable to solve the problem, so
correct solutions are deemed to be the result of a person taking
the test. A common type of CAPTCHA is the test that requires the
typing of distorted letters, numbers or symbols that appear in an
image undecipherable by a computer.[164]

Applications
An automated online assistant providing customer service on a web page, one of many very primitive applications of artificial intelligence.
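How primitive such an assistant can be is easy to show: many early ones were little more than keyword matching against a table of canned replies. A minimal sketch; the rule table and replies below are invented for illustration, not taken from any real product:

```python
# Hypothetical rule table for a very primitive customer-service assistant:
# each keyword maps to a canned reply.
RULES = {
    "refund": "To request a refund, please provide your order number.",
    "shipping": "Standard shipping takes 3-5 business days.",
    "hours": "Our support desk is open 9am-5pm, Monday to Friday.",
}

def reply(message):
    # Return the first canned answer whose keyword appears in the message,
    # falling back to a human handover when nothing matches.
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return "Sorry, I did not understand. A human agent will contact you."

print(reply("What are your opening hours?"))  # matches the "hours" rule
print(reply("Where is my parcel?"))           # no match: hands over to a human
```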
Main article: Applications of artificial intelligence

Artificial
intelligence techniques are pervasive and are too numerous to list.
Frequently, when a technique reaches mainstream use, it is no
longer considered artificial intelligence; this phenomenon is
described as the AI effect.[165]

Competitions and prizes

Main article: Competitions and prizes in artificial intelligence

There are
a number of competitions and prizes to promote research in
artificial intelligence. The main areas promoted are: general
machine intelligence, conversational behavior, data-mining, robotic cars, robot soccer and games.

Platforms

A platform (or "computing platform") is defined as "some sort of hardware architecture or
software framework (including application frameworks), that allows
software to run." As Rodney Brooks[166] pointed out many years ago,
it is not just the artificial intelligence software that defines
the AI features of the platform, but rather the actual platform
itself that affects the AI that results, i.e., there needs to be
work in AI problems on real-world platforms rather than in
isolation.

A wide variety of platforms has allowed different aspects of AI to develop, ranging from expert systems, albeit PC-based but still an entire real-world system, to various robot platforms such as the widely available Roomba with open interface.[167]

Philosophy

Main article: Philosophy of artificial intelligence

Artificial intelligence, by claiming to be
able to recreate the capabilities of the human mind, is both a
challenge and an inspiration for philosophy. Are there limits to
how intelligent machines can be? Is there an essential difference
between human intelligence and artificial intelligence? Can a
machine have a mind and consciousness? A few of the most influential answers to these questions are given below.[168]

Turing's "polite convention": We need not decide if a machine can "think"; we need only decide if a machine can act as intelligently as a human being. This approach to the philosophical problems associated with artificial intelligence forms the basis of the Turing test.[158]

The Dartmouth proposal: "Every aspect of learning or any
other feature of intelligence can be so precisely described that a
machine can be made to simulate it." This conjecture was printed in
the proposal for the Dartmouth Conference of 1956, and represents the position of most working AI researchers.[169]

Newell and Simon's physical symbol system hypothesis: "A physical symbol system has the
necessary and sufficient means of general intelligent action."
Newell and Simon argue that intelligences consist of formal
operations on symbols.[170] Hubert Dreyfus argued that, on the
contrary, human expertise depends on unconscious instinct rather
than conscious symbol manipulation and on having a "feel" for the
situation rather than explicit symbolic knowledge. (See Dreyfus' critique of AI.)[171][172]

Gödel's incompleteness theorem: A formal system (such as a computer program) cannot prove all true statements.[173] Roger Penrose is among those who claim that Gödel's theorem limits what machines can do. (See The Emperor's New Mind.)[174]

Searle's strong AI hypothesis: "The appropriately
programmed computer with the right inputs and outputs would thereby
have a mind in exactly the same sense human beings have
minds."[175] John Searle counters this assertion with his Chinese room argument, which asks us to look inside the computer and try to find where the "mind" might be.[176]

The artificial brain argument: The brain can be simulated. Hans Moravec, Ray Kurzweil and others have
argued that it is technologically feasible to copy the brain
directly into hardware and software, and that such a simulation
will be essentially identical to the original.[91]

Predictions and ethics

Main articles: Artificial intelligence in fiction, Ethics of artificial intelligence, Transhumanism, and Technological singularity

Artificial intelligence is a common topic in both
science fiction and projections about the future of technology and
society. The existence of an artificial intelligence that rivals
human intelligence raises difficult ethical issues, and the
potential power of the technology inspires both hopes and fears.

In fiction, artificial intelligence has appeared fulfilling many roles. These include:
a real-time battlefield analyst (Cortana in Halo: Combat Evolved, Halo 2, Halo 3, and Halo 4)
a servant (R2-D2 and C-3PO in Star Wars)
a law enforcer (K.I.T.T. in Knight Rider)
a comrade (Lt. Commander Data in Star Trek: The Next Generation)
a conqueror/overlord (The Matrix, Omnius)
a dictator (With Folded Hands, Colossus: The Forbin Project (1970 film))
a benevolent provider/de facto ruler (The Culture)
a supercomputer (the Red Queen in Resident Evil, "Gilium" in Outlaw Star, Golem XIV)
an assassin (Terminator)
a sentient race (Battlestar Galactica, Transformers, Mass Effect)
an extension to human abilities (Ghost in the Shell)
the savior of the human race (R. Daneel Olivaw in Isaac Asimov's Robot series)
the human race critic and philosopher (Golem XIV)

Mary Shelley's Frankenstein considers a key issue in the ethics of
artificial intelligence: if a machine can be created that has
intelligence, could it also feel? If it can feel, does it have the
same rights as a human? The idea also appears in modern science
fiction, including the films I, Robot, Blade Runner and A.I.: Artificial
Intelligence, in which humanoid machines have the ability to feel
human emotions. This issue, now known as "robot rights", is
currently being considered by, for example, California's Institute for the Future, although many critics believe that the discussion is premature.[177] The subject is profoundly discussed in the 2010 documentary film Plug & Pray.[178]

Martin Ford, author of The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future,[179] and others argue that specialized
artificial intelligence applications, robotics and other forms of
automation will ultimately result in significant unemployment as
machines begin to match and exceed the capability of workers to
perform most routine and repetitive jobs. Ford predicts that many
knowledge-based occupations, and in particular entry-level jobs, will
be increasingly susceptible to automation via expert systems,
machine learning[180] and other AI-enhanced applications. AI-based
applications may also be used to amplify the capabilities of
low-wage offshore workers, making it more feasible to outsource knowledge work.[181]

Joseph Weizenbaum wrote that AI applications cannot, by definition, successfully simulate genuine human empathy and that the use of AI technology in fields such as customer service or psychotherapy[182] was deeply misguided.
Weizenbaum was also bothered that AI researchers (and some
philosophers) were willing to view the human mind as nothing more
than a computer program (a position now known as computationalism).
To Weizenbaum these points suggest that AI research devalues human
life.[183]

Many futurists believe that artificial intelligence will ultimately transcend the limits of progress. Ray Kurzweil has used Moore's law (which describes the relentless exponential improvement in digital technology) to calculate that desktop computers will have the same processing power as human brains by the year 2029. He also predicts that by 2045 artificial intelligence will reach a point where it is able to improve itself at a rate that far exceeds anything conceivable in the past, a scenario that science fiction writer Vernor Vinge named the "singularity".[184] Robot
designer Hans Moravec, cyberneticist Kevin Warwick and inventor Ray Kurzweil have predicted that humans and machines will merge in the future into cyborgs that are more capable and powerful than either.[185] This idea, called transhumanism, which has roots in Aldous Huxley and Robert Ettinger, has been illustrated in fiction as well, for example in the manga Ghost in the Shell and the science-fiction series Dune.

In the 1980s, artist Hajime Sorayama's
Sexy Robots series was painted and published in Japan, depicting the actual organic human form with lifelike muscular metallic skins; the later book "The Gynoids" was used by or influenced movie makers including George Lucas and other creatives. Sorayama never considered these organic robots to be a real part of nature but always an unnatural product of the human mind, a fantasy existing in the mind even when realized in actual form. Almost 20 years later, the first AI robotic pet, AIBO, became available as a
companion to people. AIBO grew out of Sony's Computer Science
Laboratory (CSL). Famed engineer Toshitada Doi is credited as
AIBO's original progenitor: in 1994 he had started work on robots
with artificial intelligence expert Masahiro Fujita at CSL. Doi's friend, the artist Hajime Sorayama, was enlisted to create the
initial designs for the AIBO's body. Those designs are now part of
the permanent collections of the Museum of Modern Art and the
Smithsonian Institution, with later versions of AIBO being used in
studies at Carnegie Mellon University. In 2006, AIBO was added to Carnegie Mellon University's "Robot Hall of Fame".

Political scientist Charles T. Rubin believes that AI can be neither designed nor guaranteed to be friendly.[186] He argues that "any sufficiently
advanced benevolence may be indistinguishable from malevolence."
Humans should not assume machines or robots would treat us
favorably, because there is no a priori reason to believe that they
would be sympathetic to our system of morality, which has evolved
along with our particular biology (which AIs would not share).

Edward Fredkin argues that "artificial intelligence is the next stage in evolution", an idea first proposed by Samuel Butler's "Darwin among the Machines" (1863), and expanded upon by George Dyson in his book of the same name in 1998.[187]
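Kurzweil's Moore's-law extrapolation mentioned earlier in this section is, at heart, a doubling calculation. A back-of-envelope sketch, in which the baseline figure, the brain-equivalent figure and the doubling period are illustrative assumptions, not his published numbers:

```python
import math

# Illustrative assumptions: a 2010-era desktop at ~1e13 operations/second,
# a rough brain-equivalent estimate of ~1e16 operations/second,
# and capability doubling every two years (a Moore's-law pace).
baseline_ops = 1e13
brain_ops = 1e16
doubling_years = 2.0
start_year = 2010

# Number of doublings needed to close a 1000x gap: log2(1000) ~ 9.97.
doublings_needed = math.log2(brain_ops / baseline_ops)
years_needed = doublings_needed * doubling_years
print(f"Brain-equivalent desktops around {start_year + round(years_needed)}")
```

With these assumed inputs the estimate lands around 2030, in the same range as the 2029 prediction; the point is only that any such forecast is extremely sensitive to the chosen baseline and doubling period.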