Joe Hanson
Senior Project 2012
Dr. Call
Taking Man Out of the Loop: The Dangers of Exponential Reliance On Artificial Intelligence
A 2012 Time Magazine article dubbed “The Drone” the 2011 Weapon of the Year.1 With
over 7,000 drones in the air, military use of unmanned vehicles is exponentially rising. Why is
drone technology progressing at such a fast rate? Artificial Intelligence (AI) is at the forefront of
drone technology development. Exponential technological developments in the last century have
changed society in numerous ways. Mankind is beginning to rely increasingly on technology in
everyday life, with many of these technologies bringing beneficial progress to all aspects of
society. Exponential growth in computer, robotic, and electronic technology has led to the
integration of this technology into social, economic, and military systems.
Artificial intelligence is a branch of computer science concerned with the intelligence and
resulting action of a machine, in both hardware and software form. Using AI, a machine can act
autonomously, functioning in an environment by drawing on rapid data processing, pattern
recognition, and environmental perception sensors to make decisions and carry out goals and
tasks. AI seeks to emulate human intelligence, using these capabilities to understand problems
and to solve and adapt to them in real time.
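The loop described above, sensing an environment, deciding, and acting toward a goal, can be sketched in a few lines. Everything in this sketch (the obstacle sensor, the safe-distance threshold, the actions) is invented purely for illustration:

```python
# A minimal sense-decide-act loop, the control pattern an autonomous agent
# follows. All names and values here are illustrative, not from any real system.

def sense(reading):
    """Return the current (hypothetical) obstacle-distance sensor value."""
    return reading

def decide(distance, safe_distance=5.0):
    """Match the sensor reading against a goal: avoid collisions."""
    return "turn" if distance < safe_distance else "advance"

def act(action, position):
    """Carry out the chosen action, updating the agent's state."""
    return position if action == "turn" else position + 1

# Step the agent through a toy sequence of sensor readings.
position = 0
for raw in [10.0, 8.0, 3.0]:
    action = decide(sense(raw))
    position = act(action, position)

print(position)  # advanced twice, then turned: 2
```

The point of the sketch is only the control pattern: perception feeds a decision rule, and the decision drives an action, with no human in the loop.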
There is debate over whether AI is even plausible: whether it is possible to create a
machine that can emulate human thought. Both humans and computers are able to process
information, but humans have the additional ability to understand that information. Humans are
able to make sense of what they see and hear, which involves the use of intelligence.2 Some
1 Feifel Sun, TIME Magazine 178, no. 25 (2011): 26.
2 Henry Mishkoff, Understanding Artificial Intelligence (Texas: Texas Instruments, 1985), 5.
characteristics of intelligence include the ability to: “respond to situations flexibly, make sense
out of ambiguous or contradictory messages, recognize importance of different elements of a
situation, and draw distinctions.”3 When discussing the possibilities of AI and the creation of a
thinking machine, the main issue is whether or not a computer is able to possess intelligence.
Supporters of AI development argue that because of exponential progress in computer and
robotic technology, AI is developing further than just simple data processing, to the creation of
autonomous AI that can emulate and surpass the intelligence of a human. According to
University of Michigan Professor Paul Edwards, by simulating "some of
the functional aspects of biological neurons and their synaptic connections," neural networks
"could recognize patterns and solve certain kinds of problems without explicitly encoded
knowledge or procedures."4 In other words, AI research is beginning to draw on human biology
to make machines think. On the other side of the debate, AI skeptics and deniers argue that AI will never have
the ability to surpass human intelligence. They argue that the human brain is far too advanced,
that though a machine can calculate data faster, it will never match the complexity of a human
brain.
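Edwards's point about neural networks, recognizing patterns without explicitly encoded rules, can be illustrated with a single artificial neuron. The toy perceptron below is trained on examples of logical AND; the AND rule itself is never programmed in, only learned from the examples (all numbers here are illustrative choices):

```python
# A toy artificial neuron (perceptron) that learns the AND pattern from
# examples alone. Purely an illustrative sketch, not any historical system.

def step(x):
    """Threshold activation: fire (1) when the weighted sum is non-negative."""
    return 1 if x >= 0 else 0

# Training data: input pairs and the target pattern (logical AND).
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
rate = 0.1

# Perceptron learning rule: nudge weights toward the correct outputs.
for _ in range(20):
    for (x1, x2), target in samples:
        output = step(weights[0] * x1 + weights[1] * x2 + bias)
        error = target - output
        weights[0] += rate * error * x1
        weights[1] += rate * error * x2
        bias += rate * error

# The learned weights now reproduce AND on all four inputs.
print([step(weights[0] * a + weights[1] * b + bias) for (a, b), _ in samples])
# -> [0, 0, 0, 1]
```

No if-then rule for AND appears anywhere in the code; the behavior emerges from adjusting connection strengths, which is the sense in which such networks work "without explicitly encoded knowledge or procedures."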
In order to emulate human thought, computer systems rely on programmed "expert
systems," a kind of AI that "acts as an intelligent assistant" to the AI's human user.5 An expert
system is not just a computer program that can search and retrieve knowledge. Instead, an expert
system possesses expertise, pools information and creates its own conclusion, “emulating human
reason.”6 An expert system has three components that makes it more technologically advanced
3 Mishkoff, 5.
4 Paul Edwards, The Closed World (Cambridge: The MIT Press, 1997), 356.
5 Edwards, 356.
6 Mishkoff, 5.
than a simple information retrieval system. One of these components is the "knowledge base," a
collection of declarative knowledge (facts) and procedural knowledge (courses of action), acting
as the expert system's memory bank. An expert system can integrate the two types of knowledge
when making a conclusion.7 Another component is the "user interface," hardware through which
a human user can communicate with the system, forming a two-way communication channel.
The last component is the inference engine, which is the most advanced part of the expert
system. This program knows when and how to apply knowledge, and also directs the
implementation of that knowledge. These three components allow the expert system to exceed
the capabilities of a simple information retrieval system.
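The three components can be sketched in a toy form. The diagnostic domain, facts, and rules below are entirely invented for illustration; this is a minimal forward-chaining sketch, not any real expert system:

```python
# Minimal sketch of an expert system's three components: a knowledge base
# (facts plus if-then rules), an inference engine that decides when and how
# to apply the rules, and a simple textual user interface.

# Knowledge base: declarative knowledge (facts) and procedural knowledge (rules).
facts = {"engine_cranks", "no_fuel_flow"}
rules = [
    ({"engine_cranks", "no_fuel_flow"}, "fuel_system_fault"),
    ({"fuel_system_fault"}, "recommend_check_fuel_pump"),
]

def inference_engine(facts, rules):
    """Forward-chain: fire any rule whose conditions hold until nothing new fires."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# User interface: report the derived conclusions back to the human user.
conclusions = inference_engine(facts, rules) - facts
print(sorted(conclusions))  # ['fuel_system_fault', 'recommend_check_fuel_pump']
```

Unlike simple retrieval, the system here pools two facts into an intermediate conclusion and then chains that conclusion into a recommendation, which is the "emulating human reason" behavior the text describes.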
The capabilities of expert systems have opened up doors for military application. These
functions can be applied to a number of military situations, from battlefield management, to
surveillance, to data processing. Integrating expert systems into military AI technology gives
those systems the ability to interpret, monitor, plan, predict, and control system behavior.8 A
system is able to monitor its behavior, comparing and interpreting observations collected through
sensory data. The ability to monitor and interpret is important for AI specializing in surveillance
and image analysis, a vital capability for unmanned aerial vehicles. Expert systems also function
as battlefield aids, helping to plan by designing actions and to predict by inferring
consequences from large amounts of data.9 Military application of expert systems in AI
systems gives armed forces an advantage on and off the battlefield, aiding in decision making
and streamlining battlefield management and surveillance.
7 Mishkoff, 55.
8 Mishkoff, 59.
9 Mishkoff, 59.
AI benefits society in a number of ways, including socially, economically, and
technologically. AI's rapid data processing and accuracy can help in many different sectors of
society. Although these benefits are progressive and necessary in connection with other
emerging technologies, specifically computer technology, society must be wary of over-reliance
on AI technology and integration. Over-integration of AI into society has begun the trend of
taking the human out of the loop, relying more on AI to carry out tasks ranging from small to
large. As AI technology continues to develop, autonomous AI systems will be relied on further
to carry out tasks in all aspects of society, especially in military systems and weapons, and as
human control lessens, humans must be cautious of putting all their eggs in one basket. The
dangers of using AI in the military often outweigh the benefits; these dangers include
malfunction, unethical use, lack of testing, and the unpredictable nature and actions of AI systems.
The possibility of a loss of control over an AI system, of humans giving a thinking
machine too much responsibility, increases the chances of that reliance backfiring on its human
creators. The backfire isn't just inconvenient; it can also be dangerous, especially if the backfire
takes place in a military system. Missiles, unmanned drones, and other advanced forms of
weaponry are relying on AI to aid them in functioning, and as AI technology becomes faster and
smarter, humans are relying on the AI technology more and more. These systems have the
ability to cause catastrophic damage, and taking humans out of the loop is especially dangerous.
There has been extensive research and debate over AI in numerous regards. From the
birth of AI as a field at the 1956 Dartmouth Conference, there has been support and opposition,
optimists, skeptics, and deniers from all fields including physics, philosophy, computer science,
and engineering. I will recognize all these different viewpoints, but my argument is that of the
skeptics, recognizing the benefits and progress that AI can bring, but still being wary of over-
reliance on AI, specifically its integration into military systems. Putting technology
such as advanced weaponry and missiles under the responsibility of an AI system, whether
AI software or hardware, is especially dangerous. AI machines may lack the ability to think
morally or ethically, or to understand morality at all, so giving them the ability to kill while
relying on them heavily is a danger. Optimists such as AI founders Marvin Minsky and John McCarthy
fully support, embrace, and trust the integration of AI into society. On the other side of the
spectrum are the deniers, the most famous being Hubert Dreyfus, who believe that a machine
will never have the capabilities to emulate human intelligence, denying the existence of AI all
together. This section of my paper reviews the existing literature on AI and the diverse views of
its critics and supporters.
The supporters of AI come from diverse fields of study, but all embrace the technology
and share an optimism and trust in it. Alan Turing, an English computer scientist, was one of the
first scholars to write about AI, even before it was declared as a field. Turing's paper,
“Computing Machinery and Intelligence,” is mainly concerned with the question of “Can
Machines Think?”.10 Turing's work was some of the first looking into computer and AI theory.
Turing introduces the “Turing Test,” which tests a machine, both software and hardware, to see if
it can exhibit intelligent behavior. Turing doesn't just introduce the Turing Test, but also shows
his optimism for AI by refuting the "Nine Objections," nine possible objections to a
machine's ability to think. Some of these objections include a theological objection, the inability
for computers to think independently, mathematical limitations, and complete denial of the
10 Alan Turing, “Computing Machinery and Intelligence,” Mind 59 (1950): 433-460.
existence of thinking machines. Turing refutes these objections through both philosophical and
scientific arguments supporting the possibility of a thinking machine. Turing argues that a
reason that people deny the possibility of thinking machines is not because they think it is
impossible, but rather because they fear it and that, “we like to believe that Man is in some subtle
way superior to the rest of creation.”11 Turing argues that computers will have the ability to
think independently and have conscious experiences.
Another notable early AI developer was Norbert Wiener, an American mathematician,
who was the originator of cybernetics theory. In The Human Use of Human Beings: Cybernetics
and Society, Wiener argues that the automation of society is beneficial. Wiener shows that there
shouldn't be a fear of integrating technology into society, but instead people should embrace the
integration. Wiener says that cybernetics and the continuation of technological progress rely on
a human trust in autonomous machines. Though Wiener recognizes the benefits and progress
that automation brings, he still warns of relying too heavily on it.
After the establishment of AI as a field at the Dartmouth Conference, the organizer of the
conference, John McCarthy, wrote Defending AI Research. In this book, McCarthy collected
numerous essays that support the development of AI and its benefits to society. McCarthy
reviews the existing literature of notable early AI developers and either refutes or supports their
claims. In the book, McCarthy reviews the article “Artificial Intelligence: A General Survey.”12
The article was written by James Lighthill, a British mathematician. In the article, Lighthill is
critical of the existence of AI as a field. McCarthy refutes Lighthill's claims and defends AI
existence and development. McCarthy also defends AI research from those who claim “AI as an
11 Turing, 444.
12 John McCarthy, Defending AI Research (California: CSLI Publications, 1996), 27-34.
incoherent concept philosophically,” specifically refuting the arguments of Dreyfus. McCarthy
argues that philosophers often “say that no matter what it [AI] does, it wouldn't count as
intelligent.”13 Lastly, McCarthy refutes the arguments of those who claim that AI research is
immoral and antihuman, saying that these skeptics and opponents are against pure science and
research motivated solely by curiosity.14 McCarthy argues that research in computer science is
necessary for opening up options for mankind.15
Hubert Dreyfus has been a prominent denier of the existence of AI for decades. A
professor of philosophy at UC Berkeley, Dreyfus has written numerous books in opposition to
and critiquing the foundations of AI as a field. Dreyfus's main critique of AI is the idea that a
machine can never have the capability to fully emulate human intelligence. Dreyfus argues that
the power of a biological brain can not be matched, even if a machine has superior data
processing capabilities. A biological brain not only reacts to what it perceives in the
environment, but relies on background knowledge and experience to think.16 Humans also
incorporate ethics and morals into their decisions, and a machine can only use what it is
programmed to think. What Dreyfus is arguing is that the human brain is superior to AI, and that
a machine can't emulate human intelligence. His view is captured in the question: "scientists are
only beginning to understand the workings of the human brain, with its billions of
interconnected neurons working together to produce thought. How can a machine be built based
on something of which scientists have so little understanding?"17
When looking at the relationship between the military and computer technology and AI,
13 McCarthy, vii.
14 McCarthy, 2.
15 McCarthy, 20.
16 Hubert Dreyfus, Mind Over Machine, 31.
17 David Masci, "Artificial Intelligence," CQ Researcher (1997), 7.
there has been much debate over how much integration is safe. As the military integrates
autonomous systems into their communication, information, and weapon systems, the danger of
over-reliance rises. One of the first people to recognize this danger was the previously
mentioned Norbert Wiener. Even though Wiener was supportive of AI and its integration into
society, he had a very different viewpoint concerning its use in military and weapon technology.
Wiener wrote a letter in 1947 called "A Scientist Rebels," which argues against government
and military influence on AI and computer research. Wiener warns of the "gravest
consequences” of the government's influence on development of AI.18 Wiener looks at the
development of the atomic bomb as an example, and how the scientists' work falls into the hands
of “he is least inclined to trust,” in this case the government and military. The idea that civilian
scientific research can be integrated by the military and used in weaponry is a critique of the
military's influence on AI development. Scientific research may seem innocent, but as it is
manipulated through military influence, purely scientific research is integrated into war
technology.
Paul Edwards's The Closed World gives a history of the relationship between the military
and AI research and development, and the impact each had on the other. Edwards looks at why the
military put so much time and effort into computers. Edwards looks at the effects that computer
technology and the integration of AI data processing systems had on the history of the Cold War.
Edwards's broad historical look at computer and AI development gives insight into a military
connection to the progressing technology that still exists today. Computer development began in
the early 1940s, and from that time to the early 1960s, the U.S. military played an important role
18 Norbert Wiener, "From the Archives," 38.
in the progressing computer technologies. After WWII, the military's role in computer research
grew exponentially. The U.S. Army and Air Force began to fund research projects, contracting
large commercial technology corporations such as Northrop and Bell Laboratories.19 This
growth in military funding and purchases enabled American computer research to progress at an
extremely fast pace, however, due to secrecy, the military was able to keep control over the
spread of research.20 Because of this secrecy, military sponsored computer projects were tightly
controlled and censored. Through heavy investment, the military had a role in the
"nurturance" of AI via its relationship with the government-controlled Advanced Research
Projects Agency (ARPA). AI research received over 80% of its funding from ARPA, keeping the
military in tune with AI research and development.21 The idea that "the computerization of
society has essentially been a side effect of the computerization of war,” sums up the effect of the
military on computer and AI development.
Paul Lehner's Artificial Intelligence and National Defense looks at how AI can benefit the
military, specifically through software applications. Written in 1989, Lehner's view represents
that of the later years of the Cold War, when the technology had not fully developed but
was exponentially progressing. Lehner discusses the integration of "expert systems,"
software that can be used to aid and replace human decision makers. Lehner recognizes AI's data
processing speed and accuracy and the benefits that the “expert system” could bring when
applied to the military. Armin Krishnan's Killer Robots looks at the other way that AI is being
integrated into the military, through hardware and weapons, also evaluating the moral and ethical
19 Edwards, 60.
20 Edwards, 63.
21 Edwards, 64.
issues surrounding the use of AI weaponry. Krishnan's book was written in 2009 and looks at AI
in the military currently, specifically looking at the ethical and legal problems associated with
drone warfare and other robotic soldier systems. Some of the ethical concerns Krishnan brings
up are: diffusion of responsibility for mistakes or civilian deaths, moral disengagement of
soldiers, unnecessary war, and automated killing.
Recently there has been much debate over the legal concerns regarding the use of AI in
military systems and weaponry. One of the leading experts on the legality of AI integration is
Peter W. Singer, the Director of 21st Century Defense Initiative at Brookings. In his article
"Robots At War: The New Battlefield" (2009), Singer raises numerous legal concerns. The
laws of war were outlined in the Geneva Convention laws in the middle of the 20th century.
However, due to the progressing and changing war technologies, these 20th century laws of war
are having trouble keeping up with 21st century war technology. Singer argues that the laws of
war need to be updated to cover new AI systems and their integration. Due to high numbers of
civilian deaths from AI systems, specifically drones, Singer also argues that these strikes can be
seen as war crimes. Lastly, Singer raises the question of who is lawfully responsible for an
autonomous machine: the commander, the programmer, the designer, the pilot, or the drone
itself? Singer has also presented his analysis of the legal concerns over changing war technology
in U.S. Congressional hearings on unmanned military systems.
Many scholars have also looked at what the future holds for AI. In 1993, Vernor Vinge
coined the term “singularity” to describe the idea that one day, AI technology will surpass human
intelligence. This is when computers will become more advanced than human intelligence,
moving humankind into a post-human state. This is the point where AI "wakes up," gaining the
ability to think for itself. This idea of “singularity” is expanded on in Katherine Hayles's How
We Became Posthuman. Hayles looks at this as a time period in the near future where
information is separated from the body, where information becomes materialized and can be
moved through different bodies. Hayles's view shows that AI isn't just mechanically advancing,
but also mentally and psychologically advancing. In the singularity view, humans are heading
in a direction where computers and humans will have to integrate with each other. As
technology continues to progress and AI systems become more advanced, it is important to
recognize that the future will likely be deeply integrated with AI technology.
I. The History of AI: The Early 1900s to 1956
Beginning in the early 1900s, computer scientists, mathematicians, and engineers began
to experiment with creating a thinking machine. During World War II, the military began using
computers to break codes, ushering in the development of calculating computers. ENIAC, the
Electronic Numerical Integrator And Computer, was among the first electronic computers to
function successfully.22 Early on, the majority of computer and AI projects were military funded, giving the
military major influence over allocation and integration of the technology. As computer
technology began to progress, so did AI as a branch of computer science.
The first person to consider the possibilities of creating AI in the form of a thinking
machine was Alan Turing. In his article “Computing Machinery and Intelligence,” Turing
recognized the possibility that a machine could plausibly emulate human thought. Turing's
paper was very important to the development of AI as a field, being the first to argue for the
plausibility of AI's existence while also establishing a base for the field. Turing's refutation of the
22 Arthur Burks, “The ENIAC,” Annals of the History of Computing 3, no. 4 (1981): 389.
nine objections goes against the views of the skeptics and deniers, recognizing a diverse variety
of arguments against AI.
Another major figure in the development of computers and artificial intelligence was
Hungarian mathematician John von Neumann. Von Neumann made many important
contributions in a variety of fields, but had a very large impact on computer science. Today's
computers are based on "von Neumann architecture," building a computer to "use a sequential
'program' held in the machine's 'memory' to dictate the nature and the order of the basic
computational steps carried out by the machine's central processor."23 He also compared this
architecture to a human brain, arguing that their functions are very similar. Von
Neumann's The Computer and the Brain, published in 1958, was an important work concerning
artificial intelligence, strengthening Turing's claim that computers could emulate human thought.24 In his
book, von Neumann compares the human brain to a computer, pointing out similarities in
their architecture and function. In some cases, the brain acts digitally, because its neurons
themselves operate digitally. Similar to a computer, the neurons fire depending on an order to
activate them.25 The result of von Neumann's work strengthened the plausibility of creating a
thinking machine.
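The stored-program design described above, a sequential program held in memory and stepped through by a central processor, can be illustrated with a toy fetch-decode-execute loop. The three-instruction machine below is invented purely for illustration:

```python
# A toy fetch-decode-execute loop in the spirit of the von Neumann
# architecture: a sequential program held in memory dictates the order of
# basic steps carried out by the central processor.

memory = [
    ("LOAD", 7),     # put 7 in the accumulator
    ("ADD", 5),      # add 5 to it
    ("HALT", None),  # stop
]

accumulator = 0
program_counter = 0

while True:
    opcode, operand = memory[program_counter]  # fetch the next instruction
    program_counter += 1
    if opcode == "LOAD":                       # decode and execute
        accumulator = operand
    elif opcode == "ADD":
        accumulator += operand
    elif opcode == "HALT":
        break

print(accumulator)  # 12
```

The loose analogy von Neumann drew is visible even at this scale: discrete instructions fire one after another, much as he argued neurons fire in an ordered, digital fashion.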
Ultimately, the work of Turing, Wiener, and von Neumann shows the optimism that the
early computer developers had. All three shared a faith in computer science and AI and
supported its progress. Turing finished his paper with, “We can only see a short distance ahead,
but we can see plenty there that needs to be done.”26 Even though these early computer
23 von Neumann, The Computer and the Brain, xii.
24 von Neumann.
25 von Neumann, 29.
26 Turing, 460.
developers shared this optimism, they were also wary of the dangers of the progressing computer
technology. Specifically Wiener, who had earlier written his letter “A Scientist Rebels,” had a
skeptical view of the future of computer technology. In Cybernetics, Wiener states,
What many of us fail to realize is that the last four hundred years are a highly
special period in the history of the world. The pace at which changes during
these years have taken place is unexampled in earlier history, as is the very
nature of these changes. This is partly the results of increased communication,
but also of an increased mastery over nature, which on a limited planet like
the earth, may prove in the long run to be an increased slavery to nature. For
the more we get out of the world the less we leave, and in the long run we
shall have to pay our debts at a time that may be very inconvenient for our
own survival.27
This quote reflects Wiener's skepticism. He understood the benefits that AI
and computer science could bring to society, but was wary of over-reliance on the technology.
Wiener’s quote is a warning of how fragile the world is, and that we need to be careful of the
rapid development of AI technology. As humans “master nature” through technology, they
become more and more vulnerable to their own creations.
II. The History of AI: 1956, The Cold War, and an Optimistic Outlook
Following the work of Turing, Von Neumann, and Wiener, computer scientists John
McCarthy and Marvin Minsky organized the Dartmouth conference in the summer of 1956. This
conference would lead to the birth of AI as a field, a branch of computer science. The
27 Wiener, 46.
conference was based on the idea that, “machines use language, form abstractions and concepts,
solve kinds of problems now reserved for humans, and improve themselves.”28 Using this idea,
the goal of the conference was to establish AI as a field and show that it was plausible. As a
result, AI began to gain momentum as a field.
The military had a major influence over the research and development of AI and
computer science beginning in the 1940s. Shortly after World War II, as the Cold War era began,
AI research and development began to grow exponentially. Military agencies had the financial
backing to provide the majority of the funding, as the U.S. Army, Navy, and Air Force began to fund
research projects and contract civilian science and research labs for computer science
development. Between 1951 and 1961, military funding for research and development rose from
$2 billion to over $8 billion. By 1961, research and development companies Raytheon and
Sperry Rand were receiving over 90% of their funding from military sources. The large budget
for research and development enabled AI research to take off, with the field receiving 80% of its
funding from ARPA.29 Because of the massive amount of funding from military
sources, American computer research was able to surpass the competition and progress at an
exponential rate. The U.S. military was able to beat out Britain, its only plausible rival,
making the U.S. the leader in computer technology.
There were numerous consequences of the military having its hand in the
research and development of computer science early in the Cold War. As a result of its
overwhelming funding, the military was able to keep tight control over the research and
28 John McCarthy and Marvin Minsky, "A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence" (proposal, Dartmouth College, August 31, 1955).
29 Edwards, 64.
development, directing it as it desired. This direction was primarily concerned
with developing technology that could benefit the military itself, whether for
communication, weaponry, or national defense. Wanting to keep its influence as strong as
possible, the military kept tight control through secrecy of the research.30 The military wanted to
make sure that researchers they had on contract were always aware of the interests of national
security, censoring the communication between researchers and scientists in different
organizations. A problem that arose from this censorship was that researchers could no longer
openly share ideas, impeding and slowing down development. This showed that the military was
willing to wait longer to ensure that national security measures were followed.
As a result of the heavy funding from the military, AI turned from being just theory to
having commercial interests. Parallel to the rapidly progressing computer technology, military
research agencies began to also progress in AI development, studying cognitive processes and
computer simulation.31 The main military research agency to look into AI was the Advanced
Research Projects Agency (ARPA, renamed DARPA in 1972). Joseph Licklider, head of ARPA's
Information Processing Techniques Office, was a crucial figure in increasing development of AI
technology, establishing his office as the primary supporter of “the closed world military goals of
decision support and computerized command and control,” which found “a unique relationship
to the cyborg discourses of cognitive psychology and AI."32 This unique relationship is the basis
of AI, mastering cognitive psychology and then integrating and emulating that psychology into a
machine. This branch of ARPA not only shows the military's interest and impact on research and
30 Edwards, 62.
31 Edwards, 259.
32 Edwards, 260.
development of AI, but also the optimism that the military had for its development. ARPA was
able to mix basic computer research with military ventures, specifically for national defense,
allowing the military to control the research and development of AI technology.
The military influence over DARPA continued into the 1970s, as DARPA became the
most important research agency for military projects. The military began to rely on AI for
military use at an exponential rate. DARPA began to integrate AI technology into a number of
military systems, including soldier aids for both pilots and ground soldiers, and battlefield
management systems that relied on expert systems.33
All these aspects of AI's integration into warfare are known as the “robotic battlefield” or
the “electronic battlefield.” AI research opened the doors for this new warfare technology,
integrating AI and computer technology to create electronic, robotic warfare and automated
command and sensor networks for battlefield management. During the Vietnam War, military
leaders shared an optimism for new AI technology. General William Westmoreland, head of
military operations for the U.S. in Vietnam from 1964 to 1968 predicted that, “on the battlefield
of the future, enemy forces will be located, tracked, and targeted almost instantaneously through
the use of data-links, computer assisted intelligence evaluation and automated fire control.”34
Westmoreland also believed that as the military came to rely increasingly on AI technology, the need
for human soldiers would decrease. Westmoreland’s prediction not only shows the optimism that
military leaders had in AI technology, but also the over-reliance that the military would have on
those weapons.
From the 1950s to the 1980s, DARPA continued to be the military’s main research and
33 Edwards, 297.
34 Armin Krishnan, Killer Robots: Legality and Ethicality of Autonomous Weapons, 19.
development agency. DARPA received heavy funding from the federal government, as military
leaders continued to support the need for the integration of new AI technology. Military
leaders’ optimism in AI technology is reflected in the ambitious goals that DARPA set. In
1981, DARPA aimed to create a “fifth generation system,” one that would “have knowledge
information processing systems of a very high level. In these systems, intelligence will be
greatly improved to approach that of a human being.”35 Three years later in 1984, DARPA’s
“Strategic Computing” stressed the need for the new technology stating, “Using this new
technology [of artificial intelligence], machines will perform complex tasks with little human
intervention, or even with complete autonomy.”36 It was in 1984 that the U.S. military began not
just researching and developing AI, but actually integrating it into military applications for use
on the battlefield. DARPA announced the creation of three projects: an all-purpose
autonomous land vehicle, a “pilot’s associate” to assist pilots during missions, and a battlefield
management system for aircraft carriers. The military was beginning to rely on this AI
technology, using it to assist human military leaders and soldiers. Fearing it would lose
ground to Britain, China, and Japan, DARPA spent over $1 billion to maintain
its lead.37
President Ronald Reagan continued the trend of the federal government using DARPA for
advanced weapon development and showed the military’s commitment to developing AI military
weapons and systems. Reagan’s Strategic Defense Initiative (SDI), later nicknamed “Star Wars,”
was a proposed network of hundreds of orbiting satellites with advanced weaponry and battle
35 Paul Lehner, Artificial Intelligence and National Defense: Opportunity and Challenge, 164.
36 David Bellin, Computers in Battle: Will They Work?, 171.
37 Lehner, 166.
management capabilities. These satellites would be equipped with layers of computers, “where
each layer of defense handles its own battle management and weapon allocation decisions.”38
Reagan’s SDI is a perfect example of the government and military’s overly ambitious integration
of AI technology. Reagan was willing to put both highly advanced and nuclear weapons in the
partial control of AI technology. Overall, Reagan’s SDI was a reckless proposition by the
military, taking man out of the loop while putting weapons of mass destruction under the control
of computer systems.
As a result of the military’s commitment to the research and development of AI, AI
technology has developed rapidly and has been integrated into both society and military applications.
Before looking at the future of AI, it is important to first look at the different levels of autonomy,
and where the technology currently stands. In a nutshell, autonomy is the ability of a
machine to function on its own with little to no human control or supervision. There are three
types of machine autonomy: pre-programmed autonomy, limited autonomy, and complete
autonomy. Pre-programmed autonomy is when a machine follows instructions and has no
capacity to think for itself.39 An example of pre-programmed autonomy is a factory
machine programmed for one job, such as welding or painting. Limited autonomy is the
technology level that exists today, one where the machine is capable of carrying out most
functions on its own, but still relies on a human operator for more complex behaviors and
decisions. Current U.S. UAVs possess limited autonomy, using sensors and data processing to
come up with solutions, but still relying on human decision making. Complete autonomy is the
38 Lehner, 159.
39 Krishnan, 44.
most advanced level, in which a machine operates itself with no human input or control.40 Although complete
autonomy is still being developed, AI technology continues to progress at a rapid pace, opening
the doors for complete autonomy, with DARPA estimating that complete autonomy will be
achieved before 2030.41
In a 2007 interview, Tony Tether, the Director of DARPA, showed his
agency’s optimism about and commitment to the development of future AI technology. Tether refers
to DARPA’s cognitive program, the program focusing on research and development of thinking
machines, as “game changing,” where the computer is able to “learn” its user.42 DARPA is
confident that it will be able to create fully cognitive machines, making AI smarter and more
closely emulating human intelligence. Tether discusses the Command Post of the Future
(CPOF), a distributed, computer-run command and control system that functions 24/7, taking
human operators out of the loop. The CPOF, though beneficial for its accurate and rapid data
processing, is a dangerous example of over-reliance on AI. Tether says, “those people who are
now doing that 24-by-7 won’t be needed,” but it is important, not just for safety but to retain full
control, to keep a human operator over military weapons and systems.43 This again shows the
military’s influence over the research and development, directing DARPA’s research towards an
over-reliance on AI machines.
But what happens when humans rely on AI so much that there is no turning back?
Vinge’s Singularity Theory is the theory that AI will one day surpass human intelligence, and
humans will eventually integrate with AI technology. Vinge’s Singularity points out the ultimate
quality.49 AI robotics gives an AI system the ability to perform manual tasks, making it
useful for integration into industrial and manufacturing sectors of society, such as automobile
and computer chip factories. In medicine, surgeons and doctors are now integrating AI
technology to assist in challenging surgical operations and to identify and treat diseases.50 AI has
even found its way into everyday life, assisting the elderly in senior facilities, assisting pilots on
commercial airlines, and being integrated into homes, creating “smart houses.”51
I recognize that this integration of AI is beneficial rather than dangerous. AI is helping to advance
health, economic, and industrial technology, making it safer, more advanced, and more efficient.
Although there are numerous benefits, it is also important to understand both the limitations and
dangers of AI technology, specifically with its integration into military systems.
Hubert Dreyfus leads the charge against the integration of AI, arguing both the limitations
47 Mishkoff, 108.
48 Mishkoff, 108.
49 Mishkoff, 120.
50 Von Drehle, “Meet Dr. Robot,” 44; Velichenko, “Using Artificial Intelligence and Computer Technologies for Developing Treatment Programs for Complex Immune Diseases,” 635.
51 Anderson, “Robot Be Good,” 72.
and danger of AI machines. Dreyfus claims in What Computers Can’t Do that early AI
developers were, “blinded by their early success and hypnotized by the assumption that thinking
is a continuum,” meaning that Dreyfus believes this progress cannot continue.52 Dreyfus is
specifically wary of the integration of AI into systems that have not been tested. The
over-optimism and over-reliance of AI supporters give the AI machine the ability to function
autonomously when it has not been fully tested. In Mind Over Machine, Dreyfus expands his
skepticism, warning of the dangers of AI decision making because, to him, decisions must be
pre-programmed into a computer, which causes the AI’s “ability to use intuition [to be] forfeited
and replaced by merely competent decision making. In a crisis competence is not good
enough.”53 Dreyfus takes a skeptical approach by recognizing the benefits of AI on society,
specifically information processing, but strongly opposes the forcing of undeveloped AI on
society. He says that, “AI workers feel that some concrete results are better than none,” that AI
developers continue to integrate untested AI into systems without working out all the
consequences of doing so.54 Dreyfus is correct in saying that humans must not integrate
untested, underdeveloped AI into society, but rather must always be cautious. This skeptical approach
is important for the safe integration of AI, specifically when removing a human operator and
replacing him with an autonomous machine.
Since the 1940s, there has been skepticism of AI in military applications from a diverse
group of opponents. The military’s commitment to and reliance on autonomous
machines for military functions comes with many dangers, removing human operators and
52 Hubert Dreyfus, What Computers Can’t Do, 302.
53 Hubert Dreyfus, Mind Over Machine, 31.
54 Hubert Dreyfus, What Computers Can’t Do, 304.
putting more decisions into the hands of the AI machine. Dreyfus argues there is danger in
implementing “questionable A.I.-based technologies” that have not been tested. To Dreyfus,
allowing these automated defense systems to be implemented, “without the widespread and
informed involvement of the people to be affected” is not only dangerous, but also
inappropriate.55 It is inappropriate to integrate untested AI into daily life, where that AI may
malfunction or make a mistake that could negatively impact human life. Dreyfus is wary of the
military decision-makers being tempted to “install questionable AI-based technologies in a
variety of critical contexts,” especially those applications that involve weapons and human life.56
Whether it is to justify the billions of dollars spent on research and development or the temptation
of the advanced capabilities of the AI machines, military leaders must be cautious of over
reliance on AI technology for military applications.
Dreyfus was not the first skeptic of technology and its integration into military
applications. Wiener’s letter “A Scientist Rebels” showed both early scientists’ resistance and
skepticism of research and development’s relationship with the military. The point that Wiener
wants to make is that even if scientific information seems innocent, it can still have catastrophic
consequences. Wiener’s letter was written shortly after the bombings of Hiroshima and
Nagasaki, where the atomic bomb developers’ work fell into the hands of the military. To
Wiener, it was even worse that the bomb was used “to kill foreign civilians indiscriminately.”57
The broad message of Wiener’s letter is that scientists should be skeptical of the military
application of their research. Though their work may seem innocent and purely empirical, it can
55 Hubert Dreyfus, Mind Over Machine, 12.
56 Hubert Dreyfus, Mind Over Machine, 12.
57 Wiener, “From the Archives,” 37.
still have grave consequences by falling into the hands of the military. Though Wiener is not
explicitly talking about AI research, his skepticism is important. Wiener emphasizes the need for
researchers and developers to be wary of their work, and warns them of the dangers of
cooperating with the military.
Wiener’s criticism of the military’s relationship with research and development has not
changed that relationship, and the military continues to develop and use more AI technology in
its weapons and systems. The military application of AI brings a number of dangers to
friendlies, enemies, and civilians. Though AI has many benefits in the military, the dangers
outweigh those benefits. The idea of taking a human out of the loop is dangerous in itself, and
when human life is on the line, can a thinking machine be trusted to function like a human?
Functioning completely autonomously, how do we know that that machine will emulate the
thought, decision making, and ethics of a human? The following are some of the dangers of
integrating AI technology into military applications.
As Wiener warned, government misuse of AI in the military could be a
dangerous outcome of AI’s integration. Governments like the United States have massive
defense budgets, giving them the resources to build large armies of thinking machines. This
increases the chances of unethical use of AI by countries, specifically the U.S., giving these
countries the opportunity not just to use AI technology for traditional warfare, but to expand its
use for any sort of security. The use of AI opens the doors for unethical infringement upon civil
liberties and privacy within the country.58
Another major danger of the use of AI in the military is the possibility of malfunctioning
58 Krishnan, 147-148.
weapons and networks, when the weapon or system acts in an unanticipated way. As previously
stated, computer programming is built on a cycle of writing code, finding errors through
malfunction, and fixing those errors. However, when using AI technology that might not be
perfected, the risk of malfunction is greater. Software errors and unpredictable failures leading
to malfunction are both liabilities to the AI military system. These chances of malfunction make
AI military systems untrustworthy, a huge danger when heavily relying on AI software integrated
into military networks.59 It is very challenging to test for errors in military software.
Software can often pass practical tests; however, there are so many situations and scenarios that
perfecting the software is nearly impossible.60 The larger the networks, the greater the dangers
of malfunction. Thus, when AI conventional weapons are networked and integrated into larger
AI defense networks, “an error in one network component could ‘infect’ many other
components.”61 The malfunction of an AI weapon is not only dangerous to those who are
physically affected, but also opens up ethical and legal concerns. The malfunction of an AI
system could be catastrophic, especially if that system is in control of WMDs. AI controlled
military systems increase the chances of accidental war considerably.
However, the danger of malfunction is not just theory; July 1988 provided an example of an
AI system malfunction. The U.S.S. Vincennes, a U.S. Navy cruiser nicknamed “Robo-cruiser”
because of its Aegis system, an automated radar and battle management system, was
patrolling the Persian Gulf. An Iranian civilian airliner carrying 290 people registered on the
system as an F-14 Iranian fighter, and the computer system considered it an enemy. The system
59 Bellin, 209.
60 Bellin, 209.
61 Krishnan, 152.
fired and took down the plane, killing all 290 people. This event showed that humans are always
needed in the loop, especially with machine autonomy growing. Giving a machine full control
over weapon systems is reckless and dangerous, and if the military continues to phase out human
operators, these AI systems will become increasingly greater liabilities.62
The weakness in the software and functioning capabilities of AI military systems also
make them vulnerable to probing and hacking, exposing flaws or losing control of the unmanned
system.63 In 2011, Iran was able to capture a U.S. drone by hacking its GPS system and
making it land in Iran while it believed it was landing in Afghanistan. The Iranian engineer who
worked on the team to hijack the drone said that they “electronically ambushed” the drone, "By
putting noise [jamming] on the communications, you force the bird into autopilot. This is where
the bird loses its brain." The Iranian’s successful hijacking of the drone shows the vulnerabilities
of software on even advanced AI systems integrated into drones.64
War is generally unpredictable, yet AI machines run on programs written for
what is predictable. This is a major flaw in AI military technology, as the programs that make AI
function consist of rules and code. These rules are precise, making it nearly
impossible for AI technology to adapt to a situation and change its functions. Because war is
unpredictable, computerized battle management technology lacks both experience and morality,
qualities needed to make informed and moral decisions on the battlefield. The ability to adapt is
necessary for battlefield management, and in some cases, computer programming limits the
technology from making those decisions.65
62 Peter Singer, “Robots At War: The New Battlefield,” 40.
63 Alan Brown, “The Drone Warriors,” 24.
64 Scott Peterson, “Iran Hijacked US Drone, says Iranian Engineer.”
65 Bellin, 233.
The last danger, the “Terminator Scenario,” is more of a stretch, but is still a possibility.
In the “Terminator Scenario,” machines become self-aware, see humans as their enemy, and
take over the world, destroying humanity. As AI machines become increasingly intelligent,
their ability to become self-aware and intellectually evolve will also develop. The idea of AI
machines beginning to “learn” their human operators and environments is the start of creating
machines that will become fully self-aware. If these self-aware machines have enough power, for
example through their integration into military systems, they have the power to dispose of humanity.66
Though the full destruction of humanity is a stretch, the danger of AI turning on its human
creators is still a possibility and should be recognized as a potential consequence of integrating
AI into military systems.
IV. A Continuing Trend: The Military’s Exponential Use of Autonomous AI
Though these dangers are apparent, and in some cases have led to loss of human life, the
U.S. military continues to rely exponentially on AI technology, integrating it
into both its weapons and its battle networks. The military is using AI technology,
such as autonomous drones, AI battlefield management systems, and AI communication and
decision making networks for national security and on the battlefield, ushering in a new era of
war technology. The idea of taking man out of the loop on the battlefield is dangerous and
reckless. Removing human operators is not only a threat to human life, but also opens the debate
over ethical, legal, and moral problems regarding the use of AI technology in battle.
AI has progressively been integrated into military applications, the most common being
weapons (guided missiles and drones) and expert systems for national defense and battlefield
66 Krishnan, 154.
management. This increased integration has led to both over-reliance on and over-optimism about
the technology. The rise of drone warfare through the use of UAVs (Unmanned Aerial Vehicles)
and UCAVs (Unmanned Combat Aerial Vehicles), has brought numerous benefits to military
combat, but also many concerns. As UCAVs have become exponentially more autonomous, their
responsibilities have grown, utilizing new technology and advanced capabilities to replace
human operators and take humans out of the loop.67
The U.S. military’s current level of autonomy on UCAVs is supervised autonomy, where
a machine can carry out most functions without having to use pre-programmed behaviors. With
supervised autonomy, an AI machine can make many decisions on its own, requiring little human
supervision. In this case, the machine still relies on a human operator for final complex
decisions such as weapon release and targeting, but is able to function mostly on its own.68
Supervised autonomy is where the military should stop its exponential integration. It keeps
complex legal and ethical decisions in the hands of a human operator, while still exploiting the
benefits that AI offers. When the final decision involves human life or destruction, it is important
to have a human operator making that decision, rather than allowing the computer to decide.
Supervised autonomy still allows a human operator to monitor the functions of the UCAV, while
keeping it ethically and legally under control. It is especially dangerous that the U.S. military is
working towards the creation of completely autonomous machines, ones that can operate on their
own with no human supervision or control. Complete autonomy gives the machine the ability to
learn and think and adjust behavior in specific situations.69 Giving these completely autonomous
67 Hugh McDaid, Robot Warriors: The Top Secret History of the Pilotless Plane, 162.
68 Krishnan, 44.
69 Krishnan, 44.
machines the ability to make their own decisions is dangerous, as their decisions would be
unpredictable and uncontrollable. The U.S. military’s path to creating and utilizing completely
autonomous machines is reckless; supervised autonomy is the farthest the military should go
with AI technology and warfare.
In the last decade, the use of military robotics has grown for a number of reasons,
including the numerous benefits that AI robotics brings to the battlefield. Originally used
purely for reconnaissance, UAVs are now being utilized by the military as weapons. The use of UAVs and
other AI weapons is heavily supported by low-ranking military personnel, those who are
directly interacting with the drones. Higher ranking military officials and political leaders are
split, with some fully supporting use while others recognize the dangers and concerns of their
use. For now, the benefits that UAVs possess continue to drive their integration into the U.S.
military.
One of the benefits of AI weaponry is that it reduces manpower requirements. In first-world
countries, especially the U.S., the pool of prospective soldiers is shrinking. Both physical
requirements and the declining attractiveness of military service are keeping Americans from enlisting
in the military. As the military budget decreases, UCAVs are able to replace human soldiers,
cutting personnel costs.70 Another benefit of replacing human soldiers with
AI robotics is that it takes humans out of the line of fire, while also eliminating human fallibility.
The reduction in casualties of war is very appealing not only to the fighting soldiers, but also
their family, friends, and fellow citizens. Being able to take soldiers out of the line of fire and
replace them with robotics saves soldiers lives. These robotics are also able to reduce mistakes
70 Krishnan, 35.
and increase performance as compared to their human counterparts. The amplified capabilities
of the machines give them the ability to outperform human soldiers.71 The ability to function
24/7, low response time, advanced communication networks, rapid data and information
processing, and targeting speed and accuracy are some of the many benefits of AI robotics on the
battlefield.
The benefits of AI military robotics are very important to lower-ranking military
personnel. These soldiers interact with the robotics on the battlefield, recognizing the benefits they
bring to them personally, while failing to recognize the ethical and legal concerns that also come
along with the drones. The following are quotes from enlisted, low-ranking U.S. soldiers:72
• “It's surveillance, target acquisition, and route reconnaissance all in one. We saved countless lives, caught hundreds of bad guys and disabled tons of IEDs in our support of troops on the ground.” -Spc. Eric Myles, UAV Operator
• “We call the Raven and Wasp our Airborne Flying Binoculars and Guardian Angels.” -GySgt. Butler
• “The simple fact is this technology saves lives.” -Sgt. David Norsworthy
It is understandable why low-ranking soldiers embrace the technology and support its use.
UCAVs have proven to be highly effective on the battlefield, saving the lives of U.S. soldiers and
effectively combatting enemies, utilizing their advanced AI functions. Though UCAVs are
effective on the battlefield and especially benefit the soldiers on the front line, the ethical and
legal concerns are very important consequences of the overall use of AI technology.
However, higher-ranking military and political leaders are split in their support.
Some of these leaders fully support the technology, while others are skeptical of too much
automation and the dangers of over-reliance. German Army General Wolfgang Schneiderhan,
71 Krishnan, 40.
72 U.S. House of Representatives, Subcommittee on National Security and Foreign Affairs, Rise of the Drones: Unmanned Systems and the Future of War Hearing, Fagan, 63.
who also served as Chief of Staff of the German Army from 2002 to 2009, shows this skepticism
in his article, “UV’s: An Indispensable Asset in Operations.” Schneiderhan not only looks at the
dangers of taking a human out of the loop, but also the importance of humanitarian law,
specifically involving human life. Schneiderhan explicitly warns that, “unmanned vehicles must
retain a ‘man in the loop’ function in more complex scenarios or weapon employment,”
especially wary of “cognitive computer failure combined with a fully automated and potentially
deadly response.”73 Schneiderhan’s skepticism both recognizes the main dangers of over-
reliance of AI for military use, while also stressing the importance of keeping a human operator
involved in decision making. Schneiderhan argues that a machine should not be making
decisions regarding human life, but rather decisions should be made by a conscious human who
has both experience and situational awareness, while also understanding humanitarian law.74
Schneiderhan’s skepticism contrasts with the over-optimism that many U.S. military leaders
share about the use of AI in weaponry.
Navy Vice-Admiral Arthur Cebrowski, chief of the DoD’s Office for Force
Transformation, stressed the importance of AI technology for “the military transformation,” using
the advanced capabilities and benefits to develop war technology. Cebrowski argues that AI
technology is “necessary” to move money and manpower to support new technologies, including
AI research and development, instead of focusing on improving old technologies.75 Navy Rear
Admiral Barton Strong, DoD Head of Joint Projects, argues that AI technology and drones will
“revolutionize warfare.” Strong says that because “they are relatively inexpensive and can
73 Schneiderhan, “UV's, An Indispensable Asset in Operations,” 91.
74 Schneiderhan, 91.
75 U.S. Senate, Foreign Affairs, Defense, and Trade Division, Military Transformation: Intelligence, Surveillance and Reconnaissance, 7.
effectively accomplish missions without risking human life,” drones are necessary for
transforming armies.76 General James Mattis, head of U.S. Joint Forces Command and NATO
Transformation argues that AI robots will continue to play a larger role in future military
operations. Mattis fully supports the use of AI weapons, and since commanding forces in Iraq,
the UAV force has increased to over 5,300 drones. Mattis even understands the
relationship that can form between a soldier and a machine. Mattis embraces the reduction of
risk to soldiers, the efficient gathering of intelligence, and their ability to strike stealthily.
Mattis’s high ranking and support of UAVs will lead to even more use of UAVs.77 From a
soldier’s point of view, the benefits that drones bring far exceed the legal and ethical concerns
that those soldiers are not responsible for. Drones are proving effective on the battlefield,
leading to support from the low and high ranking military leaders. However, civilian researchers
and scientists continue to be skeptical of the use of AI in the military, especially when involving
human life.
Looking more closely at the benefits of UCAVs, it is clear why both low-ranking soldiers and
military leaders are optimistic about and supportive of their use. The clearest reason is the
reduction of friendly military casualties, taking U.S. human soldiers out of the line of fire.78
Since soldier casualties play a large part in public perception of war, reducing loss of human
life makes war less devastating on the home front. The advanced capabilities of AI integrated
into military robots and systems are another appealing benefit of AI. Rapid information
processing, accurate decision making and calculations, 24/7 functionality, and battlefield
76 McDaid, 6.
77 Brown, 23.
78 John Keller, “Air Force to Use Artificial Intelligence and Other Advanced Data Processing to Hit the Enemy Where It Hurts,” 6.
assessment amplify the capabilities of a human soldier, making UCAVs extremely efficient and
dangerous. By processing large amounts of data at a rapid speed, UCAVs can, “hit the enemy
where it hurts” and take advantage of calculated vulnerabilities before the enemy can prepare a
defense.79 In a chaotic battle situation, where a soldier has to process numerous different
environmental, physical, and mental factors, speed and accuracy of decision making is essential
to a soldier. AI has the ability to cope with the chaos of a battlefield, processing hundreds of
variables and making decisions faster and more efficiently than human soldiers.80 While soldiers
are hindered by fear and pain, AI machines lack these emotions and can function
single-mindedly on the battlefield. The advanced capabilities of UCAVs have proven to be
extremely effective on the battlefield. Though UCAVs are efficient and deadly soldiers, they
also open the doors for numerous ethical, legal, and moral concerns.
V. Ethical Concerns
Military ethics is a very broad concept, so in order to understand the ethical concerns
caused by the use of AI in the military, I will first discuss what military ethics is. In a broad
sense, ethics look at what is right and wrong. Military ethics is often a confusing and
contradictory concept because war involves violence against and the killing of others, acts often considered
to be immoral in general. Though some argue that military ethics cannot exist because of the
killing of others, I will adopt a definition under which killing can be ethical. In this definition of
military ethics, war is ethical if it counters hostile aggression and is conducted lawfully.81 For
example, the U.S.’s planned raid on Osama Bin Laden’s compound leading to his killing could
79 Keller, 10.
80 The Economist, “No Command, and Control,” 89.
81 Krishnan, 117.
be viewed as ethical. Bin Laden was operating an international terrorist organization that had
killed thousands of civilians through its attacks. However, the use of WMDs, for
example, the U.S.’s bombing of Hiroshima and Nagasaki is often viewed as unethical. In the
case of those bombings, thousands of civilians were killed, and it can be debated that the use of
WMDs is not lawful due to their catastrophic damage to a civilian population. The bombings of
Hiroshima and Nagasaki can be viewed as war crimes against a civilian population, breaking
numerous laws of war established in the Rules of Aerial Warfare (the Hague, 1923), including
Article XXII that states: “Aerial bombardment for the purpose of terrorizing the civilian
population, of destroying or damaging private property not of military character, or of injuring
non-combatants is prohibited.” 82
As shown in these examples, civilian casualties are one of the greatest ethical concerns
of war in general. As previously stated, the tragedy in the Persian Gulf in 1988 showed the
consequences of an AI system’s mistake on a large group of civilians. As the military continues
to progressively utilize UCAVs for combat, civilian deaths from UCAVs have also risen. The
U.S. military has relied on UCAVs heavily for counter terrorism operations in Pakistan. Because
of the effectiveness of the strikes, the U.S. continues to utilize drones for airstrikes on terrorist
leaders and terrorist training camps. However, with increasing drone strikes, the death toll of
civilians and non-militants has increased exponentially, and has even outnumbered the death toll
of targeted militants.83 This is where the unethical nature of UCAV airstrikes is beginning to
unfold. The effectiveness of the airstrikes is appealing to the military, which continues to utilize
them while ignoring the thousands of civilians who are also killed. Marge Van Cleef, Co-Chair of
82 The Hague, Draft Rules of Aerial Warfare (Netherlands: The Hague, 1923).
83 Leila Hudson, “Drone Warfare: Blowback From The New American Way of War,” 122.
the Women’s International League for Peace and Freedom, takes the ethical argument a step
further, claiming that drone warfare is terrorism itself. Van Cleef says that “families in the
targeted regions have been wiped out simply because a suspected individual happened to be near
them or in their home. No proof is needed.”84 The use of UCAVs has proven to be unethical for
this reason: civilians are continuously killed in drone strikes. Whether through
malfunction, lack of information, or another mistake, UCAVs have shown that they are not able
to avoid killing civilians. However, civilians are not the only victims of UCAV use.
Moral disengagement, the changing of the psychological impact of killing, is another major
ethical concern of UCAV use. When a soldier is put in charge of a UCAV and gives that UCAV
the order to kill, having a machine as a barrier neutralizes the soldier’s inhibition to kill. Because
of this barrier, soldiers can kill the enemy from a great distance, disengaging the soldier from the
actual feeling of taking a human life. Using UCAVs separates a soldier from the emotional and
moral consequences of killing.85 An example of this moral disengagement is a UCAV
operator in Las Vegas spending his day operating a UCAV, carrying out airstrikes and other
missions thousands of miles away, then joining his family for dinner that night. Moving between these
two situations daily not only leads to emotional detachment from killing, but also hides the
horrors of war. Often on the virtual battlefield, “soldiers are less situationally aware and also less
restrained because of emotional detachment.”86 Because of this emotional detachment from killing,
UCAVs are unethical in that they make the psychological impact of killing non-existent.
One of the main deterrents of war is the loss of human life. But when humans are taken
84 Marge Van Cleef, “Drone Warfare=Terrorism,” 20.
85 Krishnan, 128.
86 U.S. House of Representatives, Subcommittee on National Security and Foreign Affairs, Rise of the Drones: Unmanned Systems and the Future of War Hearing, Barrett, 13.
out of the line of fire and human casualties shrink as AI weapons increase, is it easier to go to
war? An unethical result of the rising use of robotic soldiers is the possibility of unnecessary war,
when the perception of war is changed by the lack of military casualties.87 Unmanned
systems in war “further disconnect the military from society. People are more likely to support
the use of force as long as they view it as costless.”88 When the people at home see only the lack
of human casualties, the horrors of war are hidden, and they may think that the impact of going
to war is less than it really is. This false impression that “war can be waged with fewer costs
and risks” creates an illusion that war is easy and cheap.89 This can lead nations into wars
that might not be necessary, giving them the perception, “gee, warfare is easy.”90
These three ethical concerns all fall under the idea of automated killing, which is an
ethical concern in itself. Giving a machine full control over the decision to end a life is unethical
for a number of reasons: machines lack empathy and morals, have no concept of the finality of life,
and lack human life experience. AI machines are programmed far differently from how humans think, so
the decision to end a human life should never be left up to a machine. When looking at a
machine’s morals, it may still have the ability to comprehend environments and situations, but
it will not have the ability to feel remorse or fear punishment.91 In the event that an AI machine
wrongly kills a human, will it feel remorse for that killing? It is unethical and dangerous to use
AI weaponry because humans have the ability to think morally, while a machine may just
“blindly pull the trigger because some algorithm says so.”92 AI machines also lack empathy, the
87 Singer, 44.
88 Singer, 44.
89 Cortright, “The Prospect of Global Drone Warfare.”
90 Singer, 44.
91 Krishnan, 132.
92 Krishnan, 132.
ability to empathize with human beings. If an AI machine cannot understand human suffering and
has never experienced it itself, it will continue to carry out unethical acts without being
emotionally affected. Along with empathy and morals, AI machines lack the concept of the
finality of life and the idea of being mortal. Neither knowing nor having experienced death and
the end of life, an AI machine is unable to take the finality of life into consideration
when making an ethical decision. With no sense of its own mortality, an AI machine lacks
empathy for death, allowing it to sidestep moral decisions.93 Automated killing opens the
door to all of these ethical concerns.
VI. Legal Concerns
However, ethical concerns are not the only problem with the use of AI machines in the
military. There are also a number of legal concerns regarding the use of AI weaponry,
specifically with the rise of drones. Today, modern warfare is still governed by the laws of the
Geneva Convention, a series of treaties establishing the laws of war, armed conflict, and
humanitarian treatment. However, the Geneva Convention was drafted during the 1940s, a time
when warfare was radically different. This means that the laws of war are outdated: 20th
century military laws are not able to keep up with 21st century war technology.94 The laws of
armed conflict need to be updated, before the use of UCAVs continues to expand, to establish the
legality of using them in the first place. For example, an article of the Geneva Convention’s protocol states:
“effective advance warning shall be given of attacks which may affect the civilian population,
unless circumstances do not permit.”95 However, the killing of civilians by UCAVs without prior
93 Krishnan, 133.
94 U.S. House of Representatives, Subcommittee on National Security and Foreign Affairs, Rise of the Drones: Unmanned Systems and the Future of War Hearing, Singer, 7.
95 Michael Newton, “Flying Into the Future: Drone Warfare and the Changing Face of Humanitarian Law.”
warning violates the humanitarian protections established by the Geneva Convention, illegally
carrying out attacks that result in civilian deaths. Only combatants can be lawfully targeted in
armed conflict, and any killing of non-combatants violates the law of armed conflict.96 Armed conflict
is changing at such a fast pace that it is hard to establish humanitarian laws of war that can adapt to
changing technologies.
As of now, the actions of UCAVs could be deemed war crimes, violations of the laws of armed
conflict. One legal concern with the use of UCAVs is the debate over whether or not they
constitute “state sanctioned lethal force.” If they are state sanctioned, like a soldier in
the U.S. Army, they are legal and must follow the laws of armed conflict. However, numerous
drones are operated by the CIA, meaning they are not state sanctioned. Because these drones are
not state sanctioned, they violate international armed conflict law, as being state sanctioned is what gives
the U.S. military the right to use lethal force. The killing of civilians in general, but specifically
by non-state sanctioned weapons, can be seen as a war crime.97
Another legal problem of drone warfare concerns liability for the weapon: who is to blame
for an AI malfunction or mistake? So many people are involved in the development,
building, and operation of a drone that it is hard to decide who is responsible for an error. Is it
the computer scientist who programmed the drone, the engineer who built the drone, the operator
of the drone, or the military leader who authorized the attack? It can even be argued that the
drone is solely responsible for its own actions, and should be tried and punished as though it
were a human soldier. Article 1 of the Hague Convention requires combatants to be “commanded by a
96 Ryan Vogel, “Drone Warfare and the Law of Armed Conflict,” 105.
97 Van Cleef, 20.
person responsible for his subordinates.”98 This makes sense for human soldiers, but it is
very hard to legally control an autonomous machine, one that cannot take responsibility for its
own actions when acting autonomously. Because UCAV use is rising, legal accountability laws
need to be established for the event that a robotic malfunction or mistake leads to
human or environmental damage.99
The field of AI continues to develop at an extremely rapid pace, opening the door for
increased optimism about, and reliance on, the new technologies. However, this exponential growth
comes with numerous ethical, legal, and moral concerns, especially in regard to its relationship
with the military. The military has influenced the research and development of AI since the field was
established in the 1950s, and continues to have a hand in AI’s growth through heavy funding and
involvement. Though AI brings great benefits to society politically, socially, economically, and
technologically, we should be wary of over-reliance on the technology. It is important to
always keep a human in the loop, whether for civilian or military purposes. AI technology
has the power to shape the society we live in today, but each increase in autonomy should be
taken with a grain of salt.
98 Krishnan, 103.
99 Krishnan, 103.
Bibliography
Adler, Paul S. and Terry Winograd. Usability: Turning Technologies Into Tools. New York:
Oxford University Press, 1992.
Anderson, Alan Ross. Minds and Machines. New Jersey: Prentice-Hall Inc., 1964.
Anderson, Michael, and Susan Leigh Anderson. “Robot Be Good.” Scientific American 303, no.
4 (2010): 72-77.
Bellin, David, and Gary Chapman. Computers in Battle: Will They Work? New York: Harcourt
Brace Jovanovich Publishers, 1987.
Brown, Alan S. “The Drone Warriors.” Mechanical Engineering 132, no. 1 (January 2010):
22-27.
Burks, Arthur W. “The ENIAC: The First General-Purpose Electronic Computer,” Annals of the
History of Computing 3, no. 4 (1981): 310–389.
Cortright, David. “The Prospect of Global Drone Warfare.” CNN Wire (Oct 19, 2011).
Dhume, Sadanand. “The Morality of Drone Warfare: The Reports About Civilian Casualties are
Unreliable.” Wall Street Journal Online, (Aug 17, 2011).
Dreyfus, Hubert L. Mind Over Machine. New York: The Free Press, 1986.
Dreyfus, Hubert L. What Computers Can't Do: The Limits of Artificial Intelligence. New York:
Harper Colophon Books, 1979.
Edwards, Paul N. The Closed World: Computers and the Politics of Discourse in Cold War
America. Massachusetts: MIT Press, 1996.
Ford, Nigel. How Machines Think. Chichester, England: John Wiley and Sons, 1987.
Hayles, Katherine. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and
Informatics. Chicago: The University of Chicago Press, 1999.
Heims, Steve J. John Von Neumann and Norbert Wiener: From Mathematics to the Technologies
of Life and Death. Massachusetts: MIT Press, 1980.
Hogan, James P. Mind Matters. New York: Ballantine Publishing Group, 1997.
Hudson, Leila, Colin Owens, and Matt Flannes. “Drone Warfare: Blowback From The New
American Way of War.” Middle East Policy 18, no. 3 (Fall 2011): 122-132.
Keller, John. “Air Force to Use Artificial Intelligence and Other Advanced Data Processing to
Hit the Enemy Where It Hurts,” Military & Aerospace Electronics 21, no. 3 (2010): 6-10.
Krishnan, Armin. Killer Robots: Legality and Ethicality of Autonomous Weapons. Vermont:
Ashgate, 2009.
Lehner, Paul. Artificial Intelligence and National Defense: Opportunity and Challenge.
Pennsylvania: Tab Books Inc., 1989.
Le Page, Michael. “What Happens When We Become Obsolete?” New Scientist 211, no. 2822