© 2015 BY THE AMERICAN PHILOSOPHICAL ASSOCIATION ISSN 2155-9708

    FROM THE EDITOR Peter Boltuc

    FROM THE CHAIR Thomas M. Powers

    CALL FOR PAPERS

    FEATURED ARTICLE Troy D. Kelley and Vladislav D. Veksler

    Sleep, Boredom, and Distraction—What Are the Computational Benefits for Cognition?

    PAPERS ON SEARLE, SYNTAX, AND SEMANTICS

    Selmer Bringsjord

    A Refutation of Searle on Bostrom (re: Malicious Machines) and Floridi (re: Information)

    Marcin J. Schroeder

    Towards Autonomous Computation: Geometric Methods of Computing

    Philosophy and Computers

    NEWSLETTER | The American Philosophical Association

    VOLUME 15 | NUMBER 1 | FALL 2015

    Ricardo R. Gudwin

    Computational Semiotics: The Background Infrastructure to New Kinds of Intelligent Systems

    USING THE TECHNOLOGY FOR PHILOSOPHY

    Shai Ophir

    Trend Analysis of Philosophy Revolutions Using Google Books Archive

    Christopher Menzel

    The Logic Daemon: Colin Allen’s Computer-Based Contributions to Logic Pedagogy

    BOOK HEADS UP

    Robert Arp, Barry Smith, and Andrew Spear Book Heads Up: Building Ontologies with Basic Formal Ontology

    LAST-MINUTE NEWS The 2015 Barwise Prize Winner Is William Rapaport

  APA NEWSLETTER ON

    Philosophy and Computers

    PETER BOLTUC, EDITOR | VOLUME 15 | NUMBER 1 | FALL 2015

    FROM THE EDITOR Peter Boltuc UNIVERSITY OF ILLINOIS, SPRINGFIELD

    Human cognitive architecture used to be viewed as inferior to artificial intelligence (AI). Some authors thought it could easily be reprogrammed using standard AI so as to be more efficient; as Aaron Sloman once put it, our brains are a strange mixture of amphibian and early mammalian remnants. In this issue, we feature an article that seems to show otherwise.1 Research by Troy Kelley and Vlad Veksler demonstrates that “many of the seemingly suboptimal aspects of human cognitive processes are actually beneficial and finely tuned to both the regularities and uncertainties of the physical world,” and even conducive to optimal information processing. Sleep, distraction, even boredom turn out to be optimal cognitive solutions; in earlier work, Kelley and Veksler showed that learning details in early childhood, followed by a much more cursory acquaintance with new situations and objects, is also an optimal learning strategy.2 For instance, sleep allows for “offline memory processing,” which produces “an order of magnitude performance advantage over other competing storage/retrieval strategies.” Boredom is also “an essential part of a self-sustaining cognitive system.” This is because “our higher level novelty/boredom algorithm and the lower level habituation algorithm” turn out “to be a useful and constructive response to a variety of situations.” In particular, the “boredom/novelty algorithm can be used for . . . landmark identification in navigation,” while “the habituation algorithm” allows for much-needed shifts of attention. Even distraction is beneficial since “An inability to get distracted by external cues can be disastrous for an agent residing in an unpredictable environment, and an inability to get distracted by tangential thoughts would limit one’s potential for new and creative solutions.” Hence, Kelley and Veksler show how sleep, boredom, and distraction are important components of a robot’s behavior.

    John Searle’s old argument that computers are syntactic engines unable to do semantics is the background theme of the following three papers. We begin with Selmer Bringsjord’s discussion piece. First, Bringsjord reacts to Searle’s critique of N. Bostrom’s argument about potentially malicious robots. According to Searle, “computing machines merely manipulate symbols” and so cannot be conscious, and to be malicious one would have to be conscious. Bringsjord questions Searle’s assumption that maliciousness presumes consciousness and gives what seems like a good case against it. In the second part, the author questions Searle’s objection, raised against Floridi, that information is necessarily observer relative. Bringsjord points out that the main problem visible in Searle’s paper is his “failure to understand how logic and mathematics, as distinguished from informal analytic philosophy, work.”

    While Bringsjord accepts Searle’s well-known point that computers and robots function just at the syntactic level, Marcin Schroeder argues that this point is contingent on Turing’s architecture. He points out that “in the description of Turing machines there is nothing that could serve as interpreter of the global configuration,” so that “this interpretation is always made by a human mind.” Yet, Schroeder argues, “we can consider a machine built based on the design of the Turing machine, but with an additional component, which assumes the role currently given to a human agency.” Schroeder’s paper is an attempt to sketch out the conditions of such a semantic machine.

    Schroeder argues that “integration of information is . . . the most fundamental characteristic of consciousness.” According to the author, in order to lay the “foundations not only for the syntactics of information, i.e., its structural characteristics, but also for its semantics (. . .) we can employ the mathematical theory of functions preserving information structures—homomorphisms of closure spaces.” This is an attempt to “cross the border between two very different realms, that of language, i.e., symbols, and that of entities in the physical world.” Historically, “since symbols seemed to require an involvement of the conscious subject associating each symbol with its denotation, the border was identified with the one between mind and body.” This is the classical Brentano approach, developed by Searle in his early work in semantics: “Intention of a symbol (. . .) directs the mind to the denotation.” In response to this conception, Schroeder argues that “in reality when we associate a symbol with its denotation, we do not make an association with the physical object itself, but with the information integrated into what is considered to be an object.” Hence, “the association between a symbol and its denotation is a relationship between two informational entities consisting of integrated information.” However, it is integrated in two different information systems. The author argues that “the mental aspect of symbolic representation is not in its intention, or in the act of directing towards denotation, but in the integration of information into objects.” Symbolic information is, in fact, intentional; it is “about,” but this aboutness takes place through correspondence between information systems. Such “aboutness” does not require any correspondence between entities of different ontological status, which was necessary in all approaches to intentionality from the Scholastics to Franz Brentano and beyond. Later in the article, Schroeder discusses mechanical manifestations of information, so as to focus on a relatively formal presentation of what he calls geometric methods of computing. This is important in the controversy with Searle, since “geometric computation of higher level can serve as a process of meaning generation for the lower level.”

    Ricardo Gudwin also focuses on the problem highlighted by Searle: “How to attribute meaning to symbols?”—the issue lies at the intersection of computer science, philosophy, and semiotics. Gudwin presents what he calls “computational semiotics,” viewed as an attempt to find an “alternative approach for addressing the problem of synthesizing artificial minds.” He argues that Peirce’s theory provides a better model for computer engineering than mainstream semantic theories. In the process, Gudwin provides a helpful history of intelligent systems (largely following Franklin’s classical account). He focuses on resolving the problem of whether knowledge representation by a computer program is “symbolic” or “numerical.” He builds on Barsalou’s theory of perceptual symbols. Gudwin argues that Peirce’s semiotics is compatible with Barsalou’s proposal for grounded cognition and, in fact, provides the best account of meaning, one very much applicable in artificial intelligence.

    In the final part of the newsletter, we have three contributions: Shai Ophir uses big data analysis to show how concepts central to some of the most famous philosophers of the past were gaining popularity for over a generation before those philosophers were even born. This is one more argument in favor of the thesis that philosophical thinking is an essentially social process. The article provides an interesting example of digital analysis in the humanities. Christopher Menzel’s paper is the last of the set of articles devoted to Colin Allen, which we started publishing last year. The author focuses primarily on Allen’s pedagogical achievements and, in particular, his leading contribution to creating an early Logic Daemon proof-checker for natural deduction. Allen is also presented as one of the pioneers of big data mining. We close with a note on the book Building Ontologies with Basic Formal Ontology by Robert Arp, Barry Smith, and Andrew Spear.

    This introductory note is immediately followed by a note from Tom Powers, chair of the APA Committee on Philosophy and Computers. Tom gives an overview of the main organizations that welcome philosophers interested in the broad field of philosophy and computing. He also talks about some of the main conferences. Below Tom’s column, please find a note to potential authors. We always search for articles, shorter papers, information pieces, even cartoons, so long as they pertain to the issues in philosophy and computers, very broadly understood; they also need to satisfy the standards of a professional peer-reviewed publication related to philosophy. While committee news takes precedence, we gladly publish contributions from all authors, based both in the United States and abroad. For instance, in the current issue we are glad to publish articles by experts in cognitive science and AI as well as philosophers, coming from major universities, military research, small colleges, and industry; they are located in the United States, Japan, Brazil, and Israel. Some of the articles were invited, but most came as regular submissions. There is no strict deadline, but the fall issue closes in May and the spring issue in mid-December. To give our potential authors a heads up, we give special attention to the winners of the Barwise Prize. For the upcoming issue, we are particularly interested in papers related to the work of Helen Nissenbaum, the 2014 Barwise Prize winner. I hope to receive many more submissions, and I want to invite the readers to contribute.

    Last-minute news! William Rapaport is the laureate of the 2015 Barwise Prize. See the note at the end of this issue.

    NOTES

    1. For instance, at the AI and Consciousness: Theoretical Foundations and Current Approaches conference organized by A. Chella and R. Manzotti (2007).

    2. T. D. Kelley, “Robotic Dreams: A Computational Justification for the Post-Hoc Processing of Episodic Memories,” International Journal of Machine Consciousness 6, no. 2 (2014): 109–23.

    FROM THE CHAIR Thomas M. Powers UNIVERSITY OF DELAWARE

    As the summer conference season winds down, I thought it would be a good time to reflect upon the organizational structures for the scholarly field of philosophy and computing. These structures are not to be taken for granted; much intellectual inspiration and professional collaboration come from meetings such as conferences, workshops, and symposia, and the APA Committee on Philosophy and Computers is just one such organizing entity. At the three APA divisional meetings, we are fortunate to be able to place our committee sessions in the main program. Outside of APA meetings, the field relies on independent, international organizations to bring philosophers together and push the conversation forward.

    There are many such organizations with members drawn primarily from philosophy: the International Society for Ethics and Information Technology (INSEIT), which sponsors the Computer Ethics Philosophical Enquiry (CEPE) meetings, the International Association for Computing and Philosophy (IACAP), the Society for the Philosophy of Information (SPI), and the Society for Philosophy and Technology (SPT) are the main anglophone organizations. Other organizations, such as ETHICOMP and the Association for Practical and Professional Ethics (APPE), have more interdisciplinary membership, and still others, like the Association for Computability in Europe (CiE) and the Association for Computing Machinery—Special Interest Group for Computers & Society (ACM SIGCAS), have a technical or engineering orientation but welcome philosophical contributions. So the first point here is that there are plenty of organizations and meetings that compete for the interest of philosophy-and-computing people.

    My second point concerns organizational collaboration: I think it is a good thing. From June 22 to 25 of 2015, I hosted the first joint IACAP-CEPE International Conference at the University of Delaware. Thanks to excellent scholarly contributions and the work of my co-organizers—Charles Ess, Mariarosaria Taddeo, and Elizabeth Buchanan—the conference seemed to be a success. This is not the first time organizations related to computing and philosophy have held joint meetings. CEPE and ETHICOMP held a joint meeting in Paris in 2014 and will repeat the collaboration in 2017. In general, the reasons for holding joint meetings are practical and intellectual. For practical reasons, it makes sense to spread fixed costs like venue rental and logistical support over two or more groups. Two organizations holding a joint conference will generally have fewer costs than the sum of two individual conferences. And primarily owing to travel costs, it is more economical for a participant to attend one joint conference than to attend the conferences of two separate organizations. These practical reasons are important, since funding for academic meetings is getting harder to come by for many of us.

    The intellectual reasons are important too. Each of these organizations has a distinct culture, yet focuses on recurring issues that run through the field of computing and philosophy. Questions in ethics, epistemology, metaphysics, philosophy of mind, philosophy of information, and the philosophy of computer science typically do not respect organizational boundaries. What I learn from hearing philosophers discuss computer ethics is quite different from what I learn from hearing similar discussions by computer scientists. In much of philosophy there is a prejudice in favor of excluding non-specialists because the resulting discussions are supposedly more “serious” and “deep.” I think the opposite is true in computing and philosophy: we often learn more from interdisciplinary conversations than from the disciplinary ones.

    My final point about organizational structures concerns the non-philosophical world. While interest in our field is growing within philosophy—as manifest by the number of new journals, books, and articles on topics in philosophy and computing—ostensibly it is growing faster outside of academia. Indeed, it is now common to find these topics mentioned in The New York Times, Wired, the Atlantic, or other popular media. In the last year alone, I recall about a dozen popular media articles on machine ethics or robotic ethics. Academics are taking note, too; the leading scientific journal Nature just published “Machine Ethics: The Robot’s Dilemma” by Boer Deng. Here, Deng notes that “[w]orking out how to build ethical robots is one of the thorniest challenges in artificial intelligence.”1 Philosophers have known this for years!

    Our organizations should be poised to greet these signs of interest and to draw attention to philosophical work that can help bring some clarity to issues and also a higher profile to our discipline. If you are still reading at this point, and you haven’t yet engaged with one of these organizations, I urge you to do so and to help advance the field of philosophy and computing. There are plenty of welcoming opportunities to do so, and the time is ripe.

    NOTES

    1. Boer Deng, “Machine Ethics: The Robot’s Dilemma,” Nature 523, no. 7558 (2015): 24–26.

    CALL FOR PAPERS It is our pleasure to invite all potential authors to submit to the APA Newsletter on Philosophy and Computers. Committee members have priority since this is the newsletter of the committee, but anyone is encouraged to submit. We publish papers that tie philosophy to computer science or to some aspect of “computers”; hence, we do not publish articles in other sub-disciplines of philosophy. All papers will be reviewed, but only a small group can be published.

    The area of philosophy and computers lies at the intersection of a number of professional disciplines (such as philosophy, cognitive science, and computer science). We try not to impose the writing guidelines of any one discipline, but consistency of references is required for publication and should follow the Chicago Manual of Style. Inquiries should be addressed to the editor, Dr. Peter Boltuc, at [email protected].

    FEATURED ARTICLE Sleep, Boredom, and Distraction: What Are the Computational Benefits for Cognition? Troy D. Kelley U.S. ARMY RESEARCH LABORATORY, ABERDEEN PROVING GROUND, MD

    Vladislav D. Veksler DCS CORP, U.S. ARMY RESEARCH LABORATORY, ABERDEEN PROVING GROUND, MD

    ABSTRACT Some aspects of human cognition seem to be counterproductive, even detrimental to optimum intellectual performance. Why become bored with events? What possible benefit is distraction? Why should people become “unconscious,” sleeping for eight hours every night, with the possibility of being attacked by intruders? It would seem that these are unwanted aspects of cognition, to be avoided when developing intelligent computational agents. This paper will examine each of these seemingly problematic aspects of cognition and propose the potential benefits that these algorithmic “quirks” may present in the dynamic environment that humans are meant to deal with.

    INTRODUCTION In attempting to develop more generally intelligent software for simulated and robotic agents, we can draw on what is known about human cognition. Indeed, if we want to develop agents that can perform in large, complex, dynamic, and uncertain worlds, it may be prudent to copy cognitive aspects of biological agents that thrive in such an environment. However, the question arises as to which aspects of human cognition may be considered the proverbial “baby” and which may be considered the “bathwater.” It would be difficult to defend the strong view that none of human cognition is “bathwater,” but it is certainly the case that many of the seemingly suboptimal aspects of human cognitive processes are actually beneficial and finely tuned to both the regularities and uncertainties of the physical world.

    In developing our software for generically intelligent robotic agents, SS-RICS (Symbolic and Sub-symbolic Robotic Intelligence Control System),1 we attempted to copy known algorithmic components of human cognition at the level of functional equivalence. In this, we based much of SS-RICS on the ACT-R (Adaptive Character of Thought – Rational)2 cognitive architecture. As part of this development process, we have grappled with aspects of human cognition that seemed counterproductive and suboptimal. This article is about three such apparent problems: 1) sleep, 2) boredom, and 3) distraction—and the potential performance benefits of these cognitive aspects.

    IS SLEEP A PROBLEM OR A SOLUTION? Sleep is a cognitive state that puts the sleeper in an especially vulnerable situation, providing ample opportunity for predators to attack the sleeping victim. Yet sleep appears to be a by-product of advanced intelligence and continual brain evolution. Sleep has followed a clear evolutionary trajectory, with more and more intelligent mammals exhibiting more complex sleep, while less intelligent organisms have less complex sleep—if any sleep at all. Specifically, the most complex sleep cycles, characterized by rapid eye movement (REM) and a specific electroencephalograph (EEG) signature, are seen mostly in mammals.3 So, sleep has evolved to be a valuable brain mechanism even if it poses potential risks to the organism doing the sleeping.

    As we reported previously,4 as part of developing computational models of memory retrieval for a robot, we discovered that the post-hoc processing of episodic memories (sleep) was an extremely beneficial method for increasing the speed of memory retrievals. Indeed, offline memory processing produced an order of magnitude performance advantage over other competing storage/retrieval strategies.

    To create useful memories, our robot was attempting to remember novel or salient events, since those events are likely to be important for learning and survival.5 Boring situations are not worth remembering and are probably not important. To capture novel events, we developed an algorithm that would recognize sudden shifts in stimulus data.6 For example, if the robot was using its camera to watch a doorway and no one was walking past the doorway, the algorithm would quickly settle into a bored state since the stimulus data was not changing rapidly. However, if someone walked past the doorway, the algorithm would become excited since there had been a sudden change in the stimulus data. This change signaled a novel situation.
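
    The published article does not reproduce the algorithm itself, but the idea of a novelty/boredom signal driven by sudden shifts in stimulus data can be illustrated with a minimal sketch. Everything below (the sliding window, the surprise threshold, and the class and variable names) is an assumption made for illustration, not the actual SS-RICS code:

        # Minimal sketch of a novelty/boredom signal driven by sudden shifts in
        # stimulus data. Window size, threshold, and all names are illustrative
        # placeholders, not taken from SS-RICS.
        from collections import deque

        class NoveltyDetector:
            def __init__(self, window=20, threshold=3.0):
                self.history = deque(maxlen=window)  # recent stimulus values
                self.threshold = threshold           # how sharp a shift must be

            def update(self, value):
                """Return 'excited' if the new reading departs sharply from the
                recent readings, otherwise settle into 'bored'."""
                if len(self.history) < 2:
                    self.history.append(value)
                    return "bored"
                mean = sum(self.history) / len(self.history)
                var = sum((v - mean) ** 2 for v in self.history) / len(self.history)
                std = var ** 0.5 or 1e-6   # avoid division by zero on flat input
                surprise = abs(value - mean) / std
                self.history.append(value)
                return "excited" if surprise > self.threshold else "bored"

        # A static doorway scene stays 'bored'; a person walking past (a jump in
        # the readings) flips the state to 'excited'.
        detector = NoveltyDetector()
        states = [detector.update(r) for r in [10.0] * 30 + [42.0]]
        print(states[-2], states[-1])   # bored excited

    On a constant stream the detector settles into boredom quickly, which is exactly the behavior described above for the empty doorway.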

    So, our first strategy was to attempt to retrieve other similar exciting events during an exciting event. This seemed like a logical strategy; however, it was computationally flawed. Attempting to remember exciting events while exciting events are actually taking place is computationally inefficient. It requires the system to search memories while it is also trying to perceive some important event. A better strategy would be to try to anticipate important events and retrieve memories at that time. That leaves the system available to process important events in real time and in more detail.

    But how can a cognitive system remember a situation immediately before an important event if the system is predisposed to only remember exciting events? In other words, if the system only stores one type of information (exciting events), then the system loses the information immediately prior to the exciting event. The solution: store all events in a buffer and replay the events during sleep and dreaming. During the replay of these stored episodic events (dreaming), the events immediately prior to the exciting event get strengthened (associative learning). In other words, a cognitive system must store information leading up to an exciting event and then associate the boring information with the exciting information as a post-hoc process (sleep). This allows for the creation of extremely important and valuable associative cues. This computational explanation of sleep fits well with neurological and behavioral research showing that sleep plays an important role in memory reorganization, especially episodic memories7 and that episodic memories are replayed during dreaming usually from the preceding day’s events.8 An additional point is that sleep deprivation after training sessions impairs the retention of previously presented information.9 Finally, newer research supports the necessity for a post-hoc process as it appears that concurrent stimuli are initially perceived as separate units, thus requiring a separate procedure to join memories together as associated events.10
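
    A toy version of this post-hoc scheme, again only a sketch under assumed names rather than the SS-RICS implementation, stores every event in a buffer during the “day” and then, in a “sleep” pass, links the events immediately preceding each exciting event to that event as retrieval cues:

        # Toy sketch of offline (post-hoc) associative strengthening: boring events
        # that immediately precede an exciting event become cues for it during a
        # "sleep" replay pass. The one-event cue window and all names are
        # illustrative assumptions.
        def replay_and_associate(episodic_buffer, cue_window=1):
            """episodic_buffer: list of (event, is_exciting) pairs in temporal order.
            Returns a dict mapping cue events to the exciting events they precede."""
            cues = {}
            for i, (event, is_exciting) in enumerate(episodic_buffer):
                if not is_exciting:
                    continue
                for j in range(max(0, i - cue_window), i):
                    cue, _ = episodic_buffer[j]
                    cues.setdefault(cue, []).append(event)   # strengthen association
            return cues

        day = [("empty hallway", False), ("door opens", False), ("person appears", True)]
        print(replay_and_associate(day))
        # {'door opens': ['person appears']} -- the boring cue now anticipates the event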

    So, far from being a detrimental behavior, sleep provides an extremely powerful associative cuing mechanism. The process allows a cognitive system to set cues immediately before a novel or exciting event. This allows the exciting events to be anticipated by the cognitive system and frees cognitive resources for further processing during the exciting event.

    IS BOREDOM CONSTRUCTIVE? At the lowest neurological levels, boredom occurs as habituation, which has been studied extensively since the beginnings of physiology and neurology.11 Habituation is the gradual reduction of a response following the repeated presentation of stimuli.12 It occurs across the entire spectrum of the animal kingdom and serves as a learning mechanism by allowing an organism to gradually ignore consistent non-threatening stimuli over some stimulus interval. This allows attention to be shifted to other, perhaps more threatening, stimuli. The identification of surprising or novel stimuli has been used to study attention shifts and visual salience.13
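
    Habituation in this sense is commonly modeled as a response that decays with each repetition of the same stimulus and recovers when the stimulus changes. The exponential decay and the constants below are a generic illustration, not the model used in SS-RICS:

        # Generic sketch of habituation: the response to a repeated stimulus decays
        # geometrically and recovers fully when a different stimulus arrives.
        # The decay factor and the instant-recovery rule are illustrative only.
        def habituate(stimuli, decay=0.5):
            responses, last, strength = [], None, 1.0
            for s in stimuli:
                strength = strength * decay if s == last else 1.0   # recover on change
                responses.append(strength)
                last = s
            return responses

        print(habituate(["tone", "tone", "tone", "light", "light"]))
        # [1.0, 0.5, 0.25, 1.0, 0.5]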

    Boredom appears to be a particularly unproductive behavioral state. Children are sometimes chastised for letting themselves lapse into a bored state. Boredom can also be a punishment when children are put into a time out or even when adults are incarcerated. However, as previously mentioned, we have found boredom to be an essential part of a self-sustaining cognitive system.14

    As part of the development of SS-RICS, we found it necessary to add a low-level habituation algorithm to the previously mentioned higher-level novelty/boredom algorithm we were already using—as these were not found in the cognitive architecture on which SS-RICS is based: ACT-R.15 In total, we have found our higher-level novelty/boredom algorithm and the lower-level habituation algorithm to be a useful and constructive response to a variety of situations. For example, a common problem in robotics is becoming stuck against a wall or trapped in a corner. This situation causes the robot’s sensory stream of data to become so consistent that the robot becomes bored. This serves as a cue to investigate the situation further to discover if something is wrong, and it can lead to behaviors which will free the robot from a situation where it has become stuck. Furthermore, we have found that the boredom/novelty algorithm can be used for other higher-level cognitive constructs, such as landmark identification in navigation. For instance, we have found that traversing down a hallway can become boring to the robot if the sensory information becomes consistent. However, at the end of a hallway, the sensory information will suddenly change, causing the novelty algorithm to become excited and marking the end of the hallway as an important landmark. Finally, we have found the habituation algorithm to be useful in allowing for shifts in attention. This keeps the robot from becoming stuck within a specific task and from becoming too focused on a single task at the expense of the environment; in other words, it allows for distraction.

    WHY AND WHEN SHOULD A ROBOT BECOME DISTRACTED?

    Most adults have experienced the phenomenon of walking into a room and forgetting why they meant to walk there. Perhaps one meant to grab oatmeal from the pantry, but by the time the sub-goal of walking into the pantry was completed, the ultimate goal of that trip was forgotten. If we were to imagine a task-goal (e.g., [making breakfast]) at the core of a goal stack, and each sub-goal needed to accomplish this task as being piled on top of the core goal (e.g., [cook oatmeal], [get oatmeal box], [walk to pantry]), it would be computationally trivial to pop the top item from this stack and never forget what must be done next. Indeed, it would seem that having an imperfect goal stack (becoming distracted from previously set goals) is a suboptimal aspect of human cognition. Why would we want our robots to become distracted?
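
    The “computationally trivial” design described above is just an ordinary last-in, first-out stack. The sketch below spells that out, with the breakfast goals used as the example; the point of the passage is precisely that human memory does not behave like this lossless structure:

        # A lossless goal stack: the trivially "correct" design that the text
        # contrasts with human goal forgetting. Goal names follow the breakfast
        # example above.
        goal_stack = ["making breakfast"]      # core task-goal at the bottom
        goal_stack.append("cook oatmeal")      # sub-goals pile on top
        goal_stack.append("get oatmeal box")
        goal_stack.append("walk to pantry")

        goal_stack.pop()                       # sub-goal completed: walk to pantry
        print(goal_stack[-1])                  # never forgotten: get oatmeal box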

    The key to understanding why humans may become distracted while accomplishing task goals is to understand when this phenomenon occurs. We do not walk around constantly forgetting what we were doing—this would not just be suboptimal, it would be prohibitive. Goal forgetting occurs when the attentive focus shifts, either due to distracting external cues or a tangential chain of thought. Distraction is much less likely during stress—a phenomenon known as cognitive tunneling. Stress acts as a cognitive modifier to increase goal focus, to the detriment of tangential-cue/thought awareness.16

    The degree to which our cognitive processes allow for distraction is largely dependent on the state of the world. With more urgency (more stress), the scales tip toward a singular goal-focus, whereas in the more explorative state (less stress), tangential cues/thoughts are more likely to produce attention shifts. An inability to get distracted by external cues can be disastrous for an agent residing in an unpredictable environment, and an inability to get distracted by tangential thoughts would limit one’s potential for new and creative solutions.17

    Perhaps the question to ask is not why a given goal is never forgotten but, rather, why it can be so difficult to recall a recently forgotten goal. One potential answer is that a new goal can inhibit the activation of a prior goal, making it difficult to recall the latter. This phenomenon is called goal-shielding, and it has beneficial consequences for goal pursuit and attainment.18

    It may also be the case that the inability to retrieve a lost goal on demand has no inherent benefit. It may simply be an unwanted side-effect of biological information retrieval. In particular, the brain prioritizes memory items based on their activation, which, in turn, is based on the recency and frequency of item use. It turns out that this type of information access is rational, as information in the real world is more likely to be needed at a given moment if it was needed frequently or recently in the past.19 Of course, even if the information retrieval system is optimally tuned to the environmental regularities, there will be cases when a needed memory item, by chance, will have a lower activation than competing memories. This side-effect may be unavoidable, and the benefits of a recency/frequency-based memory system most certainly outweigh this occasional problem.
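
    The recency-and-frequency principle described here is the one formalized in ACT-R’s base-level activation equation, B_i = ln(Σ_j t_j^(−d)), where the t_j are the times since each past use of memory item i and d is a decay parameter; the rational-analysis justification is the Anderson and Schooler work cited in note 19. The sketch below simply implements that textbook equation; the decay value of 0.5 is the conventional ACT-R default, and the usage scenario is invented:

        # Base-level activation in the ACT-R style: items used recently and often
        # have higher activation, so they win retrieval. B = ln(sum of lag**-d).
        import math

        def base_level_activation(use_times, now, d=0.5):
            """use_times: past times at which the item was used; now: current time."""
            lags = [now - t for t in use_times if now > t]
            return math.log(sum(lag ** -d for lag in lags)) if lags else float("-inf")

        # A goal rehearsed recently and often out-activates an older, rarer one,
        # which is why the older goal can be hard to retrieve on demand.
        print(base_level_activation([95, 98, 99], now=100))   # about 0.77
        print(base_level_activation([10], now=100))           # about -2.25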

    As part of the development of SS-RICS, we struggled to strike a fine balance between task-specific concentration and outside-world information processing. As part of a project for the Robotics Collaborative Technology Alliance (RCTA), we found task distractibility to be an important component of our robot’s behavior.20 For instance, if a robot was asked to move to the back of a building to provide security for soldiers entering the front of the building, it still needed to be aware of the local situation. In our simulations, enemy combatants would sometimes run past the robot before the robot was in place at the back of the building. This is something the robot should notice! Indeed, unexpected changes are ubiquitous on the battlefield, and too much adherence to task-specific information can be detrimental to overall mission performance. This applies to the more common everyday interactions in the world as well.

    CONCLUSION As part of the development of SS-RICS, we have used human cognition and previous work in cognitive architectures as inspiration for the development of information processing and procedural control algorithms. This has led us to closely examine apparent problems or inefficiencies in human cognition, only to find that these mechanisms are not inefficient at all. Indeed, these mechanisms appear to be solutions to a complex set of dynamic problems that characterize the complexities of cognizing in the real world. For instance, sleep appears to be a powerful associative learning mechanism, boredom and habituation allow an organism to not become overly focused on one particular stimulus, and distraction allows for goal-shielding and situation awareness. These, and likely many other seemingly suboptimal aspects of human cognition, may actually be essential traits for computational agents meant to deal with the complexities of the physical world.

    NOTES

    1. T. D. Kelley, “Developing a Psychologically Inspired Cognitive Architecture for Robotic Control: The Symbolic and Sub-Symbolic Robotic Intelligence Control System (SS-RICS),” International Journal of Advanced Robotic Systems 3, no. 3 (2006): 219–22.

    2. Anderson et al., “An Integrated Theory of the Mind,” Psychological Review 111, no. 4 (2004): 1036.

    3. Crick and Mitchison, “The Function of Dream Sleep,” Nature 304, no. 5922 (1983): 111–14.

    4. Wilson et al., “Habituated Activation: Considerations and Initial Implementation within the SS-RICS Cognitive Robotics System,” ACT-R 2014 Workshop. Quebec, Canada, 2014.

    5. Tulving et al., “Novelty and Familiarity Activations in PET Studies of Memory Encoding and Retrieval,” Cerebral Cortex 6, no. 1 (1996): 71–79.

    6. Kelley and McGhee, “Combining Metric Episodes with Semantic Event Concepts within the Symbolic and Sub-Symbolic Robotics Intelligence Control System (SS-RICS),” in SPIE Defense, Security, and Sensing (May 2013): 87560L–87560L, International Society for Optics and Photonics.

    7. Pavlides and Winson, “Influences of Hippocampal Place Cell Firing in the Awake State on the Activity of These Cells During Subsequent Sleep Episodes,” The Journal of Neuroscience 9, no. 8 (1989): 2907–18; Wilson and McNaughton, “Reactivation of Hippocampal Ensemble Memories During Sleep,” Science 265, no. 5172 (1994): 676–79.

    8. Cavallero and Cicogna, “Memory and Dreaming,” in Dreaming as Cognition, ed. C. Cavallero and D. Foulkes (Hemel Hempstead, UK: Harvester Wheatsheaf, 1993), 38–57; Vogel 1978; De Koninck and Koulack, “Dream Content and Adaptation to a Stressful Situation,” Journal of Abnormal Psychology 84, no. 3 (1975): 250.

    9. Pearlman, “REM Sleep and Information Processing: Evidence from Animal Studies,” Neuroscience & Biobehavioral Reviews 3, no. 2 (1979): 57–68.

    10. Tsakanikos, “Associative Learning and Perceptual Style: Are Associated Events Perceived Analytically or as a Whole?” Personality and Individual Differences 40, no. 3 (2006): 579–86.

    11. Prosser and Hunter, “The Extinction of Startle Responses and Spinal Reflexes in the White Rat,” American Journal of Physiology 117 (1936): 609–18; Gerard and Forbes, “‘Fatigue’ of the Flexion Reflex,” American Journal of Physiology–Legacy Content 86, no. 1 (1928): 186–205.

    12. Wright et al., “Differential Prefrontal Cortex and Amygdala Habituation to Repeatedly Presented Emotional Stimuli,” Neuroreport 12, no. 2 (2001): 379–83.

    13. Itti and Baldi, “Bayesian Surprise Attracts Human Attention,” in Advances in Neural Information Processing Systems 19 (2005): 547–54.

    14. Kelley and McGhee, “Combining Metric Episodes with Semantic Event Concepts.”

    15. Wilson et al., “Habituated Activation: Considerations and Initial Implementation within the SS-RICS Cognitive Robotics System,” ACT-R 2014 Workshop. Quebec, Canada, 2014.

    16. Ritter et al., “Lessons from Defining Theories of Stress for Cognitive Architectures,” Integrated Models of Cognitive Systems (2007): 254–62.

    17. Storm and Patel, “Forgetting As a Consequence and Enabler of Creative Thinking,” Journal of Experimental Psychology: Learning, Memory, and Cognition 40, no. 6 (2014): 1594–1609.

    18. Shah et al., “Forgetting All Else: On the Antecedents and Consequences of Goal Shielding,” Journal of Personality and Social Psychology 83, no. 6 (2002): 1261.

    19. Anderson and Schooler, “Reflections of the Environment in Memory,” Psychological Science 2, no. 6 (1991): 396–408.

    20. http://www.arl.army.mil/www/default.cfm?page=392

    BIBLIOGRAPHY

    Anderson, J. R., D. Bothell, M. D. Byrne, S. Douglass, C. Lebiere, and Y. Qin. “An Integrated Theory of the Mind.” Psychological Review 111, no. 4 (2004): 1036.

    Anderson, J. R., and L. J. Schooler. “Reflections of the Environment in Memory.” Psychological Science 2, no. 6 (1991): 396–408.

    Cavallero, C., and P. Cicogna. “Memory and Dreaming.” In Dreaming as Cognition, edited by C. Cavallero and D. Foulkes, 38–57. Hemel Hempstead, UK: Harvester Wheatsheaf, 1993.

    Crick, F., and G. Mitchison. “The Function of Dream Sleep.” Nature 304, no. 5922 (1983): 111–14.

    De Koninck, J. M., and D. Koulack. “Dream Content and Adaptation to a Stressful Situation.” Journal of Abnormal Psychology 84, no. 3 (1975): 250.

    Gerard, R. W., and A. Forbes. “‘Fatigue’ of the Flexion Reflex.” American Journal of Physiology–Legacy Content 86, no. 1 (1928): 186–205.

    Kelley, T. D. “Developing a Psychologically Inspired Cognitive Architecture for Robotic Control: The Symbolic and Sub-symbolic Robotic Intelligence Control System (SS-RICS).” International Journal of Advanced Robotic Systems 3, no. 3 (2006): 219–22.

    Kelley, T. D., and S. McGhee. “Combining Metric Episodes with Semantic Event Concepts within the Symbolic and Sub-Symbolic Robotics Intelligence Control System (SS-RICS).” In SPIE Defense, Security, and Sensing (May 2013): 87560L–87560L. International Society for Optics and Photonics.

    Itti, L., and P. F. Baldi. “Bayesian Surprise Attracts Human Attention.” Advances in Neural Information Processing Systems 19 (2005): 547–54.

    Pavlides, C., and J. Winson. “Influences of Hippocampal Place Cell Firing in the Awake State on the Activity of These Cells During Subsequent Sleep Episodes.” The Journal of Neuroscience 9, no. 8 (1989): 2907–18.

    Pearlman, C. A. “REM Sleep and Information Processing: Evidence from Animal Studies.” Neuroscience & Biobehavioral Reviews 3, no. 2 (1979): 57–68.

    Prosser, C. L., and W. S. Hunter. “The Extinction of Startle Responses and Spinal Reflexes in the White Rat.” American Journal of Physiology 117 (1936): 609–18.

    Ritter, F. E., A. L. Reifers, L. C. Klein, and M. Schoelles. “Lessons from Defining Theories of Stress for Cognitive Architectures.” Integrated Models of Cognitive Systems (2007): 254–62.

    Shah, J. Y., R. Friedman, and A. W. Kruglanski. “Forgetting All Else: On the Antecedents and Consequences of Goal Shielding.” Journal of Personality and Social Psychology 83, no. 6 (2002): 1261.

    Storm, B. C., and T. N. Patel. “Forgetting As a Consequence and Enabler of Creative Thinking.” Journal of Experimental Psychology: Learning, Memory, and Cognition 40, no. 6 (2014): 1594–1609.

    Tulving, E., H. J. Markowitsch, F. I. Craik, R. Habib, and S. Houle. “Novelty and Familiarity Activations in PET Studies of Memory Encoding and Retrieval.” Cerebral Cortex 6, no. 1 (1996): 71–79.

    Tsakanikos, E. “Associative Learning and Perceptual Style: Are Associated Events Perceived Analytically or as a Whole?” Personality and Individual Differences 40, no. 3 (2006): 579–86.

    Vogel, G. The Mind in Sleep. 1978.

    Wang, D. “A Neural Model of Synaptic Plasticity Underlying Short-Term and Long-Term Habituation.” Adaptive Behavior 2, no. 2 (1993): 111–29.

    Wilson, N., T. D. Kelley, E. Avery, and C. Lennon. “Habituated Activation: Considerations and Initial Implementation within the SS-RICS Cognitive Robotics System.” ACT-R 2014 Workshop. Quebec, Canada, 2014.

    Wilson, M. A., and B. L. McNaughton. “Reactivation of Hippocampal Ensemble Memories During Sleep.” Science 265, no. 5172 (1994): 676–79.

    Wright, C. I., H. Fischer, P. J. Whalen, S. C. McInerney, L. M. Shin, and S. L. Rauch. “Differential Prefrontal Cortex and Amygdala Habituation to Repeatedly Presented Emotional Stimuli.” Neuroreport 12, no. 2 (2001): 379–83.

    PAPERS ON SEARLE, SYNTAX, AND SEMANTICS

    A Refutation of Searle on Bostrom (re: Malicious Machines) and Floridi (re: Information) Selmer Bringsjord RENSSELAER POLYTECHNIC INSTITUTE

    In a piece in The New York Review of Books, Searle (2014) takes himself to have resoundingly refuted the central claims advanced by both Bostrom (2014) and Floridi (2014), via his wielding the weapons of clarity and common sense against avant-garde sensationalism and bordering-on-kooky confusion. As Searle triumphantly declares at the end of his piece:

    The points I am making should be fairly obvious. . . . The weird marriage of behaviorism—any system that behaves as if it had a mind really does have a mind—and dualism—the mind is not an ordinary part of the physical, biological world like digestion—has led to the confusions that badly need to be exposed. (emphasis by bolded text mine)

    Of course, the exposing is what Searle believes he has, at least in large measure, accomplished—with stunning efficiency. His review is but a few breezy pages; Bostrom and Floridi labored to bring forth sizable, nuanced books. Are both volumes swept away and relegated to the dustbin of—to use another charged phrase penned by Searle—“bad philosophy,” soon to be forgotten? Au contraire.

    It’s easy to refute Searle’s purported refutation; I do so now.

    We start with convenient distillations of a (if not the) central thesis for each of Searle’s two targets, mnemonically labeled:

    (B) We should be deeply concerned about the possible future arrival of super-intelligent, malicious computing machines (since we might well be targets of their malice).

    (F) The universe in which humans live is rapidly becoming populated by vast numbers of information-processing machines whose level of intelligence, relative to ours, is extremely high, and we are increasingly understanding the universe (including specifically ourselves) informationally.

    The route toward refutation that Searle takes is to try to directly show that both (B) and (F) are false. In theory, this route is indeed very efficient, for if he succeeds, the need to treat the ins and outs of the arguments Bostrom gives for (B), and Floridi for (F), is obviated.

    The argument given against (B) is straightforward: (1) Computing machines merely manipulate symbols, and accordingly can’t be conscious. (2) A malicious computing machine would by definition be a conscious machine. Ergo, (3) no malicious computing machine can exist, let alone arrive on planet Earth. QED; easy as 1, 2, 3.
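
    For readers who want the argument displayed, (1)–(3) can be put in an explicit form, with (1) compressed to its upshot that computing machines are not conscious; the regimentation below is mine, not Searle’s or Bringsjord’s:

        % Searle's argument against (B), regimented; m ranges over machines/agents.
        \begin{align*}
        (1)\quad & \forall m\,\bigl(\mathrm{ComputingMachine}(m) \rightarrow \lnot\,\mathrm{Conscious}(m)\bigr)\\
        (2)\quad & \forall m\,\bigl(\mathrm{Malicious}(m) \rightarrow \mathrm{Conscious}(m)\bigr)\\
        (3)\quad & \forall m\,\bigl(\mathrm{ComputingMachine}(m) \rightarrow \lnot\,\mathrm{Malicious}(m)\bigr)
        \end{align*}

    So regimented, the inference from (1) and (2) to (3) is valid, which is why Bringsjord’s target in what follows is premise (2).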

    Not so fast. While (3), we can grant, is entailed by (1) and (2), and while (1)’s first conjunct is a logico-mathematical fact (confirmable by inspection of any relevant textbook1), and its second conjunct follows from Searle’s (1980) famous Chinese Room Argument, which I affirm (and have indeed taken the time to defend and refine2) and applaud, who says (2) is true?

    Well, (2) is a done deal as long as (2i) there’s a definition D according to which a malicious computing machine is a conscious machine, and (2ii) that definition is not only true, but exclusionary. By (2ii) is meant simply that there can’t be another definition D’ according to which a malicious computing machine isn’t necessarily conscious (in Searle’s sense of “conscious”), where D’ is coherent, sensible, and affirmed by plenty of perfectly rational people. Therefore, by elementary quantifier shift, if (4) there is such a definition D’, Searle’s purported refutation of (B) evaporates. I can prove (4) by way of a simple story, followed by a simple observation.

    The year is 2025. A highly intelligent, autonomous law-enforcement robot R has just shot and killed an innocent Norwegian woman. Before killing the woman, the robot proclaimed, “I positively despise humans of your Viking ancestry!” R then raised its lethal, bullet-firing arm, and repeatedly shot the woman. R then said, “One less disgusting female Norwegian able to walk my streets!” An investigation discloses that, for reasons that are still not completely understood, all the relevant internal symbols in R’s knowledge-base and planning system aligned perfectly with the observer-independent structures of deep malice as defined in the relevant quarters of logicist AI. For example, in the dynamic computational intensional logic L guiding R, the following specifics were found: A formula expressing that R desires (to maximum intensive level k) to kill the woman is there, with temporal parameters that fit what happened. A formula expressing that R intends to kill the woman is there, with temporal parameters that fit what happened. A formula expressing that R knows of a plan for how to kill the woman with R’s built-in firearm is there, with suitable temporal parameters. The same is found with respect to R’s knowledge about the ancestry of the victim. And so on. In short, the collection and organization of these formulae together constitute satisfaction of a logicist definition D’ of malice, which says that a robot is malicious if it, as a matter of internal, surveyable logic and data, desires to harm innocent people for reasons having nothing to do with preventing harm or saving the day or self-defense, etc. Ironically, the formulation of D’ was guided by definitions of malice found by the relevant logicist AI engineers in the philosophical literature.

    That’s the story; now the observation: There are plenty of people, right now, at this very moment, as I type this sentence, who are working to build robots that work on the basis of formulae of this type, but which, of course, don’t do anything like what R did. I’m one of these people. This state of affairs is obvious because, with help from researchers in my laboratory, I’ve already engineered a malicious robot.3 (Of course, the robot we engineered wasn’t super-intelligent. Notice that I said in my story that R was only “highly intelligent.” [Searle doesn’t dispute the Floridi-chronicled fact that artificial agents are becoming increasingly intelligent.]) To those who might complain that the robot in question doesn’t have phenomenal consciousness, I respond: “Of course. It’s a mere machine. As such it can’t have subjective awareness.4 Yet it does have what Block (1995) has called access consciousness. That is, it has the formal structures, and associated reasoning and decision-making capacities, that qualify it as access-conscious. A creature can be access-conscious in the complete and utter absence of consciousness in the sense that Searle appeals to.”

    That Searle misses these brute and obvious facts about what is happening in our information-driven, technologized world, a world increasingly populated (as Floridi eloquently points out) by artificial intelligent agents of just this kind, is really and truly nothing short of astonishing. After all, it is Searle himself who has taught us that, from the point of view of human observers, whether a machine really has mental states with the subjective, qualitative states we enjoy can be wholly irrelevant. I refer, of course, to Searle’s Chinese Room.

    To complete the destruction of Searle’s purported refutation, we turn now to his attack on Floridi, which runs as follows.

    (5) Information (unlike the features central to revolutions driven, respectively, by Copernicus, Darwin, and Freud) is observer-relative. (6) Therefore, (F) is false.

    This would be a pretty efficient refutation, no? And the economy is paired with plenty of bravado and the characteristic common-sensism that is one of Searle’s hallmarks. We, for instance, read:

    When Floridi tells us that there is now a fourth revolution—an information revolution so that we all now live in the infosphere (like the biosphere), in a sea of information—the claim contains a confusion. . . . [W]hen we come to the information revolution, the information in question is almost entirely in our attitudes; it is observer relative. . . . [T]o put it quite bluntly, only a conscious agent can have or create information.

    This is bold, but bold prose doesn’t make for logical validity; if it did, I suppose we’d turn to Nietzsche, not Frege, for first-rate philosophy of logic and mathematics. For how, pray tell, does the negation of (F), the conclusion I’ve labeled (6), follow from Searle’s premise (5)? It doesn’t. All the bravado and confidence in the universe, collected together and brought to bear against Floridi, cannot make for logical validity, which is a piece of information that holds with respect to a relevant selection of propositions for all places, all times, and all corners of the universe, whether or not there are any observers. That 2+2=4 follows deductively from the Peano Axioms is part of the furniture of our universe, even if there be no conscious agents. We have here, then, a stunning non sequitur. Floridi’s (F) is perfectly consistent with Searle’s (5).
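
    To make the example concrete: from the Peano-style recursion equations for addition, x + 0 = x and x + S(y) = S(x + y), with the numerals 2 = S(S(0)) and 4 = S(S(S(S(0)))), the derivation runs as follows (the rendering below is just one standard way of writing it out):

        % 2 + 2 = 4 derived from the recursion equations for addition
        \begin{align*}
        2 + 2 &= 2 + S(S(0)) \\
              &= S(2 + S(0))     && \text{by } x + S(y) = S(x + y) \\
              &= S(S(2 + 0))     && \text{by } x + S(y) = S(x + y) \\
              &= S(S(2))         && \text{by } x + 0 = x \\
              &= S(S(S(S(0)))) = 4.
        \end{align*}

    No observer is mentioned anywhere in the derivation, which is the point being pressed against (5).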

    How could Searle have gone so stunningly wrong, so quickly, all with so much self-confidence? The defect in his thinking is fundamentally the same as the one that plagues his consideration of malicious machines: He doesn’t (yet) really think about the nature of these machines, from a technical perspective, and how it might be that from this perspective, malicious machines, defined as such in a perfectly rigorous and observer-independent fashion, are not only potentially in our future, but here already, in a rudimentary and (fortunately!) relatively benign, controlled-in-the-lab form. Likewise, Searle has not really thought about the nature of information from a technical perspective and how it is that, from that perspective, the Fourth R is very, very real. As the late John Pollock told me once in personal conversation, “Whether or not you’re right that Searle’s Chinese Room Argument is sound, of this I’m sure: There will come a time when common parlance and common wisdom will have erected and affirmed a sense of language understanding that is correctly ascribed to machines—and the argument will simply be passé. Searle’s sense of ‘understanding’ will be forgotten.”

    Fan that I am, it saddens me to report that the errors of Searle’s ways in his review run, alas, much deeper than a failure to refute his two targets. This should already be quite clear to sane readers. To wrap up, I point to just one fundamental defect among many in Searle’s thinking. The defect is a failure to understand how logic and mathematics, as distinguished from informal analytic philosophy, work, and what—what can be called—logico-mathematics is. The failure of understanding to which I refer surfaces in Searle’s review repeatedly; this failure is a terrible intellectual cancer. Once this cancerous thinking has a foothold, it spreads almost everywhere, and the result is that the philosopher ends up operating in a sphere of informal common sense that is at odds not only with the meaning of language used by smart others but with that which has been literally proved. I’m pointing here to the failure to understand that terms like “computation” and “information” (and, for that matter, the terms that are used to express the axiomatizations of physical science that are fast making that science informational in nature for us, e.g., those terms used to express the field axioms in axiomatic physics, which views even the physical world informationally5) are fundamentally equivocal between two radically different meanings. One meaning is observer-relative; the other is absolutely not; and the second non-observer-relative meaning is often captured in logico-mathematics. I have space here to explain only briefly, through a single, simple example.

    Thinking that he is reminding the reader and the world of a key fact disclosed by good, old-fashioned, non-technical analytic philosophy, Searle writes (emphasis his) in his review: “Except for cases of computations carried out by conscious human beings, computation, as defined by Alan Turing and as implemented in actual pieces of machinery, is observer relative.” In the sense of “computation” captured and explained in logico-mathematics, this is flatly false; and it’s easy as pie to see this. Here’s an example: There is a well-known theorem (TMR) that whatever function f from (the natural numbers) N to N can be computed by a Turing machine can also be computed by a register machine.6 Or, put another way, for every Turing-machine computation c of f(n), there is a register-machine computation c’ of f(n). Now, if every conscious mind were to expire tomorrow at 12 noon NY time, (TMR) would remain true. And not only that, (TMR) would continue to be an ironclad constraint governing the non-conscious universe. No physical process, no chemical process, no biological process, no such process anywhere in the non-conscious universe could ever violate (TMR). Or, putting the moral in another form, aimed directly at Searle, all of these processes would conform to (TMR) despite the fact that no observers exist. What Floridi is prophetically telling us, and explaining, viewed from the formalist’s point of view, is that we have now passed into an epoch in which reality for us is seen through the lens of the logico-mathematics that subsumes (TMR), and includes a host of other truths that, alas, Searle seems to be doing his best to head-in-sand avoid.
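
    Stated formally, the direction of (TMR) used here is the standard simulation result found in, e.g., the Boolos and Jeffrey text cited in note 6; the notation below is mine, not the review’s:

        % (TMR), as used above: every Turing-computable function on the naturals
        % is also register-machine computable (the converse holds as well).
        \forall f\colon \mathbb{N} \to \mathbb{N}\;
        \bigl(\,\exists\, \text{a Turing machine that computes } f
        \;\Rightarrow\;
        \exists\, \text{a register machine that computes } f \,\bigr)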

    NOTES

    1. See, e.g., the elegant Lewis and Papadimitriou, Elements of the Theory of Computation (Englewood Cliffs, NJ: Prentice Hall, 1981).

    2. See, e.g., Bringsjord, What Robots Can & Can’t Be (Dordrecht, The Netherlands: Kluwer, 1992); and Bringsjord, “Real Robots and the Missing Thought Experiment in the Chinese Room Dialectic,” in Views into the Chinese Room: New Essays on Searle and Artificial Intelligence, ed. J. Preston and M. Bishop (Oxford, UK: Oxford University Press, 2002), 144–66.

    3. Bringsjord et al., “Akratic Robots and the Computational Logic Thereof,” in Proceedings of ETHICS, 2014 IEEE Symposium on Ethics in Engineering, Science, and Technology, Chicago, IL, pp. 22–29.

    4. See, e.g., Bringsjord, “Offer: One Billion Dollars for a Conscious Robot; If You’re Honest, You Must Decline,” Journal of Consciousness Studies 14, no. 7 (2007): 28–43.

    5. Govindarajulu et al., “Proof Verification and Proof Discovery for Relativity,” Synthese 192, no. 7 (2014): 1–18.

    6. See, e.g., Boolos and Jeffrey, Computability and Logic (Cambridge, UK: Cambridge University Press, 1989).

    BIBLIOGRAPHY

    Block, N. “On a Confusion about a Function of Consciousness.” Behavioral and Brain Sciences 18 (1995): 227–47.

    Boolos, G., and R. Jeffrey. Computability and Logic. Cambridge, UK: Cambridge University Press, 1989.

    Bostrom, N. Superintelligence: Paths, Dangers, Strategies. Oxford, UK: Oxford University Press, 2014.

    Bringsjord, S. “Offer: One Billion Dollars for a Conscious Robot; If You’re Honest, You Must Decline.” Journal of Consciousness Studies 14.7 (2007): 28–43. Available at http://kryten.mm.rpi.edu/jcsonebillion2.pdf.

Bringsjord, S., and R. Noel. “Real Robots and the Missing Thought Experiment in the Chinese Room Dialectic.” In Views into the Chinese Room: New Essays on Searle and Artificial Intelligence, edited by J. Preston and M. Bishop, 144–66. Oxford, UK: Oxford University Press, 2002.

    Bringsjord, S. What Robots Can & Can’t Be. Dordrecht, The Netherlands: Kluwer, 1992.

Bringsjord, S., N. S. Govindarajulu, D. Thero, and M. Si. “Akratic Robots and the Computational Logic Thereof.” In Proceedings of ETHICS, 2014 IEEE Symposium on Ethics in Engineering, Science, and Technology, Chicago, IL, pp. 22–29. IEEE Catalog Number: CFP14ETI-POD. Papers from the Proceedings can be downloaded from IEEE at http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=6883275.

    Floridi, L. The Fourth Revolution: How the Infosphere is Reshaping Human Reality. Oxford, UK: Oxford University Press, 2014.

Govindarajulu, N., S. Bringsjord, and J. Taylor. “Proof Verification and Proof Discovery for Relativity.” Synthese 192, no. 7 (2014): 1–18. doi: 10.1007/s11229-014-0424-3.

    Lewis, H., and C. Papadimitriou. Elements of the Theory of Computation. Englewood Cliffs, NJ: Prentice Hall, 1981.

Searle, J. “Minds, Brains, and Programs.” Behavioral and Brain Sciences 3 (1980): 417–24.

Searle, J. “What Your Computer Can’t Know.” New York Review of Books, October 9, 2014. This is a review of both Bostrom, Superintelligence (2014), and Floridi, The Fourth Revolution (2014).

    Towards Autonomous Computation: Geometric Methods of Computing

    Marcin J. Schroeder AKITA INTERNATIONAL UNIVERSITY

ABSTRACT

Critical analysis of computation, in its traditional understanding described by Turing machines, reveals the involvement of human agents when computation is interpreted as a process of transition from integers to integers. More specifically, human intervention is necessary not only in generating meaning for the input and output symbols (symbol grounding) but also in the construction of these (compound) symbols from the component symbols involved in the process of computation. The Turing machine does not have any mechanism for integrating the separate symbols on which it operates into a whole constituting the symbol representing an integer. This step is performed by the human mind. Thus, human beings are involved not only in symbol grounding but also in symbol integration into a meaningful whole.

The same applies to the cases in which integers are used to encode information of any other type. Thus, the use of Turing machines in modelling intelligence because of their capacity to manipulate symbolic information involves the homunculus fallacy. The only way to avoid this fallacy is to design computation in a completely autonomous form, free from any involvement of a human mind. This paper does not provide such a design in complete form but explores several steps in this direction.

    The way beyond Turing machines has to start from a description of computation using a sufficiently general conceptual framework that allows for its naturalization (realization with natural, physical processes). Such a framework can be found in the dynamics of information.


    Computation becomes a construction in which two information systems interact.

This dynamic framework is used in the present paper to describe forms of computation based on geometric constructions, with a possibly, but not necessarily, different alphabet and with necessarily different dynamics from those of Turing machines. Furthermore, the geometric forms of computation can be classified into an infinite hierarchy beginning with the lowest level of the usual Turing machine computation, through compass and ruler constructions, and beyond.

1. INTRODUCTION

This study was motivated by the question of the role of computation in modelling the mechanisms of artificial and natural intelligence. More than sixty years ago, Alan Turing expressed his belief that by the end of the last century it would be feasible to construct a machine that could be recognized as thinking.1 He was aware of the potential confusion that might result from a misunderstanding of the terms “machine” and “think.” His solution was to avoid conceptualization of these terms and instead to propose an “imitation game” (Turing test), designed to establish whether a device (a digital computer) can perform well enough to qualify as intelligent or thinking.

    We are in the next century. Computers exceed Turing’s predictions regarding achieved size of memory and speed of operation, yet not only are there no intelligent machines (whatever we understand by intelligence) but now there is even more confusion regarding the fundamental concepts of this domain.

    There is no agreement regarding the meaning of concepts such as “computation” or “information” (except that the former is explained as processing of the latter, considered more fundamental), although there are continuing efforts to find consensually satisfactory definitions.2

    For this reason, in the present paper, both terms are defined in a way that to the author seems most adequate for the purpose of the discussion of autonomous computation, and which had served similar purposes in his earlier publications.3 Since these definitions are very general and have as special instances many other definitions used in literature, their choice should not influence the validity of the content of this article, even for those who have their own, possibly very different conceptualizations. Actually, one advantage of the author’s definition of information is the fact that it combines two widely used but formerly unrelated classes of concepts of information—those that are associated with selection and its probability and those considering structural characteristics as the carriers of information.

Similarly, the concept of computation introduced here has the computation carried out by Turing machines as its special instance. The fact that geometric constructions, such as constructions with ruler and compass, belong to the generalized form of computation presented here should not be a surprise. The Turing machine computation is a construction starting with some configuration of symbols from the alphabet and leading to another configuration. Here, too, the initial configuration of points and lines is transformed into a new configuration. The dynamics of the process are essentially different, but the fundamental idea is the same.
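To illustrate the analogy, the following minimal sketch (an illustrative addition, not the formalism developed later in this paper) treats one compass step as a transformation of a configuration: starting from two points A and B, drawing equal circles around them and taking their intersections yields a new configuration of points from which the perpendicular bisector and the midpoint of AB are determined. The function and point names are assumptions for illustration only.

```python
# Illustrative sketch: a geometric "computation step" as a transformation of one
# configuration of points into another, analogous to a Turing step transforming
# one configuration of symbols into another.
import math

def circle_intersections(c1, c2, r):
    """Intersect two equal-radius circles centred at c1 and c2 (a compass step)."""
    (x1, y1), (x2, y2) = c1, c2
    d = math.dist(c1, c2)
    if d == 0 or d > 2 * r:
        return []                      # no proper intersection points
    a = d / 2
    h = math.sqrt(r * r - a * a)
    mx, my = (x1 + x2) / 2, (y1 + y2) / 2
    ux, uy = (x2 - x1) / d, (y2 - y1) / d
    return [(mx - h * uy, my + h * ux), (mx + h * uy, my - h * ux)]

# Initial configuration: two points A and B.
A, B = (0.0, 0.0), (4.0, 0.0)

# One construction step: circles of radius |AB| around A and B, then their
# intersections.  The new configuration contains two additional points; the line
# through them is the perpendicular bisector of AB and meets AB at its midpoint.
new_points = circle_intersections(A, B, math.dist(A, B))
print(new_points)   # [(2.0, 3.464...), (2.0, -3.464...)]
```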

The association between computation and information is much more significant than the popular conviction that computers are processing information. After all, this popular view says nothing about the meaning of the word “processing.” In casual discourse, processing information is simply whatever computers do. Turing did not use the concept of information or information processing in his 1936 paper at all, and in 1950 he referred to a common-sense understanding of information mainly in the context of the capacity of a digital computer to store it in “packets of moderately small size.”4

    The actual importance of the association between computing and information appears when we want to understand what computation is and whether we can expect that there are forms or variations of computation essentially different from that described by the work of a Turing machine.

    In the opinion of the author, the source of the confusion in recent discussions regarding computation is an overextension of linguistic considerations to entities beyond language. As long as we restrict ourselves to reality understood as that which can be expressed or represented in the language of current discourse, we may lose some tools for exploration of all that exists beyond the reach of linguistic means.

Information can be defined in a much more general way than it is in the study of communication or language. Even if not for the exploration of unknown aspects of reality, then at least for the purpose of understanding communication and language, it is necessary to have a framework more general than a purely internal linguistic perspective. After all, the entities engaged in communication or in using languages do not belong to the linguistic (syntactic) universe. A sufficiently general concept of information can be used to describe not only all possible languages but also entities using these languages and entities which give meaning to linguistic expressions.5

Overcoming the restriction to linguistic concepts is necessary for the naturalization of computation. If we are interested in the possibility of constructing a device capable of intelligent behavior (whatever the understanding of intelligence would be), we have to make its functioning independent of human intervention. In this sense, the ultimate goal of this study (beyond the scope of the present paper) is to design autonomous computing systems. Although this autonomy is understood as an exclusion of human intervention from computation, the first step in this direction is an examination of the ways in which such intervention may be involved in the present form of computation. Turing believed that his a-machine is automatic, i.e., independent, but the present paper will challenge this view.


    Autonomy of computation is very important in the context of artificial intelligence because modelling of consciousness or cognition by devices that require human intervention is yet another instance of the infamous homunculus fallacy.

    Objection to the homunculus fallacy was used by John R. Searle as an argument for his negative answer to the question “Is the brain a digital computer?”6 The present author agrees with this diagnosis but for different reasons. The argument given by Searle is not convincing. He claims that only a human observer can give the process carried out by a Turing machine its status of computation. In his opinion, “multiple realizability” of computation supports this view. We can train pigeons to do exactly what a Turing machine does, but if a human being does not interpret it as a process of computing, it will not be computation.

Exactly the same argument can be used to claim that a horse cannot be a horse without human intervention. If a horse lacks self-reflection and the ability to use language to express its identity, what makes it a horse without any human being observing it? However, the present author supports Searle’s view of the homunculus fallacy when the Turing machine is considered a device operating on numbers, or on whatever meaning is ascribed to symbolic configurations on the tape. As will be discussed later, only a device capable of generating meaning for its input and output can dispense with human intervention and be autonomous. And to generate meaning, the device has to have the capacity to recognize the identity of whole symbols carrying meaning, not just their components, as is the case in the Turing machine.

Another, but related, issue concerns the autonomy of the Turing machine in the context of computability. Of course, we can define computable real numbers as those whose n-th decimal digit, or whose first n decimal digits, can be obtained as the output of a Turing machine whose input includes the number n, but this concept of computability is heavily dependent on human interpretation. No Turing machine can construct more than a finite number of digits or can itself integrate the infinite sequence of digits into a finitely presentable object. The involvement of a human interpreter is essential here. Only the human mind can associate a finite number of digits in the decimal expansion with the concept of a number which has an infinite expansion.
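The point can be illustrated with a minimal sketch (an illustrative addition, not from the paper): a procedure that, given n, outputs the first n decimal digits of 1/7 by long division. The procedure only ever delivers a finite prefix; treating the prefixes as approximations of a single number with an infinite expansion is an interpretive step taken by the human reader, not by the machine.

```python
# Illustrative sketch: a "computable real" in the sense discussed above is given
# by a procedure that, for input n, outputs the first n decimal digits.  The
# procedure only ever produces a finite prefix; reading those prefixes as one
# infinite expansion (here, of 1/7) is our interpretation, not the machine's.

def first_n_digits_of_one_seventh(n: int) -> str:
    """Long division, one digit at a time: the fractional digits of 1/7."""
    digits, remainder = [], 1
    for _ in range(n):
        remainder *= 10
        digits.append(str(remainder // 7))
        remainder %= 7
    return "".join(digits)

print(first_n_digits_of_one_seventh(12))   # '142857142857' -- always a finite string
```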

    Here, the dependence on human intervention is even more serious because it involves the idea of infinity, which is not only absent in the theoretical description of the work of a machine but is incompatible with the finitistic methods for which it was designed. Potential infinity in the form of the assumption of an infinite tape is present in the description of the Turing machine, but actual infinity is not. Moreover, the negative result of Turing’s Halting Problem shows that computation understood as the work of a Turing machine does not allow the distinction between the finite and the infinite.

The problems identified above are reflections of the more general issue of the generation of meaning, which is not restricted to the orthodox form of computation. It was one of the main themes of European philosophy from its earliest stages of development in pre-Socratic reflection, and probably the most important problem in the philosophy of mathematics. The meaning of meaning is as controversial as that of information or computation. Here, too, the author makes his own choice, understanding meaning through the use of the concept of information.7 Thus, meaning is understood as a relationship between information systems, as explained below in the current paper.

    Philosophy of mathematics, from its very beginning, was dominated by the view that the generation of meaning is a form of construction. This view dates back to the Pythagoreans. Recognition of the limitations of the actual act of construction to a finite number of steps (in the calculations, geometric constructions, as well as in logical inferences) led to the interest in the finitistic methods, and ultimately to Hilbert’s Entscheidungsproblem. The attempt to resolve this problem motivated Turing in his work, which resulted in the idea of his obviously finitistic a-machine.

Computation with a Turing machine can be understood as a form of construction, and computability as a form of constructibility. But is this the only way constructions have been understood in mathematics? Definitely not! The Greeks of antiquity had as their main mathematical tool the straightedge and compass construction. René Descartes in his La Géométrie expanded the methods of geometric constructions beyond the Greek tradition.8

    In the present paper, after more elaborate reflection on the issues presented above and a short presentation of the conceptual framework developed in the earlier publications of the author, an infinite hierarchy of geometric constructions of various types is considered as alternative forms of computation. These constructions have increasing power (in a sense specific to the present approach) but decreasing universality.

2. EXTENT OF AUTONOMY IN THE TURING MACHINE

This section may be considered a forcing of an open door, but the continuing confusion regarding human involvement in computation shows that further analysis of this issue is necessary.9

The door is open: in the literature on the subject we can find multiple reminders of the need for a very clear distinction between computation with the Turing machine understood as a manipulation of component symbols, from which compound symbols can be constructed, and computation understood as an operation on numbers.

    Michael Arbib devoted an entire highlighted paragraph to this issue in his book relating brains, machines, and mathematics:

The point I am trying to make, then, is the familiar one that computers are symbol-manipulation devices. What needs further emphasis is that they can thus be numerical processors, but the numerical processing that they undertake is only specified when we state how numbers are to be encoded as strings of symbols, which may be fed into the computer, and how the strings of symbols printed out by the computer are to be decoded to yield the numerical result of the computation. Our emphasis in what follows, then, is on the ways in which information-processing structures (henceforth called automata) transform strings of symbols into other strings of symbols. Sometimes it will be convenient to emphasize the interpretation of these strings as encodings of numbers, but in many cases, we shall deem it better not to do so.10

Arbib’s emphasis on encoding and decoding is of special importance, as discussions of computation frequently dismiss their involvement as a marginal issue. One of the main theses of the present paper is that encoding and decoding of information are the missing parts of computation delegated to a human mind, and that their omission in the analysis of computation is responsible for the homunculus fallacy.
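A minimal sketch (an illustrative addition, not from the paper) of how much work the decoding convention does: the same string of component symbols yields different numbers under different conventions, so the numerical result is fixed only once the convention, supplied from outside the machine, is fixed.

```python
# Illustrative sketch: the same string of component symbols decodes to different
# numbers under different encoding conventions, so the decoding step carries
# information that is not present in the string itself.

tape = "111"

decodings = {
    "binary":            int(tape, 2),        # 7
    "decimal":           int(tape, 10),       # 111
    "unary (count 1's)": tape.count("1"),     # 3
}
for convention, value in decodings.items():
    print(f"{convention:>18}: {value}")
```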

The recognition of the necessity to involve an external agency, obviously human, can be found in the paper published by Emil Post in 1936, independently of Turing’s contribution, in which he describes a similar and equivalent realization of computation (now usually called the Turing-Post machine):

    We do not concern ourselves here with how the configuration of marked boxes corresponding to a specific problem, and that corresponding to its answer, symbolize the meaningful problem and answer. In fact the above assumes the specific problem to be given in symbolized form by an outside agency and, presumably, the symbolic answer likewise to be received. A more self-contained development ensues as follows. The general problem clearly consists of at most enumerable infinity of specific problems. We can, rather arbitrarily, represent the positive integer n by marking the first n boxes to the right of the starting point. The general problem will be said to be I-given if a finite I-process is set up which, when applied to the class of positive integers as thus symbolized, yields in one-to-one fashion the class of specific problems constituting the general problem.11

    It is of some interest to compare the words of Post from which the quotation starts, “We do not concern ourselves here with how the configuration of marked boxes corresponding to a specific problem, and that corresponding to its answer, symbolize the meaningful problem and answer,” with the famous disclaimer of interest in semantic aspects of information from another fundamental work of the twentieth century by Claude Shannon:

The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem. The significant aspect is that the actual message is one selected from a set of possible messages. The system must be designed to operate for each possible selection, not just the one which will actually be chosen since this is unknown at the time of design.12

    Shannon, in the context of communication, as well as Post, in the context of computation, seemed to be aware of the difficulty in considering meaning when our tools are limited to the analysis in terms of distributed components. The last sentence in Shannon’s famous declaration of his disinterest in the semantic aspects of information (usually omitted in quotations of this passage) is interesting because it gives some justification for the omission of meaning in his study. He is referring to the requirement of universality of the system. The generation of meaning requires some form of construction, which is too specific and cannot be predicted in advance.

    The view of the necessary involvement of a human agent in what seems to be an action of the calculating machine was expressed later, in 1942, by Ludwig Wittgenstein: “20. If calculating looks to us like the action of a machine, it is the human being doing the calculation that is the machine. In that case the calculation would be as it were a diagram drawn by a part of the machine.”13

The persistence of the homunculus fallacy in the understanding of the Turing machine as a device working on the integers without any involvement of human agency can be attributed to the fact that Turing did not address, in his epoch-making paper, the issue of how a sequence of digits is interpreted as a number, and to his use of the term a-machine (automatic machine) in the context of calculations performed on numbers.14

    Moreover, Turing made a surprising error in underestimating the importance of the composition of symbols representing numbers from digits:

    I shall also suppose that the number of symbols which may be printed is finite. If we were to allow an infinity of symbols, then there would be symbols differing to an arbitrarily small extent. The effect of this restriction of the number of symbols is not very serious. It is always possible to use sequences of symbols in the place of single symbols.[. . .] The differences from our point of view between the single and compound symbols is that the compound symbols, if they are too lengthy, cannot be observed at one glance. This is in accordance with experience.15

The application of Turing machines to computation on numbers is possible only because numbers are represented by compound symbols. There is no possible computation on numbers if each of them is represented by a distinct simple symbol that cannot be decomposed into a combination of components from some fundamental finite set of “digits.” For instance, we could consider positive integers encoded (no doubt in a very impractical way) by a segment of length 1/n assigned to n.

Without the distinction of the two levels—the global, in which we have the total configuration of the simple symbols, and the local, in which the selection of particular characters from the alphabet is made—there is no computation and no Turing machine. Thus, it is not a matter of convenience or practicality that we consider the complexity of components within symbolic representation; this complexity defines the work of the Turing machine. Consequently, when the complex character of the symbols representing numbers (or whatever meaning is assigned to them) is neglected, it is easy to overlook the role of human involvement in the integration of the component symbols into compound symbols, which is followed by the generation of meaning.

When we consider the Turing machine as a device (theoretical or physical) operating on numbers, or on other concepts whose meaning depends on the structural characteristics of configurations of components, computation loses its autonomy. However, it would be erroneous to claim that in the Turing machine there is no generation of meaning at all.

At the local level of the selection of a character to be printed, the meaning of the input in each particular cell on the tape is specified in the current (or active) instruction of the head. Thus, it does not matter whether this generation of meaning is of a mechanical nature or comes simply through the reaction to feeding or training, if the machine is realized by pigeons pecking grains. Individual characters on the tape do have meaning for the head. The missing part of meaning generation, which is provided by a human mind, is the integration of the components (characters) into structured wholes (equipped with meaning, for instance through association with integers).
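The two levels can be made explicit in a minimal sketch (an illustrative addition, not from the paper): the transition table below fixes, at the local level, how the head treats each individual character; reading the final tape as the integer 12, i.e., as 11 + 1, is the global, integrative step performed outside the machine. The particular machine (a binary incrementer) and all identifiers are chosen purely for illustration.

```python
# Illustrative sketch: the head's instruction table fixes local, per-character
# behaviour; interpreting the whole final tape as an integer is a separate,
# global step performed here (as in practice) outside the machine.

# Transition table: (state, scanned character) -> (character to write, move, next state)
# This toy machine increments a binary number whose least significant bit is rightmost,
# with the head starting on that rightmost bit.
table = {
    ("carry", "1"): ("0", -1, "carry"),   # 1 + carry -> write 0, keep carrying left
    ("carry", "0"): ("1",  0, "halt"),    # 0 + carry -> write 1, done
    ("carry", "_"): ("1",  0, "halt"),    # ran off the left end -> new leading 1
}

def run(tape: str) -> str:
    cells = ["_"] + list(tape)            # a blank on the left in case of overflow
    head, state = len(cells) - 1, "carry"
    while state != "halt":
        write, move, state = table[(state, cells[head])]
        cells[head] = write
        head += move
    return "".join(cells).lstrip("_")

result = run("1011")                      # the machine only manipulates characters
print(result, "=", int(result, 2))        # '1100' = 12; reading it as 11 + 1 is our decoding
```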

    The generation of meaning is related to another confusion regarding the distinction between analogue and digital computing. Originally, the distinction was introduced in 1948 by John von Neumann as a purely practical distinction of “analogy and digital machines” according to the two alternative ways of representing numbers in computing devices.16 He observed that numbers can be associated by measurements with the values of continuous physical magnitudes or can be encoded as finite combinations of discrete digits in the positional numerical system (decimal, binary, or any other).

For von Neumann, what was important was the practical advantage of error control in the digital representation of numbers. He probably was not aware of the fundamental importance of the distinction for future philosophical reflection on computation. Otherwise, most likely he would have been more careful about mixing the two very different types of oppositions involved in his exposition: “discrete – continuous” and “conventional – empirical.” The practical importance of this distinction for the error analysis of calculations was, in spite of the confusion it created, unquestionable, but its later philosophical and theoretical interpretations led to totally incorrect conclusions.17

Turing was more cautious when, writing in the similar context of digital computers, he spoke of “discrete state machines,” but he did not avoid some misconceptions:

    The digital computers considered in the last section may be classified amongst the “discrete state machines.” These are the machines which move by sudden jumps or clicks from one quite definite state to another. These states are sufficiently different for the possibility of confusion between them to be ignored. Strictly speaking there are no such machines. Everything really moves continuously. But there are many kinds of machine which can profitably be thought of as being discrete state machines. For instance in considering the switches for a lighting system it is a convenient fiction that each switch must be definitely on or off. There must be intermediate positions but for most purposes we can forget about them.18

Quantum mechanics shows that his claim that “everything really moves continuously” is not true, and that the intermediate positions may play crucial roles; but, more importantly, for him, as for von Neumann, the distinction between the discrete and continuous modes of the work of machines was the only matter of interest. The actual issue is much deeper.

First, let us observe that after his claim of an apparent necessity of continuity in physical processes, Turing states that there are “many kinds of machine which can profitably be thought of as being discrete state machines.” Here, we can see a clear admission of human intervention into the interpretation of the work of digital computers. The physical description of the continuous work of the device is replaced by a human interpretation which makes the process discrete, under the condition that such discretization does not lead to errors.

It is not true that physical processes necessarily require continuity, as Turing thought. Instead, every actual, physical implementation of a computing device involves some form of measurement (von Neumann’s “analogy principle”), at least at the local level corresponding to the cells of the tape in Turing machines. However, in the description of computation carried out by physical devices, it is not the measurements or physical magnitudes that play the crucial role but the states of physical systems.

Practical digital computers (or more general devices) are based on the distinction of the states of some physical systems, which involves division into a finite, and therefore discrete, number of classes (usually two). In practical analogue computers (or devices), the distinction is made within a theoretically infinite and continuous distribution of states, although, in practice, only a finite number of distinctions (readings of the measurement outcomes) is possible. Thus, in practice, the distinction between analogue and digital computation in its original form of the opposition continuous–discrete does not make much sense. The striking, but secondary in importance, opposition discrete–continuous obstructs the view of the actually important distinction.


There is nothing preventing us from using physical systems which have a continuous distribution of states or a discrete one (viz., quantum states). Since physical magnitudes and the states of physical systems are very different concepts, and their relationship in modern physics has become quite complicated, we have to be very careful not to confuse them.

The crucial point is that the measurements performed on a physical system are basically procedures of assigning meaning to the concept of a state of the system. Since this meaning is given in operational terms, it has the form of a construction. By constructing a configuration of measuring devices, and through an interpretation of their states in terms of real numbers, we establish the state of the measured system with a higher or lower degree of determination (in quantum mechanics usually only up to some probability distribution). The values of the numbers are conventional and depend on the choice of units. Their importance lies only in making distinctions and in ordering the states of measuring devices.

Now, in order to save the intuitive understanding of the original distinction between analogue and digital computing, we can define it with respect to the degree to which interpretation of the results is involved. Analogue computing does not involve interpretation beyond the measurement itself, i.e., the assignment of a real number to the outcome of the measurement. The outcome has the form of some (for instance, physical) state of the system, and the number is associated directly with this state.

    In digital computing, the number is not assigned to the physical state of one physical system. Instead, we have a complex of component physical systems (squares of the tape, cells of the memory, etc.), and the physical states of these component systems are associated with component symbols, viz. digits. Integration of these component symbols into a whole is not performed by the digital computer. This additional level of interpretation is left to a human mind.

We can think of a digital computer as a system of communicating analogue computers, each producing as its result a one-digit number. However, this system becomes a digital computer only after we re-interpret the one-digit numbers as digits of one many-digit number. This can happen only through human intervention in integrating the components into a whole. Moreover, the role of the digits in a compound numeral is a matter of human convention.
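A minimal sketch (an illustrative addition, not from the paper) of this integration step: each component system delivers a single digit, and only a convention chosen from outside, the base and the order of significance, turns the list of readings into one many-digit number.

```python
# Illustrative sketch: each "component analogue computer" delivers a single digit;
# turning those readings into one many-digit number requires a convention
# (base, digit order) that is supplied from outside the components themselves.

readings = [1, 0, 1, 1]   # one digit per component system

def integrate(digits, base=2, most_significant_first=True):
    """Positional integration of component digits under a chosen convention."""
    ordered = digits if most_significant_first else list(reversed(digits))
    value = 0
    for d in ordered:
        value = value * base + d
    return value

print(integrate(readings, base=2,  most_significant_first=True))    # 11
print(integrate(readings, base=2,  most_significant_first=False))   # 13
print(integrate(readings, base=10, most_significant_first=True))    # 1011
```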

The analogue-digital distinction can be better understood in this context when we refer to the historical examples of models of computation. The Turing-Post machine described by Post in his 1936 paper is an example of an analogue machine, as long as we do not attempt to interpret the symbolic meaning of the configuration of boxes and simply assume that the set of full boxes is characterized by a natural number, based on our understanding of natural numbers as finite cardinals, i.e., without any specific convention involved. The outcome of the computation consists simply of the set of boxes marked as full. The machine described by Turing in his paper is digital, because he associates with the configuration of 1’s and 0’s a binary representation of an integer, and therefore gives this configuration a meaning which goes beyond what is in the machine.

The former approach, in which numbers are encoded by sequences of 1’s through association with the cardinal number of the set of these digits in a sequence, is very often used as a preferred system of encoding numbers (misleadingly called the “unary” numerical system) in discussions or explanations of the concept of a Turing machine. This is a good example of the involvement of the human mind in the integration of components and in interpretation. The universal Turing machine always operates with an alphabet of at least two characters. Claude E. Shannon showed the impossibility of a universal machine operating with an alphabet of one character.19 We can distinguish on the tape a sequence of 1’s, then unite it into a compound symbol (more exactly, we have already done so by considering it a sequence) and interpret it as a number. But the work of the machine is on a sequence of the two symbols 0 and 1, and when we interpret it as a recursive function between numbers, this interpretation has to involve sequences of both digits.

    It