© 2015 BY THE AMERICAN PHILOSOPHICAL ASSOCIATION ISSN 2155-9708
FROM THE EDITOR Peter Boltuc
FROM THE CHAIR Thomas M. Powers
CALL FOR PAPERS
FEATURED ARTICLE Troy D. Kelley and Vladislav D. Veksler
Sleep, Boredom, and Distraction—What Are the Computational Benefits for Cognition?
PAPERS ON SEARLE, SYNTAX, AND SEMANTICS
Selmer Bringsjord
A Refutation of Searle on Bostrom (re: Malicious Machines) and Floridi (re: Information)
Marcin J. Schroeder
Towards Autonomous Computation: Geometric Methods of Computing
Philosophy and Computers
NEWSLETTER | The American Philosophical Association
VOLUME 15 | NUMBER 1 FALL 2015
Ricardo R. Gudwin
Computational Semiotics: The Background Infrastructure to New Kinds of Intelligent Systems
USING THE TECHNOLOGY FOR PHILOSOPHY
Shai Ophir
Trend Analysis of Philosophy Revolutions Using Google Books Archive
Christopher Menzel
The Logic Daemon: Colin Allen’s Computer-Based Contributions to Logic Pedagogy
BOOK HEADS UP
Robert Arp, Barry Smith, and Andrew Spear
Building Ontologies with Basic Formal Ontology
LAST-MINUTE NEWS The 2015 Barwise Prize Winner Is William Rapaport
FROM THE EDITOR
Peter Boltuc
UNIVERSITY OF ILLINOIS, SPRINGFIELD
Human cognitive architecture used to be viewed as inferior to artificial intelligence (AI). Some authors thought it could easily be reprogrammed using standard AI so as to be more efficient; as Aaron Sloman once put it, our brains are a strange mixture of amphibian and early mammalian remnants. In this issue, we feature an article that seems to show otherwise.1 Research by Troy Kelley and Vlad Veksler demonstrates that “many of the seemingly suboptimal aspects of human cognitive processes are actually beneficial and finely tuned to both the regularities and uncertainties of the physical world,” and even to optimal information processing. Sleep, distraction, and even boredom turn out to be optimal cognitive solutions; in earlier work, Kelley and Veksler showed that learning details in early childhood, followed by a much more cursory acquaintance with new situations and objects, is also an optimal learning strategy.2 For instance, sleep allows for “offline memory processing,” which produces “an order of magnitude performance advantage over other competing storage/retrieval strategies.” Boredom is also “an essential part of a self-sustaining cognitive system.” This is because “our higher level novelty/boredom algorithm and the lower level habituation algorithm” turn out “to be a useful and constructive response to a variety of situations.” In particular, the “boredom/novelty algorithm can be used for . . . landmark identification in navigation,” while “the habituation algorithm” allows for much-needed shifts of attention. Even distraction is beneficial: “An inability to get distracted by external cues can be disastrous for an agent residing in an unpredictable environment, and an inability to get distracted by tangential thoughts would limit one’s potential for new and creative solutions.” Hence, Kelley and Veksler show how sleep, boredom, and distraction are important components of a robot’s behavior.
John Searle’s old argument that computers are syntactic engines unable to do semantics is the background theme of the following three papers. We begin with Selmer Bringsjord’s discussion piece. First, Bringsjord reacts to Searle’s critique of N. Bostrom’s argument about potentially malicious robots. According to Searle, “computing machines merely manipulate symbols” and so cannot be conscious, and to be malicious one would have to be conscious. Bringsjord questions Searle’s assumption that maliciousness presupposes consciousness and gives what seems like a good counterexample. In the second part, the author questions Searle’s objection, raised against Floridi, that information is necessarily observer-relative. Bringsjord points out that the main problem visible in Searle’s paper is his “failure to understand how logic and mathematics, as distinguished from informal analytic philosophy, work.” While Bringsjord accepts Searle’s well-known point that computers and robots function just at the syntactic level, Marcin Schroeder argues that this point is contingent on Turing’s architecture. He points out that “in the description of Turing machines there is nothing that could serve as interpreter of the global configuration,” so that “this interpretation is always made by a human mind.” Yet, Schroeder argues, “we can consider a machine built based on the design of the Turing machine, but with an additional component, which assumes the role currently given to a human agency.” Schroeder’s paper is an attempt to sketch out the conditions of such a semantic machine.
Schroeder argues that “integration of information is . . . the most fundamental characteristic of consciousness.” According to the author, in order to lay the “foundations not only for the syntactic of information, i.e., its structural characteristics, but also for its semantics (. . .) we can employ the mathematical theory of functions preserving information structures—homomorphisms of closure spaces.” This is an attempt to “cross the border between two very different realms, that of language, i.e., symbols, and that of entities in the physical world.” Historically, “since symbols seemed to require an involvement of the conscious subject associating each symbol with its denotation, the border was identified with the one between mind and body.” This is the classical approach of Brentano, developed by Searle in his early work in semantics: “Intention of a symbol (. . .) directs the mind to the denotation.” In response to this conception, Schroeder argues that “in reality when we associate a symbol with its denotation, we do not make an association with the physical object itself, but with the information integrated into what is considered to be an object.” Hence, “the association between a symbol and its denotation is a relationship between two informational entities consisting of integrated information.” However, the information is integrated in two different information systems. The author argues that “the mental aspect of symbolic representation is not in its intention, or in the act of directing towards denotation, but in the integration of information into objects.” Symbolic information is, in fact, intentional: it is “about,” but this aboutness takes place through correspondence between information systems. Such “aboutness” does not require any correspondence between entities of different ontological status, which was necessary in all approaches to intentionality from the Scholastics to Franz Brentano and beyond. Later in the article, Schroeder discusses mechanical manifestations of information, so as
to focus on a relatively formal presentation of what he calls geometric methods of computing. This is important in the controversy with Searle since “geometric computation of higher level can serve as a process of meaning generation for the lower level.”
Ricardo Gudwin also focuses on the problem highlighted by Searle: “How to attribute meaning to symbols?” The issue lies at the intersection of computer science, philosophy, and semiotics. Gudwin presents what he calls “computational semiotics,” viewed as an attempt to find an “alternative approach for addressing the problem of synthesizing artificial minds.” He argues that Peirce’s theory provides a better model for computer engineering than mainstream semantic theories. In the process, Gudwin provides a helpful history of intelligent systems (largely following Franklin’s classical account). He focuses on resolving the problem of whether knowledge representation by a computer program is “symbolic” or “numerical,” and builds on Barsalou’s theory of perceptual symbols. Gudwin argues that Peirce’s semiotics is compatible with Barsalou’s proposal for grounded cognition and, in fact, provides the best account of meaning, one very much applicable in artificial intelligence.
In the final part of the newsletter, we have three contributions. Shai Ophir uses big-data analysis to show how concepts central to some of the most famous philosophers of the past were gaining popularity for over a generation before those philosophers were even born. This is one more argument in favor of the thesis that philosophical thinking is an essentially social process, and the article provides an interesting example of digital analysis in the humanities. Christopher Menzel’s paper is the last of the set of articles devoted to Colin Allen, which we started publishing last year. The author focuses primarily on Allen’s pedagogical achievements and, in particular, his leading contribution to creating the Logic Daemon, an early proof-checker for natural deduction. Allen is also presented as one of the pioneers of big-data mining. We close with a note on the book Building Ontologies with Basic Formal Ontology by Robert Arp, Barry Smith, and Andrew Spear.
This introductory note is immediately followed by a note from Tom Powers, chair of the APA Committee on Philosophy and Computers. Tom gives an overview of the main organizations that welcome phil