
5/19/2014 Mental Representation (Stanford Encyclopedia of Philosophy)

http://plato.stanford.edu/entries/mental-representation/ 1/19

Stanford Encyclopedia of Philosophy

Mental Representation

First published Thu Mar 30, 2000; substantive revision Tue Dec 11, 2012

The notion of a “mental representation” is, arguably, in the first instance a theoretical construct of cognitive science. As such, it is a basic concept of the Computational Theory of Mind, according to which cognitive states and processes are constituted by the occurrence, transformation and storage (in the mind/brain) of information-bearing structures (representations) of one kind or another.

However, on the assumption that a representation is an object with semantic properties (content, reference, truth-conditions, truth-value, etc.), a mental representation may be more broadly construed as a mental object with semantic properties. As such, mental representations (and the states and processes that involve them) need not be understood only in computational terms. On this broader construal, mental representation is a philosophical topic with roots in antiquity and a rich history and literature predating the recent “cognitive revolution,” and which continues to be of interest in pure philosophy. Though most contemporary philosophers of mind acknowledge the relevance and importance of cognitive science, they vary in their degree of engagement with its literature, methods and results; and there remain, for many, issues concerning the representational properties of the mind that can be addressed independently of the computational hypothesis.

Though the term ‘Representational Theory of Mind’ is sometimes used almost interchangeably with ‘Computational Theory of Mind’, I will use it here to refer to any theory that postulates the existence of semantically evaluable mental objects, including philosophy's stock in trade mentalia — thoughts, concepts, percepts, ideas, impressions, notions, rules, schemas, images, phantasms, etc. — as well as the various sorts of “subpersonal” representations postulated by cognitive science. Representational theories may thus be contrasted with theories, such as those of Baker (1995), Collins (1987), Dennett (1987), Gibson (1966, 1979), Reid (1764/1997), Stich (1983) and Thau (2002), which deny the existence of such things.

1. The Representational Theory of Mind
2. Propositional Attitudes
3. Conceptual and Nonconceptual Representation
4. Representationalism and Phenomenalism
5. Imagery
6. Content Determination
7. Internalism and Externalism
8. The Computational Theory of Mind
9. Thought and Language
Bibliography
Academic Tools
Other Internet Resources
Related Entries

1. The Representational Theory of Mind

The Representational Theory of Mind (RTM) (which goes back at least to Aristotle) takes as its starting point commonsense mental states, such as thoughts, beliefs, desires, perceptions and imagings. Such states are said to have “intentionality” — they are about or refer to things, and may be evaluated with respect to properties like consistency, truth, appropriateness and accuracy. (For example, the thought that cousins are not related is inconsistent, the belief that Elvis is dead is true, the desire to eat the moon is inappropriate, a visual experience of a ripe strawberry as red is accurate, an imaging of George W. Bush with dreadlocks is inaccurate.)

RTM defines such intentional mental states as relations to mental representations, and explains the intentionality of the former in terms of the semantic properties of the latter. For example, to believe that Elvis is dead is to be appropriately related to a mental representation whose propositional content is that Elvis is dead. (The desire that Elvis be dead, the fear that he is dead, the regret that he is dead, etc., involve different relations to the same mental representation.) To perceive a strawberry is, on the representational view, to have a sensory experience of some kind which is appropriately related to (e.g., caused by) the strawberry.

RTM also understands mental processes such as thinking, reasoning and imagining as sequences of intentional mental states. For example, to imagine the moon rising over a mountain is, inter alia, to entertain a series of mental images of the moon (and a mountain). To infer a proposition q from the propositions p and if p then q is (inter alia) to have a sequence of thoughts of the form p, if p then q, q.
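
The inference pattern just described can be sketched as a toy program. The sketch below is purely illustrative (the `Thought` class and `modus_ponens` helper are hypothetical, not anything the entry proposes): thoughts are modeled as syntactic tokens with propositional content, and an inference is a sequence of such tokens, each produced from earlier ones by a formal rule.

```python
# Toy sketch, not a serious cognitive model: thoughts as syntactic tokens,
# inference as a rule-governed transition over a sequence of such tokens.
from dataclasses import dataclass

@dataclass(frozen=True)
class Thought:
    content: str  # propositional content, e.g. "p" or "if p then q"

def modus_ponens(premise: Thought, conditional: Thought) -> Thought:
    """Given tokens of the form p and 'if p then q', produce a token of q."""
    prefix = f"if {premise.content} then "
    if not conditional.content.startswith(prefix):
        raise ValueError("conditional does not match premise")
    return Thought(conditional.content[len(prefix):])

# On RTM, the inference is the sequence of states: p, if p then q, q.
p = Thought("it is raining")
if_p_then_q = Thought("if it is raining then the streets are wet")
q = modus_ponens(p, if_p_then_q)
print(q.content)  # prints: the streets are wet
```

The point of the sketch is only that the transition is defined over the syntactic form of the tokens, which is the feature the Computational Theory of Mind (section 8) exploits.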

Contemporary philosophers of mind have typically supposed (or at least hoped) that the mind can be naturalized — i.e., that all mental facts have explanations in the terms of natural science. This assumption is shared within cognitive science, which attempts to provide accounts of mental states and processes in terms (ultimately) of features of the brain and central nervous system. In the course of doing so, the various sub-disciplines of cognitive science (including cognitive and computational psychology and cognitive and computational neuroscience) postulate a number of different kinds of structures and processes, many of which are not directly implicated by mental states and processes as commonsensically conceived. There remains, however, a shared commitment to the idea that mental states and processes are to be explained in terms of mental representations.

In philosophy, recent debates about mental representation have centered around the existence of propositional attitudes (beliefs, desires, etc.) and the determination of their contents (how they come to be about what they are about), and the existence of phenomenal properties and their relation to the content of thought and perceptual experience. Within cognitive science itself, the philosophically relevant debates have been focused on the computational architecture of the brain and central nervous system, and the compatibility of scientific and commonsense accounts of mentality.

2. Propositional Attitudes

Intentional Realists such as Dretske (e.g., 1988) and Fodor (e.g., 1987) note that the generalizations we apply in everyday life in predicting and explaining each other's behavior (often collectively referred to as “folk psychology”) are both remarkably successful and indispensable. What a person believes, doubts, desires, fears, etc. is a highly reliable indicator of what that person will do; and we have no other way of making sense of each other's behavior than by ascribing such states and applying the relevant generalizations. We are thus committed to the basic truth of commonsense psychology and, hence, to the existence of the states its generalizations refer to. (Some realists, such as Fodor, also hold that commonsense psychology will be vindicated by cognitive science, given that propositional attitudes can be construed as computational relations to mental representations.)

Intentional Eliminativists, such as Churchland, (perhaps) Dennett and (at one time) Stich argue that no such things as propositional attitudes (and their constituent representational states) are implicated by the successful explanation and prediction of our mental lives and behavior. Churchland denies that the generalizations of commonsense propositional-attitude psychology are true. He (1981) argues that folk psychology is a theory of the mind with a long history of failure and decline, and that it resists incorporation into the framework of modern scientific theories (including cognitive psychology). As such, it is comparable to alchemy and phlogiston theory, and ought to suffer a comparable fate. Commonsense psychology is false, and the states (and representations) it postulates simply don't exist. (It should be noted that Churchland is not an eliminativist about mental representation tout court. See, e.g., Churchland 1989.)

Dennett (1987a) grants that the generalizations of commonsense psychology are true and indispensable, but denies that this is sufficient reason to believe in the entities they appear to refer to. He argues that to give an intentional explanation of a system's behavior is merely to adopt the “intentional stance” toward it. If the strategy of assigning contentful states to a system and predicting and explaining its behavior (on the assumption that it is rational — i.e., that it behaves as it should, given the propositional attitudes it should have, given its environment) is successful, then the system is intentional, and the propositional-attitude generalizations we apply to it are true. But there is nothing more to having a propositional attitude than this. (See Dennett 1987a: 29.)

Though he has been taken to be thus claiming that intentional explanations should be construed instrumentally, Dennett (1991) insists that he is a “moderate” realist about propositional attitudes, since he believes that the patterns in the behavior and behavioral dispositions of a system on the basis of which we (truly) attribute intentional states to it are objectively real. In the event that there are two or more explanatorily adequate but substantially different systems of intentional ascriptions to an individual, however, Dennett claims there is no fact of the matter about what the individual believes (1987b, 1991). This does suggest an irrealism at least with respect to the sorts of things Fodor and Dretske take beliefs to be; though it is not the view that there is simply nothing in the world that makes intentional explanations true.

(Davidson 1973, 1974 and Lewis 1974 also defend the view that what it is to have a propositional attitude is just to be interpretable in a particular way. It is, however, not entirely clear whether they intend their views to imply irrealism about propositional attitudes.)

Stich (1983) argues that cognitive psychology does not (or, in any case, should not) taxonomize mental states by their semantic properties at all, since attribution of psychological states by content is sensitive to factors that render it problematic in the context of a scientific psychology. Cognitive psychology seeks causal explanations of behavior and cognition, and the causal powers of a mental state are determined by its intrinsic “structural” or “syntactic” properties. The semantic properties of a mental state, however, are determined by its extrinsic properties — e.g., its history, environmental or intramental relations. Hence, such properties cannot figure in causal-scientific explanations of behavior. (Fodor 1994 and Dretske 1988 are realist attempts to come to grips with some of these problems.) Stich proposes a syntactic theory of the mind, on which the semantic properties of mental states play no explanatory role. (Stich has since changed his views on a number of these issues. See Stich 1996.)

3. Conceptual and Nonconceptual Representation

It is a traditional assumption among realists about mental representations that representational states come in two basic varieties (cf. Boghossian 1995). There are those, such as thoughts, which are composed of concepts and have no phenomenal (“what-it's-like”) features (“qualia”), and those, such as sensations, which have phenomenal features but no conceptual constituents. (Nonconceptual content is usually defined as a kind of content that states of a creature lacking concepts might nonetheless enjoy.[1]) On this taxonomy, mental states can represent either in a way analogous to expressions of natural languages or in a way analogous to drawings, paintings, maps, photographs or movies. Perceptual states, such as seeing that something is blue, are sometimes thought of as hybrid states, consisting of, for example, a non-conceptual sensory experience and a belief, or some more integrated compound of conceptual and nonconceptual elements. (There is an extensive literature on the representational content of perceptual experience. See the entry on the contents of perception.)

Disagreement over nonconceptual representation concerns the existence and nature of phenomenal properties and the role they play in determining the content of sensory experience. Dennett (1988), for example, denies that there are such things as qualia at all (as they are standardly construed); while Brandom (2002), McDowell (1994), Rey (1991) and Sellars (1956) deny that they are needed to explain the content of sensory experience. Among those who accept that experiences have phenomenal content, some (Dretske, Lycan, Tye) argue that it is reducible to a kind of intentional content, while others (Block, Loar, Peacocke) argue that it is irreducible. (See the discussion in the next section.)

Some historical discussions of the representational properties of mind (e.g., Aristotle De Anima, Locke 1689/1975, Hume 1739/1978) seem to assume that nonconceptual representations — percepts (“impressions”), images (“ideas”) and the like — are the only kinds of mental representations, and that the mind represents the world in virtue of being in states that resemble things in it. On such a view, all representational states have their content in virtue of their phenomenal features. Powerful arguments, however, focusing on the lack of generality (Berkeley Principles of Human Knowledge), ambiguity (Wittgenstein 1953) and non-compositionality (Fodor 1981c) of sensory and imagistic representations, as well as their unsuitability to function as logical (Frege 1918/1997, Geach 1957) or mathematical (Frege 1884/1953) concepts, and the symmetry of resemblance (Goodman 1976), convinced philosophers that no theory of mind can get by with only nonconceptual representations construed in this way.

There has also been dissent from the traditional claim that conceptual representations (thoughts, beliefs) lack phenomenology. Chalmers (1996), Flanagan (1992), Goldman (1993), Horgan and Tienson (2002), Jackendoff (1987), Levine (1993, 1995, 2001), McGinn (1991a), Pitt (2004, 2009, 2011, Forthcoming), Searle (1992), Siewert (1998) and Strawson (1994), claim that purely conceptual (conscious) representational states themselves have a (perhaps proprietary) phenomenology. (This view — bread and butter, it should be said, among historical and contemporary Phenomenologists — has been gaining momentum of late among analytic philosophers of mind. See, e.g., the essays in Bayne and Montague 2011 and in Kriegel Forthcoming, Farkas 2008 and Kriegel 2011.) If this claim is correct, the question of what role phenomenology plays in the determination of content rearises for conceptual representation; and the eliminativist ambitions of Sellars, Brandom, Rey, et al. would meet a new obstacle. (It would also raise prima facie problems for reductivist representationalism (see the next section), as well as for reductive naturalistic theories of intentional content.)

4. Representationalism and Phenomenalism

Among realists about phenomenal properties, the central division is between representationalists (also called “representationists” and “intentionalists”) — e.g., Dretske (1995), Harman (1990), Leeds (1993), Lycan (1987, 1996), Rey (1991), Thau (2002), Tye (1995, 2000, 2009) — and phenomenalists (also called “phenomenists”) — e.g., Block (1996, 2003), Chalmers (1996, 2004), Evans (1982), Loar (2003a, 2003b), Peacocke (1983, 1989, 1992, 2001), Raffman (1995), Shoemaker (1990). Representationalists claim that the phenomenal character of a mental state is reducible to a kind of intentional content, naturalistically construed (à la Dretske). Phenomenalists claim that the phenomenal character of a mental state is not so reducible.

The representationalist thesis is often formulated as the claim that phenomenal properties are representational or intentional. However, this formulation is ambiguous between a reductive and a non-reductive claim (though the term ‘representationalism’ is most often used for the reductive claim). (See Chalmers 2004a.) On one hand, it could mean that the phenomenal content of an experience is a kind of intentional content (i.e., the objective qualitative properties it represents). On the other, it could mean that the intrinsic, subjective phenomenal properties of an experience determine an intentional content. Representationalists such as Dretske, Lycan and Tye would assent to the former claim, whereas phenomenalists such as Block, Chalmers, Loar and Peacocke would assent to the latter. (Among phenomenalists, there is further disagreement about whether qualia are intrinsically representational (Loar) or not (Block, Peacocke).) (So-called “Ganzfeld” experiences, in which, for example, the visual field is completely taken up with a uniform experience of a single color, are a standard test case: Do Ganzfeld experiences represent anything? It may be that doubts about the representationality of such experiences are simply a consequence of the fact that (outside the laboratory) we never encounter things that would produce them. Supposing we routinely did (and especially if we had names for them), it seems unlikely such skepticism would arise.)

Most (reductive) representationalists are motivated by the conviction that one or another naturalistic explanation of intentionality (see the next section) is, in broad outline, correct, and by the desire to complete the naturalization of the mental by applying such theories to the problem of phenomenality. (Needless to say, most phenomenalists (Chalmers is the major exception) are just as eager to naturalize the phenomenal — though not in the same way.)

The main argument for representationalism appeals to the transparency of experience (cf. Tye 2000: 45–51). The properties that characterize what it's like to have a perceptual experience are presented in experience as properties of objects perceived: in attending to an experience, one seems to “see through it” to the objects and properties it is an experience of.[2] They are not presented as properties of the experience itself. If nonetheless they were properties of the experience, perception would be massively deceptive. But perception is not massively deceptive. According to the representationalist, the phenomenal character of an experience is due to its representing objective, non-experiential properties. (In veridical perception, these properties are locally instantiated; in illusion and hallucination, they are not.) On this view, introspection is indirect perception: one comes to know what phenomenal features one's experience has by coming to know what objective features it represents. (Cf. also Dretske 1996, 1999.)

In order to account for the intuitive differences between conceptual and sensory representations, representationalists appeal to structural or functional properties. Dretske (1995), for example, distinguishes experiences and thoughts on the basis of the origin and nature of their functions: an experience of a property P is a state of a system whose evolved function is to indicate the presence of P in the environment; a thought representing the property P, on the other hand, is a state of a system whose assigned (learned) function is to calibrate the output of the experiential system. Rey (1991) takes both thoughts and experiences to be relations to sentences in the language of thought, and distinguishes them on the basis of (the functional roles of) such sentences' constituent predicates. Lycan (1987, 1996) distinguishes them in terms of their functional-computational profiles. Tye (2000) distinguishes them in terms of their functional roles and the intrinsic structure of their vehicles: thoughts are representations in a language-like medium, whereas experiences are image-like representations consisting of “symbol-filled arrays.” (Cf. the account of mental images in Tye 1991.)

Phenomenalists tend to make use of the same sorts of features (function, intrinsic structure) in explaining some of the intuitive differences between thoughts and experiences; but they do not suppose that such features exhaust the differences between phenomenal and non-phenomenal representations. For the phenomenalist, it is the phenomenal properties of experiences — qualia themselves — that constitute the fundamental difference between experience and thought. Peacocke (1992), for example, develops the notion of a perceptual “scenario” (an assignment of phenomenal properties to coordinates of a three-dimensional egocentric space), whose content is “correct” (a semantic property) if in the corresponding “scene” (the portion of the external world represented by the scenario) properties are distributed as their phenomenal analogues are in the scenario.
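
Peacocke's correctness condition has the shape of a pointwise comparison between two mappings, which can be sketched as a toy data structure. Everything below (coordinates, property names, the `analogue` table) is a hypothetical illustration, not Peacocke's own formalism.

```python
# Toy sketch: a "scenario" assigns phenomenal properties to egocentric
# coordinates; it is correct iff the corresponding "scene" instantiates
# the objective analogue of each assigned property at that coordinate.
scenario = {  # egocentric (x, y, z) -> phenomenal property
    (1.0, 0.0, 2.0): "phenomenal-red",
    (0.0, 1.0, 3.0): "phenomenal-green",
}
scene = {     # same coordinates -> objective property actually there
    (1.0, 0.0, 2.0): "red",
    (0.0, 1.0, 3.0): "green",
}
analogue = {"phenomenal-red": "red", "phenomenal-green": "green"}

def correct(scenario, scene, analogue):
    """Correct iff, at every coordinate, the scene instantiates the
    objective analogue of the phenomenal property the scenario assigns."""
    return all(scene.get(coord) == analogue[prop]
               for coord, prop in scenario.items())

print(correct(scenario, scene, analogue))  # prints: True
```

The sketch makes visible why correctness is a semantic property on Peacocke's view: it is a relation between the scenario and the world, not an intrinsic feature of the scenario itself.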

Another sort of representation appealed to by some phenomenalists (e.g., Chalmers (2003), Block (2003)) is what Chalmers calls a “pure phenomenal concept.” A phenomenal concept in general is a concept whose denotation is a phenomenal property, and it may be discursive (‘the color of ripe bananas’), demonstrative (‘this color’; Loar 1996), or even more direct. On Chalmers's view, a pure phenomenal concept is (something like) a conceptual/phenomenal hybrid consisting of a phenomenological “sample” (an image or an occurrent sensation) integrated with (or functioning as) a conceptual component. Phenomenal concepts are postulated to account for the apparent fact (among others) that, as McGinn (1991b) puts it, “you cannot form [introspective] concepts of conscious properties unless you yourself instantiate those properties.” One cannot have a phenomenal concept of a phenomenal property P, and, hence, phenomenal beliefs about P, without having experience of P, because P itself is (in some way) constitutive of the concept of P. (Cf. Jackson 1982, 1986 and Nagel 1974.) (Chalmers (2004b) puts pure phenomenal concepts to use in defending the Knowledge Argument against physicalism. Alter and Walter 2007 is an excellent collection of essays on phenomenal concepts.)

5. Imagery

Though imagery has played an important role in the history of philosophy of mind, the important contemporary literature on it is primarily psychological. (McGinn 2004 is a notable recent exception.) In a series of psychological experiments done in the 1970s (summarized in Kosslyn 1980 and Shepard and Cooper 1982), subjects' response time in tasks involving mental manipulation and examination of presented figures was found to vary in proportion to the spatial properties (size, orientation, etc.) of the figures presented. The question of how these experimental results are to be explained kindled a lively debate on the nature of imagery and imagination.

Kosslyn (1980) claims that the results suggest that the tasks were accomplished via the examination and manipulation of mental representations that themselves have spatial properties — i.e., pictorial representations, or images. Others, principally Pylyshyn (1979, 1981a, 1981b, 2003), argue that the empirical facts can be explained in terms exclusively of discursive, or propositional representations and cognitive processes defined over them. (Pylyshyn takes such representations to be sentences in a language of thought.)

The idea that pictorial representations are literally pictures in the head is not taken seriously by proponents of the pictorial view of imagery (see, e.g., Kosslyn and Pomerantz 1977). The claim is, rather, that mental images represent in a way that is relevantly like the way pictures represent. (Attention has been focused on visual imagery — hence the designation ‘pictorial’; though of course there may be imagery in other modalities — auditory, olfactory, etc. — as well. See O'Callaghan 2007 for discussion of auditory imagery.)

The distinction between pictorial and discursive representation can be characterized in terms of the distinction between analog and digital representation (Goodman 1976). This distinction has itself been variously understood (Fodor & Pylyshyn 1981, Goodman 1976, Haugeland 1981, Lewis 1971, McGinn 1989), though a widely accepted construal is that analog representation is continuous (i.e., in virtue of continuously variable properties of the representation), while digital representation is discrete (i.e., in virtue of properties a representation either has or doesn't have) (Dretske 1981). (An analog/digital distinction may also be made with respect to cognitive processes. (Block 1983.)) On this understanding of the analog/digital distinction, imagistic representations, which represent in virtue of properties that may vary continuously (such as being more or less bright, loud, vivid, etc.), would be analog, while conceptual representations, whose properties do not vary continuously (a thought cannot be more or less about Elvis: either it is or it is not) would be digital.
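
The continuous/discrete contrast can be made vivid with a minimal sketch (the class and field names are hypothetical illustrations, not anything from the literature): an imagistic representation carries values that admit of arbitrarily fine gradation, while a conceptual one carries properties a token simply has or lacks.

```python
# Toy sketch of the analog/digital construal of pictorial vs discursive
# representation: continuously variable vs all-or-nothing properties.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class ImagisticRep:
    brightness: float  # analog: any value in [0.0, 1.0]
    vividness: float

@dataclass(frozen=True)
class ConceptualRep:
    about_elvis: bool  # digital: a thought is, or is not, about Elvis

img = ImagisticRep(brightness=0.73, vividness=0.2)
thought = ConceptualRep(about_elvis=True)

# An image can be made slightly brighter; there is no corresponding
# operation that makes a thought slightly more about Elvis.
brighter = replace(img, brightness=img.brightness + 0.01)
print(brighter.brightness > img.brightness)  # prints: True
```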

It might be supposed that the pictorial/discursive distinction is best made in terms of the phenomenal/nonphenomenal distinction, but it is not obvious that this is the case. For one thing, there may be nonphenomenal properties of representations that vary continuously. Moreover, there are ways of understanding pictorial representation that presuppose neither phenomenality nor analogicity. According to Kosslyn (1980, 1982, 1983), a mental representation is “quasi-pictorial” when every part of the representation corresponds to a part of the object represented, and relative distances between parts of the object represented are preserved among the parts of the representation. But distances between parts of a representation can be defined functionally rather than spatially — for example, in terms of the number of discrete computational steps required to combine stored information about them. (Cf. Rey 1981.)
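
Kosslyn's preservation condition, with distance read functionally, might be sketched as follows. This is a toy check under the stated assumptions only; the part names, distances and "computational step" counts are all hypothetical.

```python
# Toy sketch: a representation is "quasi-pictorial" (in roughly Kosslyn's
# sense) if the relative distances between parts of the object are
# preserved among the corresponding parts of the representation. The
# representation's "distances" here are functional: the number of
# computational steps needed to combine stored information about parts.
from itertools import combinations

def preserves_relative_distance(obj_dist, rep_dist, parts):
    """Whenever pair (a, b) is farther apart than pair (c, d) in the
    object, the same ordering must hold in the representation."""
    pairs = list(combinations(parts, 2))
    for (a, b) in pairs:
        for (c, d) in pairs:
            if obj_dist[(a, b)] > obj_dist[(c, d)] \
                    and not rep_dist[(a, b)] > rep_dist[(c, d)]:
                return False
    return True

parts = ("head", "torso", "feet")
obj_dist = {("head", "torso"): 1.0, ("head", "feet"): 2.0,
            ("torso", "feet"): 1.0}          # spatial distances (metres)
rep_dist = {("head", "torso"): 2, ("head", "feet"): 5,
            ("torso", "feet"): 2}            # computational steps
print(preserves_relative_distance(obj_dist, rep_dist, parts))  # prints: True
```

The point, following Rey (1981), is that nothing in the check above requires the representation to be spatial, phenomenal, or analog: only the ordering of distances matters.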

Tye (1991) proposes a view of images on which they are hybrid representations, consisting both of pictorial and discursive elements. On Tye's account, images are “(labeled) interpreted symbol-filled arrays.” The symbols represent discursively, while their arrangement in arrays has representational significance (the location of each “cell” in the array represents a specific viewer-centered 2-D location on the surface of the imagined object).
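
Tye's hybrid structure can be sketched as a toy data structure (the symbols and label below are hypothetical illustrations, not Tye's own notation): cell positions carry representational significance pictorially, while the symbols filling the cells represent discursively.

```python
# Toy sketch of a "(labeled) interpreted symbol-filled array":
# (row, col) position  -> a viewer-centered 2-D location (pictorial);
# the symbol in a cell -> what is represented there (discursive).
image = {
    (0, 0): "edge",
    (0, 1): "red",
    (1, 0): "red",
    (1, 1): "shadow",
}
label = "ripe strawberry, viewed from the left"  # the array's interpretation

# Both dimensions of representation are in play at once:
print((0, 1) in image)   # prints: True  (something is represented there)
print(image[(0, 1)])     # prints: red   (and the symbol says what)
```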

6. Content Determination

The contents of mental representations are typically taken to be abstract objects (properties, relations, propositions, sets, etc.). A pressing question, especially for the naturalist, is how mental representations come to have their contents. Here the issue is not how to naturalize content (abstract objects can't be naturalized), but, rather, how to specify naturalistic content-determining relations between mental representations and the abstract objects they express. There are two basic types of contemporary naturalistic theories of content-determination, causal-informational and functional.[3]

Causal-informational theories (Dretske 1981, 1988, 1995) hold that the content of a mental representation is grounded in the information it carries about what does (Devitt 1996) or would (Fodor 1987, 1990a) cause it to occur.[4] There is, however, widespread agreement that causal-informational relations are not sufficient to determine the content of mental representations. Such relations are common, but representation is not. Tree trunks, smoke, thermostats and ringing telephones carry information about what they are causally related to, but they do not represent (in the relevant sense) what they carry information about. Further, a representation can be caused by something it does not represent, and can represent something that has not caused it.

The main attempts to specify what makes a causal-informational state a mental representation are Asymmetric Dependency Theories (e.g., Fodor 1987, 1990a, 1994) and Teleological Theories (Fodor 1990b, Millikan 1984, Papineau 1987, Dretske 1988, 1995). The Asymmetric Dependency Theory distinguishes merely informational relations from representational relations on the basis of their higher-order relations to each other: informational relations depend upon representational relations, but not vice versa. For example, if tokens of a mental state type are reliably caused by horses, cows-on-dark-nights, zebras-in-the-mist and Great Danes, then they carry information about horses, etc. If, however, such tokens are caused by cows-on-dark-nights, etc. because they were caused by horses, but not vice versa, then they represent horses (or the property horse).
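
The asymmetric-dependence condition can be sketched as a check over a toy causal-dependence graph. This is a rough illustration of the structure of the idea, not Fodor's own formulation; the function and the dependence table are hypothetical.

```python
# Toy sketch of asymmetric dependence: a token type represents its target
# cause (rather than the disjunction of all its causes) if every other
# cause -> token route depends on the target -> token route, but not
# vice versa.
def represents(target, causes, depends_on):
    """causes: everything that reliably causes the token type.
    depends_on[x]: the set of causal routes that route x depends on."""
    others = [c for c in causes if c != target]
    return all(
        target in depends_on.get(c, set())          # other route depends on target's
        and c not in depends_on.get(target, set())  # and not vice versa
        for c in others
    )

causes = ["horse", "cow-on-dark-night", "zebra-in-mist"]
depends_on = {
    "cow-on-dark-night": {"horse"},  # cows cause the token only because horses do
    "zebra-in-mist": {"horse"},
    "horse": set(),                  # the horse route stands on its own
}
print(represents("horse", causes, depends_on))  # prints: True
```

If the dependence ran both ways (the horse route also depending on the cow route), the check would fail, mirroring Fodor's requirement that the dependence be asymmetric.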

According to Teleological Theories, representational relations are those a representation-producing mechanism has the selected (by evolution or learning) function of establishing. For example, zebra-caused horse-representations do not mean zebra, because the mechanism by which such tokens are produced has the selected function of indicating horses, not zebras. The horse-representation-producing mechanism that responds to zebras is malfunctioning.

Functional theories (Block 1986, Harman 1973) hold that the content of a mental representation is determined, at least in part, by its (causal, computational, inferential) relations to other mental representations. They differ on whether the relata should include all other mental representations or only some of them, and on whether to include external states of affairs. The view that the content of a mental representation is determined by its inferential/computational relations with all other representations is holism; the view that it is determined by relations to only some other mental states is localism (or molecularism). (The non-functional view that the content of a mental state depends on none of its relations to other mental states is atomism.) Functional theories that recognize no content-determining external relata have been called solipsistic (Harman 1987). Some theorists posit distinct roles for internal and external connections, the former determining semantic properties analogous to sense, the latter determining semantic properties analogous to reference (McGinn 1982, Sterelny 1989).

(Reductive) representationalists (Dretske, Lycan, Tye) usually take one or another of these theories to provide an explanation of the (non-conceptual) content of experiential states. They thus tend to be externalists (see the next section) about phenomenological as well as conceptual content. Phenomenalists and non-reductive representationalists (Block, Chalmers, Loar, Peacocke, Siewert), on the other hand, take it that the representational content of such states is (at least in part) determined by their intrinsic phenomenal properties. Further, those who advocate a phenomenally-based approach to conceptual content (Horgan and Tienson, Kriegel, Loar, Pitt, Searle, Siewert) also seem to be committed to internalist individuation of the content (if not the reference) of such states.

7. Internalism and Externalism

Generally, those who, like informational theorists, think relations to one's (natural or social) environment are (at least partially) determinative of the content of mental representations are externalists, or anti-individualists (e.g., Burge 1979, 1986b, 2010, McGinn 1977), whereas those who, like some proponents of functional theories, think representational content is determined by an individual's intrinsic properties alone, are internalists (or individualists; cf. Putnam 1975, Fodor 1981b).[5]

This issue is widely taken to be of central importance, since psychological explanation, whether commonsense or scientific, is supposed to be both causal and content-based. (Beliefs and desires cause the behaviors they do because they have the contents they do. For example, the desire that one have a beer and the beliefs that there is beer in the refrigerator and that the refrigerator is in the kitchen may explain one's getting up and going to the kitchen.) If, however, a mental representation's having a particular content is due to factors extrinsic to it, it is unclear how its having that content could determine its causal powers, which, arguably, must be intrinsic (see Stich 1983, Fodor 1982, 1987, 1994). Some who accept the standard arguments for externalism have argued that internal factors determine a component of the content of a mental representation. They say that mental representations have both “narrow” content (determined by intrinsic factors) and “wide” or “broad” content (determined by narrow content plus extrinsic factors). (This distinction may be applied to the sub-personal representations of cognitive science as well as to those of commonsense psychology. See von Eckardt 1993: 189.)

Narrow content has been variously construed. Putnam (1975), Fodor (1982: 114; 1994: 39ff), and Block (1986: 627ff), for example, seem to understand it as something like de dicto content (i.e., Fregean sense, or perhaps character, à la Kaplan 1989). On this construal, narrow content is context-independent and directly expressible. Fodor (1987) and Block (1986), however, have also characterized narrow content as radically inexpressible. On this construal, narrow content is a kind of proto-content, or content-determinant, and can be specified only indirectly, via specifications of context/wide-content pairings. On both construals, narrow contents are characterized as functions from context to (wide) content. The narrow content of a representation is determined by properties intrinsic to it or its possessor, such as its syntactic structure or its intramental computational or inferential role.
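The claim that narrow contents are functions from context to wide content can be put in a toy sketch, loosely modeled on the standard Twin-Earth example (the function and the two-context setup are illustrative assumptions, not drawn from any of the cited authors):

```python
# Hypothetical illustration: narrow content modeled as a function from
# a context (the thinker's environment) to a wide content.

def narrow_content_water(context):
    """Narrow content of the WATER concept: given a context, return the
    wide content (whatever watery stuff that environment contains)."""
    return context["watery_stuff"]

earth = {"watery_stuff": "H2O"}
twin_earth = {"watery_stuff": "XYZ"}

# Intrinsic duplicates share narrow content (the same function), but their
# wide contents differ because their contexts differ:
print(narrow_content_water(earth))       # H2O
print(narrow_content_water(twin_earth))  # XYZ
```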

Burge (1986b) has argued that causation-based worries about externalist individuation of psychological content, and the introduction of the narrow notion, are misguided. Fodor (1994, 1998) has more recently urged that a scientific psychology might not need narrow content in order to supply naturalistic (causal) explanations of human cognition and action, since the sorts of cases it was introduced to handle, viz., Twin-Earth cases and Frege cases, are either nomologically impossible or dismissible as exceptions to non-strict psychological laws.

On the most common versions of externalism, though intentional contents are externally determined, mental representations themselves, and the states they partly constitute, remain “in the head.” More radical versions are possible. One might maintain that since thoughts are individuated by their contents, and some thought contents are partially constituted by objects external to the mind, then some thoughts are partly constituted by objects external to the mind. On such a view, a singular thought — i.e., a thought about a particular object — literally contains the object it is about. It is “object-involving.” Such a thought (and the mind that thinks it) thus extends beyond the boundaries of the skull. (This appears to be the view articulated in McDowell 1986, on which there is “interpenetration” between the mind and the world.)

Clark and Chalmers (1998) and Clark (2001, 2005, 2008) have argued that mental representations may exist entirely “outside the head.” On their view, which they call “active externalism,” cognitive processes (e.g., calculation) may be realized in external media (e.g., a calculator or pen and paper), and the “coupled system” of the individual mind and the external workspace ought to count as a cognitive system — a mind — in its own right. Symbolic representations on external media would thus count as mental representations.

Clark and Chalmers's paper has inspired a burgeoning literature on extended, embodied and interactive cognition. (Menary 2010 is a recent collection of essays. See also the entry on embodied cognition.)

8. The Computational Theory of Mind

The leading contemporary version of the Representational Theory of Mind, the Computational Theory of Mind (CTM), claims that the brain is a kind of computer and that mental processes are computations. According to CTM, cognitive states are constituted by computational relations to mental representations of various kinds, and cognitive processes are sequences of such states.

CTM develops RTM by attempting to explain all psychological states and processes in terms of mental representation. In the course of constructing detailed empirical theories of human and other animal cognition, and developing models of cognitive processes implementable in artificial information processing systems, cognitive scientists have proposed a variety of types of mental representations. While some of these may be suited to be mental relata of commonsense psychological states, some — so-called “subpersonal” or “sub-doxastic” representations — are not. Though many philosophers believe that CTM can provide the best scientific explanations of cognition and behavior, there is disagreement over whether such explanations will vindicate the commonsense psychological explanations of prescientific RTM.

According to Stich's (1983) Syntactic Theory of Mind, for example, computational theories of psychological states should concern themselves only with the formal properties of the objects those states are relations to. Commitment to the explanatory relevance of content, however, is for most cognitive scientists fundamental (Fodor 1981a, Pylyshyn 1984, Von Eckardt 1993). That mental processes are computations, that computations are rule-governed sequences of semantically evaluable objects, and that the rules apply to the symbols in virtue of their content, are central tenets of mainstream cognitive science.

Explanations in cognitive science appeal to many different kinds of mental representation, including, for example, the “mental models” of Johnson-Laird 1983, the “retinal arrays,” “primal sketches” and “2½-D sketches” of Marr 1982, the “frames” of Minsky 1974, the “sub-symbolic” structures of Smolensky 1989, the “quasi-pictures” of Kosslyn 1980, and the “interpreted symbol-filled arrays” of Tye 1991 — in addition to representations that may be appropriate to the explanation of commonsense psychological states. Computational explanations have been offered of, among other mental phenomena, belief (Fodor 1975, 2008, Field 1978), visual perception (Marr 1982, Osherson, et al. 1990), rationality (Newell and Simon 1972, Fodor 1975, Johnson-Laird and Wason 1977), language learning and use (Chomsky 1965, Pinker 1989), and musical comprehension (Lerdahl and Jackendoff 1983).

A fundamental disagreement among proponents of CTM concerns the realization of personal-level representations (e.g., thoughts) and processes (e.g., inferences) in the brain. The central debate here is between proponents of Classical Architectures and proponents of Connectionist Architectures.

The classicists (e.g., Turing 1950, Fodor 1975, 2000, 2003, 2008, Fodor and Pylyshyn 1988, Marr 1982, Newell and Simon 1976) hold that mental representations are symbolic structures, which typically have semantically evaluable constituents, and that mental processes are rule-governed manipulations of them that are sensitive to their constituent structure. The connectionists (e.g., McCulloch & Pitts 1943, Rumelhart 1989, Rumelhart and McClelland 1986, Smolensky 1988) hold that mental representations are realized by patterns of activation in a network of simple processors (“nodes”) and that mental processes consist of the spreading activation of such patterns. The nodes themselves are, typically, not taken to be semantically evaluable; nor do the patterns have semantically evaluable constituents. (Though there are versions of Connectionism — “localist” versions — on which individual nodes are taken to have semantic properties (e.g., Ballard 1986, Ballard & Hayes 1984). It is arguable, however, that localist theories are neither definitive nor representative of the connectionist program (Smolensky 1988, 1991, Chalmers 1993).)

Classicists are motivated (in part) by properties thought seems to share with language. Fodor's Language of Thought Hypothesis (LOTH) (Fodor 1975, 1987, 2008), according to which the system of mental symbols constituting the neural basis of thought is structured like a language, provides a well-worked-out version of the classical approach as applied to commonsense psychology. (Cf. also Marr 1982 for an application of the classical approach in scientific psychology.) According to the LOTH, the potential infinity of complex representational mental states is generated from a finite stock of primitive representational states, in accordance with recursive formation rules. This combinatorial structure accounts for the properties of productivity and systematicity of the system of mental representations. As in the case of symbolic languages, including natural languages (though Fodor does not suppose either that the LOTH explains only linguistic capacities or that only verbal creatures have this sort of cognitive architecture), these properties of thought are explained by appeal to the content of the representational units and their combinability into contentful complexes. That is, the semantics of both language and thought is compositional: the content of a complex representation is determined by the contents of its constituents and their structural configuration. (See, e.g., Fodor and Lepore 2002.)
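The compositionality claim can be given a minimal sketch: the content of a complex representation is computed recursively from its constituents' contents plus their structural configuration. All names here, and the set-theoretic treatment of contents, are illustrative assumptions rather than anything proposed by the cited authors:

```python
# Hypothetical sketch: a finite stock of primitives plus one recursive
# combination rule yields contents for unboundedly many complexes.

PRIMITIVES = {"ocelots": {"a", "b"}, "snuff-takers": {"b", "c"}}

def content(rep):
    """Recursively evaluate a representation: either a primitive symbol,
    or a tuple ('AND', left, right) whose content is fixed by the contents
    of its two constituents and the AND structure."""
    if isinstance(rep, str):
        return PRIMITIVES[rep]
    op, left, right = rep
    if op == "AND":
        return content(left) & content(right)  # structure determines content
    raise ValueError(f"unknown structure: {op}")

# The complex's content is a function of its parts and their arrangement:
print(content(("AND", "ocelots", "snuff-takers")))
```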

Connectionists are motivated mainly by a consideration of the architecture of the brain, which apparently consists of layered networks of interconnected neurons. They argue that this sort of architecture is unsuited to carrying out classical serial computations. For one thing, processing in the brain is typically massively parallel. In addition, the elements whose manipulation drives computation in connectionist networks (principally, the connections between nodes) are neither semantically compositional nor semantically evaluable, as they are on the classical approach. This contrast with classical computationalism is often characterized by saying that representation is, with respect to computation, distributed as opposed to local: representation is local if it is computationally basic, and distributed if it is not. (Another way of putting this is to say that for classicists mental representations are computationally atomic, whereas for connectionists they are not.)

Moreover, connectionists argue that information processing as it occurs in connectionist networks more closely resembles some features of actual human cognitive functioning. For example, whereas on the classical view learning involves something like hypothesis formation and testing (Fodor 1981c), on the connectionist model it is a matter of the evolving distribution of “weights” (strengths) on the connections between nodes, and typically does not involve the formulation of hypotheses regarding the identity conditions for the objects of knowledge. The connectionist network is “trained up” by repeated exposure to the objects it is to learn to distinguish; and, though networks typically require many more exposures to the objects than do humans, this seems to model at least one feature of this type of human learning quite well. (Cf. the sonar example in Churchland 1989.)
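The contrast described here (learning as an evolving distribution of connection weights, rather than explicit hypothesis formation and testing) can be sketched with a single-node perceptron trained by repeated exposure. This is a deliberately minimal illustration, not a model any of the cited authors propose:

```python
# Hypothetical sketch: learning as weight evolution. No hypotheses are
# formulated; weights are nudged on each exposure to a labeled example.

def train(examples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):              # "trained up" by repeated exposure
        for x, target in examples:
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out           # nudge weights toward the target
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learn a simple discrimination (logical OR) from repeated exposure:
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train(data)
preds = [1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0 for x, _ in data]
print(preds)  # matches the targets after training
```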

Further, degradation in the performance of such networks in response to damage is gradual, not sudden as in the case of a classical information processor, and hence more accurately models the loss of human cognitive function as it typically occurs in response to brain damage. It is also sometimes claimed that connectionist systems show the kind of flexibility in response to novel situations typical of human cognition — situations in which classical systems are relatively “brittle” or “fragile.”

Some philosophers have maintained that connectionism entails that there are no propositional attitudes. Ramsey, Stich and Garon (1990) have argued that if connectionist models of cognition are basically correct, then there are no discrete representational states as conceived in ordinary commonsense psychology and classical cognitive science. Others, however (e.g., Smolensky 1989), hold that certain types of higher-level patterns of activity in a neural network may be roughly identified with the representational states of commonsense psychology. Still others (e.g., Fodor & Pylyshyn 1988, Heil 1991, Horgan and Tienson 1996) argue that language-of-thought style representation is both necessary in general and realizable within connectionist architectures. (MacDonald & MacDonald 1995 collects the central contemporary papers in the classicist/connectionist debate, and provides useful introductory material as well. See also Von Eckardt 2005.)

Whereas Stich (1983) accepts that mental processes are computational, but denies that computations are sequences of mental representations, others accept the notion of mental representation, but deny that CTM provides the correct account of mental states and processes.

Van Gelder (1995) denies that psychological processes are computational. He argues that cognitive systems are dynamic, and that cognitive states are not relations to mental symbols, but quantifiable states of a complex system consisting of (in the case of human beings) a nervous system, a body and the environment in which they are embedded. Cognitive processes are not rule-governed sequences of discrete symbolic states, but continuous, evolving total states of dynamic systems determined by continuous, simultaneous and mutually determining states of the systems' components. Representation in a dynamic system is essentially information-theoretic, though the bearers of information are not symbols, but state variables or parameters. (See also Port and Van Gelder 1995; Clark 1997a, 1997b, 2008.)

Horst (1996), on the other hand, argues that though computational models may be useful in scientific psychology, they are of no help in achieving a philosophical understanding of the intentionality of commonsense mental states. CTM attempts to reduce the intentionality of such states to the intentionality of the mental symbols they are relations to. But, Horst claims, the relevant notion of symbolic content is essentially bound up with the notions of convention and intention. So CTM involves itself in a vicious circularity: the very properties that are supposed to be reduced are (tacitly) appealed to in the reduction.

9. Thought and Language


To say that a mental object has semantic properties is, paradigmatically, to say that it is about, or true or false of, an object or objects, or that it is true or false simpliciter. Suppose I think that ocelots take snuff. I am thinking about ocelots, and if what I think of them (that they take snuff) is true of them, then my thought is true. According to RTM such states are to be explained as relations between agents and mental representations. To think that ocelots take snuff is to token in some way a mental representation whose content is that ocelots take snuff. On this view, the semantic properties of mental states are the semantic properties of the representations they are relations to.

Linguistic acts seem to share such properties with mental states. Suppose I say that ocelots take snuff. I am talking about ocelots, and if what I say of them (that they take snuff) is true of them, then my utterance is true. Now, to say that ocelots take snuff is (in part) to utter a sentence that means that ocelots take snuff. Many philosophers have thought that the semantic properties of linguistic expressions are inherited from the intentional mental states they are conventionally used to express (Grice 1957, Fodor 1978, Schiffer 1972/1988, Searle 1983). On this view, the semantic properties of linguistic expressions are the semantic properties of the representations that are the mental relata of the states they are conventionally used to express.

(Others, however, e.g., Davidson (1975, 1982), have suggested that the kind of thought human beings are capable of is not possible without language, so that the dependency might be reversed, or somehow mutual (see also Sellars 1956). (But see Martin 1987 for a defense of the claim that thought is possible without language. See also Chisholm and Sellars 1958.) Schiffer (1987) subsequently despaired of the success of what he calls “Intention Based Semantics.”)

It is also widely held that in addition to having such properties as reference, truth-conditions and truth — so-called extensional properties — expressions of natural languages also have intensional properties, in virtue of expressing properties or propositions — i.e., in virtue of having meanings or senses, where two expressions may have the same reference, truth-conditions or truth value, yet express different properties or propositions (Frege 1892/1997). If the semantic properties of natural-language expressions are inherited from the thoughts and concepts they express (or vice versa, or both), then an analogous distinction may be appropriate for mental representations.
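Frege's point that two expressions can share an extension while differing in intension can be put in a toy sketch (the dictionary encoding and the morning-star/evening-star example names are illustrative assumptions):

```python
# Hypothetical sketch: same reference (extension), different sense
# (intension), encoded as two separate mappings over expressions.

REFERENCE = {"the morning star": "Venus", "the evening star": "Venus"}
SENSE = {"the morning star": "brightest body visible at dawn",
         "the evening star": "brightest body visible at dusk"}

a, b = "the morning star", "the evening star"
print(REFERENCE[a] == REFERENCE[b])  # True: co-referring expressions
print(SENSE[a] == SENSE[b])          # False: distinct senses
```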

Bibliography

Almog, J., Perry, J. and Wettstein, H. (eds.) (1989), Themes from Kaplan, New York: Oxford University Press.
Alter, T. and Walter, S. (2007), Phenomenal Concepts and Phenomenal Knowledge: New Essays on Consciousness and Physicalism, Oxford: Oxford University Press.
Aristotle, De Anima, in The Complete Works of Aristotle: The Revised Oxford Translation, Oxford: Oxford University Press, 1984.
Baker, L. R. (1995), Explaining Attitudes: A Practical Approach to the Mind, Cambridge: Cambridge University Press.
Ballard, D.H. (1986), “Cortical Connections and Parallel Processing: Structure and Function,” The Behavioral and Brain Sciences, 9: 67–120.
Ballard, D.H. and Hayes, P.J. (1984), “Parallel Logical Inference,” Proceedings of the Sixth Annual Conference of the Cognitive Science Society, Rochester, NY.
Bayne, T. and Montague, M. (eds.) (2011), Cognitive Phenomenology, Oxford: Oxford University Press.
Beaney, M. (ed.) (1997), The Frege Reader, Oxford: Blackwell Publishers.
Berkeley, G., Principles of Human Knowledge, in M.R. Ayers (ed.), Berkeley: Philosophical Writings, London: Dent, 1975.
Block, N. (1983), “Mental Pictures and Cognitive Science,” Philosophical Review, 93: 499–542.


––– (1986), “Advertisement for a Semantics for Psychology,” in P.A. French, T.E. Uehling and H.K. Wettstein (eds.), Midwest Studies in Philosophy, Vol. X, Minneapolis: University of Minnesota Press: 615–678.
––– (1996), “Mental Paint and Mental Latex,” in E. Villanueva (ed.), Philosophical Issues, 7: Perception: 19–49.
––– (2003), “Mental Paint,” in M. Hahn and B. Ramberg (eds.), Reflections and Replies: Essays on the Philosophy of Tyler Burge, Cambridge, Mass.: The MIT Press.
Block, N. (ed.) (1981), Readings in Philosophy of Psychology, Vol. 2, Cambridge, Mass.: Harvard University Press.
––– (ed.) (1982), Imagery, Cambridge, Mass.: The MIT Press.
Boghossian, P. A. (1995), “Content,” in J. Kim and E. Sosa (eds.), A Companion to Metaphysics, Oxford: Blackwell, 94–96.
Brandom, R. (2002), “Non-inferential Knowledge, Perceptual Experience, and Secondary Qualities: Placing McDowell's Empiricism,” in N.H. Smith (ed.), Reading McDowell: On Mind and World, London: Routledge.
Burge, T. (1979), “Individualism and the Mental,” in P.A. French, T.E. Uehling and H.K. Wettstein (eds.), Midwest Studies in Philosophy, Vol. IV, Minneapolis: University of Minnesota Press: 73–121. (Reprinted, with Postscript, in Burge 2007.)
––– (1986a), “Individualism and Psychology,” Philosophical Review, 95: 3–45.
––– (1986b), “Intellectual Norms and Foundations of Mind,” The Journal of Philosophy, 83: 697–720.
––– (2007), Foundations of Mind: Philosophical Essays, Volume 2, Oxford: Oxford University Press.
––– (2010), Origins of Objectivity, Oxford: Oxford University Press.
Chalmers, D. (1993), “Connectionism and Compositionality: Why Fodor and Pylyshyn Were Wrong,” Philosophical Psychology, 6: 305–319.
––– (1996), The Conscious Mind, New York: Oxford University Press.
––– (2003), “The Content and Epistemology of Phenomenal Belief,” in Q. Smith & A. Jokic (eds.), Consciousness: New Philosophical Perspectives, Oxford: Oxford University Press: 220–272.
––– (2004a), “The Representational Character of Experience,” in B. Leiter (ed.), The Future for Philosophy, Oxford: Oxford University Press: 153–181.
––– (2004b), “Phenomenal Concepts and the Knowledge Argument,” in P. Ludlow, Y. Nagasawa and D. Stoljar (eds.), There's Something About Mary: Essays on Phenomenal Consciousness and Frank Jackson's Knowledge Argument, Cambridge, Mass.: The MIT Press.
Chisholm, R. and Sellars, W. (1958), “The Chisholm-Sellars Correspondence on Intentionality,” in H. Feigl, M. Scriven and G. Maxwell (eds.), Minnesota Studies in the Philosophy of Science, Vol. II, Minneapolis: University of Minnesota Press: 529–539.
Chomsky, N. (1965), Aspects of the Theory of Syntax, Cambridge, Mass.: The MIT Press.
Churchland, P.M. (1981), “Eliminative Materialism and the Propositional Attitudes,” Journal of Philosophy, 78: 67–90.
––– (1989), “On the Nature of Theories: A Neurocomputational Perspective,” in W. Savage (ed.), Scientific Theories: Minnesota Studies in the Philosophy of Science, Vol. 14, Minneapolis: University of Minnesota Press: 59–101.
Clark, A. (1997a), “The Dynamical Challenge,” Cognitive Science, 21: 461–481.
––– (1997b), Being There: Putting Brain, Body and World Together Again, Cambridge, Mass.: The MIT Press.
––– (2001), “Reasons, Robots and the Extended Mind,” Mind and Language, 16: 121–145.
––– (2005), “Intrinsic Content, Active Memory, and the Extended Mind,” Analysis, 65: 1–11.
––– (2008), Supersizing the Mind, Oxford: Oxford University Press.
Clark, A. and Chalmers, D. (1998), “The Extended Mind,” Analysis, 58: 7–19.
Collins, A. (1987), The Nature of Mental Things, Notre Dame: Notre Dame University Press.
Crane, T. (1995), The Mechanical Mind, London: Penguin Books Ltd.
Davidson, D. (1973), “Radical Interpretation,” Dialectica, 27: 313–328.


––– (1974), “Belief and the Basis of Meaning,” Synthese, 27: 309–323.
––– (1975), “Thought and Talk,” in S. Guttenplan (ed.), Mind and Language, Oxford: Clarendon Press: 7–23.
––– (1982), “Rational Animals,” Dialectica, 4: 317–327.
Dennett, D. (1969), Content and Consciousness, London: Routledge & Kegan Paul.
––– (1981), “The Nature of Images and the Introspective Trap,” pages 132–141 of Dennett 1969, reprinted in Block 1981: 128–134.
––– (1987), The Intentional Stance, Cambridge, Mass.: The MIT Press.
––– (1987a), “True Believers: The Intentional Strategy and Why it Works,” in Dennett 1987: 13–35.
––– (1987b), “Reflections: Real Patterns, Deeper Facts, and Empty Questions,” in Dennett 1987: 37–42.
––– (1988), “Quining Qualia,” in A.J. Marcel and E. Bisiach (eds.), Consciousness in Contemporary Science, Oxford: Clarendon Press: 42–77.
––– (1991), “Real Patterns,” The Journal of Philosophy, 87: 27–51.
Devitt, M. (1996), Coming to Our Senses: A Naturalistic Program for Semantic Localism, Cambridge: Cambridge University Press.
Dretske, F. (1969), Seeing and Knowing, Chicago: The University of Chicago Press.
––– (1981), Knowledge and the Flow of Information, Cambridge, Mass.: The MIT Press.
––– (1988), Explaining Behavior: Reasons in a World of Causes, Cambridge, Mass.: The MIT Press.
––– (1995), Naturalizing the Mind, Cambridge, Mass.: The MIT Press.
––– (1996), “Phenomenal Externalism, or If Meanings Ain't in the Head, Where are Qualia?”, in E. Villanueva (ed.), Philosophical Issues, 7: Perception: 143–158.
––– (1998), “Minds, Machines, and Money: What Really Explains Behavior,” in J. Bransen and S. Cuypers (eds.), Human Action, Deliberation and Causation (Philosophical Studies Series 77), Dordrecht: Kluwer Academic Publishers. Reprinted in Dretske 2000.
––– (1999), “The Mind's Awareness of Itself,” Philosophical Studies, 95: 103–124.
––– (2000), Perception, Knowledge and Belief, Cambridge: Cambridge University Press.
Evans, G. (1982), The Varieties of Reference, Oxford: Oxford University Press.
Farkas, K. (2008), The Subject's Point of View, Oxford: Oxford University Press.
Field, H. (1978), “Mental Representation,” Erkenntnis, 13: 9–61.
Flanagan, O. (1992), Consciousness Reconsidered, Cambridge, Mass.: The MIT Press.
Fodor, J.A. (1975), The Language of Thought, Cambridge, Mass.: Harvard University Press.
––– (1978), “Propositional Attitudes,” The Monist, 61: 501–523.
––– (1981), Representations, Cambridge, Mass.: The MIT Press.
––– (1981a), “Introduction,” in Fodor 1981: 1–31.
––– (1981b), “Methodological Solipsism Considered as a Research Strategy in Cognitive Psychology,” in Fodor 1981: 225–253.
––– (1981c), “The Present Status of the Innateness Controversy,” in Fodor 1981: 257–316.
––– (1982), “Cognitive Science and the Twin-Earth Problem,” Notre Dame Journal of Formal Logic, 23: 98–118.
––– (1987), Psychosemantics, Cambridge, Mass.: The MIT Press.
––– (1990a), A Theory of Content and Other Essays, Cambridge, Mass.: The MIT Press.
––– (1990b), “Psychosemantics or: Where Do Truth Conditions Come From?” in W.G. Lycan (ed.), Mind and Cognition: A Reader, Oxford: Blackwell Publishers: 312–337.
––– (1994), The Elm and the Expert, Cambridge, Mass.: The MIT Press.
––– (1998), Concepts: Where Cognitive Science Went Wrong, Oxford: Oxford University Press.
––– (2000), The Mind Doesn't Work that Way: The Scope and Limits of Computational Psychology, Cambridge, Mass.: The MIT Press.
––– (2003), Hume Variations, Oxford: Clarendon Press.
––– (2008), LOT 2: The Language of Thought Revisited, Oxford: Clarendon Press.
Fodor, J.A. and Lepore, E. (2002), The Compositionality Papers, Oxford: Clarendon Press.


Fodor, J.A. and Pylyshyn, Z. (1981), "How Direct is Visual Perception?: Some Reflections on Gibson's 'Ecological Approach'," Cognition, 9: 207–246.
––– (1988), "Connectionism and Cognitive Architecture: A Critical Analysis," Cognition, 28: 3–71.
Frege, G. (1884), The Foundations of Arithmetic, trans. J.L. Austin, New York: Philosophical Library (1954).
––– (1892), "On Sinn and Bedeutung," in Beaney 1997: 151–171.
––– (1918), "Thought," in Beaney 1997: 325–345.
Geach, P. (1957), Mental Acts: Their Content and Their Objects, London: Routledge & Kegan Paul.
Gibson, J.J. (1966), The Senses Considered as Perceptual Systems, Boston: Houghton Mifflin.
––– (1979), The Ecological Approach to Visual Perception, Boston: Houghton Mifflin.
Goldman, A. (1993), "The Psychology of Folk Psychology," Behavioral and Brain Sciences, 16: 15–28.
Goodman, N. (1976), Languages of Art, 2nd ed., Indianapolis: Hackett.
Grice, H.P. (1957), "Meaning," Philosophical Review, 66: 377–388; reprinted in Studies in the Way of Words, Cambridge, Mass.: Harvard University Press (1989): 213–223.
Gunther, Y.H. (ed.) (2003), Essays on Nonconceptual Content, Cambridge, Mass.: The MIT Press.
Harman, G. (1973), Thought, Princeton: Princeton University Press.
––– (1987), "(Non-Solipsistic) Conceptual Role Semantics," in E. Lepore (ed.), New Directions in Semantics, London: Academic Press: 55–81.
––– (1990), "The Intrinsic Quality of Experience," in J. Tomberlin (ed.), Philosophical Perspectives 4: Action Theory and Philosophy of Mind, Atascadero: Ridgeview Publishing Company: 31–52.
Harnish, R. (2002), Minds, Brains, Computers, Malden, Mass.: Blackwell Publishers Inc.
Haugeland, J. (1981), "Analog and analog," Philosophical Topics, 12: 213–226.
Heil, J. (1991), "Being Indiscrete," in J. Greenwood (ed.), The Future of Folk Psychology, Cambridge: Cambridge University Press: 120–134.
Horgan, T. and Tienson, J. (1996), Connectionism and the Philosophy of Psychology, Cambridge, Mass.: The MIT Press.
––– (2002), "The Intentionality of Phenomenology and the Phenomenology of Intentionality," in D.J. Chalmers (ed.), Philosophy of Mind, Oxford: Oxford University Press.
Horst, S. (1996), Symbols, Computation, and Intentionality, Berkeley: University of California Press.
Hume, D. (1739), A Treatise of Human Nature, L.A. Selby-Bigge (ed.), rev. P.H. Nidditch, Oxford: Oxford University Press (1978).
Jackendoff, R. (1987), Consciousness and the Computational Mind, Cambridge, Mass.: The MIT Press.
Jackson, F. (1982), "Epiphenomenal Qualia," Philosophical Quarterly, 32: 127–136.
––– (1986), "What Mary Didn't Know," Journal of Philosophy, 83: 291–295.
Johnson-Laird, P.N. (1983), Mental Models, Cambridge, Mass.: Harvard University Press.
Johnson-Laird, P.N. and Wason, P.C. (1977), Thinking: Readings in Cognitive Science, Cambridge: Cambridge University Press.
Kaplan, D. (1989), "Demonstratives," in Almog, Perry and Wettstein 1989: 481–614.
Kosslyn, S.M. (1980), Image and Mind, Cambridge, Mass.: Harvard University Press.
––– (1982), "The Medium and the Message in Mental Imagery," in Block 1982: 207–246.
––– (1983), Ghosts in the Mind's Machine, New York: W.W. Norton & Co.
Kosslyn, S.M. and Pomerantz, J.R. (1977), "Imagery, Propositions, and the Form of Internal Representations," Cognitive Psychology, 9: 52–76.
Kriegel, U. (2011), The Sources of Intentionality, Oxford: Oxford University Press.
Kriegel, U. (ed.), forthcoming, Phenomenal Intentionality: New Essays, Oxford: Oxford University Press.
Leeds, S. (1993), "Qualia, Awareness, Sellars," Noûs, 27: 303–329.
Lerdahl, F. and Jackendoff, R. (1983), A Generative Theory of Tonal Music, Cambridge, Mass.: The MIT Press.
Levine, J. (1993), "On Leaving Out What It's Like," in M. Davies and G. Humphreys (eds.), Consciousness, Oxford: Blackwell Publishers: 121–136.
––– (1995), "On What It Is Like to Grasp a Concept," in E. Villanueva (ed.), Philosophical Issues 6: Content, Atascadero: Ridgeview Publishing Company: 38–43.
––– (2001), Purple Haze, Oxford: Oxford University Press.
Lewis, D. (1971), "Analog and Digital," Noûs, 5: 321–328.
––– (1974), "Radical Interpretation," Synthese, 27: 331–344.
Loar, B. (1981), Mind and Meaning, Cambridge: Cambridge University Press.
––– (1996), "Phenomenal States" (Revised Version), in N. Block, O. Flanagan and G. Güzeldere (eds.), The Nature of Consciousness, Cambridge, Mass.: The MIT Press: 597–616.
––– (2003a), "Transparent Experience and the Availability of Qualia," in Q. Smith and A. Jokic (eds.), Consciousness: New Philosophical Perspectives, Oxford: Clarendon Press: 77–96.
––– (2003b), "Phenomenal Intentionality as the Basis of Mental Content," in M. Hahn and B. Ramberg (eds.), Reflections and Replies: Essays on the Philosophy of Tyler Burge, Cambridge, Mass.: The MIT Press.
Locke, J. (1689), An Essay Concerning Human Understanding, P.H. Nidditch (ed.), Oxford: Oxford University Press (1975).
Lycan, W.G. (1987), Consciousness, Cambridge, Mass.: The MIT Press.
––– (1996), Consciousness and Experience, Cambridge, Mass.: The MIT Press.
MacDonald, C. and MacDonald, G. (1995), Connectionism: Debates on Psychological Explanation, Oxford: Blackwell Publishers.
Marr, D. (1982), Vision, New York: W.H. Freeman and Company.
Martin, C.B. (1987), "Proto-Language," Australasian Journal of Philosophy, 65: 277–289.
McCulloch, W.S. and Pitts, W. (1943), "A Logical Calculus of the Ideas Immanent in Nervous Activity," Bulletin of Mathematical Biophysics, 5: 115–133.
McDowell, J. (1986), "Singular Thought and the Extent of Inner Space," in P. Pettit and J. McDowell (eds.), Subject, Thought, and Context, Oxford: Clarendon Press: 137–168.
––– (1994), Mind and World, Cambridge, Mass.: Harvard University Press.
McGinn, C. (1977), "Charity, Interpretation, and Belief," Journal of Philosophy, 74: 521–535.
––– (1982), "The Structure of Content," in A. Woodfield (ed.), Thought and Content, Oxford: Oxford University Press: 207–258.
––– (1989), Mental Content, Oxford: Blackwell Publishers.
––– (1991), The Problem of Consciousness, Oxford: Blackwell Publishers.
––– (1991a), "Content and Consciousness," in McGinn 1991: 23–43.
––– (1991b), "Can We Solve the Mind-Body Problem?" in McGinn 1991: 1–22.
––– (2004), Mindsight: Image, Dream, Meaning, Cambridge, Mass.: Harvard University Press.
Menary, R. (ed.) (2010), The Extended Mind, Cambridge, Mass.: The MIT Press.
Millikan, R. (1984), Language, Thought and Other Biological Categories, Cambridge, Mass.: The MIT Press.
Minsky, M. (1974), "A Framework for Representing Knowledge," MIT-AI Laboratory Memo 306, June. (A shorter version appears in J. Haugeland (ed.), Mind Design II, Cambridge, Mass.: The MIT Press (1997).)
Nagel, T. (1974), "What Is It Like to Be a Bat?" Philosophical Review, 83: 435–450.
Newell, A. and Simon, H.A. (1972), Human Problem Solving, New York: Prentice-Hall.
––– (1976), "Computer Science as Empirical Inquiry: Symbols and Search," Communications of the Association for Computing Machinery, 19: 113–126.
O'Callaghan, C. (2007), Sounds, Oxford: Oxford University Press.
Osherson, D.N., Kosslyn, S.M. and Hollerbach, J.M. (1990), Visual Cognition and Action: An Invitation to Cognitive Science, Vol. 2, Cambridge, Mass.: The MIT Press.
Papineau, D. (1987), Reality and Representation, Oxford: Blackwell Publishers.
Peacocke, C. (1983), Sense and Content, Oxford: Clarendon Press.
––– (1989), "Perceptual Content," in Almog, Perry and Wettstein 1989: 297–329.
––– (1992), "Scenarios, Concepts and Perception," in T. Crane (ed.), The Contents of Experience, Cambridge: Cambridge University Press: 105–135.
––– (2001), "Does Perception Have a Nonconceptual Content?" Journal of Philosophy, 99: 239–264.
Pinker, S. (1989), Learnability and Cognition, Cambridge, Mass.: The MIT Press.
Pitt, D. (2004), "The Phenomenology of Cognition, Or, What Is It Like to Think That P?" Philosophy and Phenomenological Research, 69: 1–36.
––– (2009), "Intentional Psychologism," Philosophical Studies, 146: 117–138.
––– (2011), "Introspection, Phenomenality and the Availability of Intentional Content," in Bayne and Montague 2011.
–––, forthcoming, "Indexical Thought," in Kriegel (ed.) forthcoming.
Port, R. and Van Gelder, T. (1995), Mind as Motion: Explorations in the Dynamics of Cognition, Cambridge, Mass.: The MIT Press.
Putnam, H. (1975), "The Meaning of 'Meaning'," in Philosophical Papers, Vol. 2, Cambridge: Cambridge University Press: 215–271.
Pylyshyn, Z. (1979), "The Rate of 'Mental Rotation' of Images: A Test of a Holistic Analogue Hypothesis," Memory and Cognition, 7: 19–28.
––– (1981a), "Imagery and Artificial Intelligence," in Block 1981: 170–194.
––– (1981b), "The Imagery Debate: Analog Media versus Tacit Knowledge," Psychological Review, 88: 16–45.
––– (1984), Computation and Cognition, Cambridge, Mass.: The MIT Press.
––– (2003), Seeing and Visualizing: It's Not What You Think, Cambridge, Mass.: The MIT Press.
Raffman, D. (1995), "The Persistence of Phenomenology," in T. Metzinger (ed.), Conscious Experience, Paderborn: Schöningh/Imprint Academic: 293–308.
Ramsey, W., Stich, S. and Garon, J. (1990), "Connectionism, Eliminativism and the Future of Folk Psychology," Philosophical Perspectives, 4: 499–533.
Reid, T. (1764), An Inquiry into the Human Mind, D.R. Brookes (ed.), Edinburgh: Edinburgh University Press (1997).
Rey, G. (1981), "Introduction: What Are Mental Images?" in Block 1981: 117–127.
––– (1991), "Sensations in a Language of Thought," in E. Villanueva (ed.), Philosophical Issues 1: Consciousness, Atascadero: Ridgeview Publishing Company: 73–112.
Rumelhart, D.E. (1989), "The Architecture of the Mind: A Connectionist Approach," in M.I. Posner (ed.), Foundations of Cognitive Science, Cambridge, Mass.: The MIT Press: 133–159.
Rumelhart, D.E. and McClelland, J.L. (1986), Parallel Distributed Processing, Vol. I, Cambridge, Mass.: The MIT Press.
Schiffer, S. (1972), "Introduction" (Paperback Edition), in Meaning, Oxford: Clarendon Press (1972/1988): xi–xxix.
––– (1987), Remnants of Meaning, Cambridge, Mass.: The MIT Press.
Searle, J.R. (1980), "Minds, Brains, and Programs," Behavioral and Brain Sciences, 3: 417–424.
––– (1983), Intentionality, Cambridge: Cambridge University Press.
––– (1984), Minds, Brains, and Science, Cambridge, Mass.: Harvard University Press.
––– (1992), The Rediscovery of the Mind, Cambridge, Mass.: The MIT Press.
Sellars, W. (1956), "Empiricism and the Philosophy of Mind," in K. Gunderson (ed.), Minnesota Studies in the Philosophy of Science, Vol. I, Minneapolis: University of Minnesota Press: 253–329.
Shepard, R.N. and Cooper, L. (1982), Mental Images and Their Transformations, Cambridge, Mass.: The MIT Press.
Shoemaker, S. (1990), "Qualities and Qualia: What's in the Mind?" Philosophy and Phenomenological Research, 50: 109–131.
Siewert, C. (1998), The Significance of Consciousness, Princeton: Princeton University Press.
Smolensky, P. (1988), "On the Proper Treatment of Connectionism," Behavioral and Brain Sciences, 11: 1–74.
––– (1989), "Connectionist Modeling: Neural Computation/Mental Connections," in L. Nadel, L.A. Cooper, P. Culicover and R.M. Harnish (eds.), Neural Connections, Mental Computation, Cambridge, Mass.: The MIT Press: 49–67.
––– (1991), "Connectionism and the Language of Thought," in B. Loewer and G. Rey (eds.), Meaning in Mind: Fodor and His Critics, Oxford: Basil Blackwell Ltd.: 201–227.
Sterelny, K. (1989), "Fodor's Nativism," Philosophical Studies, 55: 119–141.
Stich, S. (1983), From Folk Psychology to Cognitive Science, Cambridge, Mass.: The MIT Press.
––– (1996), Deconstructing the Mind, New York: Oxford University Press.
Strawson, G. (1994), Mental Reality, Cambridge, Mass.: The MIT Press.
––– (2008), Real Materialism and Other Essays, Oxford: Oxford University Press.
Thau, M. (2002), Consciousness and Cognition, Oxford: Oxford University Press.
Turing, A. (1950), "Computing Machinery and Intelligence," Mind, 59: 433–460.
Tye, M. (1991), The Imagery Debate, Cambridge, Mass.: The MIT Press.
––– (1995), Ten Problems of Consciousness, Cambridge, Mass.: The MIT Press.
––– (2000), Consciousness, Color, and Content, Cambridge, Mass.: The MIT Press.
––– (2009), Consciousness Revisited, Cambridge, Mass.: The MIT Press.
Van Gelder, T. (1995), "What Might Cognition Be, if Not Computation?" Journal of Philosophy, 92: 345–381.
Von Eckardt, B. (1993), What Is Cognitive Science?, Cambridge, Mass.: The MIT Press.
––– (2005), "Connectionism and the Propositional Attitudes," in C.E. Erneling and D.M. Johnson (eds.), The Mind as a Scientific Object: Between Brain and Culture, Oxford: Oxford University Press.
Wittgenstein, L. (1953), Philosophical Investigations, trans. G.E.M. Anscombe, Oxford: Blackwell Publishers.

Academic Tools

How to cite this entry.
Preview the PDF version of this entry at the Friends of the SEP Society.
Look up this entry topic at the Indiana Philosophy Ontology Project (InPhO).
Enhanced bibliography for this entry at PhilPapers, with links to its database.

Other Internet Resources

MindPapers: Online Research in Philosophy, Editors, David Chalmers and David Bourget.
A Field Guide to the Philosophy of Mind, General editors, Marco Nani and Massimo Marraffa.
Dictionary of Philosophy of Mind, Creator and Founding Editor, Chris Eliasmith, University of Waterloo; Chief Editor, Eric Hochstein, University of Waterloo.
Routledge Encyclopedia of Philosophy, General Editor, Tim Crane.

Related Entries

cognition-embodied | cognitive science | concepts | connectionism | consciousness: and intentionality | consciousness: representational theories of | folk psychology: as mental simulation | information: semantic conceptions of | intentionality | language of thought hypothesis | logic and artificial intelligence | materialism: eliminative | mental content: causal theories of | mental content: externalism about | mental content: narrow | mental content: nonconceptual | mental content: teleological theories of | mental imagery | mental representation: in medieval philosophy | mind: computational theory of | neuroscience, philosophy of | perception: the contents of | perception: the problem of | qualia | reference

Acknowledgments


Thanks to Brad Armour-Garb, Mark Balaguer, Dave Chalmers, Jim Garson, John Heil, Jeff Poland, Bill Robinson, Galen Strawson, Adam Vinueza and (especially) Barbara Von Eckardt for comments on earlier versions of this entry.

Copyright © 2012 by David Pitt <[email protected]>


The Stanford Encyclopedia of Philosophy is copyright © 2014 by The Metaphysics Research Lab, Center for the Study of Language and Information (CSLI), Stanford University

Library of Congress Catalog Data: ISSN 1095-5054