Abstract: The neurophilosophy of
consciousness brings neuroscience to bear on philosophical issues concerning
phenomenal consciousness, especially issues concerning what makes mental states
conscious, what it is that we are conscious of, and the nature of the
phenomenal character of conscious states. Here attention is given largely to
phenomenal consciousness as it arises in vision. The relevant neuroscience
concerns not only neurophysiological and neuroanatomical
data, but also computational models of neural
networks. The neurophilosophical theories that bring such data to bear on the
core philosophical issues of phenomenal consciousness construe consciousness
largely in terms of representations in neural networks associated with certain
processes of attention and memory.
Traditional
philosophical issues that phenomenal consciousness raises involve the relation
of phenomenal consciousness to the rest of the world, especially as that world
is conceived of by the natural sciences. Thus much philosophical discussion
concerns whether the world as conceived of by physical theory can adequately
accommodate phenomenal consciousness or if instead we are left with a dualism
that cleaves reality into, for example, a non-physical phenomenal consciousness
and a physical everything else. Even among philosophers who agree that
phenomenal consciousness is consistent with physicalism,
there is much disagreement, for there are several proposals for how best to
spell out the consistency of a physicalistic
world-view that makes room for phenomenal consciousness. One way of portraying
this cluster of issues is in terms of which natural science is best suited to
study phenomenal consciousness and how to conceive of the relation between that
science and the sciences involving the most basic aspects of reality (the
physical sciences). One major view is that psychology is the proper science for
understanding phenomenal consciousness and further, that psychological
investigation of phenomenal consciousness should be regarded as autonomous from
sciences such as the neurosciences. In opposition is the view that the proper
science is neuroscience and whatever contributions come from psychology are
only valid insofar as psychological theories are reducible to neuroscientific
theories. Increasingly, proponents of the latter view identify themselves as
practitioners of neurophilosophy.
Neurophilosophy
is a sub-genre of naturalized philosophy—philosophy that embraces Quine’s (1969) vision of philosophy as continuous with the
natural sciences—wherein the natural science in primary focus is neuroscience.
The term “Neurophilosophy” entered philosophical parlance with the publication
of Patricia Churchland’s Neurophilosophy
(1986). Patricia Churchland and husband Paul Churchland are paradigmatic examples
of neurophilosophers. Their professional training is primarily philosophical,
their appointments are in philosophy departments, and they publish in
philosophy journals. Neuroscience and philosophy thus do not have equal
influence over neurophilosophy; the primary forces that drive its development
as an academic pursuit emanate from the conventions of philosophical
institutions. Accordingly, neurophilosophical work on
phenomenal consciousness proceeds largely by bringing neuroscientific theory
and data to bear on philosophical questions concerning phenomenal
consciousness.
Such questions are
diverse. However, a useful way to focus the discussion—as well as to understand
what has been of primary concern to neurophilosophical theories of phenomenal
consciousness—will be to focus on just three questions: the question of state
consciousness, the question of transitive consciousness, and the question of
phenomenal character. (The terms “transitive consciousness” and “state consciousness”
are due to David Rosenthal. For discussion, see Rosenthal 1993.) The question
of state consciousness concerns in what consists the difference between mental
states that are conscious and mental states that are unconscious. We have
conscious mental states, such as my conscious perception of the words I type.
Mental states vary with respect to whether they are conscious. Consider, for example,
your memory of your mother’s name. You may have had that memory for years but
until you read the previous sentence it is unlikely that it was a conscious memory for the entire time
between its initial acquisition and its current retrieval. In what does the difference
between conscious and unconscious mental states consist? The question of
transitive consciousness concerns what it is that we are conscious of. When one has a conscious state,
typically, if not always, one is conscious of
something, as when I am conscious of a buzzing insect. Things may vary with
respect to whether I am conscious of them, as when I am only intermittently
conscious of the conversation at a nearby table in a restaurant. What does it
mean to be conscious of something?
The question of phenomenal character concerns the so-called qualia of conscious
states. Conscious states have certain properties—their phenomenal
character—properties in virtue of which there is “something it is like” to be
in that state. When I have a conscious perception of a cup of coffee there is,
presumably, something it is like for me to have that perception and, for all I
know, what it is like for you to have a conscious perception of a cup of coffee
is quite different. What makes a conscious state have “something it is like” to
be in that state?
Neurophilosophical
theories of consciousness bring neuroscience to bear on answering these three
questions of consciousness. The question arises, of course, of what motivates
neurophilosophy of consciousness. The primary answer is that it has a certain
appeal to those with an antecedent belief in physicalism
in that it seems especially well suited to bridge the gap between the
phenomenal and the physical. Attempting to bridge the gap by reducing the
phenomenal all the way down to chemistry or microphysics may strike many as too
far a distance to traverse. More plausible is to seek a higher-level set of
physical phenomena, as offered by biology. Of the biological phenomena, the most
plausible candidates are neural. The appeal of neurophilosophical approaches to
phenomenal consciousness may become more evident upon examination of some
sample theories.
Before
examining the neurophilosophical theories, it will be useful to look at a small
sample of some of the relevant neuroscience. Vision is one of the most
important and best understood senses. Accordingly, most of the fruitful progress
in combining philosophy and neuroscience to get a grip on phenomenal
consciousness has occurred in the domain of visual consciousness.
NEUROSCIENCE AND VISUAL CONSCIOUSNESS
The
processing of visual information in the brain can be understood as occurring in
a processing hierarchy with the lowest levels in the retina and the highest
levels in areas of the cerebral cortex. Processing begins after light is transduced
by the rods and cones in the retina and electrochemical signals are passed to
the retinal ganglia. From there, information flows through the optic nerve to
the lateral geniculate nucleus (LGN) in the subcortex.
From the LGN, information is passed to the first stage of cortical processing
in the primary visual area of occipital cortex (area V1). From V1, the
information is sent to other areas of occipital cortex and is then sent along a
“ventral stream” from occipital to infero-temporal
cortex as well as along a “dorsal stream” from occipital to posterior parietal
cortex (Milner and Goodale 1995). Beyond that, information is sent to areas
of frontal cortex (Olson et al. 1999) as well as the hippocampus (Milner
and Goodale 1995). As will be discussed further later, information does
not simply flow from lower levels to higher levels but there are many instances
in which it flows from higher levels down to lower levels (Pascual-Leone
and Walsh 2001). Further, information is processed differently in different
regions at each level, and this processing can be briefly characterized as
follows. Information at the lowest levels is represented by neural
activations that serve as detectors of features localized to specific retinocentrically defined locations of the visual field.
Thus, at the lowest levels, neural activations in LGN and V1 constitute egocentric
representations of visual features as in, for instance, the detection of an
oriented line by a cell with a relatively small retinocentric receptive field.
At progressively higher level areas (such as visual areas V2 through V5),
locally defined visual features are “grouped” or integrated as in when local
information about shading is grouped to give rise to representations of depth.
Progressively higher levels of information processing increasingly abstract
away from the egocentric information of the lower level representations and
give rise to progressively allocentric (“other-centered”) representations, as
in view-point-invariant representations in inferior temporal
cortex that underwrite the recognition of objects from multiple angles and
other viewing conditions. Thus information represented at progressively higher
levels of processing becomes progressively less egocentric and progressively
more allocentric with the most allocentric representations being in frontal
areas and hippocampus (Mandik 2005).
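The egocentric-to-allocentric progression can be made concrete with a toy sketch. This is purely illustrative and not drawn from the literature: the one-dimensional angles and the simple additive rule are assumptions for exposition. The point is that a retinocentric coordinate changes whenever the eyes move, while a coordinate that factors in gaze direction abstracts away from eye position:

```python
def retinocentric_to_allocentric(feature_angle, gaze_angle):
    """Convert a retina-relative angle (degrees) to a scene-relative one.

    The retinocentric coordinate is egocentric: it shifts whenever the eyes
    move. Adding the gaze direction abstracts away from eye position,
    yielding a (more) allocentric coordinate.
    """
    return feature_angle + gaze_angle

# The same object viewed under two different gaze directions:
loc_a = retinocentric_to_allocentric(feature_angle=10.0, gaze_angle=20.0)
loc_b = retinocentric_to_allocentric(feature_angle=-5.0, gaze_angle=35.0)
assert loc_a == loc_b == 30.0  # the allocentric location is viewpoint-invariant
```

Real cortical transformations are of course far richer than angle addition, but the sketch captures why higher-level representations can underwrite recognition across viewing conditions: they no longer vary with the viewer's momentary vantage point.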
The
question arises of how best to apply the concepts of consciousness of interest
to philosophers--state consciousness, transitive consciousness, and phenomenal
character--in the context of a neuroscientific understanding of visual
perception. We may make the most progress in this regard by focusing on
breakdowns and anomalies of normal vision. We will briefly examine two such
cases. The first is blindsight, a condition that results from a certain kind of
brain damage. The second is motion induced blindness, a condition that occurs
in normal subjects under certain unusual conditions.
Blindsight
is a condition in which lesions to V1 cause subjects to report a loss of
consciousness in spite of the retention of visual ability. For so-called blind
regions of their visual fields, blindsight subjects are nonetheless better than
chance in their responses (such as directed eye movements or forced-choice
identifications) to stimulus properties such as luminance onset (Pöppel, Held,
and Frost 1973), wavelength (Stoerig and Cowey 1992), and motion (Weiskrantz
1995). Lack of consciousness is indicated in such studies by, for example,
having the subject indicate by pressing one of two keys “whether he had any
experience whatever, no matter how slight or effervescent” (Weiskrantz 1996).
Blindsight
subjects’ responses to stimuli in the blind portions of their visual fields
give evidence that the stimuli are represented in portions of the brain.
However, it is clear that these representational states are not conscious
states. Thus, the kind of consciousness that seems most relevant in describing
what blindsight patients lack is state consciousness. Further, blindsight
patients arguably also lack transitive consciousness with respect to the
stimuli in the blind regions of their visual field. One consideration in favor
of this view arises when we take the subject’s own reports at face value. They
claim not to be conscious of the stimuli in question. It would be difficult to affirm
that blindsight subjects do have transitive consciousness of the relevant
stimuli without affirming that all instances of representation are instances of
transitive consciousness, and thus instances of unconscious consciousness.
Regarding
the question of qualia, of whether there is anything it is like for blindsight
subjects to have stimuli presented to the blind regions of their visual fields,
I take it that it is quite natural to reason as follows. Since they are not
conscious of the stimuli and since the states that represent the stimuli are
not conscious states, there must not be anything it is like to have stimuli
presented to those regions. Of course, the reader may doubt this claim if the
reader is not a blindsight subject. It will be useful in this regard to
consider a case that readers will be more likely to have first-person access
to. For precisely this reason it is instructive to look at the phenomenon of
motion induced blindness (Bonneh et al. 2001).
Motion
induced blindness may be elicited in normal subjects under conditions in which
they look at a computer screen that has a triangular pattern of three bright
yellow dots on a black background with a pattern of blue dots moving “behind”
the yellow dots. As subjects fixate on the center of the screen it appears to them
that one or more of the yellow dots disappear (although in reality the yellow
dots remain on the screen). The effect is quite salient and readers are
encouraged to search the internet for “motion induced blindness” and experience
the effect for themselves. There are several lines of evidence that even during
the “disappearance” the yellow dots continue to be represented in visual areas
of the brain. The effect can be influenced by transcranial magnetic stimulation
to parietal cortex (a relatively late stage of visual processing in the brain).
Additionally, the effect can be shown to involve non-local grouping of the
stimulus elements. So, for example, if the yellow dots are replaced with a pair
of partially overlapping circles, one yellow and one pink—sometimes an entire
circle will disappear leaving the other behind even though some parts of the
two different circles are very close in the visual field. As mentioned
previously, the brain mechanisms thought to mediate such object groupings are
relatively late in the visual processing hierarchy.
We
may turn now to the applications of the concepts of transitive consciousness,
state consciousness, and qualia to motion induced blindness. First off, motion
induced blindness looks to be a phenomenon involving transitive consciousness
since in one moment the subject is conscious of the yellow dot, in the next
they are not conscious of the yellow dot, and along the way they are conscious
of a yellow dot seeming to disappear. Second, we can see that motion induced
blindness allows for applications of the concept of state consciousness, since
studies of motion induced blindness provide evidence of conscious states that represent the presence
of yellow dots as well as unconscious states that represent the presence of
yellow dots.
Let
us turn now to ask how the concept of phenomenal character applies in the
context of motion induced blindness. The best grip we can get on this question
is simply by asking what it is like to see yellow dots disappear. When
there is an unconscious state that represents the yellow dots or no transitive
consciousness of a yellow dot, there is, with respect to the yellow dot, nothing
it is like to see it. Or, more accurately, what this instance of motion induced
blindness is like is not seeing
a yellow dot. When the state representing the yellow dot is conscious, what it
is like to be in that state is like seeing a yellow dot. One might suppose,
then, as will be discussed later, that what it is like to be in the conscious state
is determined, at least in part, by the representational content of that state.
In this case, it is the content of the representation of a yellow dot.
NEUROPHILOSOPHICAL THEORIES OF CONSCIOUSNESS
I
will now turn to examine a sample of neurophilosophical theories of
consciousness. In keeping with the paradigmatic status of the work of the
Churchlands in neurophilosophy, my primary focus will be on Paul Churchland’s neurophilosophical
work on consciousness. However, other philosophers have produced neurophilosophical
accounts and I will mention their work as well.
Paul
Churchland articulates what he calls the "dynamical profile approach"
to understanding consciousness (2002). According to the approach, a conscious
state is any cognitive representation that is involved in (1) a moveable attention
that can focus on different aspects of perceptual inputs, (2) the application
of various conceptual interpretations of those inputs, (3) holding the results
of attended and conceptually interpreted inputs in a short-term memory that (4)
allows for the representation of temporal sequences.
Note
that these four conditions primarily answer the question of what makes a state
a conscious one. Regarding the question of what we are conscious of, Churchland
writes that "a conscious representation could have any content or subject
matter at all" (p. 72) and he is especially critical of theories of
consciousness that impose restrictions on the contents of conscious
representations along the line of requiring them to be self-representational or
meta-representational (pp. 72–74).
Much
of Churchland’s discussion of the dynamical profile account of consciousness
concerns how all of the four conditions may be implemented in recurrent neural
networks. A recurrent neural network may be best understood in terms of
contrast with feed-forward neural networks, but we should first give a general
characterization of neural networks.
Neural networks are collections of interconnected neurons. These
networks have one or more input neurons and one or more output neurons. They
may additionally have neurons that are neither input nor output neurons and are
called "interneurons" or "hidden-layer" neurons. Neurons
have, at any given time, one of several states of activation. In the case of input
neurons, the state of activation is a function of a stimulus. In the case of
interneurons and output neurons, their state of activation is a function of the
states of activation of other neurons that connect to them. The amount of
influence the activation of one neuron can exert on another neuron is
determined by the "weight" of the connection between them. Learning
in neural networks is typically thought to involve changes to the weights of
the connections between neurons (though it may also involve the addition of new
connections and the "pruning" of old ones). In feed-forward networks,
the flow of information is strictly from input to output (via interneurons if
any are present). In recurrent networks there are feedback (or
"recurrent") connections as well as feed-forward connections.
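The contrast between the two architectures can be sketched in a few lines of code. This is a minimal illustration, not a model from the literature: the network sizes, the random weights, and the tanh activation function are all assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny illustrative network: 3 input units -> 4 hidden units -> 2 output units.
W_in = rng.normal(size=(4, 3))         # input-to-hidden connection weights
W_out = rng.normal(size=(2, 4))        # hidden-to-output connection weights
W_rec = 0.5 * rng.normal(size=(4, 4))  # hidden-to-hidden (recurrent) weights

def feedforward_step(stimulus):
    """Feed-forward flow: strictly from input to output."""
    hidden = np.tanh(W_in @ stimulus)
    return np.tanh(W_out @ hidden)

def recurrent_step(stimulus, prev_hidden):
    """Recurrent flow: feedback lets earlier activity shape the new response."""
    hidden = np.tanh(W_in @ stimulus + W_rec @ prev_hidden)
    return np.tanh(W_out @ hidden), hidden

# Present the very same stimulus twice. A feed-forward response would be
# identical both times; the recurrent response differs, because the second
# presentation arrives against the background of leftover hidden activity.
x = np.array([1.0, 0.0, -1.0])
h = np.zeros(4)
out1, h = recurrent_step(x, h)
out2, h = recurrent_step(x, h)
assert not np.allclose(out1, out2)  # same stimulus, different response
```

The final assertion captures what matters for Churchland's account: in a recurrent network, what a stimulus provokes depends partly on what the network was doing just before.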
Let
us turn now to Churchland's account of how the four elements of the dynamical
profile of conscious states might be realized in recurrent neural networks. It
helps to begin with Churchland’s notion of the conceptual interpretation of
sensory inputs and we do well to begin with what Churchland thinks a concept
is. Consider a connectionist network with one or more hidden layers that is
trained to categorize input types. Suppose that its inputs are a retinal array
to which we present grayscale images of human faces. Suppose that its outputs
are two units, one indicating that the face is a male and the other indicating
that the face is female. After training the configuration of weights will be
such that diverse patterns of activation in the input layer provoke the correct
response of “male” to the diversity of male faces and “female” for female
faces. For each unit in the hidden layer, we can represent its state of
activation along one of several dimensions that define activation space. A
pattern of hidden layer activation will be represented as a single point in
this space. This space will have two regions: one for males and one for females.
The center of each of the two regions will constitute an “attractor”
that defines what, for the network, constitutes a prototypical male face or a
prototypical female face, respectively.
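The state-space picture can be sketched as follows. The activation values and prototype points below are invented for illustration (a trained network would produce its own); the sketch shows only the geometric idea that categorization amounts to a point's proximity to a prototype in activation space.

```python
import math

# Hypothetical centers of the two attractor regions in a 3-unit hidden layer's
# activation space. (Values are made up; a real trained network would fix them.)
male_prototype = [0.9, 0.1, 0.8]
female_prototype = [0.2, 0.9, 0.3]

def classify(hidden_activation):
    """Categorize a face by which prototype its activation point lies nearer."""
    d_male = math.dist(hidden_activation, male_prototype)
    d_female = math.dist(hidden_activation, female_prototype)
    return "male" if d_male < d_female else "female"

# A pattern near the male prototype falls in the "male" region of the space,
# and likewise for the female region.
assert classify([0.8, 0.2, 0.7]) == "male"
assert classify([0.3, 0.8, 0.4]) == "female"
```

On this picture, similarity between stimuli is distance between activation points, and a concept is a region of the space organized around its prototype.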
The
addition of recurrent connections allows for information from higher layers to
influence the responses of lower layers. As Churchland puts the point:
This information
can and does serve to 'prime' or 'prejudice' that neuronal population's
collective activity in the direction of one or other of its learned perceptual
categories. The network's cognitive 'attention' is now preferentially focused on
one of its learned categories at the expense of the others. (p. 75)
Churchland is not explicit about
what this might mean in terms of the example of a face categorization network,
but I suppose it might mean that if the previous face was a prototypical
female, then the network might be more likely to classify an ambiguous stimulus
as female. We can construe this as exogenous cueing of attention. Churchland
goes on to further describe shifts of attention in recurrent networks that we
might regard as endogenous. "Such a network has an ongoing control of its topical selections from, and its conceptual
interpretations of, its unfolding perceptual inputs." (p. 76).
Recurrent
connections allow for both a kind of short term memory and the representation
of events spread out over time. In a feed-forward
network, a single stimulus event gives rise to a single hidden layer response
then a single output response. With recurrence however, even after the stimulus
event has faded, activity in lower layers can be sustained by information
coming back down from higher layers and that activity can itself reactivate
higher layers. Also, what response a given stimulus yields depends in part on
what previous stimuli were. Thus recurrent connections implement a memory. Decreasing
connection weights shortens the time it takes for this memory to decay. The
ability to hold on to information over time allows for the representation of events
spread out over time, according to Churchland, and the representation in
question will not be a single point in activation space but a trajectory
through it.
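This recurrent memory can be illustrated with a one-unit toy network. The code is a deliberately minimal sketch under assumed values (the decay weight and stimulus sequence are mine, not Churchland's): each step's activity mixes the current stimulus with a weighted echo of the previous activity, so information persists after the stimulus fades, and the sequence of successive states is a trajectory through (one-dimensional) activation space.

```python
decay_weight = 0.8  # recurrent connection weight; smaller values -> faster decay

def run(stimuli, w=decay_weight):
    """Trace a one-unit network whose activity is sustained by recurrence.

    Each step's activity is the current stimulus plus a weighted echo of the
    previous activity. The returned list of successive states is a trajectory
    through activation space.
    """
    activity, trajectory = 0.0, []
    for s in stimuli:
        activity = s + w * activity
        trajectory.append(activity)
    return trajectory

# A single brief stimulus followed by silence: the echo persists, then fades.
traj = run([1.0, 0.0, 0.0, 0.0])
print(traj)  # activity decays geometrically after the stimulus ends

# With a smaller recurrent weight, the memory decays sooner.
print(run([1.0, 0.0, 0.0, 0.0], w=0.3))
```

The second run makes concrete the claim in the text that decreasing connection weights shortens the time the memory takes to decay.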
Churchland
(2002) does not go into much neuroanatomical or neurophysiological detail, but adverts, though tentatively,
to the account in Churchland (1995) wherein he endorses Llinas'
view whereby consciousness involves recurrent connections between the thalamus (a bilateral structure at the rostral tip
of the brainstem) and cortex.
The
neurophilosophical account of consciousness by Prinz (2000,
2004) is relatively similar and fills in a lot of neuroanatomy
and neurophysiology that Churchland leaves out. Prinz
characterizes the processing hierarchy we discussed earlier and then notes that
the contents of consciousness seem to match representations at the
intermediate level of processing (areas V2–V5), meaning that the contents of
conscious states do not abstract entirely from points of view as do the highest
level of the processing hierarchy but neither are they the same as the
representations at the lowest level. However, Prinz argues
that intermediate representations are alone insufficient for consciousness. They
must additionally be targeted by attention. Prinz
thinks attention is required because of considerations having to do with the
pathology of attention known as “neglect.” Prinz cites Bisiach’s (1992) study of neglect patients who were able to
demonstrate certain kinds of unconscious recognition. Prinz
infers from such results that not only did high-level areas in the visual
hierarchy get activated (they are necessary for the kinds of recognition in
question) but also that intermediate levels had to have been activated. (Prinz seems to be assuming that information can only get to
higher levels of cortical processing by way of the intermediate level, but one
wonders if perhaps the intermediate level was bypassed via a sub-cortical
route.)
Given the large role that Prinz assigns to attention in his theory of consciousness, the question naturally arises as to what Prinz thinks attention is and what it does. Prinz endorses the account of attention by Olshausen, Anderson, and van Essen (1994) whereby attention involves the modulation of the flow of information between different parts of the brain. Further, Prinz endorses the speculation that the attention crucial in making intermediate level representations conscious involves a mechanism whereby information flows from intermediate areas, through high level visual areas (infero-temporal cortex) to working memory areas in lateral prefrontal cortex. Pieces of information in working memory “allow the brain to recreate an intermediate-level representation by sending information back from working memory areas into the intermediate areas.” (2004, p. 210). Prinz (2000) summarizes, emphasizing attention’s role, as follows:
When we see a visual stimulus, it is propagated
unconsciously through the levels of our visual system. When signals arrive at
the high level, interpretation is attempted. If the high level arrives at an
interpretation, it sends an efferent signal back into the intermediate level
with the aid of attention. Aspects of the intermediate-level representation
that are most relevant to interpretation are neurally
marked in some way, while others are either unmarked or suppressed. When no interpretation is achieved (as with fragmented images or
cases of agnosia), attentional mechanisms might be
deployed somewhat differently. They might ‘‘search’’ or ‘‘scan’’ the
intermediate level, attempting to find groupings that will lead to an
interpretation. Both the interpretation-driven enhancement process and the
interpretation-seeking search process might bring the attended portions of the
intermediate level into awareness. This proposal can be summarized by saying
that visual awareness derives from Attended Intermediate-level Representations
(AIRs). (p. 249)
Prinz's account of attention's role in consciousness closely resembles
Churchland’s requirements on consciousness of conceptual interpretation,
short-term memory, and, of course, attention. Tye raises objections to the
sort of view advocated by Churchland and Prinz. Tye
is critical of accounts of consciousness that build in constitutive roles for
attention. Tye’s claim is based on introspective
grounds (1995, p. 6). The thought here is that one might have a pain for a
length of time but not be attending to it the entire time. Tye insists that there
is still something it is like to have an unattended pain. Tye infers from these sorts of considerations
that the neural correlate of visual consciousness is lower in the processing
hierarchy than an attention-based theory would locate it. Tye thus locates the
neural correlates of conscious states in “the grouped array” located in the
occipital lobe and, regarding the phenomenon of blindsight, rejects “the
hypothesis that blindsight is due to an impairment in the linkage between the
spatial-attention system and the grouped array” (Tye 1995, pp. 215–216). Tye
accounts for the retained visual abilities
of blindsight subjects (p. 217) in terms of a subcortical “tecto-pulvinar” pathway from the retina to the superior colliculus that continues through the pulvinar
to various parts of the cortex, including both the parietal lobe and area V4.
Thus Tye seems to think consciousness is in V1. Prinz
(2000) argues against this, citing evidence against locating consciousness in V1 (see
Crick & Koch, 1995, and Koch & Braun, 1996, for reviews). Prinz writes:
As Crick and Koch emphasize, V1 also seems to lack
information that is available to consciousness. First, our experience
of colors can remain constant across dramatic changes in wavelengths
(Land, 1964). Zeki (1983) has shown that such color
constancy is not registered in V1. Second, V1 does not seem responsive to illusory
contours across gaps in a visual array (von der
Heydt, Peterhans, &
Baumgartner, 1984). If V1 were the locale of consciousness, we would not
experience the lines in a Kanizsa triangle. (pp. 245–246)
Turning from
disagreements to agreements, we may note that Churchland, Prinz,
and Tye agree that conscious states are representational states. They also
agree that what will differentiate a conscious representation from an unconscious representation will involve relations that the
representation bears to representations higher in the processing hierarchy. For
both Churchland and Prinz, this will involve actual
interactions, and further these interactions will constitute relations that
involve representations in processes of attention, conceptual interpretation
and short term memory. Tye disagrees on the necessity of actually interacting
with concepts or attention. His account is dispositional, meaning that the
representations need only be poised for uptake by higher levels of the hierarchy.
Turning
to the question of transitive consciousness, we see both
agreements and disagreements between the three authors. Churchland,
Tye, and Prinz all agree that what one is conscious
of is the representational content of conscious states. However, these theorists differ somewhat in what they think the
contents can be. Churchland has the least restrictive view: any content can be
the content of a conscious state. Prinz’s is more
restrictive: the contents are not going to include high level invariant contents. Tye’s is the most
restrictive: the contents will only be first order and non-conceptual. Tye
thinks that they are non-conceptual since he thinks that creatures without
concepts—perhaps non-human animals and human infants—can have states for which
there is something it is like to have them even though they possess no
concepts. Tye says little about what concepts are and for this, among other reasons, it is difficult to evaluate his view. The reason
Tye thinks the contents of consciousness are first-order is because he believes
in the pre-theoretic obviousness of the transparency thesis whereby when one
has a conscious experience, all that one is conscious of is what the experience
is an experience of. Thus if one has a conscious experience of a blue square
one is only aware of what the mental state represents: the blue square. One is
not, Tye insists, able to be conscious of the state itself. So, for example, if
the state itself is a pattern of activity in one’s nervous system, one will not
be able to be conscious of this pattern of activity, but only be able to be
conscious of external world properties that the pattern represents. Mandik
(2005, 2006) argues that Churchland’s (1979) thesis of the direct introspection
of brain states provides the resources
to argue against the kinds of restrictions on content that Tye makes.
I will not spell
out the full argument here, just indicate the gist of
it. Conceptual content can influence what it is like to have a particular
experience. What it is to look at a ladybug and conceive of it as an example of
Hippodamia convergens
is, intuitively, quite different from what it would be like to conceive of it
as one’s reincarnated great-great-grandmother. Thus if a person had the
conceptual knowledge that consciously perceiving motion involved activity in
area MT, and acquired the skill of being able to automatically and without
conscious inference apply that conceptual knowledge to experience, then that
person would be able to be conscious of the vehicular properties of that
experience.
I
turn now to what neurophilosophical accounts have to say about phenomenal
character. I focus, in particular, on the suggestion that phenomenal character
is to be identified with the representational content of conscious states and
will discuss this in terms of Churchland’s suggestion of how qualia should be
understood in terms of neural state spaces.
Our experience of
color provides the most often discussed example of phenomenal character by
philosophers and Churchland is no exception. When Churchland discusses color
qualia, he articulates a reductive account of them in terms of Land’s theory
that human perceptual discrimination of reflectance is due to the sensory
reception of three kinds of electromagnetic wavelengths by three different
kinds of cones in the retina. In keeping with the kinds of state-space
interpretations of neural activity that Churchland is
fond of, he explicates color qualia in terms of points in three dimensional
spaces, the three dimensions of which correspond to the three kinds of cells
responsive to electro-magnetic wavelengths. Each color sensation is identical
to a neural representation of a color (a neural representation of a spectral
reflectance). Each sensation can thus be construed as a point in this
three-dimensional activation space, and the perceived similarity between colors and
the subjective similarities between corresponding color qualia are definable in
terms of proximity between points within the three-dimensional activation
space. “Evidently, we can reconceive [sic] the cube
[depicting the three dimensions of coding frequencies for reflectance in color
state space] as an internal ‘qualia cube’” (1989, p.
105). Churchland thinks this approach
generalizes to other sensory qualia, such as gustatory, olfactory, and auditory
qualia (ibid., pp. 105-106). Bringing this view in line with the thesis of the
direct introspection of brain states, Churchland writes:
The “ineffable”
pink of one’s current visual sensation may be richly and precisely expressible
as a 95Hz/80Hz/80Hz “chord” in the relevant triune cortical system. The “unconveyable” taste sensation produced by the fabled
Australian health tonic Vegamite [sic] might be
quite poignantly conveyed as an 85/80/90/15 “chord” in one’s four-channeled
gustatory system (a dark corner of taste-space that is best avoided). And the
“indescribable” olfactory sensation produced by a newly opened rose might be
quite accurately described as a 95/35/10/80/60/55 “chord” in some six
dimensional system within one’s olfactory bulb.
This more penetrating conceptual
framework might even displace the commonsense framework as the vehicle of intersubjective description and spontaneous introspection.
Just as a musician can learn to recognize the constitution of heard musical chords,
after internalizing the general theory of their internal structure, so may we
learn to recognize, introspectively, the n-dimensional
constitution of our subjective sensory qualia, after having internalized the
general theory of their internal
structure (ibid., p. 106).
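Churchland’s state-space proposal lends itself to a simple computational sketch. In the sketch below, each sensation is a vector of channel activations; the “chord” values are the illustrative ones from the passage just quoted, while the comparison points and the choice of negative Euclidean distance as the proximity metric are my own assumptions, not Churchland’s:

```python
import math

# "Chords" from Churchland's examples: each sensation is a point in an
# activation space with one coordinate per sensory channel.
pink_sensation = (95, 80, 80)              # three-channel color space
vegemite_taste = (85, 80, 90, 15)          # four-channel gustatory space
rose_smell     = (95, 35, 10, 80, 60, 55)  # six-channel olfactory space

def proximity(a, b):
    """Similarity of two sensations in the same space, modeled (on one
    natural reading of the proposal) as negative Euclidean distance:
    higher values mean subjectively more similar qualia."""
    return -math.dist(a, b)

# Hypothetical comparison points, invented for illustration:
salmon_sensation = (90, 75, 78)   # a color "chord" near pink
green_sensation  = (20, 95, 30)   # a color "chord" far from pink

# Perceived similarity tracks proximity in the state space:
assert proximity(pink_sensation, salmon_sensation) > \
       proximity(pink_sensation, green_sensation)
```

On this reading, the claim that similarities between qualia are “definable in terms of proximity between points” amounts to the ordering imposed by such a distance function on the activation space.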
Three particular and related
features of Churchland’s view of qualia are of special note. The first is that
qualia are construed in representational terms. The second, which follows from the
first, is that qualia so construed are not intrinsic properties of
sensations, a result that overturns a relatively traditional view of qualia. The
third is that it allows for intersubjective
apprehensions of qualia. To see these points more clearly it will be useful to
briefly examine the traditional account of qualia noting the role of supposedly
intrinsic properties in the account.
It
is difficult to say uncontroversial things about qualia; however, there are
several points of agreement among many of the philosophers who believe that
mental states have such properties. These philosophers describe qualia as (i) intrinsic properties of conscious states that (ii) are
directly and fully knowable only by the subject who is in those states and (iii) account for “what it
is like” for a subject to be in that state. More briefly, qualia are (i) intrinsic, (ii) subjective, and (iii) there is
“something it is like” to have (states with) them. Less briefly, we can start
with (iii) and work our way to (i) as follows. When I
have a conscious perception of a cup of coffee there is, presumably, something
it is like for me to have that perception and, for all I know, what it is like
for you to have a conscious perception of a cup of coffee is quite different.
Further, for all that you can tell me about your experience, there is much that
cannot be conveyed and thus is subjective, that is, directly and fully knowable
only by you alone. The supposition that qualia are intrinsic properties of
conscious states serves as a possible, though questionable, explanation of
their subjectivity. (See Mandik 2001 for a neurophilosophical
account in which subjectivity is consistent with qualia being extrinsic.) The
inference from subjectivity to the intrinsic nature of qualia may be
articulated as follows. If a property is defined by the relations it enters
into, then it is fully describable in terms of those relations; by
contraposition, if a property is not fully describable by its relations, it is
not defined by them. Since qualia are subjective, they are not fully
describable, and so must be intrinsic rather than relational.
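The inference can be schematized, writing D(x) for “x is defined by the relations it enters into” and F(x) for “x is fully describable by those relations” (the premise labels and symbols are mine):

```latex
\begin{align*}
&\text{P1: } \forall x\,\bigl(D(x) \rightarrow F(x)\bigr)\\
&\text{P2: } \neg F(q) \quad \text{(qualia, being subjective, are not fully describable)}\\
&\text{C: } \;\neg D(q) \quad \text{(by modus tollens: qualia are not relational, hence intrinsic)}
\end{align*}
```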
To construe qualia
in terms of representational content, however, is to construe them as no longer
intrinsic, since typical accounts will spell out representational content in terms
of either (1) causal relations sensory states bear to states of the external
world, or (2) causal relations that they bear to other inner states, or (3) some
combination of the two sorts of relations. In neural terms, a pattern of
activation in a neural network is the bearer of representational content in
virtue of either (1) the distal or proximal stimuli that elicit the activation,
or (2) other patterns of activation that influence it via, e.g., recurrent
connections or (3) some combination of the two.
While it is
relatively clear how Churchland’s view is supposed to rule out the view of
qualia as being intrinsic, it is not so clear that it is equally able to rule out
their being subjective. The passage quoted above contains Churchland’s view
that properties of neural states previously inexpressible could, if one acquired
the relevant neuroscientific concepts and the skill to apply them introspectively,
become expressible. However, this view seems to be in tension with the earlier
mentioned view that concepts influence phenomenal character. The phenomenal
character of an experience prior to the acquisition and introspective
application of a concept will not, then, be the same as the phenomenal
character of an experience after the acquisition and introspective application
of that concept. Thus, even within a general neurophilosophical view of
consciousness, there may remain certain representational contents of neural
states that are directly and fully knowable only by the subject who has them.
Neurophilosophy, then, may be fully compatible with the subjectivity of
phenomenal consciousness.
Bisiach, E. (1992). Understanding consciousness: Clues from unilateral neglect and related disorders. In A. D. Milner and M. D. Rugg (eds.), The Neuropsychology of Consciousness, 113-139.
Bonneh, Y., Cooperman, A., and Sagi, D. (2001). Motion induced blindness in normal observers. Nature 411(6839), 798-801.
Churchland, P. M. (1979). Scientific Realism and the Plasticity of Mind.
Churchland, P. S. (1986). Neurophilosophy.
Churchland, P. M. (1989). A Neurocomputational Perspective.
Churchland, P. M. (2002). Catching consciousness in a recurrent net. In A. Brook and D. Ross (eds.), Daniel Dennett: Contemporary Philosophy in Focus, 64-80.
Crick, F., and Koch, C. (1995). Are we aware of activity in primary visual cortex? Nature 375, 121-123.
Koch, C., and Braun, J. (1996). Towards a neuronal correlate of visual awareness. Current Opinion in Neurobiology 6, 158-164.
Land, E. H. (1964). The retinex. Scientific American 52, 247-264.
Mandik, P. (2001). Mental representation and the subjectivity of consciousness. Philosophical Psychology 14(2), 179-202.
Mandik, P. (2005). Phenomenal consciousness and the allocentric-egocentric interface. In R. Buccheri et al. (eds.), Endophysics, Time, Quantum and the Subjective. World Scientific Publishing Co.
Mandik, P. (2006). The introspectability of brain states as such. In B. Keeley (ed.), Paul M. Churchland: Contemporary Philosophy in Focus.
Milner, A., and Goodale, M. (1995). The Visual Brain in Action.
Olshausen, B. A.
Olson, C., Gettner, S., and Tremblay, L. (1999). Representation of allocentric space in the monkey frontal lobe. In N. Burgess, K. Jeffery, and J. O'Keefe (eds.), The Hippocampal and Parietal Foundations of Spatial Cognition, Oxford University Press, New York, 359-380.
Pascual-Leone, A., and Walsh, V. (2001). Fast backprojections from the motion to the primary visual area necessary for visual awareness. Science 292, 510-512.
Pöppel, E., Held, R., and Frost, D. (1973). Residual visual functions after brain wounds involving the central visual pathways in man. Nature 243, 295-296.
Prinz, J. (2000). A neurofunctional theory of visual consciousness. Consciousness and Cognition 9, 243-259.
Prinz, J. (2004). Gut Reactions.
Quine, W. (1969). Epistemology naturalized. In Ontological Relativity and Other Essays.
Rosenthal, D. (1993). State consciousness and transitive consciousness. Consciousness and Cognition 2(4), 355-363.
Stoerig, P., and Cowey, A. (1992). Wavelength discrimination in blindsight. Brain 115, 425-444.
Tye, M. (1995). Ten Problems of Consciousness: A Representational Theory of the Phenomenal Mind.
von der Heydt, R., Peterhans, E., and Baumgartner, G. (1984). Illusory contours and cortical neuron responses. Science 224, 1260-1262.
Weiskrantz, L. (1995). Blindsight: not an island unto itself. Current Directions in Psychological Science 4, 146-151.
Weiskrantz, L. (1996). Blindsight revisited. Current Opinion in Neurobiology 6(2), 215-220.
Zeki, S. (1983). Colour coding in the cerebral cortex: The reaction of cells in monkey visual cortex to wavelengths and colour. Neuroscience 9, 741-756.
PETE MANDIK
Biographical sketch:
Pete Mandik is Cognitive Science
Laboratory Coordinator, Associate Professor, and Chairman of the Philosophy
Department of William Paterson University,