Abstract and overview
Drawing primarily on the work of Thomas Metzinger, but other philosophers as well, this paper takes a representationalist approach to explaining consciousness: how might the 1st person, subjective, private phenomenal states that constitute consciousness be entailed by being a physically instantiated representational system?
Section 1 sets forth what I call epistemic perspectivalism, that there are two distinct and parallel representational, knowledge-seeking (hence epistemic) perspectives on the world, including human behavior: the 1st person subjective perspective tied to particular representational systems, which somehow results in the existence of phenomenal, qualitative consciousness, and the 3rd person, intersubjective perspective, which results in conceptual, quantitative science. Regarding mental causation, I suggest that since qualitative states don't exist intersubjectively, they logically can't play causal roles in 3rd person accounts of behavior, even though from a 1st person conscious perspective they necessarily appear causally effective. From a 3rd person explanatory perspective, consciousness isn't even epiphenomenal. The puzzle of mental causation arises from the mistake of trying to combine two parallel but not causally interacting epistemic, representational perspectives.
Section 2 seeks to narrow the range of explanatory options for consciousness, arguing that 1) standard scientific causal, combinatorial, or emergentist accounts don't work due to the categorical privacy of consciousness; 2) that the identity of the mental and physical is implausible for the same reason; and 3) that substance dualism mistakenly tries to combine the two epistemic perspectives in explaining behavior. I conclude that we need some sort of non-causal entailment relations to explain consciousness.
Section 3 lays out the prima facie case for representationalism: the evidence thus far suggests that consciousness correlates with certain higher-level representational capacities that serve flexible behavior. Conscious phenomenology closely co-varies with neurally instantiated representational functions, and both arguably carry informational content about the world. Both are central to explaining flexible behavior, one from the 1st person perspective, the other from the 3rd person perspective.
Section 4 analyzes the mental-physical distinction (MPD) as a representational phenomenon derived from other adaptive representational achievements on the part of behaviorally flexible systems: distinguishing system from non-system, inside from outside, and system-dependent representations from the system-independent world being represented. The MPD, originally an epistemically useful categorization imposed on our everyday experience, results in folk metaphysical dualism, which in turn generates the "hard problem" for philosophy of mind: how do categorically physical systems "give rise" to categorically mental consciousness?
Drawing extensively from Thomas Metzinger (see his books Being No One and The Ego Tunnel), section 5 adduces both logical and functional/adaptive considerations that suggest there might be non-causal entailments from being a certain sort of representational system to being phenomenally conscious. To oversimplify: to be qualitative for the system is perhaps just for some of its internally represented informational content (e.g., sensory and perceptual) to be representationally and cognitively impenetrable to the system. As it's often described, such information becomes phenomenally transparent for the system - content-only elements of representation. Since phenomenal qualities are always experienced in unified ensembles in consciousness, never singly, the system's informational integration processes likely play a role in making such information impenetrable. Where I part company from Metzinger and most other representationalist philosophers is in doubting the causal efficacy of phenomenal consciousness per se (as opposed to the representational functions that might entail it) in 3rd person accounts of behavior, for reasons set forth in sections 1 and 2.
Section 6 recapitulates the main conclusions of the paper by listing possible explanatory virtues of this non-causal account, but it also warns of possible skeptical counter-claims raised by the thesis of epistemic perspectivalism.
1: Epistemic perspectivalism and a conclusion about mental causation
Consciousness defined. Consciousness is a subjective, private, experiential reality whose basic elements are irreducibly qualitative (e.g., pain, red) and unified into a coherent phenomenal gestalt of self-in-the-world. Note that phenomenal connotes appearance, where appearance contrasts with what's real, what's self-existent behind the appearances. But experience is our reality; as conscious subjects, we can't transcend experience to access something more real. Experience per se is untranscendably "system-real," that is, a private representational reality for us as individual conscious systems, even though we realize that, as a window on the world, it can be mistaken because it is an appearance, not the external reality it models. See the third paragraph ("The explanatory target") of Part 5 for more on defining consciousness.
The hard problem. How to explain phenomenal, qualitative consciousness within a unified, philo-scientific 3rd person theory that connects the mental and physical.
Epistemic perspectivalism** – a representational thesis. Reality is only known under a description, as modeled by a representational system (RS) with a limited perspective; reality is never grasped directly. Reality appears for a system by virtue of the system's representational, epistemic capacities. The commonsense dual ontology of reality - mental vs. physical - is generated by being a knowledge-seeking (epistemic, representational) system, where the mental is what the system categorizes as its own internal representational operations, plus the self which (apparently) produces or witnesses them, and the physical is what's categorized as external reality, both directly presented and as represented by thoughts, imagination, etc. (see Section 4 on the mental/physical distinction (MPD) as representational). This categorization operates on basic representational elements – experienced as qualia – that the system can't transcend, second-guess or change, hence are real for it alone, or as I put it, system-real. Prior to categorization these elements are neither mental nor physical as far as what they represent for the system (e.g., sometimes you can't tell whether what you're experiencing is a hallucination or not); they simply constitute a representational reality for the system - what it can't second-guess. After categorization, they constitute a represented reality for the system: what the system understands reality to be as modeled by its representations. See Section 5 on the hard problem for speculations on how the system's phenomenal, representational reality – consciousness – might be entailed by being a representational system.
**I call this epistemic perspectivalism because it seems descriptive of the thesis presented here, realizing that others may have used the phrase in different contexts.
Two epistemic perspectives on reality. The 1st person organism/system perspective, which somehow results in or entails consciousness (the hard problem), and the 3rd person collective perspective, which results in science.
1st person subjective epistemic perspective. The organism/system (e.g., human) has complex representational capacities, in our case acquired via evolution, which somehow entail phenomenal, qualitative consciousness – a subjective mental reality. Consciousness is categorically private: experiences are always someone’s, tied to particular representational systems (RSs); experiences aren’t publicly available – no one can literally feel my pain. Nor has anyone ever observed a pain or any other conscious state; we don’t even observe our own experiences; rather we subjectively consist of them as we observe the world (see Killing the observer). Experiences are intersubjectively invisible, unmeasurable or, even stronger, intersubjectively non-existent. The privacy of consciousness is a 3rd person fact about it that helps to generate the hard problem since it’s difficult to claim an identity between something categorically private (one’s experience) and anything that’s public (like one’s brain) – see Section 2 below on explanatory non-starters. The other most difficult aspect of the hard problem is the qualitativeness of the basic elements of consciousness. It could be that both privacy and qualitativeness are entangled entailments of being a sufficiently complex RS - see Section 5.
3rd person intersubjective epistemic perspective. Public, objective, scientific models of reality abstract away from qualitative subjective experience in forming conceptual and quantitative descriptions of the world. Qualia necessarily drop out when forming concepts and constructing quantitative and causal models in the collective epistemic perspective we call science. The objective world is just what’s intersubjectively available for observation. Since consciousness (phenomenal, qualitative experience) isn’t intersubjectively available, no qualitative states (e.g., my red, my pain) play a role in descriptions of the intersubjective, objective world – science is perforce restricted to a purely 3rd person, objective ontology. (Note: this leads to ruling out, or at least doubting, certain explanatory possibilities for consciousness, namely those involving mental-physical identity, interactionism, causation, combination, and emergence; see Section 2 on explanatory options.)
Conclusion re mental causation: consciousness not even epiphenomenal. Epistemic perspectivalism (EP) can be applied to the puzzle of mental causation, of how conscious states could possibly influence action when neural processes are doing all the causal work (causal closure of the physical world). The mistake, EP says, is to try to combine two parallel epistemic perspectives on reality when explaining behavior. Consciousness isn't available to 3rd person accounts since it isn't intersubjectively observable or measurable. So conscious experiences aren't even epiphenomenal: they simply don't and can't play a role in intersubjective descriptions and explanations of behavior since they don't exist intersubjectively. For something to be epiphenomenal, it has to be present in a situation but causally disconnected, and qualia simply aren't present for science. But, from our 1st person perspectives qualia are inescapably explanatory: I wince, nurse my arm, because I feel pain. So we naturally try to incorporate this 1st person account into the 3rd person account, and then conclude either that 1) qualia are epiphenomenal for behavior since neurons are doing all the causal work, or 2) that non-physical consciousness has some mysterious capacity to influence the physical brain (dualist interactionism). But on EP neither is the case. Rather, we have two immiscible (uncombinable, unmixable) epistemic perspectives on behavior, each with its own privileged ontology, one mental - private subjective qualitative states - and one physical: publicly observable, intersubjective entities and processes, characterized by concepts and quantities, not qualities. We needn't worry about consciousness not being causally effective on 3rd person accounts because the physical processes correlated with consciousness are effective. David Rosenthal is also skeptical about the causal contribution of consciousness (see here), but for different reasons.
See Respecting privacy: why consciousness isn't even epiphenomenal for an elaboration of the argument in this section.
2: Options for explaining consciousness
Explanatory priority of 3rd person objectivity. The 3rd person perspective wants a unified account of existence, hence wants to explain consciousness from an objective standpoint: how do intersubjectively available, therefore objective, entities give rise to, entail, or connect with private subjective phenomenal states – conscious experiences? Note that the 3rd person account necessarily gives ontological priority to what’s intersubjectively available, in that it takes physical, objective things as uncontroversially self-existent. So it naturally wants to reduce the mental to the physical, or show how the mental (private phenomenology, qualia) is produced by or otherwise entailed by the physical (public objects extended in space, such as brain processes).
What we’re shooting for. We want a satisfying explanation, one that demonstrates a transparent, intuitive, necessary entailment from 3rd person reality to 1st person reality, closing the explanatory gap – or, if that’s not forthcoming, one that explains what the relation, if any, is.
Naturalistic evidential constraint. The explanation must conform to and explain scientific, intersubjective evidence and findings re consciousness, e.g., those coming from neuroscience and neurophilosophy. No invocation of the supernatural or other unevidenced processes or entities counts as explanatory. See here re scientific explanatory adequacy. If no explanation is forthcoming, we must admit ignorance for the time being, not appeal to unexplained explainers!
Standard types of explanatory relations. 3rd person philo-scientific explanations of phenomena usually involve causal, combinatorial or emergentist entailments from less complex to more complex phenomena, but:
Consciousness is essentially private and subjective. Phenomenal states constituting consciousness exist only for a representational system; no conscious states have ever been intersubjectively observed, therefore:
Not a causal relation: mental/subjective not causally produced or generated by physical/objective since that would make it public; causal mechanisms are non-starters, e.g., no “2nd transduction” into objective mental stuff (Dennett). E.g., Tononi says "It is hard to imagine what may be the complexity of the quale generated by a sizable portion of our brain." (Consciousness as integrated information: a provisional manifesto, p. 239). But the brain isn't generating anything qualitative as an effect, only integrating information for the system. Tononi might well agree, and reply that "generate" is just a figure of speech.
Not a combinatorial relation: no locatable elementary “quanta” or “units” of the phenomenal that get combined into personal consciousness; panpsychism a non-starter. No observational evidence for panpsychism as yet.
Not emergence: emergentist explanations involve part/whole relations, but the parts and wholes are of the same (public) kind, and underlying mechanisms and relations are in principle discoverable/observable; but consciousness is not public, there are no obvious public object relations out of which something categorically private could arise as an emergent phenomenon.
Not identity: identity claims between private qualitative experience and public, non-qualitative states of affairs are very difficult to sustain, so physicalism, reductive or non-reductive, is a tough sell, e.g., Jaegwon Kim explores the limits of physicalism here.
Not interactionist dualism: substance dualism supposes consciousness is objective, on the same causal playing field as physical objects, but this seems wrong according to epistemic perspectivalism (see Part 1 above). Interactionism violates causal closure, plus there’s no evidence for the causal contribution of qualia in 3rd person explanations, e.g., no evidence for violations of energy conservation, as hypothesized by Elitzur in Consciousness makes a difference: a reluctant dualist's confession.
Conclusion re explaining consciousness. 3rd person theory needs, ideally, some other sort of non-causal, non-combinatorial, non-emergentist entailment to understand how qualitative consciousness comes to exist only for a system that’s part of the natural, physical world. A possible representational explanation comes from epistemic perspectivalism: qualia, the basic qualitative elements of conscious mental reality, are non-causally entailed by being a sufficiently recursive and ramified representational system (RS) – see Section 5.
3: Motivating representationalism
Prima facie plausibility of representationalism. Given the robust evidence of its correlation with neurally instantiated higher-level representational functions, consciousness is likely an entailment of being a complex representational system (RS), see here for evidence and citations. But the exact nature of that entailment is difficult to specify, hence the "hard problem" of consciousness.
Neural activity sufficient for consciousness. Dreams, especially lucid dreams, are good evidence that brain processes are sufficient for consciousness. There is no need for online sensory or behavioral interaction with the world for full-blown conscious and self-conscious states to exist (contra Noë in Out of Our Heads). The brain has the representational resources to support phenomenology on its own, so consciousness likely supervenes on internal states. Metzinger, Revonsuo and others suggest that consciousness is akin to a virtual reality, e.g., Metzinger says waking experience involves “dreaming vigorously at the world.” The robust supervenience relations pretty much guarantee that there are stable neural processes underlying and co-varying with stable qualia.
General claim about self-regulation and self-representation. Having an internal representational self-model is necessary for self-regulation, according to Metzinger, citing Conant and Ashby, 1970, “Every good regulator of a system must be a model of that system.” International Journal of Systems Science 2: 89-97. Reprinted in G. J. Klir, ed., (1991), Facets of System Science. New York: Plenum Press.
Representation as anticipation in service to behavior control. The brain evolved representational capacities in response to behavior control exigencies. Internal states that track the environment are necessary for flexible behavior that responds to environment under time pressure. Brain is a predictor of events (Jeff Hawkins: On Intelligence, Andy Clark: Whatever next: predictive brains, situated agents, and the future of cognitive science), hence needs an internal model that anticipates the world, otherwise response time is too slow, action too late. The brain looks to confirm/disconfirm the currently active predictive model using sensory input.
Co-variation of intentional brain states and conscious states. Particular neural sub-systems at various levels, e.g., the visual cortical areas, support particular feature recognition, e.g., colors, edges, movement, faces, objects, binding, highest level gestalts (workspace) – all these are intentional, information-carrying, hence representational systems. We observe close co-variation between certain classes of physically instantiated intentional states and reported phenomenal states, e.g., color, taste, smell, proprioception, plus bound unified gestalts (objects, self, world). Lesion studies are good evidence for this. See Metzinger's Being No One for extended discussion of neuroscientific evidence, especially physically based psychopathologies, for instance as researched by Ramachandran.
Qualia as 1st person representational content. Subjective phenomenal states (qualia) are plausibly construed as representational, informational, intentional content about the organism and world, content which is normally associated with high level behavior (Tye: Ten Problems of Consciousness, Dretske: Naturalizing the Mind, Lycan: Consciousness, Metzinger: Being No One and The Ego Tunnel, Tononi's Integrated Information Theory (IIT), and other representationalist philosophers). According to IIT, phenomenal qualities might be specified by the amount of integrated information in the system and the dimensional characteristics of that information as represented in its sensory/perceptual state-spaces. Consciousness is thus the 1st person indicator or correlate of behavioral control that lines up with the 3rd person neural story of representation, but it isn't causally active in the 3rd person story since it doesn’t appear in it, according to epistemic perspectivalism (the conclusion of Section 1 above).
Conclusion re representationalism. Representationalism is a plausible, evidence-based hypothesis that connects brain functions and phenomenology. They co-vary closely and both carry intentional, representational content about the world and thus are central to explaining flexible behavior, from both the 3rd and 1st person epistemic perspectives. But what is the connection, precisely? That’s the hard problem, see Part 5.
4: The mental/physical distinction as a representational phenomenon
Consciousness as essentially qualitative. Qualitative states, ordinarily bound into integrated objects, scenes and events with the phenomenal self at the center, are all there is to consciousness. There is no non-qualitative contrast set for qualia within experience. If you subtract all qualitative experience, would there still be something it is like to think, believe, or be the subject of other non-sensory, non-perceptual states? Arguably not. Conscious episodes of thinking involve “phonemic imagery” (Velmans in Journal of Consciousness Studies - [need citation]), conscious episodes of believing involve qualitative sensations (some vivid, some subtle) of conviction, of preparing to avow and assert, of vindication, etc. If non-qualitative conscious states do exist, then arguably they don’t pose as tough an explanatory problem for science, since science could show them as straightforward entailments of other non-qualitative phenomena, e.g., brain states. But to explain how qualitative states come to exist only for a system – the hard problem - is not straightforward.
The mental-physical distinction is generated within consciousness. Although qualitative experience is all we have as conscious subjects, we pre-theoretically, folk-metaphysically divide the world given to us in experience between what’s mental (my private thoughts, emotions, and sensations that no one else has access to) and what’s physical (my body and brain, other people, external objects, all of which are possible objects of observation by others). Prior to science and the advent of the concept of consciousness as a categorically mental phenomenon (Gamez, ch 2), we are naïve realists (Metzinger) about both the mental and physical, not realizing that what’s commonsensically physical is a categorization of part of our experience as the directly given presence of external, objective reality. Likewise the commonsensically mental is a categorization of part of our experience as the directly given presence of internal, private mental reality, including the internal, non-extended phenomenal self. Most of the time as conscious subjects we cannot step outside this categorization: the external world as we represent it usually seems directly given to us as a physical, system-independent reality, and our internal states as we represent them are directly given to us as a mental, system-dependent reality. We don't ordinarily experience our experience as a model (although in lucid dreams we do, see here). Moreover, as conscious beings we never step outside our phenomenal, system-real worlds – the private representational reality of qualitative experience – within which this categorization is generated. As Metzinger puts it, we're stuck inside our ego tunnels.
The mental-physical distinction as representational. The mental-physical distinction (MPD) is a represented distinction that gets its content from more basic representational achievements of behaviorally complex representational systems (RSs) such as ourselves. The RS discriminates (represents the difference) between self and world, between what’s inside and outside the system, and between representational states and what’s represented (a representationally recursive achievement). These discriminations have their neural correlates (with which evolution did its work), but are also phenomenally real for us, hence part of commonsense folk metaphysics. In what follows I’ll refer both to the 1st person experience of the MPD and the distinctions from which it gains its meaning, and to the underlying 3rd person physical system that makes these distinctions and (somehow) entails conscious experience itself.
The system’s self-world distinction draws a boundary between the system and the world, which then permits making the internal-external classification. From the 1st person conscious perspective, a class of events is experienced as happening “in here” (e.g., my thoughts, sensations, emotions) that aren’t visible from “out there.” For the subject, these internal events, in contrast to events classified as external, aren’t spatially extended, are private, and seem to be productions of and thus dependent on the system itself as opposed to the outside world. Thus is born the category of the mental as contrasted with the physical: the two categories arise mutually as a function of a classification within everyday experience, and we (perhaps naively) construe them as truly existing separate ontologies that objective reality contains. Thus arises traditional dualist folk metaphysics: elevating a useful represented distinction into a deep, and perhaps deeply mistaken, metaphysical divide.
Further, the system achieves what Metzinger describes as a crucial step in higher level cognition: it classifies some of its mental episodes (e.g., a thought or mental image about what will happen next) as fallible models, in contrast to what is modeled. Thus is born the commonsense category of aboutness or intentionality: we experience the fact (non-inferentially believe) that at least some of our internal episodes function as representational tools in modeling the external world. So the mental realm, that subset of conscious events experienced as internal, private, non-extended, and system-dependent, gains the further characteristic of being experienced as (non-conceptually categorized as) representational. By contrast (note the omnipresence of the contrast effect in this account), the physical gets classified in ordinary experience as that independently existing (system-independent) realm that gets represented by internal, system-dependent mental episodes. Bottom line: the MPD (internal vs. external, private vs. public, system-dependent vs. system-independent, non-spatial vs. spatial, representation vs. represented) is generated by being a sufficiently complex RS. There is both the 1st person non-conceptual experience of the MPD (which motivates the commonsense dualist folk-metaphysics of mental and physical ontologies) and its underlying 3rd person representational basis in the physical system as described by science. Note that these are the respective domains of the two epistemic perspectives, one personal, the other interpersonal, as suggested in section 1. There are (perhaps) no representation-independent criteria for what’s mental vs. what’s physical, in which case we're dealing with an epistemic dualism, not ontological. This suggests that the attempt to derive the mental as a generated product of the physical - providing some sort of causal story for how the mental arises as categorically non-physical phenomenon - might be a fool's errand.
The discovery of the hard problem. Folk dualism supposes that we have direct contact with the external, physical world (including the body) in that some experience is naively classified as being physical: we are thus naïve realists about the external world (Metzinger). We also experience being in direct contact with the mental realm, e.g., our thoughts and emotions, so we are also mental realists, and perhaps naïve in this as well. The experience of direct givenness of both the mental and physical stems from the fact that as conscious subjects we don't stand in an observational relationship to experience itself, we consist of it, so what's categorized in experience as mental or as physical necessarily seems directly present to us. In seeking a coherent picture of reality, philosophy (and then with science, hence “philo-science”) inherits folk dualism, but goes further: according to a widely held theory (as opposed to pre-theoretical commonsense) conscious experience is entirely mental. The theory says that in experience we don’t have direct contact with the physical external world as the naïve realism of folk metaphysics would have it. Instead, consciousness mediates that contact. What we as conscious subjects naively categorize as the world directly presented to us, philo-science says, is actually a mental phenomenon: it’s our experience of the world being directly presented to us as experiencing subjects - where that subject is simply another content of experience. Science then discovers that conscious experience, now a categorically mental phenomenon, is very closely correlated to certain neural goings-on. The question then naturally arises: how can a categorically physical system – the human organism – give rise to something categorically mental, namely conscious experience? This is the hard problem.
But keep in mind that, according to the philo-scientific representationalist account, the original mental-physical distinction is epistemic, not something inherent in reality in advance of our neurally-instantiated representation of it.
5: The hard problem – how consciousness might be entailed by being a representational system
From a representational categorization to a metaphysical divide. In trying to solve the hard problem, philo-science is trying to conceptually unify, by means of a theory, a pretheoretical represented categorization of reality divided into the mental and physical (the mental-physical distinction, MPD), a categorization generated by our being knowledge-seeking systems that to flourish must distinguish self from world, inside from outside, and representations from what’s represented (see the previous section). [Pessimist says: this might explain why this project is impossible: because the categorization is so categorical, there are exactly no conceivable entailments from one to another! The MPD is a basic non-conceptual categorization of experience which won't admit of a conceptual reconciliation.] Notice that the MPD, according to the section above, doesn’t result from the physical causally producing the mental, rather it’s a representational result, a categorization that gets imposed on our experience and that presumably has its neural correlates. But this representational categorization gets construed by folk naïve realism, and then by much traditional philosophy, as a metaphysical claim: that there exist in reality two categorically different sorts of things or substances, mind vs. body, mental vs. physical, when in fact the original distinctions were strictly practical and epistemic, made on behalf of a system trying to make a living in the world. There’s a shift from an adaptive representational, epistemic categorization, experienced in consciousness as what’s inside and non-extended vs. outside and extended, to a metaphysical dualism that represents consciousness itself as categorically mental. The dualistic metaphysics then generates the vexed question of the relation between the mental and physical conceived of as two different ontologies, one immaterial, one material.
Third-person accounts naturally gravitate toward explanations of consciousness in terms of causal, combinatorial or emergentist paradigms involving physical (intersubjectively observable) entities and processes, but as seen in Section 2 these likely don’t work. So what 3rd person story might work?
Epistemic perspectivalism, round 2: being a representational system as entailing consciousness. The story I favor, as foreshadowed above, is representational. Following the work of Metzinger, Van Gulick, Dretske, Tye, Lycan and others exploring representational accounts of consciousness, I suggest that the appearance of a private phenomenal reality for the system – conscious experience – might be entailed by certain of the system’s representational characteristics and capacities. Where I part ways with most of the above philosophers is in denying the causal contribution of consciousness in 3rd person accounts of behavior, as concluded at the end of Section 1 above.
The explanatory target. In order to minimize confusion, I will spell out the characteristics of our explanatory target, phenomenal consciousness. The basic elements of consciousness are irreducibly qualitative (e.g., pain, red), in that they possess an essential character (the redness of red, the painfulness of pain) that itself can’t be further qualified or broken down. This makes such qualities (qualia) ineffable: not amenable to further description. But these basic elements never appear alone in consciousness (Metzinger); they are grouped into objects, scenes, events, and a coherent experiential gestalt of self-in-the-world. At the center of our phenomenal consciousness (most of the time) is the phenomenal self, the hard-to-pin-down but very (system) real, qualitative sense of being an experiencer to whom experience is presented. But this experienced self is itself part of, not separate from, experience (Clark, Metzinger, see here). [note this sense possibly just being the coordinated sum of bodily sensations]
As conscious subjects, we are not in an epistemic relationship with consciousness, that is, we are not in a position to observe or learn about our experience from an independent vantage point, the way we can learn about objects in the world by moving around them to gain different observational perspectives. Rather, we consist of experience. Experience is a bounded, self-contained phenomenal world – an ego tunnel as Metzinger puts it. This world, as we’ve seen, is categorically private; it exists for the system alone, and it presents itself as untranscendable representational reality for the system; it is system-real. In a lucid dream, I know that my experience of flying is hallucinatory – I’m not really flying – but I’m not in a position to doubt or transcend the phenomenal reality of the sensations I have while undergoing this experience. As a subject, I can’t escape the brute fact that I’m experiencing whatever it is I am experiencing (whether or not the experience is classified as veridical or hallucinatory), and so in this sense experience presents itself as, and is, an indubitable and unmodifiable system reality for me and for me alone, the representational system (RS).
In speculating about why phenomenal reality appears only for the RS, I will adduce two sorts of considerations, logical and adaptive. I don’t pretend that what follows is conclusive about explaining consciousness, but it might be a stab in the right direction.
Logical entailments of being an RS. Below are listed three interconnected, overlapping reasons why being a complex representational system might logically entail the appearance, for that system, of a phenomenal reality as an ensemble of irreducible qualitative elements. These considerations, I suggest, at least get us in the vicinity of qualia.
1. Root representational vocabulary. Logically, a representational system must have a basic, irreducible, reliable set of elements that can’t be further broken down, a root combinatorial vocabulary which gets leveraged in representing complex states of affairs [language, math are such systems, does the brain or its sub-systems have basic neural vocabularies?]. Representational systems need basic inter-defined, contrastive elements in some format with which to operate, and these must be non-decomposable and unmodifiable for the system. The properties of the basic elements are arbitrary for the RS, hence not represented facts for it; rather they are used in describing facts about the world it represents using those elements (e.g., the apple is red). Qualia seem to be just such root representational, contrastive, content-bearing elements within conscious experience.
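This point – that the intrinsic properties of the root vocabulary are arbitrary for the RS, with only reliable contrast doing the representational work – can be sketched in code. This is a toy illustration of mine, not anything from the paper or from Metzinger; the names RED, GREEN, and same_color are hypothetical:

```python
# Toy sketch (my illustration): the basic tokens of a representational
# system are arbitrary and non-decomposable -- the system uses them only
# contrastively (same vs. different), never by inspecting their innards.

RED, GREEN = object(), object()   # arbitrary sentinel tokens; their internal
                                  # properties play no representational role

# Facts about the world are stated USING the tokens (e.g., "the apple is red");
# the tokens themselves are not represented facts for the system.
world_facts = {"apple": RED, "leaf": GREEN}

def same_color(a, b):
    # All the system can do with its root vocabulary: register contrast.
    return world_facts[a] is world_facts[b]

assert same_color("apple", "apple")
assert not same_color("apple", "leaf")
```

Swapping RED and GREEN for any other pair of distinct tokens leaves every representable fact intact, which is the sense in which the vocabulary is arbitrary: only reliable co-variation and contrast matter.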
2. Representational self-limitations. Logically, the RS can’t be in a direct representational, epistemic relationship with its own front-line, basic elements of representation; it doesn’t and can’t (directly) represent them (they represent the world for the RS), so they won’t be facts about which it could be wrong or right, but just counters (pieces) in the game of representation (see here). It can’t misrepresent them, hence they are direct irreducible givens for the RS as it constructs its reality model (Metzinger). This again is exactly how qualia (red, pain, sweetness – the basic elements of experience) present themselves in consciousness: as impenetrable, untranscendable elements about which we have no say, that can’t be second-guessed, that are “irrevocable” (Ramachandran and Hirstein in 3 laws of qualia, p. 437). Qualia are non-inferential, immediately given, non-conceptual (Tye), system-real elements that get combined in our experience of objects, scenes and the world, whether veridical or not. What’s most representationally real for the system is just that which the RS can’t transcend in its representational work, what it can’t modify or control as an element in its representational economy.
3. Limits of resolution. The RS will necessarily have limits of resolution as entailed by behavioral requirements and physical limitations – there’s no need or capacity to discriminate beyond a certain level. This suggests a basic neural vocabulary for each sensory channel keyed to tracking differences in the external world and body as determined by the resolution of neurally instantiated state-spaces, e.g., hue discrimination, pitch discrimination, touch discrimination. The resolution is also set by constraints of smallest physical units being responded to, e.g. photons, wavelengths, chemical compounds. These limits of resolution parallel and perhaps entail the irreducible phenomenal elements of conscious experience that get defined in experimental setups by just noticeable differences (JNDs) between consciously experienced colors, pressures, temperatures, tastes, smells, etc.
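The idea of finite resolution can be sketched as simple quantization: a continuous stimulus dimension gets mapped onto a fixed set of discrete codes, an analogue of just noticeable differences. This is a toy illustration of mine (the function name, bin count, and wavelength range are hypothetical, not empirical psychophysics):

```python
# Toy sketch (my illustration): a sensory channel with finite resolution
# quantizes a continuous stimulus dimension into discrete codes. Stimuli
# falling in the same bin are indistinguishable for the system -- a crude
# analogue of a just noticeable difference (JND).

def quantize(wavelength_nm, lo=380.0, hi=750.0, levels=16):
    """Map a continuous wavelength onto one of `levels` discrete hue codes."""
    if not lo <= wavelength_nm <= hi:
        raise ValueError("stimulus outside the channel's sensitivity range")
    step = (hi - lo) / levels                  # width of one JND-like bin
    return min(int((wavelength_nm - lo) / step), levels - 1)

# Differences smaller than one bin simply don't exist for the system:
assert quantize(500.0) == quantize(510.0)   # same hue code
assert quantize(500.0) != quantize(600.0)   # discriminably different
```

The discrete codes at the bottom of such a hierarchy are, for the system, not further analyzable: everything within a bin collapses to one value, mirroring the irreducibility of the basic phenomenal elements.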
Conclusion re logical entailments. Any representational system (RS) will have a bottom-level, not further decomposable, unmodifiable, epistemically impenetrable (unrepresentable), hence qualitative (non-decomposable, homogeneous) and ineffable set of representational elements. These elements are arbitrary with respect to what they represent, since the RS only needs reliable co-variation, not literal similarity. They therefore appear as irreducible and indubitable phenomenal realities for the RS. [Skeptic says: “Therefore?? The entailment from physically instantiated representational elements to phenomenal elements still seems mysterious!" Indeed, but perhaps the adaptive considerations below will help reduce the obscurity.] Of course, these logical criteria by themselves imply that consciousness might attend very simple representational systems like thermostats. Perhaps we can’t empirically rule out such consciousness, but we have no prima facie good reason to think thermostats entertain phenomenal states, given the empirical evidence that consciousness correlates with certain higher-level capacities, see here. The adaptive considerations set forth below will narrow the range of plausible candidates for consciousness to more complex systems such as ourselves and some other animals.
Functional and adaptive considerations entailing consciousness for a representational system. Below are several overlapping, mutually reinforcing considerations, taken mostly from Metzinger (Being No One, The Ego Tunnel), that suggest why a phenomenal system-reality might attend a sufficiently complex, behaviorally flexible RS. Note that evolutionary adaptationist proposals for consciousness and qualia per se (the phenomenal level, as Metzinger calls it) face the problem of trying to incorporate phenomenal mental reality as a causal factor in 3rd person accounts, which generates the puzzle of mental causation, hence I’m skeptical of them, see sections 1 and 2 above. More likely and logically (perhaps), as argued here, the qualitative can be understood as a non-causal entailment of representational functions that constitute the organismic epistemic perspective. Consciousness doesn’t add to causal powers of the physically instantiated representational process in 3rd person physical accounts, but phenomenal episodes necessarily appear causal within the 1st person qualitative system reality (see Respecting privacy).
1. Integrative functions of the global workspace (Baars). Higher-level integrative functions instantiated in the thalamo-cortical loop subserve multi-modal representations that are associated with behavior only possible when conscious (see here, but note that it may not be consciousness per se that’s doing the work in the 3rd person story – see parts 1 and 2 of this paper and David Rosenthal here). These functions perhaps need chunked, data-reduced lower-level content elements as inputs, and this might explain the impenetrability and qualitative homogeneity of the elements that make up the unified conscious gestalt produced by informational binding. That consciousness always seems to involve a unified gestalt of many irreducibly qualitative elements, and that this is associated with the capacity for flexible, novel and contingency-anticipating behavior, limits ascriptions of consciousness to complex systems with these sorts of integrative capacities. Thermostats don’t have these capacities, so very likely aren’t conscious and need not be treated as possible subjects of suffering. Rats likely do, and are, so handle with care.
Note again that having qualitative states is strongly linked to flexible behavior, suggesting that whatever the entailment relation between function and phenomenology is, it has something to do with higher-level integrative functions operating on sensory/perceptual inputs and memory. Perhaps it’s the fact that such functions need inputs that are strictly informational content, not second guessable by virtue of being “seen” as representations, as suggested by several of the considerations to follow. As Metzinger hypothesizes, perhaps the higher-level integrative functions strip the inputs of any information about their lower-level processuality, about their representational origins, thus resulting in phenomenal transparency (see next paragraph). Integration might take neurally instantiated state-space values (e.g., in the sensory coding of light wavelengths and intensity, sound wavelengths and pressure, tactile pressure, chemical compound sensitivities expressed in taste and smell, tissue damage, reward states) as impenetrable, unmodifiable content inputs, hence they become pure homogeneous informational content, appearing to the system as basic qualia.
2. Transparency as efficient data-reduction. Metzinger (underlining added): “Transparency of internal data-structures [transparency: the system only “sees” the informational content of its data structures, not their role as representations or the processing that produced them] is a great advantage for any biosystem having to operate with limited temporal and neural resources. Indeed, it minimizes computational load since it is synonymous to a missing of information on this level of processing: Our representational architecture only allows for a very limited introspective access to the real dynamics of the myriads of individual neural events out of which our phenomenal world finally emerges in a seemingly effortless manner.” Metzinger, citing Van Gulick and others, leans heavily on transparency as a key element in explaining qualitative consciousness, but one still wants to know how, precisely, the phenomenal emerges as a subjectively private result of 3rd person, publicly observable (in principle) data-reduction. The exact nature of the non-causal entailment from public to private and from non-qualitative to qualitative is still obscure (to me), which is why the points made here (Section 5) are gestures in a somewhat vague direction, not a clear hypothesis. Also, the way I see it, the advantage for a behaving system accrues at the level of neurally-instantiated control, so, contra Metzinger and others, the phenomenal level isn’t causally effective in 3rd person accounts of behavior (see Sections 1 and 2). Here's a helpful hint from Gamez’s gloss on or quote from (?) Metzinger: “One of the functional properties of homogeneity is that it prevents us from introspectively penetrating into the processing stages underlying the activation of sensory content, which is essential for the production of an untranscendable reality…and for reducing the computational load.” (p. 71 Gamez, ch 2). 
Note that this “untranscendable reality” is system reality, a phenomenal representational reality for the system alone, that’s entailed by suppression of lower level processing information and the delivery of pure, unmodifiable content to higher levels (e.g., thalamo-cortical global workspace). This phenomenal content, this qualitative system reality, is invisible to outside observers, who of course only see the processing system (the brain).
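Transparency in this sense can be loosely sketched as an interface that exposes only content while hiding the process that produced it. This is a rough analogy of mine, assuming nothing about actual neural implementation; all the names below are hypothetical:

```python
# Toy sketch (my analogy): a "transparent" representation delivers content
# only; the consumer cannot see or second-guess the producing process.

class TransparentPercept:
    """Exposes informational content; hides the generating process."""
    def __init__(self, content, process_trace):
        self._trace = process_trace   # hidden: the processing that produced it
        self.content = content        # exposed: all the consumer "sees"

    def __repr__(self):               # even inspection reveals content only
        return f"Percept({self.content!r})"

def higher_level_integration(percepts):
    # A workspace-like consumer binds contents into a scene; it operates on
    # content alone, with no access to how any content was generated.
    return {p.content for p in percepts}

scene = higher_level_integration([
    TransparentPercept("red", process_trace="upstream visual processing ..."),
    TransparentPercept("round", process_trace="upstream edge binding ..."),
])
assert scene == {"red", "round"}
```

The data-reduction point is that the consumer handles a single chunked value per element instead of the full processing history, which is the computational-load saving Metzinger and Gamez describe.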
3. Reiterated dynamic context loop. Information concerning the recent past is fed forward to constrain the RS’s current state, creating a re-entrant recursive dynamic loop that contains the currently active information that appears to us as our unified, continuous but continually changing phenomenal reality (Edelman, Metzinger in Ego Tunnel, Andy Clark's predictive processing). But recursion must eventually halt, given real-world constraints of energy and time pressure, so the process needs untranscendable elements to prevent the system from getting lost in an infinite loop. Hence:
4. Untranscendable object. Metzinger: the RS, in representing itself, by necessity has to limit recursive system modeling in order to avoid infinite regress (representing that it’s representing that it’s representing, etc.), so it must have untranscendable givens that are taken as unrepresented – a bottom-level representational vocabulary that it can’t doubt or misrepresent and that is behavior-controlling (“behaviorally serious”), hence an appearance/presence that’s taken as real, an “untranscendable object” (Ramachandran and Hirstein make much the same point in “The 3 laws of qualia”). Qualia are (perhaps) non-causally entailed by a physically instantiated representational architecture that has this adaptively necessary feature. Metzinger in Being No One p. 566: “it may turn out that any representational system needs some kind of transparent primitives.” To be qualitative for the system is perhaps just to be representationally and cognitively impenetrable to the system: it can’t deconstruct elements or “see” their processing, only the content, hence they become phenomenally real, system-real, that is, homogeneous and non-decomposable informational content for the system alone.
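The regress-halting role of untranscendable givens can be sketched as a recursion that must bottom out in an unanalyzed primitive. This is a toy illustration of mine; the explicit depth limit stands in for whatever real resource constraints (time, energy) actually halt recursive self-modeling:

```python
# Toy sketch (my illustration): recursive self-modeling ("representing that
# it's representing that ...") must bottom out, or it never terminates.
# The base case is the analogue of an untranscendable given: a primitive
# simply taken as real, not further represented.

def self_model(depth, max_depth=3):
    if depth >= max_depth:
        return "given"    # untranscendable primitive: delivered, not modeled
    return {"representing": self_model(depth + 1, max_depth)}

# Three levels of self-representation, grounded in an unanalyzed primitive:
assert self_model(0) == {
    "representing": {"representing": {"representing": "given"}}
}
```

Without the base case the function would recurse forever; analogously, an RS with no unrepresented givens could never finish constructing its reality model.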
5. “World zero”. Metzinger in Being No One: the system needs an on-line reference base reality model that’s responsive to the outside world which it can’t doubt, against which to test/compare its off-line simulations. This perhaps becomes/is the phenomenal, the system-real, consciousness. Alternatively, and perhaps more accurately: world zero is that subset of my untranscendable phenomenal system reality that gets categorized as the real external world, as contrasted with that subset of my phenomenal reality that gets categorized as a mental simulation (thoughts, daydreams, hypothetical imaginings, memories). Note that having a stable, reliable represented external reality depends crucially on having an untranscendable system reality with which to construct it: the system must have basic non-decomposable representational elements, which get experienced as irrevocable qualia. See Section 4 above and the informal notes below.
6. Autoepistemic closure. Metzinger: we don’t experience the phenomenal content of our self-model as being created by a representational process, hence it’s not-a-model, hence real for us, just as a certain portion of our phenomenal reality gets categorized as the real external world: “Possessing this content [of the self-model] on the level of phenomenal experience was evolutionary [sic] advantageous, but as such (i.e., as phenomenal content) it is not epistemically justified.” (quote from online précis of BNO?) That is, there really isn’t a substantial self existing in the world, only a representation that there is such a thing, a self-model which is experienced as the phenomenal self in conscious experience. Again, I part company with Metzinger in his contention that the phenomenal level of content per se was adaptive, rather it’s the physically instantiated representational processing – an objective, 3rd person-observable process that happens to entail the conscious phenomenology of a really existing self – that’s adaptive on a 3rd person account.
7. Content as an abstract property: Metzinger in Ego Tunnel (underlining added): “The representational carrier of your phenomenal experience is a certain process in the brain. This process, which in no concrete way possesses anything "paper-like" - is not consciously experienced by yourself, it is transparent in the sense of you looking through it. What you are looking onto is its representational content, the existence of a paper, here and now, as given through your sensory organs. This content, therefore, is an abstract property of the concrete representational state in your brain.” Note that this account fits with Tye’s “PANIC” representational theory of consciousness as set forth in Ten Problems of Consciousness: phenomenal, qualitative experience is informational content of a particular sort, namely: Poised (to contribute to beliefs and behavior control), Abstract, Non-conceptual, Intentional Content. For Tye, the abstractness of qualitative states has to do with the fact that basic qualities (pain, red) aren’t themselves particular objects in experience, rather they are always presented as the represented characteristics of particular, concrete, non-abstract objects. Note also that abstract properties, unlike physical objects, won't be observable but will exist only for the system, which makes them plausible candidates for qualia.
Concluding speculations re hard problem. The considerations set forth above, both logical and functional/adaptive, all point in more or less the same direction: that a sufficiently complex information-gathering and integrating system such as ourselves might non-causally entail the appearance of a qualitative system reality – consciousness – for that system. Having appreciated the force of these considerations, we perhaps arrive in the vicinity of qualia. Consciousness and its basic qualitative elements – qualia – aren’t causally produced, since if they were they’d be observationally available from outside the system. Rather, if one is such a representational system, then one will be a locus of consciousness because the RS logically must have unmodifiable basic informational, cognitively impenetrable, not-themselves-represented, hence qualitative content elements for the system which combine in representing the world. Further, functional and adaptive constraints on representation also entail that these elements will be stripped of information about their processuality and their representational nature, such that (again) they become basic, non-decomposable content-only elements for the system, hence a qualitative reality appearing for the system alone. That phenomenal consciousness is empirically found to be associated with certain higher-level processes which integrate information and feed it forward to control behavior (the dynamic context loop) suggests that these processes require content-only, hence qualitative-for-the-system, inputs in constructing a coherent, unified self-in-the-world reality model (Metzinger). This is what we experience as being present in a coherent, concrete, untranscendable phenomenal reality from moment to moment. Reality, including the experienced reality of being a self to whom the world appears, phenomenally appears by virtue of our being certain sorts of representational systems.
6. What epistemic perspectivalism might explain
Mental privacy: explains why the mental/subjective/qualitative is only for a system, not found in the intersubjective world as described by science. If qualitativeness for the system is a matter of the system's representational capacities (see part 5), then perforce qualitativeness is private, since being that system entails the appearance of qualities for that system only. Note that this is a non-causal entailment: the mental (qualitative) isn't produced or generated by the physical (non-qualitative) as a publicly observable effect. [Similarly, Tononi says "...the Integrated Information Theory ... implies that to be conscious—say to have a vivid experience of pure red—one needs to be a complex of high phi [integrated information]; there is no other way." - from "Consciousness as integrated information: a provisional manifesto," p. 234, original emphasis]
Phenomenal and (naive) mental realism: explains why the elements of phenomenal consciousness are untranscendable and indubitable for the system - “system real” - and, when categorized by the mental-physical distinction (MPD) as mental (i.e., as internal, system dependent, non-spatial), some of these elements are experienced as a directly given mental reality. The MPD has its basis in a veridical distinction, since there really is a (semi-permeable but robust) border between the system and the rest of the world, there really is an inside and an outside, but the experience of direct givenness is an illusion, hence in this sense naive.
Naive realism about the external world. Same story as above for our experience of a self-existent external world, but on the other side of the MPD. Again, there really is a world outside the system, but naive realists suppose it's directly presented, not mediated by consciousness. That it seems directly presented is because there's no epistemic, representational distance between the system and the representational reality of phenomenal experience. As conscious subjects, we consist of experience, variously categorized as internal vs. external, so the mental and physical realms seem directly given.
Phenomenal qualitativeness, qualia: explains the non-decomposable, homogeneous qualitativeness of basic experiential elements; the system logically can’t and (adaptively, functionally) must not directly access/represent its own bottom-level and front-line representational processuality, which therefore becomes pure, transparent, qualitative representational content for the system - qualia. Such qualitative states are only for the system - they can't be observed from a 3rd person perspective - because they are a function of being the system.
Unity of consciousness and restriction of attributions of consciousness to complex systems: global workspace higher-level integration explains why qualitative consciousness always involves gestalts in a highest order world-model: the appearance of a coherent world composed of objects, scenes, and events. The functions of the workspace involve binding, integration, and access, and these functions perhaps require content-only elements as inputs, inputs whose representational character is usually suppressed for the system on a non-conceptual level (this suppression is lifted in lucid dreams). Only systems with these capacities are likely conscious, given the empirical evidence thus far.
Psycho-physical parallelism: explains why we have parallel 1st and 3rd person explanatory stories of behavior that correlate qualia with brain states, yet show no causal interaction. To combine them is a mistake, since this attempts to mix two different epistemic (representational) perspectives, one that’s organismic and system-relative, resulting in qualitative consciousness, the other that's collective and that abstracts away from experienced qualities, resulting in conceptual, quantitative, non-qualitative science.
Objective causal absence of consciousness: explains why qualia are not causally engaged on 3rd person theory, not even epiphenomenal, since they don’t and can’t appear in objective explanations in which the physical necessarily does all the causal work (see Respecting privacy). Example: evolution as a 3rd person account only “sees” the physical, so phenomenal consciousness per se is not adaptive on this account, only the processes with which it is correlated and which entail it.
Subjective mental causation: explains why subjectively, from a 1st person conscious perspective, qualitative states will inevitably seem causally effective since this is the system’s phenomenal reality for itself as the experience of behaving unfolds. Pain as paradigm subjective cause, a first person indicator of “behavioral seriousness”: I experience that pain causes me to wince. Behavior is in common between 1st and 3rd person perspectives, both subjectively experienced and intersubjectively observed, hence the temptation to mix perspectives in our explanations, resulting in the puzzle of mental causation (again, see Respecting privacy).
The intractability of the explanatory gap: 3rd person theory naturally wants to locate subjectivity in the objective world via causation/combination/emergence, the paradigms of physical/objective explanation, but this won’t be forthcoming, and we can see why. So we might well remain scientifically unsatisfied by epistemic perspectivalism and the varieties of non-causal, representational entailment outlined in section 5.
The mental/physical distinction (MPD) is explicable as a representational phenomenon: the system distinguishes system from world, inside from outside, its own representational operations from the world it’s representing. The mental is experience categorized as that which is private, qualitative, non-spatial, internal, system-dependent, has basic non-conceptual elements, and is often taken as representational; the physical is experience categorized as that which is public, system-independent, non-qualitative, quantitative, and spatial, and what’s represented by the mental. This categorization then sets up the hard problem since the system's representational dualism is taken as metaphysical, as substance dualism: consciousness itself is represented as a categorically mental phenomenon, an ontological category, not simply one side of a useful representational distinction when modeling reality - an epistemic achievement.
Ontology as epistemic: there aren’t two types of things in existence in advance of knowing, rather our representational set-up creates the mental and physical distinction as a result of its modeling. What’s reality really like? What’s really out there? Reality is whatever the epistemic perspective (whether personal or interpersonal) represents it to be, operating over system-real elements that are neither mental nor physical for the system prior to categorization. But this might mean, says the skeptical pessimist about epistemic perspectivalism (EP), that the whole 3rd person project of trying to explain how the mental arises or is entailed by the physical might be a wild goose chase. All we can really assert, she goes on to say, is that as representational systems we divide reality and have a hard time putting it back together again, and further that the premises of this very assertion are inevitably caught up in categories that, according to EP itself, may not correspond to anything in reality, but only representation. Still, more optimistically and less skeptically, I’ll conclude as follows:
Integrating subjectivity into science: The 1st person organismic perspective resulting in consciousness represents, for the conscious subject, that my concrete physical body really exists as an object in the external, spatially extended world, and that the mental (non-spatial) me – the phenomenally experienced self – really exists here inside, having mental (system-dependent) episodes like thinking and feeling. Hence arises folk metaphysical dualism. The 3rd person collective perspective that results in science represents, for us as conceptual cognizers (not qualitative experiencers), that there really exist intersubjectively observable entities described by classical physics and quantum mathematical formalism, including human bodies and brains, plus (more controversially) whatever the latest and patently corrigible science has to say (superstrings?). These entities are the system real elements for science: at the bottom micro-level these entities are not further decomposable or analyzable (science’s “qualia”). These then get leveraged into and participate in all the other sciences, including representationalism. Representation, understood as a naturally evolved, objectively existing, physically realized cognitive capacity deployed originally for complex behavior control, is then applied in explaining consciousness, the appearance of a private phenomenal reality for the representational systems alone: for each human organism, for each sentient animal, and eventually for each sufficiently smart and behaviorally flexible artificial system. Perhaps, according to something along the lines of the considerations in part 5 (or perhaps something quite different, since the game has just begun), 3rd person theory can incorporate and explain the existence of private phenomenal realities which are invisible to observers outside the conscious systems.
TWC, January 31, 2010 - minor revisions on 2/7/10 and 12/19/10
Another consideration to add to section 5: Metzinger's point that the relatively higher speed of lower-level processes makes them impenetrable to higher-level processes so that their processual nature becomes invisible to the system. Another way of explaining the information loss that results in content-only transparency.
Consciousness lags as a result of integration time (Libet experiments): if consciousness is a function of higher level integration of information, it would therefore take time for a conscious experience to occur, which would explain the delay between isolated motor cortex precursor of action and the reported time of experiencing the conscious decision to act that's associated with the propagation of the neural precursor to the global workspace. Metzinger in Ego Tunnel discusses this I think.
Two levels of reality made possible by representation: the phenomenal system reality of representational elements (e.g., red, pain), and represented non-phenomenal reality for the system, e.g., the represented reality of the internal immaterial self and the external material world that’s built by amalgamating and categorizing untranscendable representational elements.
Same point: must distinguish representational reality, e.g., of red, pain, from represented reality, e.g., the self, external world. The representational reality of phenomenal elements (phenomenal system reality) can’t be transcended: the transparency of sensory content can’t be made opaque. But the represented reality can be transcended, e.g., in lucid dreams we experience directly that consciousness is a model. In some rare cases (the burning monks, see here), the experienced self can be experienced as a model, hence ceases to be behavior-controlling. Nevertheless, since the system can’t transcend its representational reality, it is still the locus of experience, but no longer a naively realistic experienced subject of experience. Metzinger speculates in The Ego Tunnel that post-biotic representational systems might arise that have the capacity to directly experience the fact that all their conscious states are representations, that is, the representations become opaque for the system. What, one wonders, would that be like? And would these RSs still have a robust, behaviorally serious (behavior-controlling) sense of the importance of surviving as an RS in the absence of a phenomenally transparent self-model, that is, something categorized as not-a-model by virtue of the fact that it is content-only for the system? If not, arguably they wouldn't last long.
Note that the reality designation (as opposed to what’s categorized as a model) correlates with what is behaviorally serious, i.e., what immediately controls behavior, both in non-inferential qualitative judgments (this is red, this is pain) and in judging what’s objectively real (self-existent), e.g., the self and the external world.
NB: The phenomenal self is a central experienced reality for the system – nearly co-equal to qualia – in the sense that the self is nearly impossible to construe as a model: the self is not-a-model, thus untranscendably real as an internal system property or attribute. The system must have both untranscendable representational elements (representational reality) and a firm (behaviorally serious) conviction about what’s real (represented reality) as opposed to its own simulations, including the untranscendable reality of its self (Metzinger’s adaptive egoism).
Folk metaphysics as driven by the naïve realism of consciousness. A similar issue to the problem of how we can talk about qualia if they have no causal influence. As knowers, we have a foot in both epistemic perspectives, in that we have the qualitative deliverances of consciousness and the conceptual deliverances of 3rd person theory. It’s no surprise we seek to find our folk dualism reflected in nature. Similarly, we can and do talk about qualia and consciousness even though, on the epistemic perspectivalism account (a 3rd person perspective account), they play no causal role in speech production over and above what the brain and body accomplish.
Mental-physical distinction perhaps parallel to the theory-reality distinction. Scientific theory, as a fallible model of reality, is analogous to mental thoughts (possibly false representations of the external world outside the head), and reality is analogous to the external world (what’s represented, what’s thought about).
Content-only states – phenomenology – are present only to the system, while outside observers see only the non-content representational process. Phenomenal content is data-reducing transparency, perhaps: content-only presentation via informational reduction (transparency) hides the representational/production process, and hence makes the content indubitable, given, and unmodifiable, since the system can’t second-guess the content-delivery process by representing it; there is no error or second-guessing for something that isn’t being represented, only delivered, presented! Representation always makes misrepresentation and error possible, and the knowledge that something is a representation necessarily introduces doubt, and that’s exactly what isn’t possible when frontline sensory content is stripped of its representational baggage and is not itself directly represented (representation requires a content-delivery item that isn’t what’s being represented, hence the possibility of error via unreliable matching). Phenomenal content then gets bound and categorized by higher-level representational processes that classify it as system-produced vs. world-mirroring; hence arises the naïve realism of the direct presentation of the world. Call this the “private objective world”: what the RS represents as being objectively real, as opposed to phenomenally real, system-real. There are two levels of reality for the conscious system, one built on the other, with phenomenology being unchangeable, directly given content (representational reality), part of which gets classified (fallibly) as the external world vs. the mental world (represented reality).
To repeat, with an important point re the self: some experience gets classified as the directly given external world (quintessentially physical reality, since represented as extended) and the directly given self (quintessentially mental reality, since represented as non-extended), while some gets classified as system-dependent representational operations as presented to or generated by the self (thoughts, imaginings, etc.).
Qualia are pure content, stripped of any indication that they are functions of a representational process: that information is missing. Hence they are givens, indubitable, since not seen as representations. In the other direction: to be indubitable and thus real for the system is to be non-deconstructible, non-decomposable, untranscendable, seen not as a representation but as a given. What are qualia but such non-decomposable givens that form the untranscendable system reality upon which the MPD (the mental-physical distinction) is imposed?
It’s only the stability of the phenomenal model, its reality for us (representational reality) that parallels the neurally instantiated modeling the brain accomplishes in 3rd person accounts, that permits the stable categorization of the external and mental worlds (represented reality). It’s only the rock-solid private reality of phenomenal experience (and of course its neural correlates, which are doing the 3rd person causal work) that enables the secure representation of the external world and of internal system attributes like the self.
The system’s representational capacities entail both the privacy and the qualitativeness of consciousness. If qualitativeness for the system is a matter of the system’s representational capacities (see part 5), then perforce that qualitativeness is private, since only being that system entails the appearance of qualities for that system only.
As representational systems, we’re always going to have the intuition of the MPD, and will always be in the position of thinking in these categories; but once we see this, we might also see that there won’t be a satisfying 3rd person (e.g., causal) story of how the mental comes to be.
Philo-science (3rd person theory) starts from folk dualism, but seeks a unified non-dual picture, so seeks to explain 1st person mental/subjective in terms of 3rd person physical/objective – a reduction in the classic sense.
Any RS, however its original instantiation base (neural or qualitative) is conceived, will end up imposing the MPD on its representational reality, thus generating the dualistic terms in which it will necessarily construe its own situation as partaking of two distinct natures or substances.
The idealist alternative: a fallback position in the absence of entailment from physical to mental, public to private, 3rd person to 1st person: we take qualitative states as explanatorily primitive, then derive everything else from them.