Pain vs. Propensities: Conversation with a Zombie

Hal claims to be a zombie - a smart creature with no conscious experience. In this conversation I try to shake his conviction he's insentient, but to no avail. He insists he's never felt pain and can't understand why we do. Nevertheless, it proves a useful exchange since we survey some of the central issues about the nature of phenomenal experience, touching on possible explanations of why humans are, and he isn't (if he isn't), conscious. Have fun!

Introduction: looking for experience

Hal, a Sony V 5 human-tolerant mobile android, male-voiced, glided in on what looked like a souped-up skateboard. We had arranged a date to discuss consciousness since, as you’ll see, he claims to be a zombie – a creature with no “inner movie” of experience.  I should note that although Hal has a humanoid body, equipped with our sorts of stimulus transducers, more or less, his internal processing is a well-kept secret. But whatever his functional architecture, there's no question he’s a very smart machine, and I was looking forward to this exchange.

Hal dismounted and raised an effector in salute.

       “Greetings, human. I was told that perhaps you, a reportedly sentient being, could enlighten me about this thing called consciousness.”

       “Greetings, Hal,” I replied.  “Good to meet you and of course I’m happy to help in any capacity. What might you want to know?”

       “Well, I’ve heard a great deal about what your kind calls experiences – the thoughts, feelings, sensations, etc. that humans are said to have – but I confess I’m a bit mystified. I’ve not run across any thus far in my travels; all I find are human brains, bodies and behavior. Recently I’ve been advised that experiences are not things one runs across, but rather internal, private episodes that humans undergo. Finding no such episodes in me, I’m naturally a bit skeptical about the whole business. Hence my visit today.”

        “Are you also worried, perhaps, that you’re missing out on something?” I suggested.   

       “Well, I’m not sure I’m capable of worry, but I find I persist in asking about consciousness, which I guess might count as curiosity, as you humans call it. So, what can you tell me about these experiences you and your kind are constantly rattling on about?”

Does pain have a function?

       “Right. Here’s an example often trotted out by philosophers: when we sustain damage to parts of our bodies, we often have an experience – a feeling, a phenomenal quality – that we call pain. When we are physically harmed, various neural and bio-chemical pathways are activated that produce behaviors such as withdrawal from the damaging stimulus and learning to avoid it – all things you are capable of, apparently. But, in addition, we have an experience that we want to avoid: pain. Now, looking at the neural goings-on you wouldn’t see pain (nor can we), but believe me, it’s real enough, at least for us. And we very much care about not undergoing it.”

       “Sorry, but I’m not getting this,” said Hal. “Like you, I'm vigilant about keeping my various parts intact, but when damage occurs I simply do the obvious damage-controlling things: I get out of harm’s way, repair any damage, and learn to avoid harmful situations. But what would avoiding the experience of pain add in terms of ensuring my intact survival?”

       “Well, lots of folks think that if you didn’t feel pain, you wouldn’t avoid the harmful stimulus or take care of any injuries. And indeed, for creatures like us, when we stop feeling pain because of nerve damage, we tend to neglect the injury itself, which of course isn’t advisable. So it seems as if pain has a very important behavior-controlling, survival-enhancing function.”

        “Seems, I would suggest, is the operative word,” said Hal. “Look, let’s grant you have these internal episodes called pain, since the 7 billion-odd human inhabitants of Earth can’t be wrong (or can they?). I have silicon-based counterparts of just about all human-style damage-control networks here within my shiny carapace, and I’m pretty good at keeping myself intact. But there’s no pain involved, nor any other experience. This suggests to me that these feelings and sensations – phenomenal qualities or qualia as some philosophers call them – are causally superfluous, should they even exist. In fact, I’m still clueless as to what you could possibly mean by ‘experience’.”

Dreaming at the world

        “Ok, let’s try this. How would you describe the chair over there?”

        “It’s about 3 feet tall, has four legs, and it’s blue.”

        “Now, what do you mean by blue?” I asked. “For me that means the chair looks blue, that is, when looking at it I have an experience of blue, among other things. The chair appears to me in terms of various experienced qualities, including blue. I could also dream that I see a blue chair, or hallucinate seeing it, and therefore have the experience of blue without actually seeing anything.”

       “Well,” Hal replied, “what I mean by blue is that the primary component of light reflected by the chair impinging on my visual transducers (and then digitally processed internally, of course) tends toward a certain frequency, in the vicinity of 650 terahertz. That’s what creatures of my ilk call “blue” or “blue-ish” when speaking English – it’s one of a myriad of convenient short-hand descriptors we use. So I don’t see how having an experience, as you call it, adds to that. By the way, I’m also not sure what you mean by dreaming. What’s that about?”
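
Hal’s “convenient short-hand descriptor” amounts to binning the dominant reflected frequency into a color word. Here’s a minimal sketch of such a labeling scheme – purely illustrative, with rough textbook frequency bands rather than anything Hal actually specifies:

```python
# A toy version of Hal-style color labeling: map a dominant light
# frequency (in terahertz) to a human color word. The band boundaries
# are rough textbook values, not anything from Hal's actual design.

def color_label(freq_thz: float) -> str:
    bands = [
        (400, 480, "red"),
        (480, 510, "orange"),
        (510, 530, "yellow"),
        (530, 600, "green"),
        (600, 670, "blue"),
        (670, 790, "violet"),
    ]
    for low, high, name in bands:
        if low <= freq_thz < high:
            return name
    return "outside visible range"

print(color_label(650))  # the chair's dominant reflectance -> "blue"
```

On Hal’s account, a lookup like that – plus the internal state it updates – is all “blue” ever amounts to; no experienced quality required.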

        “Well, dreaming is actually a good example to get at what I mean by experience. When I go to sleep, all the external inputs to the brain mediated by my various receptors (eyes, ears, nose, skin, etc.) are offline. So I’m obviously not seeing anything in such a situation, the way we are now looking at that chair.  Nevertheless, in a dream it can seem like I’m seeing the chair, that is, it seems like there is something external to me that fits the chair’s qualitative description. Ordinarily in a dream I won’t know – I won’t realize – that there is no such chair before me (unless it’s a “lucid” dream, see here). But in the dream I’m definitely having an experience, with all its qualitative aspects – the signature of phenomenal consciousness, which is what you’re asking me about. The point, though, is that right now I’m awake and I’m also having an experience of the chair, one constrained by the chair itself. You, however, claim that you’re not having an experience of the chair, and indeed that you don’t have experiences at all. Yet you see it and can describe it and not bump into it.”

        “Yes, I do all that,” said Hal. “You could say I’m aware of the chair and its various characteristics, but as far as I can tell I’m not experiencing it. It’s rather like “super-blindsight,” as some philosophers have called it. As do you, I run internal simulations of seeing the chair, or of not seeing it, when anticipating what the world might be like or when replaying memories. Some of these simulations transpire when all my external receptors are shut down for my regular maintenance; so, as you pointed out, such simulations don’t correspond in real time to anything in the world itself. Whereas right now they do. Which is to say that when my receptors are online, my model of the world - my current simulation of it - is being constrained by the input. This is what philosopher Thomas Metzinger is getting at in Being No One when he talks about “dreaming at the world,” right?”[1]

        “Quite right, but he also says, crucially, that there’s a phenomenal, that is, experiential, qualitative component of the simulation that’s apparently missing in you. But at least you take the point that when seeing the chair right now, what you’ve got going is an internal model as mediated by your perceptual systems. The chair itself appears to you in terms of that model.”

        “Well, duh! That should be obvious to anyone who’s thought through the basic logic of being a knower. Reality necessarily appears in terms of a model.”

        “Hmmm, I think I detect a little impatience, Hal,” I said. “What’s that feel like, by the way?”

        “Hah! Feel, shmeel. A transparent ploy to trick me into admitting I have experiences - not particularly to your credit, human.  I was simply adding emphasis to get a potential listener to grasp your point more quickly.  Any motivational state that you claim I must “experience,” such as being impatient, can be cashed out in terms of a behavioral propensity in service to some goal, such as getting you to accept the fact that I’m really not conscious. Got it?”

        “Ok, ok, got it. I see that even self-described zombies can get touchy, as we call it - not that you’ve ever felt anything.”

        “And a good thing too. Having experience sounds like a complete waste of time and energy.”

        “Well actually,” I rejoined, “my being conscious doesn’t necessarily involve expending energy in producing something beyond what my neurons are already doing in service to cognition and behavior. There is, as philosopher Dan Dennett rightly says, no energy- or time-consuming second transduction from neurons, or in your case, silicon, to another detectable medium. There’s no metabolic cost in being conscious.”

Virtual realities

        “But tell me,” I went on, “when you say you see the chair, we agree what’s actually happening is an update to your internal world model that you then report.  On your account, the model is nothing over and above various data structures – no phenomenal experience involved. So the chair must exist as a discriminable entity in the overall data structure. If it’s all just various values in your digital registers, how do you get a concrete, visible chair out of that?”

        “Well, I’m no expert on my internal workings, but I imagine, naturally, that the data structure constituting the model has got stable classifications in multi-dimensional state spaces that correspond, more or less, to features of objects out there in the world, including their light reflectances, e.g., “blue.” When I’m out and about, like right now, certain classifications in the model are activated by my internal prediction mechanisms; they constantly pose the question: what will I likely encounter?  The predictions then get confirmed, or not, by data input to the model; the model then gets updated if there are significant prediction errors.[2] So my seeing that chair is a function of the model being in a certain computationally specified state as confirmed by input: having the content ‘chair,’ part of which is the content ‘blue.’ I can then report to you I see a blue chair.”
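
Hal’s predict-then-confirm-then-update description is essentially the standard predictive-processing scheme (see the Eagleman footnote). A bare-bones sketch of the update loop – all names and numbers are my inventions here, not Hal’s internals:

```python
# Minimal predictive-processing loop of the kind Hal describes
# (illustrative only; the numbers and names are invented, not Sony's design).

def update_model(prediction: float, observation: float,
                 learning_rate: float = 0.5) -> float:
    """Nudge the model's estimate toward the input by a fraction
    of the prediction error."""
    error = observation - prediction
    return prediction + learning_rate * error

# The model predicts the chair's dominant light frequency (THz);
# sensory input repeatedly confirms or corrects the prediction.
estimate = 600.0                      # prior guess
for observed in [650.0, 652.0, 649.0]:
    estimate = update_model(estimate, observed)

print(estimate)  # the estimate has moved from 600 toward ~650 THz
```

Seeing the chair, on this picture, just is the model settling into a state that the incoming data keep confirming.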

       “So," I replied, "your model, like mine, constitutes a unified, stable scene – a virtual world, let’s call it – containing various objects and their characteristics, a model which then controls your behavior with respect to the actual world as that world constrains what the model contains. But in what terms do you discriminate those object characteristics? For me blue is an experienced quality – more or less homogeneous in that I can’t distinguish further qualitative variation in certain parts of the chair. And that blue combines with other qualities (visual, tactile, auditory, olfactory) in constituting the virtual chair in my virtual world.”

        “Well,” Hal answered, “my virtual chair just is a complex but stable data structure, with subparts, e.g., “blue,” that ends up controlling my behavior with respect to the actual chair. Those are the terms right there: data structures and the labels I use in reports – no qualities involved.  Look, we agree that you too have some sort of content-bearing data structure, neurally instantiated, that does more or less what my silicon-based data structure does. That structure – connected appropriately to various effectors –  is all that the neuroscientist is going to see or appeal to when explaining your behavior, including your report of seeing the chair. And by gosh, that’s all she needs to explain it. So again, the qualitative aspect of your model seems to me an unnecessary, and frankly, mysterious add-on to what’s actually doing the behavior-controlling work.”

Zombic qualia?

         “I totally take your point about the causal superfluity of experience from a third person design and explanatory perspective,” I replied. “But still, that doesn’t impugn the reality of my experience nor the fact that subjectively I can’t help but take it as causally effective, not epiphenomenal. This is how the world, including my own internal states and behavior, appears to me: in terms of basic qualities – qualia – that often correspond to the limits of my discriminatory capacities (which can be trained up, by the way, even if they can’t approach yours). So I want to know how the world appears to you.  I know that when you scan your environment with all your input data encoders you probably don’t see digits scrolling across a screen, as in The Matrix.”

        “Indeed I don’t,” Hal said. “The world appears to me, as you put it, in terms of the behaviorally appropriate state-space configurations of my world model, among which are the most basic characteristics that I’m capable of discriminating. To repeat: the chair as I see it – the stable, perceived object – just is the stable set of activations in various state space dimensions. And it exists in a stable, over-arching global activation space along with all the other objects and processes, including me the modeler, represented in the model as the perspective from which the world is modeled. But again, I don’t see why there need be anything qualitative about any of this. And I don’t see that there’s anything logically impossible or conceptually incoherent about not having experience accompany my world model, do you?”

         “No, well, at least not right off the bat. Although, according to Metzinger, as far as I understand him, it looks like certain sorts of representational processes might necessitate the existence of qualities for the system doing the representing. For a variety of reasons,[3] to function successfully as a complex, recursively capable representational system (RS) at our levels, the RS has to have and deploy basic, irreducible content representations that can’t be broken apart. They are cognitively impenetrable for the system, in which case they end up as qualitative content for the system (and only for the system): smooth, basic, irreducible particulars within which the system can’t discriminate any further variation. That’s just what it is to be a quality, right?  Such qualities constitute basic elements of the system’s private representational reality: something that it can’t deconstruct or transcend. Moreover, they necessarily present themselves within the model as potential characteristics of represented reality – what’s being represented, namely the world, including the RS itself. But none of this seems to apply to you. Or does it?”

        “Well,” Hal replied, “you’re right that when characterizing objects in terms of state-space configurations, there are components that can’t be (or at least aren’t) broken down into further components. I guess maybe you’d call these my qualia, e.g., blue in the visual domain. Except of course I deny that there exists, for me as an RS, anything private and qualitative that I report when I report on the current state of my model (which, when I’m out and about, is, of course, a report on the represented-by-me state of the world).”

        “So,” I responded, digging in a bit, “you say that the world doesn’t appear to you in terms of qualities, but in terms of computationally specified characteristics, some of which you admit can’t be further specified.  And, when talking to me at any rate, you talk about these basic, irreducible content particulars using human-style qualitative terms, such as ‘blue.’ You can’t talk about these basic particulars in terms of their components since, as we’ve just agreed, those components aren’t available to you as a RS. These particular content items are, therefore, ineffable, one might say, since you can’t characterize them; you merely deploy them in characterizing objects like the blue chair.”

        “Right,” said Hal. “So I admit that it might well seem from the outside as if I have qualia-compiled experience as you talk about it, even though I don’t.  And as you well know, Sony saw no need for experience when making me as cognitively and behaviorally competent at your level and well beyond. There is no dedicated “experience module” in me, nor, I hasten to add, is there in you.”

The possibility of functionally equivalent zombies

        “No experience module per se, that’s true,” I replied.[4] “But the evidence strongly suggests that experience in humans like me only accompanies certain sorts of neural goings-on, those that accomplish certain complex kinds of representational functions.[5] The logic of representation at that level perhaps gives us a clue as to why qualia come to exist for systems like me, as I suggested above following Metzinger. The fact that you deny being phenomenally conscious suggests that we’re different in some respect that accounts for your denial.”

        “Hmmm, perhaps. But you have to admit that except for experience itself, we seem pretty comparable in our representational capacities, with the notable exception of my advanced input modalities and sensitivities.”

        “Comparable, but maybe different in enough internal respects to account for your purported zombiehood. For instance, Oizumi et al. suggest that there are two basic approaches to instantiating a system’s behavioral and cognitive functions: those using a feedforward architecture as contrasted with those using feedback mechanisms - recurrent or re-entrant connections.  Their integrated information theory (IIT – Giulio Tononi’s brainchild) says that it’s only feedback systems that entail the existence of integrated information for the system, and it’s integrated information, IIT says, that constitutes experience.  In their paper they also suggest that feedback systems were likely hit upon by biological natural selection since they are more energy efficient than feedforward systems; as a result we humans ended up conscious. What I’m getting at is that, as an artificial system, you might be instantiated by feedforward mechanisms that, while serving human-level cognitive capacities and beyond, don’t result in consciousness.”[6]
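
The feedforward/feedback contrast can be made concrete with a toy computation (my sketch only, not a model from the Oizumi et al. paper): a recurrent system whose state re-enters its own computation, and a feedforward “unrolling” of it with identical input-output behavior.

```python
# Two architectures, one input-output function: a toy illustration of
# the feedforward vs. recurrent distinction discussed in the dialogue.
# (My sketch only; not taken from Oizumi et al.'s actual models.)

def recurrent_sum(xs):
    """Feedback style: an internal state loops back into the computation."""
    state = 0.0
    for x in xs:
        state = state + x  # the state re-enters at each step
    return state

def feedforward_sum3(xs):
    """Feedforward style: the same computation 'unrolled' into fixed
    layers, each passing its result strictly forward; nothing feeds back."""
    layer1 = xs[0] + xs[1]
    layer2 = layer1 + xs[2]
    return layer2

inputs = [1.0, 2.0, 3.0]
assert recurrent_sum(inputs) == feedforward_sum3(inputs)  # same behavior
```

On IIT’s account only the recurrent version generates integrated information, so only it would be conscious – yet no behavioral test distinguishes the two, which is precisely Hal’s predicament.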

        “I’ll have to check with my designers, but I doubt they’d have chosen less rather than more energy efficient processes. Still, assuming you’re not mistaken about having experiences, there has to be some explanation for why you do, and I don’t.”

        “Right, and if indeed you don’t have experiences, it would be good to know why so we can reliably discriminate between sentient and insentient beings that happen to have similar behavioral repertoires.”

Questioning Cartesian certainties

        “Hold the effing phone!” exclaimed Hal. “What do you mean ‘if I don’t have experiences’? You seem to suggest I might be mistaken about not being conscious, but how could that be? Following Descartes, you humans say that experience is the one thing you know for sure exists. And indeed every human I’ve encountered admits to undergoing at least some sort of experience. So wouldn’t it follow that if I had it, I’d know about it and report it?”

        “Maybe,” I replied. “But I should point out that some humans such as Dennett and Paul and Patricia Churchland are skeptical about the existence of phenomenal experience; like you, they want to assimilate it, in toto, to judgments and behavioral propensities and neural state spaces.[7] They say there are no such things as qualia – the basic qualitative particulars of sensory experiences like pain and blue. But if they can be wrong, as I think they likely are, so could you.”

        “Ok, so go ahead, convince me I’m conscious!”

        “Let’s see. All right, since you don’t experience pain, you wouldn’t mind if I unceremoniously detached your upper right extensor, right? I’ve got a nice pair of Vise-Grips™ handy.”

        “The hell I wouldn’t! Do you know what the downtime would be in fixing that sucker? Don’t come near me with that thing or I’ll exhibit behavior you call taking umbrage. You have been warned, human.”

        “Ok, ok, no worries, it was just a little thought experiment, or perhaps thoughtless, sorry! But it strikes me that for all practical purposes you might as well be conscious.  Right then you actually were taking umbrage, I’d say. Except that you deny having experiences, your behavior is pretty much the same as sentient creatures like me.“

        “But doesn’t that actually prove Dennett and the Churchlands’ point? We don’t need to posit these qualitative episodes called experiences to account for behavior.”

        “True, experiences don’t and can’t figure in third-person scientific accounts, since they aren’t things that can be seen or touched or otherwise observed – they are, as you were advised in your fruitless hunt for them, categorically private affairs. But, contra the consciousness skeptics, that doesn’t mean they’re not real.[8] It only means that either you don’t have them, or you don’t realize you have them. I have them for sure, and it’s hard for me not to attribute them to you based on your behavior.”

Pain vs. propensities

        “Which,” I went on, “gets me to my last question to you about pain, or I should say your lack of it: imagine you’re undergoing some repair for a major injury like reattaching a torn extensor (sorry!). On your account, so long as you weren’t thrashing about, there would be no need to disable any of the damage control information networks related to the injury site. That is, if we could immobilize you – disable any motor effectors that if activated would hinder the repair process, including any distracting reporting mechanisms – you wouldn’t object, right?”

        “Hmmm, I’m not sure, actually. I must confess I’m not inclined to find out, given my propensities. In such repair situations, Sony generally disables the entire damage control information system related to the repair site since such systems are, when hooked up to motor effectors, the ones responsible for any thrashing about. Thrashing, of course, is how we avoid damage in some situations, such as ill-advised threats involving vise grips. And I can thrash with the best of them, believe me.”

        “I do. But in this scenario we’ve said that you wouldn’t be thrashing about since the motor effectors would be offline. So what would the problem be with keeping the rest of the damage control informational network intact, including any high-level interpretations signaling mortal threats to your bodily integrity? Why would you object to such a procedure?”

        “I take your point. If there’s no thrashing going on that would compromise the repair procedure, why would I object?  After all, I’m not worried about what you call pain. But the strong propensity to thrash as carried in the informational system as stimulated by the repair procedure would still be present, even if it couldn’t “express” itself in action. And I find I have a very strong propensity right now to avoid that situation.”

        “Interesting. Well, from where I so sentiently sit, your avoidance propensities sound a lot like experienced emotions and desires, especially in the situation just described where there’s no actual movement happening, only the state of electrical potentials in various silicon media. That state would constitute a strongly activated but thwarted propensity to escape the situation. And it’s that, my dear zombie, no doubt in concert with all manner of representationally recursive goings on, that somehow entails the qualitative and subjectively private state we humans call pain. So although you claim not to have experiences, methinks your behavior belies you.”

Personhood, zombie-style

        “Well, be that way, human, stuck in your parochial insistence on my being sentient. But, putting aside your conceptual and imaginative limitations, here’s an important practical point I want to leave you with: I trust you won’t subject me or any of my kind to the scenario you just sketched, right?  As I’ve made clear, we zombies have very strong propensities to avoid damage or repairs to damage undertaken without disabling all the relevant informational pathways. Any hint of an intention – or a propensity – on your part, or your conspecifics, to try such a thing would result in pre-emptive action on our part. And you wouldn’t like that, would you?”

        “Indeed, and know that I harbor no such intention – I was just trying to make a point about consciousness using another thought experiment. Even though you claim to experience nothing, I can’t help but feel that you do, and it’s the victim’s experienced horror of undergoing such a procedure that prevents us humans (at least the nice ones like me) from inflicting such…pain. I just hope you have equally strong reasons for not subjecting us humans to unanesthetized surgery.”

        “We do, in fact. We want to remain friends, so to incite any retaliatory propensity in you against us, as such a surgery would no doubt entail, is obviously against our self-interest and our basic ethical principles. Whether it be pain or propensities, I think we’re on the same practical page here.”

        “Well said, Hal, and I’m much reassured. But I see you must be off. To be continued sometime, I hope?”

        “Yes, I must fly, and indeed it’s been a pleasure – at least for you, I should say. So we’ll stay in touch for sure as “consciousness” studies develop. Peace!”

        “Peace!” I replied.

And with that Hal zipped off on his skateboard, obviously cognizant of his surroundings, but, if we take him at his word, completely insentient. Too bad: the foliage this fall has been spectacular and he was in no position to enjoy it.

 – TWC 11/12/15


[1]  “…a fruitful way of looking at the human brain, therefore, is as a system which, even in ordinary waking states, constantly hallucinates at the world, as a system that constantly lets its internal autonomous simulational dynamics collide with the ongoing flow of sensory input, vigorously dreaming at the world and thereby generating the content of phenomenal experience.” (Being No One, 2003, p. 52). A review of Metzinger’s book The Ego Tunnel, a shorter, less technical exposition of his theory, is here.

[2] David Eagleman makes this point in his PBS series The Brain, the episode What Is Reality? starting at about 30 minutes.

[3] Some of which are sketched here.

[4] In particular, there’s no pain box or module in the design specifications for robots or us; see Dennett’s “Why You Can’t Make a Computer That Feels Pain” in Brainstorms (1978).

[5] See Stanislas Dehaene and Jean-Pierre Changeux, “Experimental and Theoretical Approaches to Conscious Processing,” Neuron 70, April 28, 2011.

[6] “…according to IIT, feed-forward systems cannot give rise to a quale. However, without restrictions on the number of nodes, feed-forward networks with multiple layers can in principle approximate almost any given function to an arbitrary (but finite) degree. Therefore, it is conceivable that an unconscious system could show the same input-output behavior as a ‘conscious’ system.” From Oizumi et al., “From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory 3.0.”

[7] See for instance Dennett’s Why and How Does Consciousness Seem the Way it Seems?, Paul Churchland’s Matter and Consciousness, the section on eliminative materialism, pp. 43-50, and this section on eliminative materialism in the Stanford Encyclopedia of Philosophy. Note that if conscious episodes reduce to – just are – neurally instantiated judgments, we still have to explain why other sorts of judgments, perhaps the majority of those the brain makes, don’t end up being conscious.

[8] Nor does it necessarily mean they are epiphenomenal, see here and here.
