Holding Mechanisms Responsible

This is an invited response to bioethicist Walter Glannon's Lahey Clinic Medical Ethics Journal article "Free Will and Moral Responsibility in the Age of Neuroscience." Glannon's article appeared in the Spring 2006 issue; this response appeared in the following issue.

The Mind-Body Problem

Times are tough for those who suppose that the mind is something categorically other than the body. Neuroscience increasingly reveals that the physical brain carries out those higher mental functions traditionally attributed to the immaterial soul: feeling, thinking, planning, and moral decision-making.[1] If we tie our status as morally responsible beings to something non-physical that supposedly evades scientific explanation – consciousness, for instance – we’re skating on thin ice. Those inclined to science, not mind-body dualism, must find a basis for moral agency that’s compatible with being strictly physical creatures.

In “Free Will and Moral Responsibility in the Age of Neuroscience,” bioethicist Walter Glannon denies he’s a dualist, but nevertheless says consciousness transcends materiality: “The brain is subject to material analysis, but the mind is not. The essence of mind is consciousness.” To declare that the mind, and in particular consciousness, is beyond the reach of scientific physicalism is a very strong claim indeed, given the tendency of science to explain heretofore mysterious phenomena. But for Glannon, we must be more than physical creatures to count as moral agents, and consciousness seems to fill the bill. 

What motivates Glannon, I suspect, is what philosopher Daniel Dennett has called the fear of “creeping mechanism”[2]: if we are, even in our highest capacities, merely the working out of mechanistic causality (neural mechanisms, in this instance), then on what basis can we consider ourselves deserving of praise and blame? If, as Glannon says, “Moral responsibility for our behavior presupposes free will, which in turn presupposes the requisite control,” then it might seem that consciousness gives us the requisite control only if it isn’t reducible to deterministic, cause-effect relationships instantiated by the brain. But are we conscious creatures more than very complex mechanisms? And need we be more than mechanisms to possess the sorts of control capacities that confer responsible agenthood?

Such questions have of course exercised philosophers for millennia, but the rise of modern science has shifted the debate considerably in favor of a materialist understanding of the mind, what Francis Crick called the “astonishing hypothesis.”[3] Of course, it isn’t that human beings or human choice-making disappear on a neuroscientific account – we and our capacities don’t get explained away. But scientific explanations don’t (and can’t) appeal to anything that’s categorically non-physical, nor can they posit some inner homunculus pulling the strings, since that merely pushes back the task of explaining the homunculus. In principle, there’s a complete causal story to be told about human action that runs from perceptual input to behavioral output, a story that need not invoke an immaterial agent exerting freely willed control independent of the brain’s workings. If we’re to hold each other responsible on a scientific understanding of ourselves, then it seems the brain’s control capacities must suffice.

It’s important to see that control, not any exemption from determinism, is the key factor in being a responsible agent. Imagine there were some random, indeterministic element that affected your behavior – for instance, a cosmic ray triggered a neural spike that then caused you to raise your hand. Could you take credit or blame for that action? Obviously not. As philosopher David Hume pointed out over 250 years ago, what we want is for our character and motives to determine behavior, not anything random, so determinism is a necessary condition of responsibility.[4] Even if, as seems to be the case, our character and motives are themselves fully determined by factors we ultimately had no control over (our genetics, upbringing, peer group, etc.), they are still the main proximate causes of our behavior. Seeing this, we can properly say “I control my actions, not anyone else,” at least in cases where no one is compelling us to act against our will.

But, we might well ask, is the “I” in this deterministic picture anything more than what the brain, ensconced in the body, does? After all, it certainly seems as if I’m more than a physical thing, more than a well-organized collection of deterministic, behavior-controlling neural algorithms. And if that’s all I am, why should we praise or blame something that is ultimately just the unfolding of complex causal processes?

Here we’ve reached the nub of the twin questions of consciousness and responsibility, where commonsense intuitions about control conflict with the somewhat disconcerting findings of neuroscience. As Benjamin Libet’s famous experiments, mentioned by Glannon, show, conscious choices can have causal antecedents in unconscious brain processes, so consciousness isn’t necessarily a privileged initiator of voluntary behavior. Further, and perhaps even more tellingly, the neuroscientific consensus is that consciousness itself is completely dependent on neural processes. It isn’t something that plays an independent causal role in controlling behavior above and beyond what your neurons do; rather, it supervenes (as philosophers put it) on what neurons do.[5]

To see the force of this, imagine building a robot that learns to avoid hurting itself. You design what we might call “hot stove” circuits that, in the event of damaging behavior, cause the robot to remove itself from danger and lower the probability of repeating the behavior. Now, if this circuitry works well to keep the robot from harm, why do human beings need, in addition, the conscious experience of pain? After all, our neural circuits pull our hands off the stove even in advance of feeling pain, and we know that learning occurs by means of neural mechanics such as gene expression, neurotransmitter release, adjustments in the strength of synaptic connections, and perhaps the development of circuits that cause us to specifically avoid hot stoves. What essential role in governing behavior does conscious pain play that isn’t being played by these mechanistic “robotic” control capacities?  It isn’t at all obvious what phenomenal consciousness – the subjective, qualitative feel of experience – is for, causally speaking.[6] 
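To see how little the thought experiment presupposes, here is a minimal sketch of such “hot stove” circuitry, written in Python purely for illustration. The class name, propensity weights, and learning rate are hypothetical stand-ins, not claims about how real robots or brains are built:

```python
import random

class HotStoveCircuit:
    """Toy damage-avoidance learner: a damage signal lowers the
    probability of repeating whatever behavior produced it."""

    def __init__(self, behaviors, learning_rate=0.5):
        # Every behavior starts equally likely; weights shift with experience.
        self.propensity = {b: 1.0 for b in behaviors}
        self.learning_rate = learning_rate

    def choose(self):
        # Select a behavior with probability proportional to its weight.
        behaviors = list(self.propensity)
        weights = [self.propensity[b] for b in behaviors]
        return random.choices(behaviors, weights=weights)[0]

    def register_damage(self, behavior):
        # "Withdraw": weaken the propensity to repeat the damaging behavior.
        # Nothing here requires a felt experience of pain.
        self.propensity[behavior] *= (1 - self.learning_rate)

robot = HotStoveCircuit(["touch_stove", "avoid_stove"])
for _ in range(20):
    behavior = robot.choose()
    if behavior == "touch_stove":        # damage detected
        robot.register_damage(behavior)
print(robot.propensity)  # the weight on "touch_stove" decays toward zero
```

The point of the sketch is simply that avoidance learning can be a purely mechanistic update rule; whether any subjective feel accompanies the update is a further question.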

Philosophers and neuroscientists say, well, maybe pain just is a certain subset of physical processes in action, for instance the firing of C-fiber neurons, or maybe it just is the carrying out of certain damage-minimizing functions, whether performed by neurons or silicon chips. Or, perhaps it’s something extra produced by physical processes or functions.[7] But on all these accounts, pain per se still doesn’t seem to play an irreplaceable role in controlling behavior, since again, it’s the neural mechanisms that actually produce movement. In which case, to get back to Glannon’s thesis, perhaps we’re mistaken to think that conscious experience, even that which accompanies higher cognitive functions, has much to do with giving us the behavioral control required for responsibility.

To continue our thought experiment, imagine our robot is designed to learn appropriate behavior by being responsive to voice commands. If it “misbehaves” – for instance, rudely cuts in front of a guest to get us a drink – we say “no, robot, no!” which triggers hot stove circuitry, making it less likely to be rude in the future. Similarly, saying “good robot!” upon appropriate behavior will strengthen propensities we want to encourage, such as common courtesy. Now, we don’t have to suppose our robot is conscious to see that, in a primitive sense, we’re holding it responsible and accountable – applying rewards and sanctions – in order to shape its behavior. 
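The same propensity-adjusting trick extends to verbal feedback. Again, a purely illustrative sketch, with arbitrary utterance strings and update factors standing in for whatever the real circuitry would be:

```python
import random

# Initial propensities for two ways of delivering a drink.
propensity = {"cut_in_front": 1.0, "wait_politely": 1.0}

def act():
    # Behavior is sampled in proportion to current propensities.
    behaviors = list(propensity)
    weights = [propensity[b] for b in behaviors]
    return random.choices(behaviors, weights=weights)[0]

def hear(utterance, behavior):
    # Sanctions weaken a propensity; praise strengthens it: rewards and
    # sanctions doing their work with no consciousness presupposed.
    factor = 0.5 if utterance == "no, robot, no!" else 1.5
    propensity[behavior] *= factor

for _ in range(20):
    b = act()
    hear("no, robot, no!" if b == "cut_in_front" else "good robot!", b)

print(propensity)  # rudeness suppressed, courtesy reinforced
```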

This suggests that our responsibility practices vis-à-vis each other make sense even if consciousness turns out not to play an essential role in regulating behavior. As an agent with very complex and sensitive control capacities instantiated by my brain – the deterministic result of my genetic endowment interacting with my upbringing – I’m exquisitely responsive to the prospect of rewards and sanctions, whether verbal, monetary, or otherwise. Moral norms control me in ways that only my properly functioning neural wiring makes possible, which is why it makes sense to hold me responsible. Rewards and sanctions are justly applied to me since I have those neural capacities that define moral agenthood. Those with defective capacities (the insane) or undeveloped capacities (children before the age of reason) aren’t capable of having their behavior controlled in this fashion, so it’s unjust to hold them responsible as moral agents. Consciousness doesn’t explain this crucial moral distinction (since we’re all conscious); only the differing control capacities instantiated by specific brain structures, for instance the frontal cortex, do. When assessing criminal and moral responsibility – distinguishing the mad from the bad – we can use as benchmarks the neural parameters of normal adult self-control, as suggested for instance by neurophilosopher Patricia Churchland.[8]

But even if the capacity for responsible agency is based in neural mechanisms, we are of course essentially conscious beings, capable of joy and suffering. Since we want to minimize suffering, our responsibility practices should be as humane as possible. Understanding that people are fully determined to become who they are helps undercut retributive attitudes based on the idea that people just choose their bad character and actions, using a free will that transcends causality.[9] Acknowledging the causal roots of responsible action, we’ll also want to promote physical and social conditions that increase the likelihood that children grow up with intact, norms-responsive brains,[10] and that they’ll want to behave ethically. Both our inclination to punish as a first resort, and the need to punish as a last resort, might diminish should we adopt a fully naturalistic, physicalist view of ourselves. Thus can neuroscience contribute to the creation of a less punitive society. 

 - TWC, November 2006

Notes

[1] Greene, J. and Cohen, J., For the law, neuroscience changes nothing, and everything, Philosophical Transactions of the Royal Society of London B, 359 (2004), 1775–1785.

[2] Dennett, D., Quining qualia, in Marcel, A. and Bisiach, E. (Eds.) Consciousness in Contemporary Science, Oxford University Press, 1988.

[3] Crick, F., The Astonishing Hypothesis: The Scientific Search for the Soul, Scribners: New York, 1995.

[4] Hume, D., An Enquiry Concerning Human Understanding, Section VIII, “Of Liberty and Necessity,” 1748.

[5] Kim, J., The mind-body problem at century’s turn, in Leiter, B. (Ed.) The Future for Philosophy, Oxford University Press, 2004.

[6] See for instance the Journal of Consciousness Studies special issue on epiphenomenalism, Volume 13, No. 1-2, January-February 2006.

[7] For an overview of theories of consciousness, see Blackmore, S., Consciousness: A Very Short Introduction, Oxford University Press, 2005.

[8] Churchland, P., Moral decision-making and the brain, in Illes, J. (Ed.) Neuroethics: Defining the Issues in Theory, Practice, and Policy, Oxford University Press, 2005.

[9] Clark, T., Science and freedom, Free Inquiry, Vol. 22, No. 2, 2002.

[10] Farah, M., Noble, K., and Hurt, H., Poverty, privilege and the developing brain: empirical findings and ethical implications, in Illes, J. (Ed.) Neuroethics: Defining the Issues in Theory, Practice, and Policy, Oxford University Press, 2005.
