
The Extraordinary Future: Chapter 3

Winfred Phillips: Author

Introduction

The extraordinary future scenarios that have a human transferring his or her mind to a computer or robot assume that such a human will be able to continue existence as a person after the transfer, though they don't always use the term "person." This assumption seems needed because otherwise the transfer would not be a way to achieve personal immortality.

In this chapter I consider whether a computer or robot is the sort of thing that could have a mind, be a person, and so allow continuation as a person. In order to consider this we will first have to consider some views on what the mind is and what a person is.

I note that the issue discussed in this chapter, of whether a computer or robot could be a person, is not the same as the issue of personal identity discussed in a later chapter. The issue of personal identity concerns the conditions under which the post-transfer existence maintains the identity of the specific individual person. So in this chapter I consider whether after the transfer the robot would be a person at all; in a later chapter I will consider whether the robot would be the same person.

Computer Consciousness in the Extraordinary Future

Our authors clearly believe that the post-transfer existence will be a continuation of existence as some sort of person. This view emerges in their discussions of the nature of the mind, subjective experience, and consciousness. As usual, however, not all of our authors agree in every detail. Sometimes it is said that computers will be conscious when they match the computing power of the human brain. Sometimes it is said that while we can't be sure that computers at this level will be conscious, surely computers will be conscious by the time they have far exceeded the computing powers of the brain. Or it might be that we will not know in either case, but we should give robots the benefit of the doubt and assume they are conscious. Occasionally one comes across the suggestion that there will be no "fact of the matter" about whether robots are conscious--rather it calls for a decision and not a discovery.

Kurzweil distinguishes between intelligence and consciousness and thinks there is a fact of the matter about whether an organism has each. Intelligence is "the ability to use optimally limited resources to achieve goals such as survival and communication" (Kurzweil, 1999, p. 74). Clearly robots will be intelligent.

A more difficult question is whether they will be conscious. Kurzweil notes the diversity of opinions on what provides for human consciousness and whether nonhuman animals are conscious. Some theories explain human consciousness as due to an immaterial soul, while others object that it arises purely naturally from the brain. Kurzweil's sympathies are with the latter view, though he does not explicitly say this. He identifies or at least links consciousness with the subjectivity of experience of a person. Apparently it is not enough to say that consciousness is just a process responding and reacting to itself; this fails to explain subjective experience. A deaf person cannot have the same subjective experience of a Chopin prelude as can a hearing person, despite whatever the deaf person may know about music. Similarly, a theory of color perception is not the same as the experience of redness, and the inverted spectrum problem is not unintelligible (what I see as blue could be what you see as red, etc.). Kurzweil doubts that insects have subjective experience, but he thinks some animals such as higher mammals are conscious. In the end he clearly believes that there is a fact of consciousness and subjective experience, but he does not claim to prove that there must be a purely physical basis for it. Noting the variety of opinion, he disappointingly states that "all of these schools are correct when viewed together, but insufficient when viewed one at a time." He claims that a synthesis of the views is where one would find the truth, but he also recognizes the problem with this position in that the schools conflict with one another (Kurzweil, 1999, pp. 55-61, 65).

I note that what Kurzweil calls "subjective experience" we will later refer to as "phenomenal consciousness." I would also point out here that while it sounds noble to be fair to different perspectives, as Kurzweil is wont to do, it just leaves his audience confused when he takes the tack that all perspectives have some truth, that we need a higher synthesis, and so on. Reading between the lines, he is clearly trying to be a materialist who believes that subjective experience has its basis in the physical matter of the brain. He does seem to realize, however, that there might be a problem in claiming that subjective experiences just are states of the brain.

Given Kurzweil's unclarity on the issue of the basis of human consciousness, it is not surprising he is not absolutely definitive on whether robots will be conscious, though again this is where his sympathies seem to lie. Kurzweil predicts the time when we will have a massive neural net, built from silicon and based on a reverse engineering of the human brain. Its circuits are a million times faster than human neurons, and it can "learn human language and model human knowledge." It develops its own conceptions of reality, and on its own blurts out that it is lonely. We will ask whether this robot is a conscious agent (Kurzweil, 1999, pp. 51-52). In the end we will come to believe that such robots are conscious much as we believe the same of each other. Their minds will be based on "the design of human thinking," and "they will embody human qualities and claim to be human. And we'll believe them" (Kurzweil, 1999, p. 63). These remarks suggest that Kurzweil really does think that robots will be conscious, though he candidly concedes that we will not have proof of this.

The position of Paul and Cox is stronger than that of Kurzweil in insisting that advanced robots will be conscious. Paul and Cox recognize the diversity of opinions about the basis of human consciousness and argue against the claims of those who would give it anything other than a natural physical basis. The natural physical basis of human consciousness, they feel, argues for the presence of consciousness in robots more intelligent than humans. But, reminiscent of Kurzweil, they recognize that they can't really prove that robots will be conscious and refer to it as "The Primary Assumption."

Let's describe their ruminations on this issue in a little more detail. Are consciousness and intelligence linked? Not necessarily. Much of the thinking we do is unconscious. Great intelligence may be displayed without consciousness, as when someone wakes with a solution to a problem he or she had been pondering before going to bed (Paul & Cox, 1996, p. 170).

Not every universal Turing machine is intelligent, because computers that depend on outside intelligence to program their actions are not inherently intelligent. But computers that can initiate their own activities will be intelligent. The advanced robots in question will display intelligence, which is exhibited when a human or computer, as an information-processing system, does something it was not initially programmed to do. Such robots, like humans, will be intelligent to the highest level (cognition) (Paul & Cox, 1996, pp. 36-37).

So, like Kurzweil, Paul and Cox start a consideration of robot consciousness with the claim that robots will at least be intelligent. Still, they recognize, one might question whether these robots are conscious. They note that we all know what consciousness is but can't define it. It involves being aware that we are aware. It is subjective, and a process, not a thing. Consciousness may come in degrees, with one being more conscious when walking down a dark street and less so when daydreaming. Other people are assumed to be conscious, but the authors doubt inanimate objects are. Non-human animals may be conscious but less so than humans are (Paul & Cox, 1996, pp. 156-158).

In humans, consciousness seems to be distributed throughout the brain rather than being located in a single point between the ears and behind the eyes. The processing of sound, vision, awareness of consciousness, etc. are all in different parts of the brain. Destroying a part of the brain usually causes loss of awareness of only that which is processed in that part (Paul & Cox, 1996, p. 165).

Memories are stored and recalled in synapses, but consciousness appears to be a function of intercommunicating neurons. To produce awareness, neurons must work in cooperation and fire in rhythm at a common frequency, which seems to be about 40 cycles per second. They may produce a standing wave that lasts a fraction of a second, forming a very short-term memory that acts as an instant of conscious awareness. Consciousness depends on the coordination and timing over large areas of the brain and among great numbers of neurons (Paul & Cox, 1996, p. 167).

As mentioned, the authors believe strongly that the mind and consciousness have a purely physical basis and argue against other views. A traditional view is that the mind, an immaterial soul, is independent of the brain. Consciousness therefore is not a product of the brain. (They are referring to what is sometimes called "substance dualism," with the mind or soul a distinct immaterial substance, though our authors don't use that term for it.) As science advances, this position seems increasingly unlikely to be correct, because damage to the brain seems to affect the mind. And if the soul or mind were independent of the brain, why would we have been given or evolved so complex a thing as the brain (Paul & Cox, 1996, pp. 173-174)?

Paul and Cox show they are also aware of other theories. Penrose's quantum dualist view is considered briefly but rejected as implausible, partly because it would require that a neuron do a trillion trillion calculations each second. In any event even if Penrose were correct, he has not shown that one couldn't build such a quantum machine, if that is what the brain is, to produce artificial consciousness (Paul & Cox, 1996, pp. 174-178).

The "mysterians" are also considered, and here the philosopher Colin McGinn and others are cited. It is not clear that Paul and Cox understand the subtleties of the mysterian arguments, but give them credit for at least considering the position. The mysterians point out that a totally colorblind neuroscientist may understand all about color and yet not know what colors are. Recall that Kurzweil was sympathetic to this argument. The mysterian position is quickly rejected however. Paul and Cox agree that a neuroscientist who understands how neurons generate conscious thought may still not fully understand the subjective nature of consciousness. But they argue that understanding what consciousness is is not a necessary requirement for the manufacture of conscious thought. We can do a lot with gravity without fully understanding it (we don't). Similarly, the fact that we do not understand the nature of consciousness or how it is generated from the physical brain does not mean that we cannot produce artificial consciousness in advanced robots (Paul & Cox, 1996, pp. 178-179). It looks like Paul and Cox have missed the point of this line of reasoning, as I will discuss later.

They also consider the position of the philosopher John Searle. They interpret Searle as arguing that computer simulations of awareness are no more aware than computer simulations of airflow around a wing generate real lift. "Searle also disputes whether machines that use symbols to represent the world, which is what computers do, can recognize reality rather than symbols." Searle claims that brains have some unidentified quality not found in any other machine. But Paul and Cox believe the serious flaw in this argument can be seen by examining how the brain works. The brain itself does calculations to simulate reality via symbols. If the brain can do it, why not a machine that does what the brain does? They claim Searle himself acknowledges this possibility (Paul & Cox, 1996, p. 181). If you want to listen to philosophers on consciousness, they say, read the work of the Churchlands (Paul & Cox, 1996, p. 182).

Having argued for a physical interpretation or basis of human consciousness, Paul and Cox turn to robot consciousness. There is no good reason to think that only biological systems and organic materials can produce consciousness. Scoffing at religious accounts of the creation of the brain, Paul and Cox hold that there is no evidence that the brain is the result of top-down design. If the brain is a natural mind-making machine, the system should be open to artificial replication, by reproducing the bottom-up evolution that created the natural version (Paul & Cox, 1996, p. 183). Current computers are not known to be conscious of their own existence, but there is every reason to believe that artificial consciousness will be present or will naturally evolve from machines when they are powerful and smart enough, much as it did for organisms becoming conscious on earth. There is no good reason to think they will be "mindless automatons." Rather, conscious thought similar to but not necessarily identical to the sort that humans possess should be achieved by such machine artifacts (Paul & Cox, 1996, p. 130).

But in the end Paul and Cox recognize, as does Kurzweil, that they cannot prove that advanced robots will be conscious. As robots become more advanced, and pass versions and variations of the Turing test, it will be hard to deny that they are conscious, but the only way to really know will be to "send a human in" by transferring in part or in whole the human's identity and consciousness (Paul & Cox, 1996, p. 162).

Kurzweil and Paul and Cox are similar in holding that there is a fact about whether an organism is conscious (it is metaphysically determinate, to use my earlier phrase). While they cannot prove that advanced robots will be conscious, they take the natural physical explanation of the origin and nature of human consciousness, together with the absence of any good argument against machine consciousness, to support the assumption that such robots will be conscious. Their arguments are relatively straightforward, and even an opponent can get the basic sense of their position.

It is more difficult to make out Moravec's position on this matter. Some of his comments appear to indicate that there is a fact of the matter about whether robots will be conscious, that it is metaphysically determinate, but others of his comments lead one to think it is all just a matter of interpretation, with no real fact of the matter or conflicting facts of the matter. He has a specific and odd view of the role of "interpretation," and this clouds the issue.

It is even hard to make out in what sense we should think we are conscious. Recall the theological objection to the Turing test mentioned in the second chapter, namely that robots won't be able to think because thinking requires an immaterial soul. In his rejection of this view, Moravec notes that the belief in a soul derives from our subjective sense of consciousness--"of being who we are inside our bodies." He thus seems here to believe that there is something we have--a subjective consciousness, a subjectivity of "being us"--though this is not an immaterial substantial soul. He talks supportively of the mechanistic idea that human consciousness arises solely from physical events in the brain and body (Moravec, 1999, pp. 75-76).

So it looks like he grants that we have a consciousness, a subjectivity. What is this consciousness that we have? Some theories of consciousness suggest that such awareness results from a narrative we conduct within ourselves (Moravec, 1999, pp. 114-115). Moravec mentions Daniel Dennett's view that consciousness is a kind of illustrated story involving sensory and motor circuitry and language. The components of sensation are recounted in an after-the-fact story (Moravec, 1999, p. 122). As language developed, we could tell stories about physical and psychological events, and at some point this storytelling became internal. It really seems to be an "inaccurate veneer rationalizing a great deal of unconscious processing." But without consciousness, there are no beliefs, no sensations, no experience of being, and "no universe" (Moravec, 1999, pp. 194-195). Moravec seems sympathetic to this position. So again, Moravec writes as if there is a fact of the matter about whether humans are conscious (we are), though there is no need to believe consciousness has anything other than a natural physical origin.

In contrast to such a materialist metaphysics, however, his claim that without consciousness there is no universe leaves one suspicious that some form of idealism lurks around the corner. Moravec then starts talking in a confusing fashion as if whether an organism is conscious is just a matter of "interpretation." This sounds like there is no fact of the matter about whether an organism is conscious--it is rather a matter of decision (and not discovery). It would seem hard to establish that we are conscious because someone could always interpret us as not conscious.

With respect to mind, Moravec thinks that the materialist finds a place for mind because "large-scale patterns of brain activity can be interpreted as abstract mental events." (It is our minds that make that abstraction, the same minds that are "woven of those abstractions.") The mind seems to be seen here by Moravec as an abstract entity like a number. For Moravec, abstractions like numbers and minds exist independently of their occasional representation in the physical world. Death of the body doesn't destroy sensations or consciousness, which are "properties of the abstraction"; the only thing lost would be the correlation between consciousness and the physical world (Moravec, 1999, pp. 75-77). Interpretations are not wholly subjective, because interpretations exist in their own realm independent of us. From these comments, Moravec's position seems to be that whether an organism is conscious is not a simple fact. An organism can be interpreted as conscious and as having a mind, which means that one interprets the organism as instantiating the abstract properties/entities of consciousness and mind. Similarly, one "interprets" the parking lot as containing five cars, meaning that one interprets the number five, an abstract entity, as instantiated in the cars in the parking lot. When we die, our consciousness and minds do not cease to exist (since they are still abstract entities), though their correlations with our physical bodies do. Moravec has traded substance dualism for a materialism combined with a view which he believes draws upon Plato. As far as I can tell, he thinks the abstract mental properties, as well as the interpretations that interpret physical events as these mental properties, exist in this independent "Platonic" realm. (Of course one wants to point out that even if many interpretations exist, the fact that they might all exist, in his view, in a Platonic realm does not show them to be all equally correct or their claims all true.)

When robot intelligence matches or exceeds that of the human brain, will robots be conscious? It seems so, but because of the above issue of the role of interpretation, it is not entirely clear what Moravec's reply to this question would be, or what his reply would mean. On the one hand, some of his comments suggest that there will be a fact of the matter about this but that we will not know the answer (metaphysically determinate but epistemologically indeterminate). So we might attribute consciousness to such robots but we can't be sure that we are correct. This is a common view that we have seen before in other thinkers. But this view is hard to square with his comments about interpretation; others of his comments suggest that there really is no fact of the matter (metaphysically indeterminate), though we can choose to interpret robots as being conscious if we find it useful. So he seems to waver between the position that robot consciousness is (metaphysically) determinate and the view that it is not. In either event, though, we can talk about robots as conscious.

Moravec discusses some of this when he considers the objection to the Turing test called the argument from consciousness. Turing noted that although we may each know ourselves to be motivated by thoughts and feelings, we don't have direct evidence of other people having these, since we are not those people. (Again, this is an epistemological problem, and it does not mean that consciousness is metaphysically indeterminate.) Turing suggested that future machines that behave intelligently and tell us how they feel and why they act will be accepted as conscious beings (Moravec, 1999, p. 82). Such talk surely suggests that there is a fact of the matter here, though we can't directly determine what that fact is about other people or robots.

Now consider that the robot will claim that it has beliefs, consciousness, and feelings. The robot's "psychological-social models" are formulated in terms of intentions and feelings, and when a robot analyzes its behavior, it creates beliefs about its own feelings. No problem so far. Moravec wonders whether at that future time the robot will have genuine feelings, only believe that it has them, or only behave as if it has them; here's his big chance to set us straight, but he provides no clear answer as to which of these options to prefer. The trichotomy he has set up leads one to think there will be a fact of the matter, though we just won't be able to find it out. But instead he notes we might ask the same questions of ourselves! How can this be? Apparently it is because feelings, beliefs, and thoughts are arbitrary interpretations of physical events. One can interpret us as having those things or not, with apparently all interpretations legitimate. Recall that Turing thought most people would say that robots who communicated did have feelings (Moravec, 1999, pp. 82-83). Moravec agrees that most people are likely to take the robot as a real person (Moravec, 1999, pp. 75-77).

As for consciousness being a narrative we conduct within ourselves, while robots will have language ability, "there seems little point in programming them to reason by talking to themselves." Some configurations of the "four-layer action/conditioning/simulation/reasoning control systems" in a robot will make the robot "more thoroughly conscious" than the average human, but similar to the case of robot emotion, this appears to mean just that the robot would be cautious and even indecisive, constantly checking and monitoring itself (Moravec, 1999, pp. 114-115). So apparently if all we mean by consciousness is a kind of checking process, they will be conscious, but it will not go as far as any kind of internal talking.

The Searle-like objection to the possibility of robot consciousness is that a simulation of understanding, consciousness, etc. is not the same as real understanding, consciousness, etc. Moravec seems to disagree. In what sense does a simulation of something really exist? He seems to say that a simulation does not depend on any external interpretation in order for it to exist as a simulation. The interpretation of a simulation is a "dispensable external." But he also holds that any simulation can be found in any sequence, given the right interpretation, which seems to make the simulation dependent on an external interpretation for it to be a simulation rather than just some process that does not simulate (Moravec, 1999, pp. 196-197). He seems to try to resolve this by relying on what he thinks of as his Platonic realism about interpretations--"every interpretation of a process is a reality in its own right." The abstract relationships that constitute the mind (including its own self-interpretation) exist independently, and a robot or biological brain is just a way of "peeking at them." Some worry that future robots will be like zombies--intelligent, feeling beings but with no internal sense of existence. Moravec replies that while there are interpretations of the human brain or robot as mindless, there are others that allow us to see a "real, self-appreciating mind" (Moravec, 1999, pp. 196-197). Here we go again. Moravec claims that anything can be interpreted as possessing any abstract property, including consciousness and intelligence. Movement of atoms in a rock can be seen as the operation of a complex, self-aware mind (Moravec, 1999, p. 199).

Right now our consciousness is dependent on our bodies and physical laws in this world. But when we die the rules "surely change." Our consciousness continues to exist in some other possible worlds, and we cannot say exactly how our consciousness is related to body in the next simplest world. Perhaps we will find ourselves "reconstituted in the minds of superintelligent successors" or in AI programs (Moravec, 1999, p. 210).

So will robots be conscious or not? In the really interesting sense of "consciousness" as phenomenal awareness, Moravec seems to be claiming that they will be because there will be an interpretation of them in which they are. This may not sound like much but it is his view of how humans are conscious too, and he thinks we are conscious. It would seem that robots will be conscious in really no different a sense than that in which we are conscious, because we both will instantiate an independently existing interpretation under which we are conscious.

So one wants to say that Moravec holds that robots will be conscious in any sense we are conscious. But as we have seen, there are problems understanding his view, and he doesn't provide answers to natural questions that may arise about what he means. For example, there would be an interpretation of my brain in which I am not conscious, and presumably this interpretation also exists independently of me, though it seems to conflict with the interpretation in which I am considered conscious. Is it that this interpretation is not instantiated? What if someone chooses to interpret me in this way, is it then instantiated, making me both conscious (my interpretation) and not conscious (the other interpretation)? I do not understand why this conflict of interpretations would not force Moravec to question the meaning of what he is saying. We wind up with a position in which anything can be interpreted as anything, with me no more conscious than a rock if that is how we interpret me and the rock.

Humans as Persons

We now have the views out on the table and the consensus seems to be that robots will be conscious. The position that surfaces in at least most of the comments of our authors is that consciousness involves some sort of subjective experience and phenomenal awareness. But there is also a consensus that this consciousness can arise purely from the physical, as it does in us.

As a way of leading into a discussion and appraisal of the views of our authors, I must deal at the start with Moravec's comments about interpretation. I am just going to have to ignore most of them. We commonly take it that there is an external world independent of our perception and conception. We are not just constantly dreaming, brains in a vat, under the deception of an evil genius or demon, etc. We use terms such as "facts" and "states of affairs" to express the idea that the world just is a certain way independent of our interpretation. Our biases and opinions may shape how we feel about the world, but we don't assume that our conceptual framework forms the world in any significant sense.

I am just going to claim without argument that whether robots are conscious is a question of fact (metaphysically determinate), much as we assume that there are other facts about the world that we don't create by our "interpretations." Assuming we can get clear on what terms such as "consciousness" mean, whether robots are conscious is not just a matter of "interpretation." This is just to say that the issue is metaphysically determinate, whether or not it is epistemologically or linguistically determinate. Questions of fact naturally call for discovery, not decision or interpretation. It may be that a particular question of fact cannot be settled because we cannot carry out the relevant discovery, perhaps due to physical limitations. If the matter of fact is of great importance then we might need to be practical and make a decision about which position on the matter of fact we will agree to hold. But this does not change the matter of fact into some vague matter of interpretation in which anything goes and there really is no fact of the matter. And it may be that consciousness comes in degrees, as in some sort of panpsychism. I am not ruling this out. There may be borderline cases in which we are not sure how much consciousness an entity has. But this does not mean that in the case of humans, for instance, whether we are conscious is just a matter of arbitrary interpretation. The same holds for robot consciousness.

As I say, I am just going to assert this claim. It wouldn't even be an issue here except for the fact that Moravec's comments make one wonder if he is trying to claim something contrary. My view here is basically the claim that questions of "what is the case" (metaphysics) are distinct from questions of whether and how we know something to be the case (epistemology). In this thesis I cannot possibly mount an extended argument to try and support this view, even if I could present a good version of such an argument. Such an argument would have to work its way through realism vs. anti-realism, hermeneutics, modernity and post-modernity, truth and relativity, pragmatism, etc. But I think it clear that the view I adopt here is a kind of realism that pretty much everyone not in the grip of an ideology holds anyway. This thesis is about robot consciousness and mind transfer, and I cannot possibly spend half of it discussing realism and anti-realism or idealism.

This means that I am not going to try to unpack what Moravec could possibly be thinking when he makes claims that consciousness is a matter of interpretation, with all interpretations existing in their own Platonic realm. Moravec is a very inventive fellow, but one has to agree with McGinn that on this matter Moravec's views are "bizarre, confused, and incomprehensible" (McGinn, 1999). Given that whether robots will have minds and be conscious will be treated here as a matter of fact, we want to know whether we have good and perhaps conclusive reasons for thinking they will.

This chapter will try to explore basically two complex and interrelated topics. First, in what sense do humans have minds and consciousness? This is part of the larger issue of what it means to say that we are persons. While it could be argued that there are many essential aspects of personhood, such as having a mind and consciousness, having free will, being creative, and being capable of moral agency, we cannot address all of these fully. I will devote the most time to the mind and consciousness issue. Some time will be devoted to the free-will issue. I will make only passing comments about creativity and moral agency. Second, is it plausible to think that robots will have minds, be conscious, and be persons? Our authors think it is, and this view is what we need to examine.

We turn to tackle the first complex question, then, that of the meaning of human mentality, consciousness, and personhood. As a way of leading into this topic, and to avoid later confusion, we need to make the distinction between "human" and "person." These two terms do not mean the same thing. It seems that there could be a person who was not human. For instance, imagine an alien visitor from outer space who displayed advanced intelligence, sophisticated linguistic ability, a sense of morality, and behavior characteristic of emotions and feelings. We might think that surely this creature is a person even if it is not human. And perhaps not everything human is a person. Imagine someone losing an arm in an industrial accident; the detached arm is human, but it's not a person. Well, you say, it's not a human being either. Fair enough--imagine a horrible accident that leaves a human being in an irrecoverable, permanent vegetative state, with no consciousness. We might think that here we have a human being that unfortunately is no longer a person. Or to step into a controversy that I will not discuss further, consider the case of a well-developed fetus. There might be some slight controversy over whether this is a human being (it is clearly human), but there is even more controversy over whether the fetus is a person (though it is clearly a potential person). Or at least there should be discussion about this issue. Unfortunately, it's rare to hear anyone outside of professional philosophers make the kind of distinction between human and person that we need to even start clarifying the abortion controversy. I think it clear that the abortion controversy would be more focused were the participants to note this distinction between a human being and a person, though this alone might still not resolve the controversy.

So we have the distinction between a human being and a person. In the situation of mind transfer, we want to transfer as persons, though we might leave behind being humans. What characteristics are necessary for personhood? I'm not sure we can reach a consensus, but we would probably agree that typically a person possesses rationality, consciousness, free will, moral agency and responsibility, and creativity. I am not going to try to provide an exhaustive list of possible attributes of personhood. These will do. We would be hard pressed to imagine continuing existence as any kind of person without them. Maybe if I lost the ability to throw a baseball or distinguish between red and orange I would still be a person, though if I had no feeling or emotion one might wonder whether I was a human person. But would I be a person at all if I had no conscious experience? So with respect to transferring into a robot, at a minimum we want to ensure that the robot is capable of characteristics or attributes or properties like those above that we think are essential properties of persons per se (whether or not more is needed for them to be specifically human persons).

What is the mind? We need to discuss this because it is allegedly the thing being transferred and because the robot will need one to be a person. Philosophers and others have long pondered the mind and its relation to the body. This general topic is sometimes called the "mind-body" problem. Our authors think they hold a particular view about the relation of the human mind to the body, but they may be oversimplifying matters. We will want to consider how various positions on the mind-body problem are compatible with their assertions about the extraordinary future.

I'm not going to get into all the possible ways the mind has been defined historically. It can be seen as a substance or thing, a faculty, a collection or set of elements, etc. Some thinkers believe there is a conscious part and an unconscious part. Some think it is distinct from the brain while others see it as just the brain or a part of the brain. The mind is usually taken to include thoughts, beliefs, desires, wishes, perhaps dispositions, etc.

To give a rough characterization of the views that follow, the substance dualist sees the mind as an immaterial substance, the type-identity materialist believes the mind is the brain or part of the brain, the behaviorist believes there is no mind as we traditionally understand it, the eliminative materialist believes that there really is no mind or that it is very different from what we commonly assume, the functionalist sees the mind not as a type of thing but as a way of describing a function or set of functions, and the property dualist sees the mind as the brain or part of the brain but with some nonphysical properties. As I say, this is a crude characterization and some thinkers holding the above views might object to this way of putting it, but it gives you an idea of the landscape ahead.

Substance Dualism

With respect to the human mind-body problem, as I mentioned, there are a number of classic positions. Unfortunately for our authors, they seem not to be very aware of some of them, but we can help them out. Two qualifications must be made. First, I will describe just the basic positions, not the many subtle variations within these basic positions. The basic views relevant to our purposes are substance dualism, property dualism, the identity thesis version of materialism (type-identity materialism), eliminative materialism, behaviorism, and functionalism. Second, I will describe the basic positions and even provide some commentary on reasons given for these positions, but I will not try to decide which position, if any, is the correct one.

If you ask the proverbial "man on the street" questions about what a human is, you might eventually come to the conclusion that the respondent thinks that humans are something more than just a physical body. The opinion is likely to be that humans have something other animals do not--a soul, mind, spirit, self, etc. Opinions are probably going to be confused and confusing about what this "extra" is and its relation to the physical body, but many people have the intuition that we are more than just bodies in space and time. The claim might also be that after death, this immaterial aspect of us survives.

The position that people are or have some sort of immaterial soul or substance has been popular historically among some of the great thinkers in philosophy and religion, as well as among everyone else. Our authors constantly disparage it as a remnant of religious superstition. For our purposes we can take as a representative of this position the substance dualism of Rene Descartes. Descartes claimed he could imagine his mind continuing to exist without the existence of his physical body. He took the essence of himself to be thought and the existence of his thinking to demonstrate his very existence ("I think, therefore I am" is one way he put it). He is essentially a thinking thing but not essentially a material, physical thing, though in fact on earth his mind is related to his physical body.

Descartes is considered a substance dualist on this issue because he holds that minds and bodies are two basic, irreducible kinds of substance. The label "dualism" implies a distinction between only two entities, but I will use the term for any position similar to traditional substance dualism. I recall seeing religious tracts claiming that human beings comprise mind, body, and soul, or spirit, body, and soul, and so forth, and such positions imply a distinction among more than two entities. On some uses of the term "soul," the soul is distinct not only from the body but also from the mind. I will use the term "substance dualism" for any such position which holds that a human comprises one or more distinct immaterial substances currently joined to a physical body.

Substance dualism is not now as popular among philosophers as it used to be; in fact, many scientists and most philosophers have long since abandoned it, either because they see it as an outmoded vestige of religion or because it seems to go beyond the (scientific?) evidence. Of course, some Roman Catholic and conservative Protestant theologians and some philosophers still hold the view. And I have mentioned that it is a common view of the "man on the street."

Some modern thinkers don't find substance dualism realistic because they think science really provides the correct approach to the question of what we are, and they think science can do so on a purely material/physical basis. That is, there is no need to resort to claiming we are or have an immaterial mental substance. So, on this view, the principle of parsimony dictates that we adopt a "physicalist" position on this issue. Substance dualists are seen as religious and anti-scientific reactionaries. It's pretty clear this is the real view of the extraordinary future authors we have considered, even though for example Kurzweil pretends to be sympathetic to "the truths in all views."

But in criticizing substance dualism, considerations other than religion are brought in. One might claim that, after all, aren't mental events very strongly correlated with brain activity? When someone's brain is damaged, this seems to affect his or her mind. This suggests that in some way the mind just is that brain activity and not a distinct substance. Actually, substance dualism ran into problems long before this century in trying to account for the seeming causal interaction between mind and body. A mental event, such as a thought, seems to be able to bring about a physical event, such as my moving my right arm. I think about raising my arm, will to do it, and it just goes up. How exactly does an immaterial event or thing like thought or mind accomplish this? How does an immaterial substance exert any causal power on the physical? Or when I stick my hand too near a fire, how does the physical event of the fire touching my arm result in a mental event such as the sensation of pain (philosophers have long been accustomed to taking pain to be the paradigm example of a mental event--the feeling of my arm being in pain conceivably could be caused by electrical activity in the brain even if I didn't have an arm). Presumably a scientist analyzing the chain of events would do so in terms of the physical events, and there seems no place in such an account for the insertion or removal of an extra-physical mental power or energy. It's not as if the physics equations describing the physical events wouldn't balance unless the extra mental force was brought in on one side. Substance dualists might reply that physical to physical causation is mysterious too, so the inability of substance dualists to explain mental-physical causation is not crippling. But this reply seems ad hoc and in the direction of multiplying mysteries, and most philosophers have not been persuaded by it (Hannan, 1994, pp. 11-12).

Type-Identity Materialism

Because of the above problems with substance dualism, a more popular view these days among philosophers, scientists, and intellectuals is some version of physicalism or materialism. Such materialism comes in many different versions. One popular view is called "the identity thesis." On this view, mental events per se are identical to physical events in the brain/nervous system. We don't know at this point in scientific knowledge exactly which mental events are identical to which physical events, but eventually this will be learned. Note that the identity thesis is a much stronger claim than saying merely that mental events are caused by or correlated with physical events. My belief about something, or my mental image of Aunt Martha, or my toothache pain, is identical to a particular neuron or group of neurons "firing" in the brain. But the claim is even stronger than this. Not only is this true in my particular case; for all creatures who have such mental activity, the type of mental event (such as pain or mental imagery) is identical to a type of physical brain event. Thus this position holds to an identity between the type of mental state and the type of physical brain state (type identity), and not just an identity between a particular instance of that mental state in me and a particular physical state in me (token identity), though it holds to the latter also. I will use the phrase "type-identity materialism" to refer to the identity thesis version of materialism.

Many materialists, certainly those holding to type-identity materialism, pursue a reductionist strategy. I will reserve a more precise characterization of reduction for later, but right now we can get a preliminary idea what this means by considering the phenomena of lightning and temperature. Centuries ago these physical phenomena were not well understood. With scientific discoveries, it became known that lightning was really identical to certain kinds of atmospheric electrical discharge. Likewise it became known that a gas's having a particular temperature was merely the molecules of the gas having a certain kinetic energy. One could say that lightning and temperature are reducible to the underlying physical events (Hannan, 1994, pp. 17-18). This does not mean that lightning and temperature fail to exist--the type of event we call "lightning" just is identical to a certain type of atmospheric electrical event.
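For concreteness, the textbook identity being alluded to in the temperature case (a standard result of kinetic theory for an ideal monatomic gas, not a formula drawn from Hannan or our authors) can be written as

\langle E_{\text{kin}} \rangle = \tfrac{3}{2} k_B T

where T is the gas's temperature, k_B is Boltzmann's constant, and \langle E_{\text{kin}} \rangle is the mean translational kinetic energy of a molecule. The reductionist point is that the macroscopic property (temperature) is not merely correlated with the microscopic one; it is identified with it.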

Similarly type-identity materialism claims that mental events and states such as belief and desire do exist, but they will ultimately be seen to be reducible to particular types of physical brain event. Obviously the substance dualist is nonreductionist in this context. (Actually the variation in views on this is much too complicated to get into, and there are some materialists who now think that some form of nonreductive version is more plausible, though they still wish to be considered materialists and not substance dualists or even property dualists. Some of these thinkers might claim to be functionalists.)

Eliminative Materialism and Behaviorism

There are many other variations of materialism that we need not go into. One important version, however, that I will discuss starts out by disagreeing with the identity thesis from the start. Eliminative materialism, in contrast to type-identity materialism, claims that mental states and events don't really exist, though of course we all (falsely) talk as if they do. Recall that the identity thesis described above holds that mental events indeed exist--like lightning and temperature really exist--they are just identical to physical events. Eliminative materialism holds that there can be no identity relations because mental events such as thoughts and pains really in fact do not exist. It is not as if they exist and are identical, or reducible, to physical things. The relevant analogy with mental events, defenders of eliminative materialism say, is not with lightning and temperature but with witches and phlogiston. Witches and phlogiston were both at one time commonly discussed, so it used to be assumed they exist, but we later came to see that there are no such things. It is just the same with mental events such as belief and desire; currently we talk as if they exist but we will eventually come to see they never did.

One might wonder what the real difference is between the position that mental events don't exist (eliminative materialism) and the position that they exist but are simply identical to physical events (type-identity materialism). It may have to do with the matter of whether our terms for mental events refer to anything. When I say that I have a belief, the type-identity materialist holds that the word "belief" really does refer to something, while the eliminative materialist apparently thinks it doesn't.

Obviously the position of eliminative materialism sounds very strange at first. We talk of thoughts and pain and such all the time, and supposedly observe them through introspection. Mental events enter into our lives as causes of action, such as when I say I took my umbrella because I believed it was raining and I desired not to get wet. Certainly our authors talk as if they think we have such beliefs and desires. So how could it turn out that they don't exist? Eliminative materialism claims commonsense "folk psychology" (our ordinary way of talking of beliefs and thoughts as causing our actions) is just wrong about this, and science will realize this years from now when we more fully map out the brain and its happenings. The beliefs and thoughts commonly discussed in folk psychology are either not science or they are bad science. The mind is not only not identical to the brain, it just doesn't exist at all.

Here is a summary of arguments for eliminative materialism. First, as mentioned, "folk" or commonsense psychology, which relies on talk of propositional attitudes (believing that such and such, desiring that such and such), is inadequate as an explanation. (Again, "folk psychology" is the everyday practice of explaining and predicting human behavior by referring to their beliefs, wants, hopes, etc.) For example, folk psychology can't explain sleep, dreams, mental illness, and behavior induced by brain injury. Second is the above argument that claims the best analogy for beliefs, desires, and other alleged mental states is witches and demons rather than lightning and temperature. The unity of science, in which everything ultimately reduces to physics, is unlikely to have a place for current psychology, since it will not be reducible to brain science but instead eliminated as false. Third, for explanation of human behavior, computational cognitive psychology relies on internal causal states that are not classified in terms of propositional content. Therefore they cannot be identified with the beliefs, desires, etc. of commonsense psychology (Hannan, 1994, p. 46).

Opponents of course reply to all these arguments. The fact that folk psychology does not explain everything does not mean it can't explain anything. Computation theory does not explain the behavior of hardware-damaged computers, but this doesn't mean it is worthless in explaining the behavior of properly functioning computers. The notion of intertheoretic reduction presupposed in the unity of science theme is itself vague and controversial. It is unlikely that cognitive psychology will eliminate all reference to states significantly like the beliefs and desires of folk psychology. Simple component states may not have propositional content, but larger states composed of them may be identified as propositional attitudes (believing that, wishing that, etc.) or else not be "cognitive" psychology at all. Built into the notion of cognition is the idea that cognitive beings involve cognitive attitude states (Hannan, 1994, pp. 50-55).

With respect to positive arguments against eliminative materialism, and not just replies, it has been argued that folk psychology can't be all false because we have direct introspective awareness of the beliefs, etc. it discusses. "That we introspect could not conceivably be false, even if beliefs we form on the basis of introspection are often false." Remarks by Patricia Churchland that it could turn out that there is no such thing as awareness, which implies that it could be discovered by future science that we are all in fact unconscious, are dismissed with incredulity (Hannan, 1994, pp. 52-53, 57-67).

A view that seems to share similarities with eliminative materialism is behaviorism, which either denies the existence of mental states (a crude or radical behaviorism) or equates mental state terms with terms about dispositions to behave (logical behaviorism) (Fodor, 1981, pp. 115-117). Both types of behaviorist in effect deny the existence of mental states as traditionally understood and in this respect share an affinity with the eliminative materialist. Most simple versions of behaviorism have by now been abandoned in favor of functionalism (see below) or what has been called "complex behaviorism," which sees beliefs and desires as implicit in other sorts of states and processes that cannot be identified with commonsense psychological states (Harman, 1989, pp. 834-835, refers to Dennett, who seems to hold this type of view).

Functionalism

The functionalist position claims that to understand the nature of thought and mental events we have to think of them as functional states. The easiest way to understand functional states is to consider analogies with ordinary mechanical devices. Some use the example of a thermostat, which can be realized in different physical mechanisms. That is, a thermostat can be made of a variety of materials; the important feature of all such realizations is that given a particular temperature as input, the output from the thermostat is the turning off or on of the heater or furnace. Or consider the example of a smoke detector. What is important about this object hanging in your house is its functional role to detect smoke (and thus fire) and sound an alarm. Some work photoelectrically, by detecting the blocking of a light beam (smoke blocks some of the light), while others detect certain kinds of ions present in smoke (even before it looks "smoky" in the room). What matters for something being a smoke detector is not what it is made of or even the exact mechanism by which it works but the function it serves. Even a forest ranger looking with binoculars from a tower might be considered a smoke detector because he or she serves this function.
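The analogy can be made vivid with a programming sketch. The following Python fragment is purely illustrative and mine, not anything drawn from Paul and Cox or the functionalist literature; the class and attribute names are hypothetical. It shows three "realizations" that differ completely in material and mechanism yet all count as smoke detectors because they fill the same functional role of mapping smoke-indicating inputs to an alarm output.

from abc import ABC, abstractmethod


class SmokeDetector(ABC):
    """Anything counts as a smoke detector if it fills this role:
    given signs of smoke as input, produce an alarm as output."""

    @abstractmethod
    def senses_smoke(self, environment: dict) -> bool:
        ...

    def check(self, environment: dict) -> str:
        # The functional role itself: smoke in, alarm out.
        return "ALARM" if self.senses_smoke(environment) else "quiet"


class PhotoelectricDetector(SmokeDetector):
    # Realization 1: smoke particles block part of a light beam.
    def senses_smoke(self, environment: dict) -> bool:
        return environment.get("beam_obscured_fraction", 0.0) > 0.3


class IonizationDetector(SmokeDetector):
    # Realization 2: smoke ions reduce a small electric current.
    def senses_smoke(self, environment: dict) -> bool:
        return environment.get("ion_current_drop", 0.0) > 0.1


class ForestRanger(SmokeDetector):
    # Realization 3: a person scanning the horizon with binoculars.
    def senses_smoke(self, environment: dict) -> bool:
        return environment.get("smoke_plume_visible", False)


if __name__ == "__main__":
    smoky_room = {"beam_obscured_fraction": 0.6,
                  "ion_current_drop": 0.4,
                  "smoke_plume_visible": True}
    # Three very different materials and mechanisms, one functional state.
    for detector in (PhotoelectricDetector(), IonizationDetector(), ForestRanger()):
        print(type(detector).__name__, detector.check(smoky_room))

The functionalist's claim, on this analogy, is that mental states are like SmokeDetector rather than like PhotoelectricDetector: they are defined by the role, not by the stuff that happens to play it.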

To a functionalist, a mental event or state is really a functional state of producing certain kinds of behavior given certain inputs. For example, dropping an electric piano on your toe (the input) produces pain-behavior as your output (I speak from experience here). Being in pain, or having a belief or other mental event, just is having a certain type of output in response to a certain type of input.

Type-identity materialism was a popular view in the fifties and sixties, but during the sixties and seventies functionalism replaced it as the most popular mind-body theory among professional philosophers. At least two problems with type-identity materialism caused many to abandon it: first, it seems too restrictive, and second, it leaves something out. A clear development of the second problem will wait until I discuss Chalmers and property dualism, but we can talk about the first problem here.

The first problem is that the identity thesis, or type-identity materialism, seems to tie mental events too closely to the types of states that occur in human brains; we want to allow that mental states such as pain could be realizable in different types of brain event, not just those occurring in humans. This objection is usually called the "multiple realization argument." Consider that the identity thesis holds not only that my particular thought is identical to my particular brain event, but that thought types in general are identical to brain event types. The puzzle is that if types of thought are identical to brain state types, how could we ever allow that non-humans (such as Martians, Vulcans, robots, etc.) might have thoughts? Yet we want to allow this. Clearly if we were ever visited by aliens from another planet or somehow found out about them, we might want to allow that such beings could think, though they aren't human. Or consider non-human animals, such as dogs and cats. Aren't we willing to concede that such creatures have the mental experience of feeling pain? But how could this be, if pain as a mental state type is identical to a type of brain event only found in human brains? Cats and dogs in pain may not be in exactly the same physical brain state type as humans are when humans feel pain, but they could still be in pain. It seems possible that humans and other creatures have mental states, though the human brain state types are not identical to those of the other creatures. In short, we should allow that mental states like thoughts, beliefs, desires, and pain are multiply realizable in various physical states, rather than tying them to particular human brain state types via identity with such states.

To many thinkers this multiple-realization argument was a convincing case against type-identity materialism and in favor of functionalism, which I will return to shortly. But before returning to functionalism, let me note that some type-identity materialists believe that they can accommodate multiple realization. Kim presents the most vigorous defense of type-identity materialism against such an argument.

To understand what Kim has in mind, think of what is supposed to be going on in the case of individuals. The type-identity theory wants to say that you and I can be in pain because we can have the same brain-state type, so clearly the type can be held in common among the brains of different individuals, which are not likely to be exactly alike. Of course one of the problems here seems to be that the notion of a brain-state type is vague. In the case of humans there is just going to be some variation in brain structure both among different humans and within the life of any one particular human. Presumably this variation was not thought to be crippling to type-identity materialism. There must have been enough looseness of fit built into the notion of brain-state here to be able to say that the type of pain (the mental type) is identical to a type of brain-state (the physical state) and yet allow such variation in brain-states among humans and within the life of any one human.

But can the brain-state type be loose enough to allow the same brain-state type to be held in common among brains as different as those of dogs, cats, lizards, humans, Martians, robots, etc.? The objector thinks that type-identity materialism is required to say no. Kim replies that the type-identity materialist (whom he calls the "reductive materialist") can allow that such diverse brains realize the same brain-state type. This is because to accommodate such multiple realization one need not rely on a "global" realization but only on a "local" realization that is species-specific.

To understand Kim, just think of what is happening when the same brain-state type occurs among different moments of a human individual's life or among different human individuals. (Allow me a little license in the terminology to simplify the position so that I can present it in a few sentences; Kim might not explain it in precisely these terms.) As explained above, "brain-state type" is loose enough here to allow such different brains to be in the same state and thus realize the same mental state type. Kim wishes to extend this tolerance, which lets us imagine one mental state type realized among brains that are not exactly alike, from variation among and within humans to variation across species. While a mental state type is identical to a brain state type, this still allows the actual realization to be species-specific because the particular way the brain-state type appears can be specific to particular species. This is a "local" realization; the way the brain-state type appears is local to the particular species. To hold that the particular mental state type is identical to a particular physical state type thus allows that physical state type to manifest itself differently for different types of organisms. The fact that the "neural correlates" of mental states are species-specific provides for the local identity that is all Kim thinks the type-identity theorist wants (Kim, 1998, pp. 93-94). What is identical to the mental state type is actually a complex, disjunctive property which consists of a disjunction of species-specific physical state types (Kim, 1992, p. 8).

As I mentioned above, I haven't been too precise in my terminology here because I just wanted to get the view out on the table. I don't know whether the particular brain states of multiple species that realize the same mental-state type should be thought of as manifesting distinct brain-state types or instead should be considered the same brain-state type appearing in different forms for different species. One would think they would be considered different realizations of the same brain-state type, and in this way preserve the identity of mental state type with brain state type that is supposed to be the hallmark of type-identity materialism. But Kim writes as if he wants to call them all distinct brain state types, the disjunction of which is what is identical to the mental state type. The reduction of mental state type to brain state type occurs at the local species level, not globally. For our purposes understanding his point is more important than disputing his terminology.

Kim thinks local reductions of this sort are common in science (Kim, 1994, p. 249). An analogy is the realization of temperature in multiple types of material. We think of temperature as one kind of property, but we also allow that temperature is one property in solids, another in gases, another in plasmas, etc. The higher-level property of temperature is considered identical to a complex, disjunctive physical property (the disjunction of those physical properties each in a particular medium). So mental properties might likewise be identical to disjunctions of physical properties (the disjunction of particular physical brain-state types in particular kinds of organisms) (Hannan, 1994, p. 22).
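To make the shape of this proposal explicit, Kim's local reduction can be put in a rough schema. The notation below is my own illustrative shorthand, not Kim's: M stands for a mental state type such as pain, and P_s for the physical state type that realizes it in a given species s.

\[ M \;=\; P_{\mathrm{human}} \vee P_{\mathrm{canine}} \vee P_{\mathrm{Martian}} \vee \dots \qquad \text{(global, disjunctive identity)} \]

\[ M\text{-in-}s \;=\; P_{s} \qquad \text{(local, species-specific identity)} \]

The global identity is with the complex disjunction; each species-restricted identity is an ordinary local type identity.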

The dispute between Kim's version of reductionist materialism and functionalism has not been resolved. To the functionalist, saying a mental event type is a complex, disjunctive property that can be realized in different brain state types comes awfully close to allowing that what is in common among these different particular brain-state types is nothing more than similarity of function. So on this view, Kim is a functionalist without realizing that he is. Kim has not won many converts; for many thinkers the force of the multiple-realization objection had already caused them to abandon the identity thesis for functionalism.

Let's return to functionalism. Strictly speaking, functionalism is not a type of materialism, since mental states are defined in terms of function rather than in terms of any physical identity. But practically speaking, almost all functionalists believe that human mental states in particular cases happen to be identical to human brain states. The functionalist definition of mental states in terms of function allows the functionalist to avoid tying mental states too closely to human brain states. So a Martian (or a robot, but more on this later) can be truly said to feel pain, or have a belief, etc. if given the right input they produce an output similar to ours. The functionalist believes the type-identity of the identity thesis (mental state type = physical state type) is too restrictive (though we have seen that someone like Kim believes not). But as mentioned, most functionalists are materialists, so they will go along with a "token-identity" here, allowing that in any particular case the particular instance (token) of the mental state is identical to a particular instance of a physical state.
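The contrast between the two identity claims can be summarized in a compact schema. This shorthand is my own, not drawn from the functionalist literature: capital M and P range over mental and physical types, lower-case m and p over particular (token) events.

\[ \text{Type identity:}\quad \forall M \,\exists P\, (M = P) \]

\[ \text{Token identity:}\quad \forall m \,\exists p\, (m = p), \text{ with } M \text{ identified instead with a functional role that many physical types can occupy} \]

So the functionalist keeps the materialist-friendly claim about tokens while rejecting the restrictive claim about types.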

Recall that the first problem for type-identity materialism is accounting for multiple realization. The other problem with materialism, according to some objections, is that it leaves something important out. Of course, the substance dualist would claim it leaves out of account the fact that the mind is an immaterial substance. But even those not inclined to the position of substance dualism might object that it leaves out certain kinds of qualitative properties (qualia), subjective experience, or consciousness itself, and that if these are to be accommodated one should adopt a version of property dualism. (This objection that "something is being left out" is frequently directed against both functionalism and various types of reductive materialism.)

Property Dualism

Property dualism holds that while the mind is not a distinct substance, mental events have certain properties or aspects which do not seem to be physical properties and so the mental is not fully reducible to the physical. Furthermore, these properties or aspects are not provided in a functional account of the mind either. So, according to this line of reasoning, while we may not have to admit to the existence of a distinct nonphysical substance, we have to admit at least to the existence of nonphysical properties. The term for a variety of slightly differing positions that hold this view is thus "property dualism." Property dualism walks the uneasy line between materialism and substance dualism, and so some consider it a type of "non-reductive materialism," while others argue it should not really be considered a type of materialism (Hannan, p. 81).

The best modern defense of property dualism is from David Chalmers, whose analysis I think is at times so illuminating that he threatens to make every other author on the topic look like a rank amateur. I want to spend some time explaining Chalmers' views since it is property dualism that may present the biggest problem for the authors predicting the extraordinary future. Chalmers' comments will also introduce some important issues and terms for use later in our discussion. In the description that follows, I rely on his terms and phrases as used in his recent book The Conscious Mind.

First of all, what is consciousness? There is really no good definition of consciousness. Chalmers claims that what is central to it is experience, or the "subjective quality of experience." It is the internal aspect (conscious experience) of our information processing, the something it feels like to be a cognitive agent. "A being is conscious if there is something it is like to be that being… a mental state is conscious if there is something it is like to be in that mental state." A mental state is conscious if it has a qualitative feel--an associated quality of experience. These qualitative feels can be called phenomenal qualities or qualia. Included are visual experiences (such as color), auditory experiences, tactile experiences, olfactory experiences, taste experiences, experiences of hot and cold, pain, other bodily sensations such as hunger, mental imagery, conscious thought, emotions, and the sense of self (Chalmers, 1996, pp. 3-10).

The phenomenal concept of mind must be distinguished from the psychological concept of mind. The first is that of the mind as conscious experience (the way the mind feels), while the latter is that of the mind as the causal or explanatory basis for behavior (what the mind does) (Chalmers, 1996, p. 11). Many mental terms can be used in either way, with the term picking out a phenomenal or a psychological aspect. For example, "pain" can characterize an unpleasant phenomenal quality or the type of state produced by damage to an organism (Chalmers, 1996, pp. 16-17). We have no independent language for describing phenomenal qualities, and we specify the phenomenal qualities in terms of their psychological (causal/functional) role. "Green" may characterize the phenomenal aspect of an experience, but we learn the term in contexts of observing green things (response to external stimuli). But this should not lead us to be fooled into thinking that there is nothing more than the psychological role (Chalmers, 1996, pp. 22-23). This is relevant to various historical views in the philosophy of mind. Functionalism defines all mental states in terms of their causal roles, but this then is an assimilation of the phenomenal to the psychological. Wondering about whether somebody is having a color experience is wondering about whether they are experiencing a color sensation, not whether they are receiving environmental stimulation and processing it in a particular way. Functionalism may be useful but it gives an unsatisfactory analysis of phenomenal concepts (Chalmers, 1996, pp. 14-15).

The term "consciousness" likewise has both phenomenal and psychological senses. Conflation of these two senses has been a problem in analyses of the topic. To be conscious in the former sense is to instantiate some phenomenal quality, but the term can also refer to psychological properties, such the reportability of information. So we can distinguish between phenomenal consciousness and psychological consciousness. Psychological consciousness includes notions of awakeness, reportability, self-consciousness, attention, voluntary control, and knowledge. All these are largely functional notions, but they may be associated with phenomenal states. For example, "self-consciousness" can refer to a phenomenal state of feeling a certain way. The psychological property associated with experience itself is awareness, the functional state whereby we have access to information that we can use to control behavior. There seems to be awareness whenever there is phenomenal consciousness, but there may be awareness without phenomenal consciousness (Chalmers, 1996, pp. 25-30). The hard problem of consciousness involves the phenomenal aspect and that sense is what Chalmers usually means by the term "consciousness" (Chalmers, 1996, p. 31). In this thesis, I use the terms "phenomenal consciousness" and "phenomenal awareness," as well as "consciousness," in this phenomenal sense.

An old way of talking is for some dualists (such as some epiphenomenalists) to say that the mind or consciousness is "caused" by the brain, but the notion of cause is not necessarily what we want for property dualism. Instead, Chalmers develops the notion of supervenience. This technical notion is of great benefit in clarifying positions on the relation of mind to body. Supervenience is a relation between two sets of properties: high-level properties (B-properties) and low-level properties (A-properties). "B-properties supervene on A-properties if no two possible situations are identical with respect to their A-properties while differing in their B-properties." ("Identical" here means not numerical identity but indiscernibility.) A-properties, being low-level, are in this context physical properties, such as mass, charge, spatio-temporal position, etc.; in other words, the fundamental properties that would be included in a completed theory of physics. These low-level properties thus fully determine those high-level properties (such as biological properties) that supervene on them (Chalmers, 1996, pp. 32-33).
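Chalmers' quoted definition can be compressed into a schema. The symbols are my paraphrase rather than his notation: s1 and s2 range over possible situations, and ≈_A means "indiscernible with respect to A-properties."

\[ \text{B supervenes on A:}\quad \forall s_{1}, s_{2}\; \big( s_{1} \approx_{A} s_{2} \;\rightarrow\; s_{1} \approx_{B} s_{2} \big) \]

In other words, fixing the A-facts fixes the B-facts.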

Local and global supervenience must be distinguished. "B-properties supervene locally on A-properties if the A-properties of an individual determine the B-properties of that individual…" As an example, consider that shape supervenes locally on physical properties--any two objects with the same physical properties will have the same shape. "B-properties supervene globally on A-properties, by contrast, if the A-facts about the entire world determine the B-facts…" Local supervenience implies global supervenience, but the latter does not imply the former. With respect to the problem of consciousness, this direction of implication between local and global supervenience is not all that important. It is likely that if consciousness supervenes on the physical, it supervenes locally, so then it does so globally too. In other words, if two creatures are identical physically, then they will have identical phenomenal experiences (given their identical physical constitution) even if they have different environments and histories (Chalmers, 1996, pp. 33-34).

More important is the distinction between logical (conceptual) and natural (nomic, empirical) supervenience. "B-properties supervene logically on A-properties if no two logically possible situations are identical with respect to their A-properties but distinct with respect to their B-properties." The notion of logical possibility here is close to that of conceivability--what is not contradictory is logically possible. Thus, to use Chalmers' examples, flying telephones are logically possible, but male vixens are not (Chalmers, 1996, pp. 34-35). "B-properties supervene naturally on A-properties if any two naturally possible situations with the same A-properties have the same B-properties." A naturally possible situation is one that could occur in nature without the violation of natural laws. Something that would violate natural (scientific, not logical) laws would be naturally impossible. Chalmers clarifies what he means with examples. It is probably naturally possible to build a mile-high skyscraper, though this has not been done. A universe without gravity, though, may be logically possible, but it is not naturally possible (since it would violate the natural laws we have). So there can be situations that are logically possible but not naturally possible, though not the other way around (Chalmers, 1996, pp. 36-37).
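The two kinds of supervenience differ only in which possibilities the schema above quantifies over; again this is my own rendering, not Chalmers' notation, with W_logical the set of logically possible situations and W_natural the set of naturally possible ones.

\[ \text{Logical supervenience:}\quad \forall s_{1}, s_{2} \in W_{\mathrm{logical}}\; \big( s_{1} \approx_{A} s_{2} \rightarrow s_{1} \approx_{B} s_{2} \big) \]

\[ \text{Natural supervenience:}\quad \forall s_{1}, s_{2} \in W_{\mathrm{natural}}\; \big( s_{1} \approx_{A} s_{2} \rightarrow s_{1} \approx_{B} s_{2} \big) \]

Since the naturally possible situations are a subset of the logically possible ones, logical supervenience entails natural supervenience but not the converse.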

Chalmers uses the notion of supervenience and the above distinctions to characterize materialism (physicalism). Materialism is the view that all the positive facts about the world are globally logically supervenient on the physical facts. ("…once God fixed the physical facts about the world, all the facts were fixed") (Chalmers, 1996, p. 41). This ties in also with the idea of reductive explanation, which is explanation wholly in terms of simpler entities. Once a correct reductive explanation is given, any sense of "fundamental mystery" vanishes. Most natural phenomena can be reductively explained in more basic terms. For example, reproduction is explained by an account of the genetic and cellular mechanisms by which organisms produce other organisms. The reductive explanation is given by a "rough-and-ready analysis" of the phenomenon in question, with the relevant notions typically being analyzed functionally (Chalmers, 1996, pp. 42-44).

This kind of reductive explanation in terms of functional analysis works with psychological states by explaining them in terms of the causal roles they play. But this does not seem to work in explaining phenomenal states, because whatever functional account of human cognition is given, the further question can be asked of why this kind of functioning is accompanied by consciousness. It seems logically possible that the functioning in question could occur without any accompanying consciousness. It may be naturally impossible for the functioning to occur without consciousness--consciousness arising because of the way nature does in fact work--but it seems logically possible that consciousness not be there (Chalmers, 1996, pp. 46-47).

This hooks up with supervenience. "A natural phenomenon is reductively explainable in terms of some low-level properties precisely when it is logically supervenient on those properties." It is so explainable in terms of physical properties when it is (globally) logically supervenient on the physical (Chalmers, 1996, pp. 47-48).

Now we have enough to see what use Chalmers wishes to make of these concepts. Almost all facts (including physical laws) supervene logically on the physical, but conscious experience does not supervene logically on the physical and therefore cannot be reductively explained (Chalmers, 1996, pp. 71, 87). In other words, consciousness is irreducible. "No explanation given wholly in physical terms can ever account for the emergence of conscious experience" (Chalmers, 1996, p. 93).

For this conclusion Chalmers presents five arguments. The first argument builds on the logical possibility of zombies. A zombie is someone or something physically identical to me, you, or any other conscious being but lacking conscious experiences. Consider your zombie twin, who is psychologically identical to you (identical functionally) because he or she processes the same sort of information, reacts similarly to inputs, etc. He or she will even be identical with respect to psychological consciousness: awake, able to report the contents of internal states, able to focus attention on different matters, etc. But none of this functioning will be accompanied by conscious experience; there is no phenomenal feel. In words reminiscent of Nagel, Chalmers notes "There is nothing it is like to be a zombie." And while zombies may or may not be naturally possible, they are logically possible (Chalmers, 1996, pp. 94-96).

Chalmers has taken some heat for his claims about the logical possibility of zombies. Some thinkers do not think the notion is intelligible. I note that at least one famous thinker outside philosophy left open such a possibility. The anthropologist Julian Jaynes thought it a contingent fact that humans are conscious, for there could have existed a race of humans who were not conscious but who spoke, reasoned, judged, solved problems, and did all the other things we do (quoted in Copeland, 1993, p. 164).

Chalmers' second argument is based on the conceivability of an inverted spectrum. One can coherently imagine a physically identical world in which conscious experiences are inverted. Or at the local level, imagine a being physically identical to you but with inverted conscious experiences: where you have a red experience, your twin has a blue experience, though you both call them "red." The rest of your twin's color experiences could be systematically inverted likewise so that they cohere, for example, the red-green color axis mapping onto your blue-yellow color axis, and vice-versa. Again, this may not be naturally possible (it would require rewiring the brain in the real world), but it is logically possible. With respect to such a twin, one can imagine one's color experiences inverted while the functional organization stays constant (Chalmers, 1996, pp. 99-101).

The third argument is from epistemic asymmetry. "There is an epistemic asymmetry in our knowledge of consciousness that is not present in our knowledge of other phenomena." Our knowledge of the existence of conscious experience comes in the first person from our own case, while knowledge of other things is third-person. From knowledge of the latter one could have no way of arriving at knowledge of the former. But this shows consciousness cannot logically supervene. If it were logically supervenient, there would be no such epistemic asymmetry. A logically supervenient property can be detected straightforwardly on the basis of external evidence, with no special role for the first-person case (Chalmers, 1996, pp. 101-103).

The fourth argument is the knowledge argument, which Chalmers notes is suggested by Jackson and Nagel (though for his part Jackson thinks Nagel's argument is not similar to his). Mary, who lives in the future in the age of completed neuroscience, is one of the world's leading neuroscientists. Her specialty is the neurophysiology of color vision. She knows everything there is to know about the neural processes of visual information processing, the physics of optical processes, and the physical makeup of objects in the environment. But she has been brought up in a black and white room and has never seen any colors except for black, white, and shades of gray. Mary does not know what it is like to see red, because no reasoning from the physical facts alone will give her this knowledge. Thus the facts about the subjective experience of color vision are not entailed by the physical facts; otherwise she could in principle learn what it is like to see red on the basis of her perfect knowledge of the physical facts (Chalmers, 1996, p. 103).

The example of Mary is of course from Jackson. Various objections to Jackson's argument have been made and he has replied to some of these. For example, the point is not that Mary wouldn't be able to imagine what sensing red is like; it is that she would not know. But if physicalism were true, she would know. Further, the knowledge Mary lacks but which she gains when let out into the world to observe red things is knowledge about the experiences of others. She will realize that in all her previous writings and investigations into physiology she did not know what sensing redness was like for other people (Jackson, 1995, pp. 180-181).

Jackson puts forward the knowledge argument about Mary as an argument against physicalism or materialism. Jackson's argument against materialism is not the same as the one Chalmers later puts forward. Arguing against materialism, Chalmers will focus on the logical possibility of zombies and inverted spectra, though he counts Jackson's knowledge argument as an ally here when he argues against the possibility of a physical reductive explanation of consciousness. But he also seeks to defend Jackson against his critics. Various arguments against Jackson's knowledge argument accuse it of failing to note that the same fact can be known in two different ways. We can consider just a few. For example, Mary does not gain knowledge of any new fact; rather, she comes to know an old fact under a new "mode of presentation." Suppose you know that a particular liquid is water, but you don't know that water is H2O. When you find out that water is H2O, you come to know that the particular liquid is H2O, but this is not knowledge of a new fact. Rather it is knowledge of an old fact in a new way. But Chalmers thinks that this reply fails. "Whenever one knows a fact under one mode of presentation but not under another, there will always be a different fact that one lacks knowledge of--a fact that connects the two modes of presentation." So Mary does after all gain factual knowledge she previously lacked, and since she already knew all the physical facts, materialism is false (Chalmers, 1996, pp. 140-142).

Or consider the reply that the problem is over a confusion about indexical knowledge. All Mary lacks is indexical knowledge, which is knowledge that facts apply to oneself, or one's time or place. Chalmers replies that even if we were to give her perfect knowledge about her indexical relation to everything in the physical world, her knowledge of red experiences will not be improved. Lacking phenomenal knowledge is lacking something more than indexical knowledge (Chalmers, 1996, pp. 143-144). We have already seen that Jackson has presented his own reply to this kind of criticism by noting that if you don't like the knowledge that Mary gains about herself, consider that Mary gains new knowledge of other people.

Nor is it accurate to say that Mary gains not new knowledge but merely a new ability (such as the ability to imagine red things). She does gain new abilities, but it is implausible to think this is all it amounts to; she seems to learn some new facts about the nature of experience (Chalmers, 1996, pp. 144-145).

Chalmers notes that Jackson considers his version of the knowledge argument an argument against materialism (physicalism). But Chalmers wishes to use the argument against the slightly different issue of reductive explanation. He thinks that most of the typical replies against Jackson, even were they to succeed in showing that Jackson hasn't refuted materialism, will have nevertheless conceded the crucial point that the knowledge of what red is like is factual knowledge that is not entailed a priori by knowledge of the physical facts. And to Chalmers it is factual knowledge that is at issue: when Mary sees red for the first time, she is discovering something about the way the world is, and she is gaining knowledge of a fact (Chalmers, 1996, pp. 103-104).

Chalmers notes that a related way to make the same point, one used by Nagel, is to consider systems simpler than us, like bats. Physical facts about these systems do not tell us what their conscious experiences are like (assuming they have some). Once the physical facts are in, the nature of such a creature's conscious experience is still an open question. From the physical facts we cannot ascertain the facts about a bat's conscious experiences or even that it has any (Chalmers, 1996, p. 103).

Nagel's argument is of course very famous. He does wish to attack reductionism. An organism has conscious states only if there is something that it is like to be that organism, which Nagel calls the subjective character of experience. It seems impossible to give a physical account of the phenomenological features of the subjective character of experience. (Nagel thinks this has to do with the fact that a physical account will be objective, while everything subjective is connected with a single point of view.) In the case of a creature whose experience is likely radically different than our own, such as a bat who experiences the world through something like sonar, there is a kind of subjective experience that we can't even imagine. We certainly haven't got hold of it by examining bat physiology (Nagel, 1974). This may have later implications for the idea that a robot's body and brain may give its subjective experience a character unlike our own.

The fifth argument is based on the absence of any plausible analysis of phenomenal consciousness. The proponents of reductive explanation need to give at least some rudimentary idea of how the existence of consciousness might be entailed by physical facts. For this, one would need some kind of analysis of the notion of consciousness to show that consciousness is entailed by the physical facts. Physical facts could imply the satisfaction of this kind of analysis. For example, upon such an analysis, it would be seen that all there is to the notion of something's being conscious is that such and such physical events occur. But any such attempt will fail, Chalmers thinks, because no one has come close to providing any such analysis. The only kind of analysis that has been offered is functional. But to say that all there is to a state's being conscious is that it fulfills some functional/causal role misses the phenomenal aspect entirely. (This, I agree, seems to be what is missing from Dennett's analysis in Consciousness Explained.) Such analyses as have been offered trivialize the problem of explaining consciousness by explaining it as the ability to make certain kinds of verbal reports, or discriminate things a certain way, etc. But "it is entirely conceivable that one could explain all these things without explaining a thing about consciousness itself; that is, without explaining the experience that accompanies the report or the discrimination" (Chalmers, 1996, pp. 104-105).

And functional analysis seems the only candidate here for reductive explanation. A structural analysis would be clearly inadequate; consider how implausible it is to think that the word "consciousness" means some kind of biochemical structure (Chalmers, 1996, p. 106). But the functionalist analysis collapses the distinction between awareness and phenomenal consciousness. Phenomenal consciousness presumably would be analyzed in a manner similar to that which is used for awareness--accessibility of information and control of behavior. But these are distinct from phenomenal consciousness (Chalmers, 1996, p. 105).

Chalmers notes that there is "semantic indeterminacy" in functionally analyzed concepts anyway. Questions such as whether a mouse has beliefs, bacteria learn, or viruses are alive will depend on how we draw the boundaries of the vague high-level functional concepts involved. Such questions call for a decision, and we will stipulate yes or no. But this indeterminacy seems to vanish when the question is one of conscious experience. Questions of whether mice, bacteria, or viruses have conscious experience are not matters for stipulation. "Either there is something that it is like to be a mouse or there is not, and it is not up to us to define the mouse's experience into or out of existence." This is the same point I made earlier that whether or not an organism is conscious seems to be a (metaphysically) determinate matter, whether or not we can discover an answer to the question, so here I obviously agree with Chalmers. Chalmers does admit that there is probably a continuum of conscious experience from the "very faint to the very rich." But if something has conscious experience we cannot stipulate it out of existence, no matter how faint it might be. Now the point against reduction is that such determinacy could not have come from the functional analysis of concepts in "the vicinity of consciousness," because these functional concepts are vague and matters of indeterminacy and stipulation. So then the notion of consciousness cannot be functionally analyzed (Chalmers, 1996, p. 105).

The argument that consciousness is not logically supervenient on the physical is used by Chalmers to argue for the claim that consciousness is not reductively explainable in terms of the physical. But he also uses it to argue for the ontological thesis that consciousness is not physical. Chalmers (1996, p. 123) presents his basic argument for this claim as:

1.     In our world, there are conscious experiences.

2.     There is a logically possible world physically identical to ours, in which the positive facts about consciousness in our world do not hold.

3.     Therefore, facts about consciousness are further facts about our world, over and above the physical facts.

4.     So materialism is false.

A logically possible physically identical zombie world shows that the presence of consciousness is an "extra" fact about our world that is not guaranteed by the physical facts alone. "Consciousness carries phenomenal information" (Chalmers, 1996, p. 123). A similar conclusion can be drawn from the logical possibility of a world with inverted conscious experiences. That world is physically identical to ours but some of the facts about conscious experience are different than in ours, so such facts in our world must be facts additional to the physical facts (Chalmers, 1996, p. 124). Since materialism is the view that all the facts about the world are exhausted by the physical facts (every positive fact is entailed by the physical facts), materialism is false.
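The same argument can be compressed into a modal schema; the letters are my shorthand rather than the formulation Chalmers gives here, with P the conjunction of all the physical facts about our world, Q the positive facts about consciousness, and the modality read as logical possibility and necessity.

\[ \text{Zombie world:}\quad \Diamond (P \wedge \neg Q) \]

\[ \text{Materialism requires:}\quad \Box (P \rightarrow Q) \]

The logical possibility of P without Q contradicts the necessity that materialism requires, so materialism is false.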

I have spent so much time explaining Chalmers' position because he presents the most forceful and clear exposition of the position that materialism and functionalism can't be correct. (Various arguments from Jackson and Nagel are suggestive but their expositions are not nearly as extensive as that of Chalmers.) Chalmers thinks the only serious option is a dualism of some kind that allows that there are both physical and nonphysical features of the world (Chalmers, 1996, p. 124). He opts for a version of property dualism. While consciousness does not supervene logically on the physical, the systematic dependence of conscious experience on physical processes suggests it supervenes naturally on the physical. Consciousness "arises" from a physical basis though it is not entailed by that basis. Consciousness arises from a physical substrate by contingent laws of nature, though these laws are not implied by physical laws. Consciousness is a feature of the world but not a separate substance, because the best evidence of contemporary science is that the physical world is more or less causally closed, with no place for a mind as substance "to do any extra causal work." The notion of a distinct mental substance is unclear anyway (Chalmers, 1996, pp. 124-125). Chalmers calls his position "naturalistic dualism."

Note that property dualists are sometimes hard-pressed to come up with a satisfactory story of how we got such irreducible properties. The strict materialist can claim that all that we are is matter, and evolution explains how we got here. The functionalist can agree. The substance dualist might reject evolution entirely in favor of a theistic explanation. If the property dualist does not want to retreat into a theological defense, he or she is faced with explaining how humans who are not entirely accounted for in material terms evolved from matter that apparently is. Some property dualists hold an emergent view, claiming that when matter reaches sufficient complexity we just get these nonmaterial properties. Others, such as Charles Hartshorne and the Whiteheadians, might claim that properties such as consciousness come in degrees and occur in a very rudimentary fashion in nonhuman forms, even in plants and rocks to a slight degree (this is panpsychism). These options will have relevance to believers in the extraordinary future. Panpsychism implies that consciousness comes in degrees, and if panpsychism is correct it might be that robots have less or more consciousness than human beings do. But I should also note that while it would be desirable for the property dualist to come up with a story of how we got here, it is not obligatory. The position could be correct even though we can't show how we came by consciousness.

Here we have the basic positions on the mind-body problem in humans: substance dualism, type-identity materialism, eliminative materialism, behaviorism, functionalism, and property dualism. (Another traditional view is that of idealism, in which everything is mind, but because science seems to assume a physical basis to reality, idealism is out of fashion these days.) The debate between these various positions on the mind-body problem continues. But behaviorism is not very popular anymore. Probably some of those with behaviorist sympathies are now eliminative materialists. As already mentioned, during the last several decades functionalism replaced behaviorism and type-identity materialism as the most popular view among philosophers. There is a widespread general belief that if some sort of materialism is correct, it must be of the nonreductionist variety. But what this means is unclear. Some think it means property dualism is correct, and there exists a group of property dualists who hold that consciousness, qualia, and subjective experience are not accounted for by any sort of functionalism or materialism. This position has become more visible with the recent work of Chalmers. And one comes across the occasional substance dualist. Substance dualism is still probably the most popular view among members of the general public.

Like many philosophical issues, this one shows no sign of being resolved, but not for lack of discussion. There does seem to be a fundamental impasse in the discussion over the nature and existence of consciousness, with dualists such as Chalmers claiming that phenomenal consciousness is a fundamental fact not explained by any explanation of psychological consciousness, and materialists and functionalists such as Dennett claiming that once an adequate account is given of our psychological consciousness, there is nothing left to account for. Nondualists such as the Churchlands and Dennett have not been receptive to Chalmers' arguments and cite dismissive analogies between consciousness and vitalism, light, etc. The two camps are nowhere near the negotiating table.

Robots and Minds

This chapter of the thesis is not about resolving the above controversy but about understanding how it bears on the question of the plausibility of the extraordinary future. We can move on to consider other aspects of personhood later; right now we evaluate our authors with respect to the issue of the mind-body problem.

Our authors really do not appreciate the complicated nature of the picture. Kurzweil and Paul and Cox make a valiant attempt to mention some names relevant to various positions, but they have not done enough homework to really appreciate the recent nuances of the debate. Despite their name-dropping, in essence they see the controversy as between materialism and theological dualism. But as we have described, there are more basic positions on the mind-body problem than this, and this is without going into the subtleties possible among variations within each basic position. No one expects our authors to be professional philosophers, but their portrayal of the issue is too simplistic to allow them to consider possible options for the extraordinary future or to address possible objections to their positions.

They need to see that the issue is more complicated than just one of a religiously based substance dualism against pro-science materialism. Though Kurzweil and Paul and Cox do seem to recognize the issue is more complicated than their discussions suggest, they don't really give other positions more than passing mention, and so their portrayal does in the end seem to amount to a choice between these two basic positions. They see the question of human minds as a choice only between substance dualism and materialism, they think substance dualism is religious superstition and unscientific, and they think that if they adopt materialism that will make it easier to claim that a robot is conscious, since robots are material. There is no significant discussion of functionalism, behaviorism, property dualism, or eliminative materialism. There seems no real recognition that there is more than one type of materialism possible, for example the identity theory version and eliminative materialism. I don't think it's all that crucial that they cover behaviorism or eliminative materialism, but failing to adequately discuss functionalism and property dualism leaves them easy prey.

Consider the following possible objection, based on comments against strong AI made by Searle. While our authors decry dualism, which they equate with substance dualism, they wind up holding to a dualism rather than a materialist "monism." This is because the mind is seen as software, as the program running on the hardware of the human (physical) brain. In a sense software can be seen as physical, as when one refers to a particular instance of a program stored on a CD and available on a shelf in a store. But there is another sense in which the program is not a physical thing; rather, it is akin to a set of mathematical functions or algorithms. The particular software CD you buy in the store is just a concrete instantiation of this abstract program. It is this latter sense that is intended when the mind is likened to software. The particular physical brain states occurring as the brain is active constitute an instantiation of the abstract program running in the brain. The program just is the mathematical and algorithmic relationships describing the brain activity. If the mind is just software, and can be instantiated in other pieces of hardware (such as when duplicated in or transferred to a robot), then a new dualism has been created, one between mind and brain as in the old kind, only now not as distinct substances but as abstract entity and material realization.

Now how should our authors respond to such a criticism? Should they abandon materialism and admit to being dualists after all? They would rather die than admit to being counted among the hated dualists. They certainly don't want to admit to being substance dualists, and yet if the mind is the program running in the brain, and not the brain itself, then in what sense are they materialists? Searle is a materialist who seems to think that those who see the mind as a program are not really legitimate materialists. In the simplistic view of our authors, materialists hold that the mind literally is the brain or somehow has a physical basis, whatever that is supposed to mean. But how is this reconciled with the view that the mind is a program? Our authors lack the tools to reply.

I don't think they need to be stumped for an answer if they consider the position of, for example, functionalism. Their position is really most amenable to consideration as a version of functionalism, though they don't realize this. The mind is seen as a functional state or a series of functional states. A program itself might be seen as a functional state or series of functional states, one that maps input to output. In the case of a particular mental state (a state of a particular person), our authors could hold that the mental state token (the particular instance) is identical to the particular brain state (the physical token). This is a token identity materialism adhered to by the majority of, if not all, functionalists. But the mental state type could be realized by a different physical state token (the state of a different type of brain, such as that of a robot), and so the position is not one of type-identity materialism. In other words, a particular belief as a particular mental event really is the particular brain in a certain state (token identity), even though that same belief as a computational state could likewise be realized in a different kind of hardware (a robot brain, for instance). This dualism is not insidious but rather the innocuous kind characterizing any mental state that is seen as a functional state that could be realized in more than one type of physical state. They are not committed to what to them would be an undesirable type of dualism, one that held that the mind and brain were really distinct substances or possessed different kinds of properties.

Well, by this point the cat is out of the bag and it should be obvious that I think their position, at first glance anyway, is most favorable to interpretation as a kind of functionalism. A second place choice would go to the identity theory version of materialism, perhaps on an analysis such as that of Kim that seeks to protect itself against the classic multiple-realization objection. But even some of the other possible mind-body views might be amenable to the extraordinary future, more so than they realize.

Our authors rant and rave against substance dualism, so it might come as a surprise that such a view might be compatible with the extraordinary future after all, though it wouldn't be their choice of perspectives on this controversy. I think that there is nothing in substance dualism that precludes the mind as substance from being a software program, though it's not going to be solely a software program, and further there is nothing that precludes a mental substance accompanying the creation of a robot. Recall that Turing mentioned that he saw no reason why God couldn't give a soul to a robot, and those who claimed God wouldn't seemed to him to be trying to limit the sovereignty of God.

Of course, one might have a particular conception or doctrine of God such that God wouldn't give souls or mental substances to humanly-created robots, but that claim comes from the particular doctrine of God and not from substance dualism. Or if one's substance dualism is not theologically motivated, holding only that soul or mental substance springs into being when the brain forms, then one might hold that the same thing might occur when a robot reaches a similar state of complexity, etc. Given the current state of ignorance concerning the relation of human wetware to the mind as substance, and the attendant puzzles about mental-physical interaction and causation, there seem no greater difficulties occasioned by the view that it is robot hardware rather than wetware that accompanies the mental substance.

Of course to say that substance dualism is compatible with the extraordinary future is not to recommend it to our authors. Given their sympathy with other views, their position that substance dualism suffers from a lack of evidence, and the objections raised by other positions against substance dualism, it would not be the most likely harbor at which to dock their ship.

Behaviorism and eliminative materialism are two other positions that would be compatible with the view of mind as program and the realization of the extraordinary future. Behaviorism sees mental states as dispositions to behave. Were they to adopt behaviorism, our authors could accommodate this view as long as they were allowed to claim that a robot, like a human, is caused to behave by various physical states that amount to hardware running a program. Human-computer mind transfer could be seen as the building of a robot with all the dispositions to behave possessed by the human transferring his or her mind. The same could be said for eliminative materialism, which holds that there are no mental states, at least not the ones we usually assume we have, such as beliefs, desires, and other thoughts. Our authors could adopt the position that the only things present in robots and humans are various states of the hardware, and these are not mental in any significant sense. Human-computer mind transfer, similar to the case in behaviorism, would amount to building a robot with all the relevant robot brain states to produce the same output in the robot that would have been expected in a continuation of the pre-transfer human. If the mind doesn't exist, then strictly speaking we wouldn't be able to transfer it, but what we describe "incorrectly" as human-computer mind transfer could still occur by giving the robot the right kind of brain, in both general and specific details.

But again, as in the case of substance dualism, though these positions might be compatible with the extraordinary future becoming reality, our authors would not likely be amenable to either behaviorism or eliminative materialism. This is because their writings are replete with descriptions of our inner mental life! They all discuss the subjective character of experience, consciousness, thoughts, beliefs, etc. as if they really think such things exist. Even Moravec assumes this, though we have seen it is often not clear what his position is. Short of us becoming convinced that behaviorism or eliminative materialism were obviously the strongest theories, however, there seems little reason to recommend that our authors abandon their assumptions and adopt them, when one of the other positions would fit their beliefs better. I remind the reader that I am not trying to resolve the debate on the mind-body problem by taking sides for one position over all the others.

So we have seen that while various positions seem compatible with the perspectives of our authors, some seem more likely than others to find favor with them. Substance dualism, behaviorism, and eliminative materialism conflict with the explicit statements made by our authors about a variety of matters and so are not likely candidates to recommend. But while these positions might be compatible with the extraordinary future, all except substance dualism suffer from a similar problem: they may not be able to adequately reply to the charge that phenomenal consciousness is not logically supervenient on the physical brain. This was the point of our extended treatment of the argument for this charge put forward by Chalmers.

The reason this is a problem for our authors is that they do not wish to dismiss the reality of phenomenal consciousness. Their remarks make clear that they do not claim that we are not conscious, that we are the same as zombies, or that consciousness can be explained by a purely psychological account. But given that they accept the reality of phenomenal consciousness, they need to consider the strong case we have seen made by Chalmers that only property dualism does justice to consciousness.

Our authors of course do not realize this. They think that phenomenal consciousness either is the same as a physical brain state or arises from a physical brain state without being a nonphysical property or substance. But if Chalmers and other supporters are correct, and phenomenal consciousness is not logically supervenient on the physical, then it can't just be a physical brain state and we don't know for sure that in the case of any particular physical state phenomenal consciousness will supervene naturally on that state.

So the dilemma for our authors is this. Either they give up belief in phenomenal consciousness or they do not. If they give up belief that phenomenal consciousness exists, then they don't have to account for its occurrence in humans or provide for its occurrence in robots. They could adopt behaviorism or eliminative materialism as a theory. But giving up the belief in phenomenal consciousness is a hard position to hold because it is contrary to the intuitions of the majority of people, including professional philosophers. It is certainly not suggested by any comments of our authors; in fact, their comments suggest the opposite.

The other horn of the dilemma is to allow that phenomenal consciousness exists. But then the problem becomes one of finding an adequate mind-body theory that accounts for it. The line of argument suggested or developed by Nagel, Jackson, Chalmers, and others is that only a kind of property dualism or something equivalent (or of course a substance dualism) can account for phenomenal consciousness in humans. If this line of reasoning is correct, then our authors really are closet property dualists without realizing it, for they certainly reject substance dualism. In any event, taking consciousness seriously combined with this line of reasoning entails that functionalism and type-identity materialism are not viable options either for humans or for robots. Recall that I earlier claimed that functionalism was the theory most likely to find favor with our authors, but at least once I qualified the comment with "at first glance." The qualification was made because if they really wish to retain belief in phenomenal consciousness, and a good case can be made that only property dualism can account for phenomenal consciousness, then at second glance our authors' position is most amenable not to functionalism but to property dualism.

Taking phenomenal consciousness seriously and adopting the position of property dualism does not create the happiest situation for the extraordinary future, however, because the view that consciousness is not logically supervenient on the physical seems to create lingering uncertainty about whether robots would be conscious. One might attempt instead to rely on mere natural supervenience. In cases of human consciousness, the lack of logical supervenience would mean there would be no guarantee that any particular person were conscious rather than a zombie, but one who held to natural supervenience here could quite rightly refrain from worrying that his human friend is really a zombie. Of course, this objection goes, given a robot with exactly the same brain states as a human being who is conscious, the lack of logical supervenience means likewise that there is no guarantee that the robot will be conscious, but couldn't we rely on natural supervenience here too to establish robot consciousness?

The answer seems to be "no." Even if we assume, for the sake of argument, that natural supervenience holds for humans, the robot brain is not exactly the same as the human brain, and one wouldn't know that whatever naturally supervenes on the robot brain is consciousness as we know it. To say that consciousness is naturally supervenient does not entail that what we know as consciousness would naturally accompany any physical state whatsoever. Natural supervenience means only that in cases where the organism is conscious, it is of natural (though not logical) necessity. If the robot were conscious, then it would be so of natural necessity, but whether it is so is the very point at issue. If the robot had a human brain, we might assume consciousness from natural supervenience on the basis of analogy. But in no scenario of the extraordinary future will a robot have a human brain. In mind transfer the whole idea is to give a human a new, better, "electronic" brain, made of different stuff. And as we have noted, our authors do not even insist that the robot brain structure be exactly the same as the human brain. It seems a natural and prudent assumption to think that the greater the difference between the robot brain and the human brain, the less assurance we would have that phenomenal consciousness (as humans know it) is present.

Actually, I'm not at all sure that even logical supervenience would guarantee consciousness if the robot brain were different than a human brain in any way. If consciousness were logically supervenient on the physical, it would not mean that what we know as consciousness would necessarily accompany any physical state whatsoever. For particular brain states that are in fact accompanied by phenomenal consciousness, it could not be otherwise. But it would remain an open question whether what in fact logically supervened on a robot brain different than a human brain would be consciousness as we know it.

How likely is it that robots in the extraordinary future will be conscious? I don't see that we have any way of answering such a question. There are so many factors involved that no prediction seems reasonable. Far from having a guarantee, we are left wondering if there is any good chance at all that a post-transfer robot would be conscious. Robot brains will not only not be made of the same material as human brains, they probably will not match them in structure or organization, though there might be a vague resemblance. If this is the case, we are surely far from any assurance that robots will be conscious.

Generally in this thesis I abstain from trying to decide any philosophical controversy or from telling our authors the position that they should hold, preferring rather to tell them which one most likely fits their already held view. But if I can for once comment on the issue of what view is correct, I would say that I don't think Chalmers' arguments can be easily dismissed. Property dualism might be the correct theory. Logical supervenience seems not to hold, and even if natural supervenience holds, this seems not enough to give any confidence of the consciousness of a robot brain different from a human brain in any respect at all. And if I cannot be sure of the consciousness of an electronic robot duplicate of me, much less can I be sure of the consciousness of some robot equivalent which varies from me in even more significant ways.

If the extraordinary future features robots whose brains vary from human brains, as they will, and we therefore face significant doubt over whether they are conscious, then porting our minds to them seems quite a gamble. It would be a gamble that after the transfer I would be conscious. If I'm not conscious, I'm not a person, so the transfer would be a gamble that after it I would still be a person.

In our discussion of Searle's position in the previous chapter I came to the conclusion that we could not rule out that in essence Searle is right. As far as we know, robots might not understand anything, because we have no reason to think that understanding in the sense of conscious understanding comes from computation alone or even computation accompanied by word-world sensory connections. Now in this chapter it seems we arrive at a similar conclusion with respect to consciousness itself. Our authors sometimes guard their comments about the future by stating that though they think robots will be conscious, they can't be absolutely sure. But they nevertheless act confident and optimistic. The combined effect of the results of our investigation in the last chapter and this one leaves me much less optimistic and not confident at all.

Now that we have the views out on the table, and our appraisal as well, we can step back and see that our authors often seem to misinterpret their purported opponents. We can see this by an examination of some of the points made by Paul and Cox, who at least make an effort to consider the views of critics such as McGinn and Searle. Paul and Cox argue, as does Kurzweil, that intelligence can be distinct from consciousness. One could have intelligence without consciousness--as in solving an important problem while sleeping (their example). I have no problem with this claim, but merely note that it allows the critic to charge that the fact that robots show intelligence does not license the further inference that they are thereby conscious. We saw in an earlier chapter that the notion of thought without consciousness could make sense, and so apparently can intelligence without consciousness (panpsychists might disagree, but then they do so only by begging the questions at issue).

But when Paul and Cox try to refute McGinn and Searle, they seem to seriously misunderstand the positions of these thinkers. McGinn is lumped together with the other "mysterians" as presenting a variation on Frank Jackson's argument about the neuroscientist (this argument is variously presented as being about a deaf neuroscientist, a blind neuroscientist, a neuroscientist raised in a black and white room, etc., but the point is the same). Paul and Cox agree that a neuroscientist may understand how neurons generate conscious thought without fully understanding the subjective nature of consciousness. But then Paul and Cox object that an understanding of "what consciousness is" is not a necessary requirement for the manufacture of conscious thought. Their analogy is with gravity: we can do a great deal with gravity without fully understanding it. So likewise, the fact that we do not understand the nature of consciousness or how it is generated from the brain does not entail that we cannot produce artificial consciousness in advanced robots.

This seems to me to be a misunderstanding of McGinn and the "mysterians." McGinn does not rule out the possibility of a robot being conscious, and in fact Chalmers (usually seen as a "mysterian") is a supporter of strong AI and argues that consciousness is naturally supervenient on the physical! The point of the neuroscientist stories and their variants (supported by McGinn and Chalmers) is that the scientist may be in possession of all of the physical facts about hearing, vision, etc. and yet not have phenomenal consciousness of the qualia of hearing, seeing, etc. This is an argument that phenomenal consciousness is not logically supervenient on the physical, or that not all of reality is reducible to purely physical properties, or that there are further facts in the universe than just physical facts, etc., or however one wants to put the point. McGinn and Chalmers do not take this point to show that we cannot produce consciousness artificially in robots; they take it to show that there is no logical guarantee that a robot experiences qualia just because it acts as we do when we experience qualia.

Similar misunderstandings plague their examination of the views of Searle. They correctly interpret Searle to hold that computer simulations of a reality are not the reality, and that the brain has special causal powers that give rise to consciousness in humans. But they think Searle holds that such causal powers are not present in any other machine. Searle actually claims he does not know whether other machines have the requisite causal powers; he claims only that since it must be the causal powers in human brain tissue and not computation itself that gives rise to consciousness and understanding in us, other machines would not have consciousness by virtue of computation alone. They note that Searle holds that a machine using symbols to represent the world grasps only the symbols, not the reality they represent. But they object that we use symbols in calculations to represent the world in our simulations, so why can't a machine do it? Again this seems a misunderstanding. What Searle thinks is that in our use of symbols, there is something else beyond the syntax that provides semantic understanding, namely the causal powers of the brain. Computers can carry out simulations using symbols, but because as far as we know the computer will not do more than such symbol manipulation, the symbol manipulation alone will not allow us to say that the computer understands.

Paul and Cox, and our authors in general, come out on the wrong side of the burden of proof issue. Their general position is to argue that we should assume that robots will be conscious because we have no proof that they will not or cannot be. They think we have no such proof because to them the counterarguments against machines being conscious are based on things such as religious superstition. But thinkers such as McGinn are not basing questions about machine consciousness on religious views to the effect that we have a mental substance that God will not give to any machine. Rather, McGinn's point is more that since we really don't know how phenomenal consciousness arises in us, we can't be sure that it will be present in a machine merely because the machine behaves like we do. Searle's point is stronger but similar--we shouldn't fall into the assumption that since machines produce behavioral output like we do, and by syntactically manipulating symbols simulate the conscious mental processes that in us result in such output, such machines will be analogous to us in possessing conscious mental processes like understanding. Whether Searle's actual arguments are good or bad, his questioning of the legitimacy of assuming such an analogy seems warranted. So what critics such as McGinn and Searle show is that while we have no proof that such robots won't be conscious, we shouldn't assume that they will be.

And it seems to me that with respect to human-computer mind transfer this is surely the right view of the matter to hold. The implications for the decision to attempt to make the transfer would depend on what you were sacrificing. If you still had some natural life to live, and the transfer would destroy any chance of you continuing in your old body, then you should have second thoughts. Transferring your mind into a computer when you don't know if that computer could support consciousness would be a little like jumping out of an airplane without knowing whether your parachute would open. You certainly wouldn't want to jump if you thought only that the parachute might open, or even that there was a good chance it would. You would want very good evidence that it would, perhaps even enough evidence that you could truthfully say you knew it would. Have our authors established that it is even probable that computers would support consciousness?

On the other hand, if you were on your deathbed, at your absolute last gasp of breath, you might consider it. Or if you could continue on as the old you in your old body, with the "transfer" really being a duplication, then what would you have to lose? But this seems more like a desperate grasp at a hope than a confident option you should plan on. The optimism of our authors is clearly unwarranted, though it may help sell their books.

Free-will

Up to now we have considered the question of whether computers will have qualia, subjective experience, and consciousness needed to be persons. Another characteristic believed by some to distinguish us from non-person animals is free-will. It is very naturally assumed that humans have freedom of the will. Note that we are not talking of freedom in the sense of "political liberty" but of what seems to many to be part of making everyday choices. Almost every human being, if they haven't given it much thought, will claim that this is the case (this doesn't imply that their answer must change after they have given it much thought). It just seems to be an assumption that we have about ourselves. Here, one might think, we have a characteristic that distinguishes us from computers: we have free-will while computers and robots are just "automata." One might claim that having freedom of the will is a necessary condition of personhood, so since computers and robots will never have freedom of the will, they can never be persons.

The problem with the claim that freedom of the will distinguishes humans from computers is that it is not clear that humans have freedom of the will, or that if they do, that computers can't have it in the sense that humans have it. To see this we have to describe the possible views on human free-will. With respect to the free-will issue, there are three basic positions: libertarianism, compatibilism, and hard determinism. I will discuss these three positions as they are traditionally understood and also present a fairly modern and fresh analysis of freedom of the will provided by Harry Frankfurt. Then I will consider whether a robot could have free-will in any of the possible senses discussed. But in order to understand these positions, I first need to discuss the issue of causal determinism.

We commonly take events in the world to be caused. When something happens, we think it proper to ask why it happened, and if someone cites other events that were said to cause the event in question, we think nothing odd about this. We would in fact think it odd if people went around stating that certain events happened but there "just were no causes of them." This is not just for events among physical objects; often our talk of mental events presupposes these events are caused. "Why do you believe that it's going to rain?" we are asked. "Because I just saw some storm clouds outside" we reply, rather than "Funny thing, that thought just popped into my mind for no reason." "Why are you angry?" "Because of my feelings of frustration." "Why did he beat his child?" "He himself was beaten as a child and this experience was stored in his unconscious, which motivates him to act in certain ways without him being aware of it, etc."

A moment's reflection will make it plain that given the full set of causes of an event, the event will occur. If an event happens and the full set of its causes is ascertained, then of course given just those causes the effect would have to follow. If not, we have not got hold of the (possibly unique) set of causes of the event. So it has been said that "causes fully determine their effects."

Causal determinism holds that every event in the universe, whether physical or mental, is governed by this relation of effect and cause. (Note that the determinist usually does not claim to have a proof of this, instead relying on the fact that we all sort of assume it and that it seems implausible to believe otherwise.) If causal determinism is true of the universe then if I were an omniscient being, and knew exactly what state of affairs occurred in the universe at a certain moment of time, along with knowing what kinds of effects follow from what kinds of causes, I could predict all later states of the universe.

It is not quite this simple, because for subatomic particles and events many scientists claim that causal determinism doesn't hold. There seems to be a minority view that determinism does hold even in the subatomic realm, any indeterminacy being due to our knowledge being incomplete or to the fact that the only way to know about these things alters them (the Heisenberg uncertainty principle). On this minority view there would be an epistemic indeterminacy but not a metaphysical one. Einstein held this view, declaring "God does not play dice with the universe." But this does seem to be a minority view, and the orthodox "Copenhagen interpretation" holds that there is a real metaphysical indeterminacy in the nature of things at the sub-atomic level. Niels Bohr, one of the developers of the orthodox Copenhagen interpretation, replied to Einstein more or less with "How do you know what God has chosen to do with the universe?"

But indeterminacy at the subatomic level may not be relevant to the level of physical events relevant to human thought and action. First, with respect to humans making choices, we are either dealing with a mental substance distinct from the brain or mental events correlated with or identical to brain events. If the former, then what happens in the subatomic physical world is irrelevant to how a distinct mental substance operates. If the latter, then almost certainly we are dealing with macroscopic events among neurons, which though small, are not subatomic, so again what happens at the subatomic level is irrelevant. For macroscopic objects, such as tables and chairs and human brains, apparently an extremely good approximation of causal determinism does hold, otherwise they would no longer be teaching Newtonian physics to engineers building bridges. So whenever I use the phrase "causal determinism," keep in mind that I recognize the problem with subatomic particles but that for what we want it is an excellent approximation.

But note too that even if we were dealing with subatomic events when we talk of neurons firing, the indeterminism of subatomic particles may not be what we want to use when trying to account for free-will. People who believe that they have a free will that can override the influence of previous factors and brain causes are not talking about their choice being some kind of random indeterministic event. On this view my choice of vanilla over chocolate is my choice, and it doesn't seem either a real choice or mine if it is due merely to a subatomic particle indeterministically moving this way rather than that. So on this view it can be argued that metaphysical indeterminacy at the subatomic level is irrelevant to the free-will issue. There are those who argue against this position (the position that purely random, indeterministic events at the subatomic level are irrelevant to human thought and action and the free-will issue), and later I will bring up one such critic. But at this point let's assume that causal determinism characterizes the relevant level of human choice, thought, and action we need to discuss in considering the free-will issue.

Having a characterization of causal determinism, I can now consider the basic positions on free-will. If causal determinism holds for mental events as well as physical events (which may or may not be the same thing, depending on whether one is a dualist or a materialist), then it might seem that this precludes human free-will. This is the position of the hard determinist. For if my choices are determined by their causes, as any other effect is determined by its causes, then how could I have chosen otherwise than I did choose? My will is not genuinely free.

Hard determinism is a version of incompatibilism. Incompatibilism is the view that human free-will is incompatible with determinism. The hard determinist believes that causal determinism does hold for human deliberation and choice, and since causal determinism (on this view) precludes free-will, humans do not really have free-will. A hard determinist would claim that we can't genuinely choose otherwise than we do choose. This is because like all incompatibilists, the hard determinist would hold that to have free-will, it is necessary that after one makes a choice one would truthfully be able to say:

"I could have chosen otherwise in the sense that every event (including the causes of my choices, such as my desires, wants, motives, inclinations, etc.) could have been the same up to the moment of choice, and yet I might have chosen other than I did."

But to the hard determinist, causal determinism rules out the truth of this type of statement; one needs to be able to say the above statement truthfully for free-will to hold, but because one cannot say this truthfully, free-will does not hold. This is because if causal determinism governs human choice, then given the set of causes of the choice, I had to choose what I did in fact choose. It makes no sense to say all the causes could have been the same and yet I might have chosen differently.

The hard determinist recognizes that, sure, it sometimes seems to us that we are not determined in our choices, that we could choose from among many options, but this is just because knowledge of the full set of causes of our choices is hidden from us. I chose vanilla over chocolate because of my desires, needs, and wants, which all exert their causal influence through the process of deliberation, but I might not be aware of all these causal factors operative in the deliberation. Hard determinists also recognize that people do say things like "I chose vanilla, but I could have chosen chocolate." But when such people say they could have chosen otherwise, they do not realize that in reality they had no free-will to do so.

The psychologist/psychotherapist Sigmund Freud theorized that the conscious mind (the part of our mind which we are directly aware of in introspection) is but a very small part of the whole mind. The largest part of the mind is actually a part called the "unconscious." Through describing human behavior as really the working out of several parts of the mind he called the "id," the "ego," and the "super-ego," Freud held that in the overwhelming majority of cases of choice, though we think we know the causes of the choice, we really don't. Rather, the causes come from the unconscious. Freud may have been a hard determinist, but the hard determinist is not tied to any particular view about the role of the unconscious. All our choices are really determined, whether we know their causes or not, and whether the causes involve the unconscious or not. So the hard determinist takes a position that may be even stronger than Freud's "determinism."

That is hard determinism. Let's turn to libertarianism. In one respect libertarianism (not the political kind) is an ally of hard determinism. The libertarian, believing that determinism is incompatible with human free-will, is an incompatibilist like the hard determinist. But they draw opposing conclusions from their recognition of incompatibilism. While the hard determinist holds to causal determinism and rejects human free-will, the libertarian holds to free-will and rejects causal determinism. That is, the libertarian claims that while causal determinism might hold for most events in the universe, the process of human choice is not fully determined. Somehow we can rise above the causes of a choice (our motives, inclinations, wants, desires, etc.) and choose in a way that is not already fully determined. Thus, the libertarian can agree with the hard determinist that the statement above about "could have chosen otherwise" must be true for us to have free-will, but unlike the hard determinist, the libertarian holds that the statement can in fact truthfully be said.

It must be pointed out that libertarianism is not necessarily simply indeterminism, that is, the occurrence of random events. Not all libertarians would be happy to replace a causally deterministic account of human choice with one that saw such a choice as being due to a random, indeterministic event. This would be to make human choices out of our control. If I were choosing something, and a random event in the decision process resulted in my choosing something totally unexpected, then this would no longer really be my choice.

Although libertarianism may be popular among the general public, it is probably not as popular among philosophers and scientists. An example of a libertarian philosopher is Richard Taylor, who holds to what he calls "agency theory," in which a self or person (not prior events in the person's mental or physical life) is the cause of an event--an act of behavior. In the case of an act done of free-will, the cause is the agent who performed the act such that no antecedent conditions were sufficient for his performing just that action. Taylor is candid enough to admit that his view cannot be adequately explained. But he believes it is the only view that allows us to hold to some other views that we must not give up: that in a free choice what I choose is really "up to me," and that we do genuinely deliberate rather than merely seem to (Taylor, 1992, pp. 343-345).

It might seem we are left with a standoff between the hard determinist and the libertarian. They are opposites on the question of whether humans have free-will, though they both agree that free-will is incompatible with causal determinism. But there is a different tack one can take by saying that this last claim is where both camps go wrong: despite initial appearances, free-will is in fact compatible with causal determinism. This is the position of the compatibilist, who disagrees with the hard determinist and the libertarian about the compatibility of causal determinism and human free-will. According to compatibilism, we can have both. So the compatibilist agrees with the hard determinist about the truth of determinism, and agrees with the libertarian that humans generally have free-will.

Compatibilism is popular among philosophers, and may provide a way for robots to have free-will, so we need to spend some time understanding it. The compatibilist believes that causal determinism characterizes human choice. So how can the compatibilist hold that humans have free-will? Recall that the hard determinist would object that given the prior causes of my choice it inevitably followed, and therefore was not free.

The compatibilist believes when we say we could have chosen otherwise, we need to be able to say truthfully only the following:

"I could have chosen otherwise in the sense that had my wants, desires, motives, etc. (the causes of my actual choice) been different, my choice would have been different."

This is why the compatibilist can still hold to causal determinism. The above statement does not claim that my actual choice was uncaused, or that I had the ability to somehow rise above all the causes of my choice to choose otherwise. It says only that had the causes been different, the choice would have been different, and to the compatibilist, that is enough to provide for free-will.

There are many different ways to be a compatibilist; I will mention a traditional interpretation and a more modern interpretation from Harry Frankfurt. A traditional basic statement of this position is that of Stace. Stace can claim that determinism is compatible with human free-will by pointing out that whether a choice was free or not has nothing to do with the choice being uncaused but rather with the kind of cause it does have. If the immediate cause of the choice was a desire, motive, or other internal psychological state of the person, then the choice was free. If the immediate cause of the choice was some external state of affairs (physical force or physical condition outside the person), then the choice was not free. In both cases, the choice was caused, the crucial difference being in the kind of cause. To use Stace's example, Gandhi fasting because he wanted to free India is an instance of a man acting of his own free will. However, a man fasting in the desert because there is no food is an instance of an unfree act (Stace, 1952, pp. 326-327). So, on my earlier example, if I chose vanilla over chocolate because I like vanilla and wanted vanilla, then this choice was free. If I chose vanilla over chocolate because someone was holding a gun to my head at the time, then the choice was not free.

However, Stace may not be correct in believing that a compatibilist should hold that any choice whose cause is an immediate state of the agent is free. He apparently fails to realize that not all cases of lacking free-will are due to external states of coercion; some coercion may be "internal." The compatibilist can admit that human choices having internal causes such as powerful drugs or overwhelming unconscious motives are not instances of free-will, as long as there are cases where in the absence of these causes humans choose in accordance with their internal states as causes. Furthermore, Stace may not be phrasing things quite properly in saying that in acting because of someone threatening you (holding a gun to your head, threatening to fire you, etc.) the immediate cause of your choice is an external factor. It is clearly a causal factor, but the immediate cause may be something more like your desire to continue living or holding a job.

But we don't want to spend much time arguing with Stace; he gives us a general sense of the traditional compatibilist understanding of how causal determinism could be true and yet we could still have free-will. There are more sophisticated recent versions of compatibilism around; for example, Harry Frankfurt discusses the very issue of drug addiction and free-will in presenting a version of compatibilism.

Frankfurt thinks that one essential difference between persons and other creatures is in the structure of a person's will (Frankfurt, 1971, p. 82). Human persons, like other creatures, have first-order desires. A first-order desire is a desire about an action, an omission, or a state of affairs, for example, the desire to eat some chocolate, or the desire to give up smoking. Our first-order desires may conflict, with only one want or desire winning out and moving us to action. Frankfurt uses the term "will" to refer to the desire that wins out against other conflicting desires. Thus the will is the effective desire, as opposed to other inclining desires that lose the contest (Frankfurt, 1971, pp. 83-84).

A second-order desire is a desire about a desire: a desire to have a desire or to not have it. For example, imagine you have a first-order desire to smoke. But you find out that smoking is bad for you, and you desire (a second-order desire) to have the desire to refrain from smoking. So for Frankfurt a person can step back and assess his or her first-order desires and form preferences about them. He can decide which desire he desires to have, and such decisions concern the type of person he wants to be. But we must distinguish among second-order desires. A "mere" second-order desire is a desire to have a desire but not have that (first-order) desire lead one to action. That is, it is not a desire to have that first-order desire become one's will. Thus Frankfurt's doctor desires to have the desire for a drug, so that he can learn what it is like to be an addict, but the doctor does not want that first-order desire for the drug to become effective--that is, he does not want to wind up taking the drug (Frankfurt, 1971, pp. 84-85).

A second-order volition is a second-order desire to have a first-order desire lead one to action. That is, it is a desire not just to have a first-order desire but also to have it become effective--to have it win out over competing first-order desires and lead one to action. To have a volition is to have a desire that one's first-order desire be one's will. Frankfurt thinks that having second-order volitions is what is essential to being a person. A "wanton" is a human being who has no second-order volitions; a wanton does not care about his or her will, about which first-order desire becomes effective (Frankfurt, 1971, pp. 85-86).

The unwilling addict has a strong desire for a drug, and a weak desire to refrain from taking a drug. He hates his addiction and wants his desire to refrain to win out. Thus he has a second-order desire to have his first-order desire to refrain from taking the drug, and to have it be the one that is his will--to be his effective first-order desire. Unfortunately, his first-order desire to take the drug is so strong it always overpowers the desire he wishes would win out. Thus he remains an addict, but an unwilling one (Frankfurt, 1971, pp. 86-87).

The wanton addict has a desire for the drug, and a desire to refrain from taking the drug. These are competing first-order desires, but he doesn't care which wins out. Thus, he has a second-order desire to have the desire to refrain, but he does not care if that desire to refrain becomes his will. He is moved by both first-order desires, so he is not completely satisfied when one wins out, leaving the other unsatisfied. But it really makes no difference to him whether his craving or his aversion wins out! He has no second-order volitions (Frankfurt, 1971, pp. 87-88).

Frankfurt thinks that because a person has second-order volitions, freedom of the will can become an issue. A person exercises freedom of the will in securing the conformity of his will to his second-order volitions. That is, a person exercises freedom of the will when he is able to have the will he wants--when he is able to have the first-order desire he wants to win out be the one that does win out. He is free to will what he wants to will, free to have the will he wants. The unwilling addict is a slave to his addiction--the desire that wins out (becomes his effective desire, the will) is not the one he wants to win out. His will is not the will he wants, and so he lacks freedom of the will. The wanton addict has no desires for his will to be one or the other of his first-order desires, and so he lacks freedom of the will by default. That is, since he has no second-order volitions, there is no question of his securing the conformity of his will to his second-order volitions. Frankfurt claims his theory is neutral with respect to determinism and indeterminism. It could be causally determined that a person is free to have the desire he wants to win out be the one that does win out (Frankfurt, 1971, pp. 89-95).

What we see from Frankfurt's type of analysis is that the whole free-will issue may not have anything to do with determinism versus indeterminism, as the incompatibilists seem to think (the hard determinist debating the libertarian about which kind of incompatibilism should win out). Free-will may have to do with the relationship among one's desires on different levels.
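For readers who find it easier to see this structure laid out explicitly, here is a minimal sketch, in Python, of the hierarchy just described. The class names and numeric "strengths" are my own illustrative devices, not Frankfurt's; the point is only to show how the will (the effective first-order desire) and second-order volitions relate, and how the unwilling addict comes out lacking freedom of the will.

    from dataclasses import dataclass

    @dataclass
    class Desire:
        target: str       # the action or state of affairs desired, e.g. "take the drug"
        strength: float   # how strongly it inclines the agent

    @dataclass
    class Volition:
        about: Desire     # a second-order volition: the first-order desire one wants to become one's will

    class Agent:
        def __init__(self, desires, volitions):
            self.desires = desires        # first-order desires
            self.volitions = volitions    # second-order volitions (empty for a "wanton")

        def will(self):
            # The "will" is the effective desire: the first-order desire that wins out.
            return max(self.desires, key=lambda d: d.strength)

        def has_free_will(self):
            # Freedom of the will, on this sketch: the agent's will conforms to one of
            # his second-order volitions. A wanton (no volitions) lacks it by default.
            return any(v.about is self.will() for v in self.volitions)

    # The unwilling addict: the craving wins out, but he wants the desire to refrain to win.
    crave = Desire("take the drug", 0.9)
    refrain = Desire("refrain from the drug", 0.4)
    addict = Agent([crave, refrain], [Volition(refrain)])
    print(addict.will().target)     # "take the drug"
    print(addict.has_free_will())   # False: his will is not the will he wants

On this toy model the wanton is simply an agent whose list of volitions is empty, so the question of his will conforming to his volitions never arises, just as Frankfurt describes.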

So what does all this have to do with robots? Well, the objection that some may try to raise against robots being persons is that, even if they were conscious, robots would still be automatons with no ability to act truly independently in the way we do, making choices freely, etc. For robots will always act in a causally deterministic fashion, just following their program, and so they will not act of free-will. But what the above explanation of the different views of human free-will shows is that the claim that humans possess free-will but robots never can (because they are causally determined) seems to presuppose a particular position on the free-will issue, namely the libertarian position. (Robots would also have no free-will on the hard determinist position, but then in this case neither would humans, so this would not be an objection to robots being persons if humans are.) And we have seen that libertarianism is by no means the only or even the most popular theory of the freedom of the will. Assuming robots are causally determined does not automatically rule out robot free-will if some sort of compatibilism is correct, either on Stace's traditional analysis or Frankfurt's more modern perspective. If computers can be conscious and have mental states such as desires and second-order volitions--something we have not ruled out, though perhaps we have not been able to prove it--then they could have free-will in any of the compatibilist senses sketched above. Frankfurt in particular presents a version of free-will that he thinks is constitutive of personhood, and if this account is correct, then it seems clear that we couldn't deny personhood to computers for the alleged reason that they can't have free-will.

So the whole issue of denying personhood to robots on the basis of an alleged lack of free-will may be a red herring. Libertarianism may be true (we have not tried to solve the debate), but a plausible option for our authors is to point out that on either the compatibilist or hard determinist analysis there need be no essential difference between humans and robots.

Our authors could make an even stronger case than this by arguing that it is not clear that robots will be entirely deterministic anyway. For example, Copeland argues that a robot could be fitted with a true randomizer, such as a quantum amplifier. This amplifier uses a metaphysically truly random subatomic event in nil preference situations. A nil preference choice is a selection among alternatives that appear to the chooser (who has limited time and information) to be equally satisfactory and preferable. This is opposed to the usual "outstanding candidate" situation in which one option appears clearly preferable to the chooser. Perhaps when someone does make a nil preference choice, it is based on a random subatomic event. Robots could make such nil preference choices in a truly random fashion by the use of this amplifier. Because of the truly random element, it would be impossible to predict the robot choice (much like the unpredictability of human choice in such a situation) (Copeland, 1993, pp. 145-147).

Recall that a classic libertarian position is that true indeterminacy is not what we mean by free-will, since then one's choices, at the mercy of random events, seem neither one's own nor under one's control. What Copeland has to argue is that such indeterminacy is involved in free-will rather than precluding it.

Copeland does argue against what he sees as the two standard arguments against such indeterminacy. The first argument is what he calls the "helplessness argument," because it holds that we would be at the "helpless mercy" of random events controlling our behavior. But by restricting the operation of the randomizer to nil preference situations, Copeland notes, it would function merely as a tiebreaker. No one would be "helpless" in situations where there is a clear preference (Copeland, 1993, p. 146).

The second argument is that if it were a matter of pure chance how we act, we could not be held responsible for our actions. And then we would have to allow that we are not responsible for freely done actions. This is contrary to what most people believe, namely that it is accidents we are not responsible for, not freely undertaken choices. Again, in reply, Copeland notes he is talking only of nil preference situations, and there are enough of our other usual "outstanding candidate" situations to provide for responsibility. He gives an example. If a hijacker is considering shooting one of two hostages, and has no desire to shoot one rather than the other, it is a nil preference situation. The randomizer causes him to shoot one rather than the other, but the hijacker is still responsible because he had already decided to shoot a hostage, even though which hostage got shot was due to pure chance (Copeland, 1993, pp. 146-147).
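To fix ideas, here is a minimal sketch, in Python, of the kind of arrangement Copeland describes: the agent's own preferences settle ordinary "outstanding candidate" choices, and the randomizer is consulted only as a tiebreaker in nil preference situations. The scoring function is a hypothetical stand-in for the agent's deliberation, and ordinary operating-system randomness stands in for the quantum amplifier.

    import secrets

    def choose(options, preference_score):
        """Pick an option; use a random tiebreak only when preferences leave a tie."""
        scores = {opt: preference_score(opt) for opt in options}
        best = max(scores.values())
        candidates = [opt for opt, s in scores.items() if s == best]
        if len(candidates) == 1:
            return candidates[0]            # "outstanding candidate": no randomness involved
        return secrets.choice(candidates)   # nil preference: the randomizer breaks the tie

    # A clear preference: the randomizer never comes into play.
    print(choose(["vanilla", "chocolate"], {"vanilla": 2, "chocolate": 1}.get))

    # A nil preference situation: both options score the same, so the tie is
    # broken by chance (standing in for a truly random subatomic event).
    print(choose(["vanilla", "chocolate"], lambda flavour: 1))

Nothing in the sketch settles whether such a tiebreaker amounts to free-will; it only makes plain how limited the randomizer's role is, which is the point of Copeland's replies to the two objections above.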

So I conclude that the free-will issue need not pose a serious problem for the positions of our authors. They could argue that if hard determinism is correct, then robots will lack free-will in the same sense that humans do. If compatibilism is correct, then sophisticated robots with complicated processes of deliberation that mimic human deliberation (perhaps even involving levels of desire) could have free-will in any sense humans do. Even if indeterminacy enters into human choice in nil preference situations from sub-atomic randomness, robots could be provided with the ability to make choices in the same way. Only in the case of the correctness of libertarianism, and one that held that humans were truly unique kinds of agents, would we have a problem in providing robots with free-will. Given that this is a truly minority position on free-will, and that proponents themselves admit that it leaves the mechanism of free-will wholly mysterious and perplexing, our authors should not lose sleep over this possibility. The free-will issue does not seem to cripple the view that robots could be persons. The free-will issue, at least, need not undermine the plausibility of the realization of the extraordinary future.

Moral Agency and Creativity

It seems to me two major issues in personhood are consciousness and free-will. Moral agency is also sometimes seen as a major issue, but it might be parasitic on personhood or on the other issues of personhood. That is, we don't think that someone is a person because that individual is a moral agent. Rather, we judge that someone is a moral agent because we see him, her, or it as a person or as possessing such features as consciousness and free-will.

Given the above perspective, I have only a few comments to make about moral agency. Moral agency has to do with holding someone morally responsible for his or her actions. A moral agent knows right from wrong and has a moral obligation to do right and refrain from doing wrong. The ability to distinguish right from wrong may include the ability to distinguish among the following types of actions when faced with a choice: the morally impermissible (what one ought not to do), the morally obligatory (what one ought to do), and the merely morally permissible (what one can do but is not obligated to do). Thus when faced with a choice between telling the truth and lying, one can recognize telling the truth as morally obligatory and lying as morally impermissible. And when faced with a choice between eating an apple and not eating it, one can recognize either course as morally permissible but neither one as morally obligatory. (I assume that we would agree that eating an apple is neither required nor prohibited, although some versions of utilitarianism or consequentialism might claim that in theory no action is merely morally permissible: if selling or giving away the apple would result in greater happiness (or other intrinsic value) for all parties concerned, then one is obligated to do so, and to fail to do so is morally prohibited.)

If we take robots to be persons then we will take them to be moral agents, and the people transferring their minds to these robots will remain moral agents. Just as humans are educated about such things, I see no reason why a computer or robot could not be programmed or otherwise learn to undertake the morally appropriate course of action when faced with a choice. In a human-computer mind transfer, the robot would get the moral understanding of the individual being ported. Just as in the case of humans, they don't need to have a complete understanding of ethical theories to learn or recognize the difference between right and wrong, but having such an understanding might be useful for applying moral principles in the real world.

Being a moral agent is not the same as being a moral patient. Being a moral patient means that one can be an object of a moral or immoral action, that is, one can be benefited or harmed. We commonly take humans to be moral agents and moral patients. Sometimes we think of nonhuman animals as moral patients in some respects. We say that they have the right not to be tortured, for instance. But we don't believe that such animals can engage in moral and immoral acts, so we don't think of them as moral agents. Inanimate objects such as rocks are usually not considered to be moral agents or moral patients.

We might hold that future robots are moral agents even if we think them not conscious as long as we think they have free-will. Would we hold a zombie morally responsible for its acts? If so, then we would probably do the same for such robots.

Keep in mind that some thinkers who claim we have no free-will (hard determinists) could hold that no one is really morally responsible or a true moral agent, though we talk as if they are. The relation between the free-will / determinism issue and that of moral responsibility is complicated. Libertarians and compatibilists usually hold that moral responsibility requires that humans be able to make choices from a free-will (morality presupposes free-will). Libertarians claim that it doesn't make sense to hold a person morally responsible for an act (praiseworthy if good or right and blameworthy if bad or wrong) if the person could not have chosen to act otherwise. (Note that they will give "could have chosen otherwise" their own interpretation). Hard determinists often admit that since (on their view) determinism is true and we have no genuine free-will, we need to modify our notion of moral responsibility or perhaps even give it up. But people committing wrong acts still can be held responsible in the sense of liable to be punished if needed to change their future behavior.

On the other hand, compatibilists (and some hard determinists) would claim that determinism is required to make sense of moral responsibility and punishment (moral responsibility presupposes determinism). In punishing we assume we can supply a cause of changed behavior in the person punished and also deter others (cause them to act correctly) by the fear of punishment. But these functions wouldn't make sense if behavior (the choices to do right or wrong) were not caused. If human actions were uncaused, then punishment would not be able to influence behavior. If human actions were uncaused, actions would be completely unpredictable and capricious and therefore preclude responsibility.

We have seen that on some views of the matter, there is reason to think that, if compatibilism is correct, robots (like humans) can have freedom of the will though their actions are causally determined. So the free-will requirement for moral responsibility would be satisfied for robots and post-ported human persons. If hard determinism is correct, then robots aren't going to have free-will, but then neither do humans and so this is no disadvantage to the robot and wouldn't represent a post-porting loss for the human. If libertarianism is correct, then robots may or may not be capable of having freedom of the will. As we mentioned, if what gives humans free-will is something unique to a nonphysical human soul that would not accompany the porting, then robots would not have it. If what gives humans free-will is something unique to the carbon-based form of life and brain material possessed by humans, then robots would not have it. Otherwise, if the libertarian position of free-will is correct, then further argument is needed to show why robots could not have free-will as well.

Robots might also be considered moral patients, whether or not we think of them as moral agents. This would occur if we thought they could feel pain, experience disappointment, etc. If they could live a meaningful life according to a plan of their own choosing, we might grant them the same rights as we enjoy. A requirement of being a moral patient might be consciousness in some form. In other words, we might not think a zombie to be a true moral patient, and if we think robots to be nothing more than zombies, we might likewise not think of them as moral patients. In practice, though, this might be hard to sustain. For example, if a robot were screaming as if in pain, would we think it okay to continue to "hurt" it, even if we knew it was not experiencing any phenomenal pain?

Let's turn to the issue of creativity, about which I have just a little to say. First of all, with respect to computers and creativity, several things could be on the agenda. I will consider two things that might be meant.

One thing creativity could mean is flexibility of response: the ability to adapt one's behavior to new situations. The lack of such flexibility is one problem current computers have. They run into problems with situations that are new, because they work according to rules that haven't taken these new situations into account. And you can't write rules for every possible situation. Some people claim that humans don't work by following such rules, so computers will never be as flexible as humans as long as they work differently.

A big concern in current AI is how to give computers such flexibility, but we have seen that our authors show little concern for the problems of AI. The assumption is that all the problems of AI will be overcome somehow, perhaps by the manner in which robots are created to copy the functioning of the human brain. I already questioned this view in the last chapter and will provide further comments in a future chapter. All I wish to say on this matter is that I have pointed out, and will continue to point out, that our authors may be wildly optimistic in their assumption that it will be so easy to create smart robots, whether or not copying a human is involved. But if we do succeed in creating such robots in the way envisioned, they will be able to pass the Turing test and any other test we throw at them that a human could pass, and to do this they will have to possess human-like flexibility.

It is sometimes assumed that because robots would be causally determined, they could not be flexible or creative in the way humans are. It should be obvious from my discussion above that this assumption has many presuppositions of its own. It is not obvious that humans are not causally determined, or that if they are, they can't exhibit free-will. Humans have the kind of creativity we see in the everyday exercise of free-will. But we argued above that there is good reason to think that robots in the extraordinary future could be provided with the same kind of free-will, so they would have a similar kind of creativity, irrespective of any causal determinism. Of course if humans really are as some libertarians say, or if their free-will stems from an immaterial soul which robots will lack, then such robots probably won't be as creative. But that this will be the case is far from obvious.

The other considerations in this thesis should allow that robots could exhibit other kinds of creativity possessed by humans, such as artistic creativity. As Turing replied to Lady Lovelace's objection that machines can never do anything original, human originality may be nothing more than unexpected outcomes from the application of teachings or general principles learned earlier. Humans have varying amounts of artistic creativity, and we don't know why. If such types of creativity are bound up with different types of intelligence, as Gardner thinks, then they should be considered in the design of the robots and tested for before anyone transfers their mind in. But again, our authors just think that all the problems of AI will be solved because we will create computers that are just copies or equivalents of our brains, so in whatever way human brains have already overcome the problems of AI, robot brains will too.

To summarize, I see potential problems, well-known in AI, in getting robots to be smart. One of these is in getting computers to have flexibility of response, as mentioned above. Aside from this, I do not see any special problems in getting computers to be creative. If we can build one that acts just like a human, it should be as creative as the human we model it on. On the other hand, if libertarians or substance dualists are right, we won't be able to build such computers.