Chapter 6 - Conclusion

Author: Winfred Phillips

Our authors are confident that within the next century humans will transfer their minds into robots and continue living as the same persons. In this thesis I have discussed issues relevant to the plausibility of the claim that this will happen.

I broke their claim down into four predictions (all supposed to come true within the next century):

1. Robots will be as smart as humans are.

2. Robots will be capable of being persons.

3. There will be a viable mechanism for the transfer of a human's mind from a human body to a robot body.

4. Transferring one's mind to a robot will allow one to continue one's existence as the robot.

I can now summarize the results of my examination of each of these predictions.

With respect to the prediction that robots will be as smart as humans, I think that we have no reason to be confident that this will occur in the next century. Computing power is advancing, but I doubt that we can predict its rate of advance with any degree of accuracy. Moore's Law, for example, is more a self-fulfilling prophecy than a scientific law, and it could very well fail before smart computers arrive. Reliance on future technologies such as nanotechnology and atomic computing is speculative. What may be most important to the kind of smart computers that would be needed for mind transfer is a recreation of the human brain's massive parallel processing, not just sheer chip density or raw computing power, and this seems a long way off. Our authors give no indication of how they would solve current problems in artificial intelligence, and they have paid insufficient attention to the need for a smart computer to match all the varied facets of human intelligence. And it is controversial whether future smart computers will be able to consciously understand anything.

With respect to the prediction that robots will be capable of being persons, I think that the issue of consciousness might prove to be the stumbling block to any confidence in this prediction. Prerequisites of personhood are commonly thought to include such characteristics as consciousness, free will, creativity, and moral agency. Our authors all assume that future robots will be conscious, though they recognize that they cannot prove this. Their position on the relation of human mind to human brain seems closest to functionalism, which would allow them to hold that robots could have minds. But functionalism cannot guarantee that robots will have phenomenal consciousness, and our uncertainty is heightened by the fact that robot brains would not resemble human brains exactly, whether in their material or in their organization and processing. This matters because a good case could be made that phenomenal consciousness is not logically supervenient on the physical.

On the other hand, our authors do seem to have a good chance of showing that robots could have free will in ways similar to those in which humans have free will, assuming that humans do have free will, or could lack it in ways similar to those in which we lack it, if humans do not have free will. Likewise, our authors have a good chance of showing that robots could be creative in the same ways that humans are creative and could be moral agents as humans are.

With respect to the prediction that there will be a viable mechanism for the transfer of a human's mind from a human body to a robot body, I think we have good reason to see this issue as a major obstacle to realizing mind transfer. A variety of possible transfer scenarios are presented by our authors. Both invasive and noninvasive procedures are considered for determining the relevant information about the brain of the human undergoing the transfer. Some scenarios have the robot already created, and others have the robot created during the actual transfer. Some scenarios see transfer as involving a series of transplants of robot brain parts for human brain parts. But in most of the transfer scenarios, problems might arise that our authors seem to have neglected to consider. Our authors commonly think of the human brain as some sort of computer running a program, with the mind thought of as a piece of software. This may not be a correct understanding of the brain and mind, but even if it is, determining the program running on a part or the whole of the human brain by some sort of reverse engineering may be practically impossible. The notion of a part or whole of a robot brain being equivalent to a part or whole of a human brain is itself vague and in need of clarification. Furthermore, testing is a common and perhaps universal quality control practice in software development, but it is very difficult to imagine humans having the endurance to undergo anything like a realistic testing program to determine the correctness of the software written for the robot brain.

With respect to the prediction that transferring one's mind to a robot will allow one to continue one's existence as the robot, I think that though it is certainly not clear that this will happen, something almost as good could very well happen. Our authors have only a bare grasp of some basic theories of personal identity, and they seem to prefer theories that see identity as consisting in some sort of continuity of mental states. Some sort of mental state continuity could very well be preserved in human-computer mind transfer. However, questions arise about identity in situations of transferring a human mind into multiple robots, or in situations in which the original human remains intact after the robot has been created and the mind supposedly transferred. These puzzle cases are occasionally mentioned by our authors, but no real attempt is made to solve them; this seems partly because our authors have not attempted to address the philosophical issues adequately and partly because they treat the entire issue of personal identity in too cavalier a fashion. Their position on identity issues in mind transfer could be strengthened were they to consider the theories of Parfit; this might allow for the existence of a robot survivor (a descendant self) of a human even were the robot not considered to be the same person as the human. But of course this would require modifying our authors' assumption that human-computer mind transfer would maintain the identity of the person, and it might not be palatable to people considering undergoing the mind transfer process.

So I conclude that our authors have no right to be so confident that human-computer mind transfer will occur within the next century. There is little cause for optimism and much cause for pessimism about the truth of their predictions, though a few grounds for optimism remain. One is that if humans are able to transfer their minds as described, there is a good chance that they will continue as the same persons or as something just as good (survivors in the form of their descendant selves). Another is that if we can build the robots anticipated, there seems little reason to think robots cannot have free will, be moral agents, or be creative in any way humans are.

On the other hand, it looks doubtful that technology will soon enable computers to equal the computing power of the brain, that we will soon be able to build human intelligence into such computers, that we will be able to build adequate robot bodies, that we will be able to reverse engineer the program running on the wetware of the human brain, that we will be able to write equivalent software for a robot brain, or that we will be able to adequately test any such software we write. In addition, we just don't know what gives rise to consciousness in our brains, whether there is anything analogous to a program running in our brains, whether the robots we build (assuming we can build them) will be conscious persons or just zombies, whether they will understand anything in the sense of conscious awareness of meaning, or whether we will be able to convince many people to attempt human-computer mind transfer.

I conclude that our authors are too optimistic and that their confidence is premature.

Copyright: 2000