Dear Sir or Madam,
In an article on the “Human Brain Project” in “Scientific American”, June 2012, pages 34 to 39, Henry Markram describes your project of simulating a whole human brain in a computer system, down to the molecular level. He argues that technical advances over the coming decades might make this feasible. I think that such a project raises philosophical, and especially ethical, problems. Since you are obviously planning to address such issues within the project (http://www.humanbrainproject.eu/ethical_issues.html), I decided to share my concerns with you.
The prospect of simulating a whole human brain, as described in Henry Markram’s article, raises the question of whether such a simulated brain would become conscious. I suppose it would, and that the consciousness arising in it would be subjectively indistinguishable from that of a normal human being.
Consider the possibility that this is true. If the simulated brain contained a human conscious mind capable of experiencing emotions, pain and so on, that conscious mind would have to be regarded as a human being. That means he or she would have to be granted full human rights: the right to life (so the simulation must not be switched off and deleted), the right to physical integrity (so he or she must not be used for experiments on the effects of, say, simulated brain injury, simulated brain diseases or drugs, and must not be put into a state of pain or suffering, since that would be torture), the right to freedom (so using him or her in any way would have to be regarded as slavery), the right to privacy (so we must not spy inside his or her mind) and so on.
Mr. Markram rightly writes that “Ethical concerns restrict the experiments that neuroscientists can perform on the human brain”. If I am right, the same restrictions would apply to a simulated human brain. In fact, if the simulated brain contains a human consciousness, then the mere act of creating it would be unethical. We would have created a newborn baby with locked-in syndrome, with the full emotional and cognitive needs of a baby and probably no way to satisfy them. That would be a cruel thing to do.
We should consider the resulting philosophical and legal questions before we ever consider doing such an “experiment”. We are stirring up a hornet’s nest of philosophical and legal problems here. Some examples: If you connect the simulated conscious brain to an artificial body, the resulting robotic system would have to be regarded as a human being with all the rights of a human being, including the right to live, to be free, to have privacy, to decide over his or her own life and so on. If that artificial human being decides to become a musician, a Buddhist monk or a carpenter, we would have to respect the decision. We would have to keep paying the bill for those supercomputers (switching them off would be murder). If he or she decides to end his or her life, then, if we think humans have a right to commit suicide, we would have to grant him or her the right to destroy the artificial brain. And what if he or she becomes a criminal…
If we introspectively investigate our own consciousness, we find that we are not aware of the neuronal or molecular processes, nor of much of the information processing, underlying our cognitive and perceptive processes. The internal details of these processes are not introspectively accessible. There is a “horizon of accessibility” that we cannot see through “from the inside”. My point is that the nature of the “hardware” implementing the conscious mind lies beyond that horizon, so the mind cannot tell whether it runs on natural or simulated neurons. As a consequence, the nature of that “hardware” does not matter for the question of the ontological or ethical status of the resulting consciousness! A silicon-based mind would therefore have to be treated just like a biological one.
A simple example from computer science might clarify my point: consider a text file stored on a hard disk. Now copy it to a USB stick or a CD-ROM. The physical representation of the file will be completely different in each case, but an application using the file cannot “see” these differences. The operating system and device drivers create a world of “emulated objects” whose properties can be described and understood independently of the physical system used to implement them. On a physical level of description, what you have are magnetic orientations of particles, small pits in the CD-ROM’s surface and so on, together with processes “reading” these features that are physically completely different from each other. From the application’s point of view, these differences are not accessible (and the usefulness of computers comes, to a great extent, from the possibility of creating such emulated objects and “everting” them to us through a user interface). The application resides in a world of objects that are, in a way, independent of the underlying physics. It is an observer of emulated objects.
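To make this concrete, here is a minimal sketch in Python (the file paths are invented for illustration): the same application code reads the file from each medium, and the physical differences remain completely invisible to it.

    def read_document(path):
        # The application sees only the emulated object, "a file of bytes",
        # never the physics behind it.
        with open(path, "rb") as f:
            return f.read()

    # Hypothetical mount points for three physically different media:
    copies = [
        "/home/user/letter.txt",    # hard disk: magnetic orientations
        "/media/usb/letter.txt",    # USB stick: charges in flash cells
        "/media/cdrom/letter.txt",  # CD-ROM: pits in the disc's surface
    ]

    # The same code works for all three, and the bytes it returns are
    # identical; the medium lies beyond the application's "horizon":
    # contents = [read_document(p) for p in copies]
    # assert contents[0] == contents[1] == contents[2]

The point is not the code itself, but that nothing in it refers to, or could refer to, the physical medium.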
You might say that this emulated world is just a layer of description, that “in reality” the only things existing are the magnetized particles, electrical currents and so on, and that the application we as programmers think of exists only as a description in our minds (while in reality there is just a physical machine executing machine instructions). But a crucial aspect of consciousness seems to be its self-reflexivity. If the “application” can observe itself and is itself part of the world of emulated objects, able to create descriptions of itself and of the processes inside itself, then it exists from its own point of view, thus acquiring an independent existence. We would then have, inside the system, an “internal observer” that is emulated by the system and that exists from its own point of view. And this observer would have a horizon of accessibility shielding the details of its implementation from its view.
My idea is that our own consciousness is an internal reflexive observer of this kind, emulated by the neuronal processes in our brains. If you simulate these processes in a computer system, such an internal observer would be present too. The brain would be simulated, but the consciousness inside would be emulated, just like the one in a biological brain. The details of its hardware (biological vs. silicon) would lie beyond its horizon of accessibility. This means that the resulting conscious mind would subjectively be just like the conscious mind of a biological brain having the simulated structure.
Therefore, all ethical constraints that apply to experiments on human beings must apply here too. The ability to experience joy and pain is the reason for treating humans not just as things; it is the basis of ethics. We do not yet understand how subjective experiences of joy or pain can arise in our conscious minds, but we should assume that they would also arise in a simulated brain, unless we can prove otherwise.
The Human Brain Project might bring us a step closer to understanding these issues and to raising the philosophy of mind to the level of a science, but I suggest we stop the simulations before they cross the threshold of becoming conscious. Referring to the graphics on page 38 of the article, I suggest you become very cautious when you reach the “Regions” level or start going beyond it.
One more point: producing consciousness inside a simulated brain might be computationally easier than a whole-brain simulation down to the molecular or neuronal level. If we replace detailed simulations of some of the brain’s components (e.g. neuronal columns) with simplified approximations that take fewer computational resources, those details could again lie beyond the horizon of accessibility of an internal observer arising within the system. So we might enter the “danger zone” of creating a human conscious mind even before we reach the technological ability to simulate a human brain down to the neuronal or molecular level. I therefore suggest thinking these philosophical questions through very carefully before such experiments are undertaken in reality.
Andreas Keller, Cologne, Germany