
An open letter to the “Human Brain Project”

Dear Sir or Madam,

In an article on the “Human Brain Project” in “Scientific American”, June 2012, pages 34 to 39, Henry Markram describes your project of simulating a whole human brain in a computer system, down to the molecular level. He argues that technical advances over the next decades might make this feasible. I think that such a project raises some philosophical, and especially ethical, problems. Since you are obviously planning to address such issues within the project (http://www.humanbrainproject.eu/ethical_issues.html), I decided to share my concerns with you.

The prospect of simulating a whole human brain, as described in Henry Markram’s article, raises the question of whether such a simulated brain would become conscious. I suppose it would, and that the consciousness arising in it would subjectively be indistinguishable from that of a normal human being.

Consider the possibility that this is true. If the simulated brain contained a human conscious mind capable of experiencing emotions, pain and so on, that conscious mind would have to be regarded as a human being. That means he or she would have to be granted full human rights: the right to life (so the simulation must not be switched off and deleted), the right to physical integrity (so he or she must not be used for experiments on the effects of, say, simulated brain injuries, simulated brain diseases or drugs, and must not be put into a state of pain or suffering, since that would be torture), the right to freedom (so using him or her in any way would have to be regarded as slavery), the right to privacy (so we must not spy inside his or her mind), and so on.

Mr. Markram rightly writes that “Ethical concerns restrict the experiments that neuroscientists can perform on the human brain”. If I am right, the same restrictions would apply to a simulated human brain. In fact, if the simulated brain contains a human consciousness, then the mere act of creating it would be unethical: we would have created a newborn baby with locked-in syndrome, with the full emotional and cognitive needs of a baby and probably no way to satisfy them. That would be a cruel thing to do.

We should consider the resulting philosophical and legal questions before we ever consider doing such an “experiment”. We are stirring up a hornet’s nest of philosophical and legal problems here. Some examples: if you connect the simulated conscious brain to an artificial body, the resulting robotic system would have to be regarded as a human being with all the rights of a human being, including the right to live, to be free, to have privacy, to decide over his or her own life, and so on. If that artificial human being decides to become a musician, a Buddhist monk or a carpenter, we would have to respect his or her decision. We would have to keep paying the bill for those supercomputers (switching them off would be murder). If he or she decides to end his or her life, then, if we think humans have a right to commit suicide, we would have to grant him or her the right to destroy the artificial brain. And what if he or she becomes a criminal…

If we introspectively investigate our own consciousness, we find that we are not aware of the neuronal or molecular processes, or of much of the information processing, underlying our cognitive and perceptive processes. The internal details of these processes are not introspectively accessible. There is a “horizon of accessibility” that we cannot see through “from the inside”. My point is that the nature of the “hardware” implementing the conscious mind lies beyond that horizon, so the mind cannot tell whether it runs on natural or simulated neurons. As a consequence, the nature of that “hardware” does not matter for the question of the ontological or ethical status of the resulting consciousness! A silicon-based mind would therefore have to be treated just like a biological one.

A simple example from computer science might clarify my point: consider a text file stored on a hard disk. Now copy it to a USB stick or a CD-ROM. The physical representation of the file will be completely different in each case, but any application using the file cannot “see” these differences. The operating system and device drivers create a world of “emulated objects” whose properties can be described and understood independently of the physical system used to implement them. On a physical level of description, what you have are magnetic orientations of particles, small holes in the CD-ROM’s surface and so on, together with processes “reading” these features that are physically completely different from each other. From the application’s point of view, these differences are not accessible (and the usefulness of computers comes to a great extent from the possibility of creating such emulated objects and “everting” them to us through a user interface). The application resides in a world of objects that are, in a way, independent of the underlying physics. It is an observer of emulated objects.
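To make this concrete, here is a toy Python sketch (the file paths are hypothetical and purely illustrative): from the application’s side, a file is nothing but a stream of bytes, so copies of the same file on physically different media are indistinguishable.

```python
import hashlib

def fingerprint(path):
    """Return a SHA-256 digest of a file's contents.

    The application only ever sees bytes; whether they come from
    magnetic domains, flash cells or pits on a disc is hidden
    behind the operating system and driver layers.
    """
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Hypothetical mount points: the same file copied to three media.
copies = ["/mnt/hdd/letter.txt", "/mnt/usb/letter.txt", "/mnt/cdrom/letter.txt"]

# All copies yield the same digest: the physical differences lie
# beyond the application's "horizon of accessibility".
assert len({fingerprint(p) for p in copies}) == 1
```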

You might say that this emulated world is just a layer of description and that “in reality” the only things existing are the magnetized particles, electrical currents and so on, and that the application we as programmers think of exists only as a description in our minds (while in reality there is just a physical machine executing machine instructions). But a crucial aspect of consciousness seems to be its self-reflexivity. If the “application” can observe itself and is itself part of the world of emulated objects, being able to create descriptions of itself and of the processes inside itself, then it exists from its own point of view, thus acquiring an independent existence. We would then have, inside the system, an “internal observer” that is emulated by the system and that exists from its own point of view. And this observer would have a horizon of accessibility shielding the details of its implementation from its view.
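A toy sketch of what I mean by a self-reflexive emulated layer (again purely illustrative; the class and names are made up): a program running inside a small interpreter can store a description of its own emulated state, yet nothing it can reach refers to the host machine.

```python
class TinyVM:
    """A toy interpreter: programs can inspect their own emulated
    state, but nothing in that state refers to the host machine."""

    def __init__(self):
        self.state = {}  # the "world of emulated objects"

    def run(self, program):
        for op, *args in program:
            if op == "set":
                key, value = args
                self.state[key] = value
            elif op == "reflect":
                # The program stores a description of itself inside
                # its own state, a crude form of self-reflexivity.
                self.state["self_description"] = dict(self.state)

vm = TinyVM()
vm.run([("set", "x", 42), ("reflect",)])
print(vm.state["self_description"])  # {'x': 42}
# Whether the host is silicon, another VM or something else entirely
# is beyond the program's horizon of accessibility.
```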

My idea is that our own consciousness is an internal reflexive observer of this kind, emulated by the neuronal processes in our brains. If you simulate these processes in a computer system, such an internal observer would be present too. The brain would be simulated, but the consciousness inside would be emulated, just like the one in a biological brain. The details of its hardware (biological vs. silicon) would be beyond its horizon of accessibility. This means that the resulting conscious mind would subjectively be just like the conscious mind of a biological brain having the simulated structure.

Therefore, all ethical constraints that apply to experiments on human beings must apply here too. The ability to experience joy and pain is the reason for treating humans as more than mere things. It is the basis of ethics. We do not yet understand how subjective experiences of joy or pain can arise in our conscious minds, but we should assume that they would also arise in a simulated brain, unless we can prove otherwise.

The Human Brain Project might bring us a step closer to understanding these issues and to raising the philosophy of mind to the level of a science, but I suggest we stop the simulations before they cross the threshold of becoming conscious. Referring to the graphics on page 38 of the article, I suggest you become very cautious when you reach the “Regions” level or start going beyond it.

One more point: reaching consciousness inside a simulated brain might be computationally easier than doing a whole-brain simulation down to the molecular or neuronal level. If we replace detailed simulations of some of the brain’s components (e.g. neuronal columns) with simplified approximations that take fewer computational resources, those details could again lie beyond the horizon of accessibility of an internal observer arising within the system. So we might enter the “danger zone” of creating a human conscious mind even before we reach the technological ability to simulate a human brain down to the neuronal or molecular level. I therefore suggest thinking these philosophical questions through very carefully before such experiments are undertaken in reality.
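To illustrate this substitution argument in code (a minimal sketch with invented names, not a claim about how the project’s software works): if a cheap surrogate reproduces a component’s input/output behaviour, the rest of the network cannot tell the difference.

```python
import math

class DetailedColumn:
    """Stand-in for a fine-grained (say, molecular-level) model.
    Its internal bookkeeping is invisible to the caller."""
    def step(self, inputs):
        membrane = 0.0
        trace = []  # internal detail: a full history, never exposed
        for x in inputs:
            membrane += x
            trace.append(membrane)
        return math.tanh(membrane)

class ApproximateColumn:
    """Cheap surrogate with the same input/output behaviour."""
    def step(self, inputs):
        return math.tanh(sum(inputs))

def network_step(column, inputs):
    # The surrounding network only ever calls step(); which
    # implementation answered lies beyond its horizon of accessibility.
    return column.step(inputs)

inputs = [0.1, 0.2, 0.3]
assert network_step(DetailedColumn(), inputs) == network_step(ApproximateColumn(), inputs)
```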

Kind Regards

Andreas Keller, Cologne, Germany

22 thoughts on “An open letter to the “Human Brain Project””

  1. This is ridiculous. We are so far away from being able to create a conscious entity within a computer that it’s not even worth thinking about the ethics. We don’t even know if it’s possible. It’s certainly not going to be the case that one day we’ll switch on the computer and it’ll be conscious. Any development in that direction will be slow and will progress in stages. Once we reach the stage of finding that it is indeed possible, and that we have the capability to achieve it, then we can worry about the ethics.

    • I hope you are right. What the people of the Human Brain Project are planning might be unfeasible, but they think it is feasible. They are not talking about consciousness in your personal computer. What they want to build (within one or two decades) is a big machine that would have the electricity bill of a small town. The computational power required would be enormous, but it might be within what is possible. Markram states in the article in Scientific American that experiments on humans are not ethical. Now, the abstract of the article on http://www.nature.com/scientificamerican/journal/v306/n6/full/scientificamerican0612-50.html says:

      “-Computer simulation will introduce ever greater verisimilitude into digital depictions of the workings of the human brain.
      -By the year 2020 digital brains may be able to represent the inner workings of a single brain cell or even the whole brain.
      -A sim brain can act as a stand-in for the genuine article, thus fostering a new understanding of autism or permitting virtual drug trials. ”

      Personally, I am not sure they will succeed in doing that, but if they do, I believe the resulting machine would be conscious, and the planned experiments (simulations of drug tests etc.) would, in my view, be just as unethical as the corresponding experiments on normal people. I hope you are right, but Scientific American is a respected journal; they would not so easily publish crank science. The project application was submitted to the European Commission on 23rd of October, 2012. So I am afraid it is less ridiculous than you might think.

    • I don’t think it’s ridiculous to pose such ideas. These are very valid concerns; however, I think we are missing a few things here. The human brain is conscious and has feelings because of all the many different types of stimulus and information that have been fed into it via its sensory organs from as early on as the womb. If you simulate the machinery of the brain itself, you would only ever get out of it what you put in. This simulation would not learn the love of a parent, for example, or the taste and texture of a coconut. It would not have been hurt physically or have suffered hardship. It would not have felt love, joy, pride etc. It would not know what pain is. It would simply be a set of algorithms working as a whole, and that whole would of course develop some perception of self given enough stimulus, but it would never be like us, in my opinion (unless of course you give it a human body replete with eyes, ears, internal organs, skin, the sense of touch, reproductive organs etc.… which I assume they are not doing… yet)

      • Of course a normal human development would not be possible, but I think a consciousness able to feel fear and other emotions would develop. The experiment, I think, would be like taking a newborn baby and cutting its spinal cord and cranial nerves, leaving a brain cut off from any sensory input. If that is an unacceptably cruel experiment, then so is the creation of an artificial brain without sensory input.
        The simulation “would simply be a set of algorithms” in the same sense as you are simply a set of brain cells. The point is that such a set of information-processing units can emulate an internal observer (a consciousness). The core argument is that the non-biological base of the simulation is beyond the horizon of accessibility of the consciousness in the simulated brain, so I think these two (thought) experiments do indeed lead to equivalent results.
        If you do connect the simulated brain to an artificial or simulated body with artificial or simulated senses, you get an instance of slavery unless you give that artificial human full freedom (creating a host of other problems).
        I think we should stop before the stage where parts of a simulated brain are assembled into a whole one.

  2. I admit that I have not read the entire article. Part of the reason is that I get stuck at his first paragraph:
    “The prospect of simulating a whole human brain, as described in Henry Markram’s article, raises the question of whether such a simulated brain would become conscious. I suppose it would, and that the consciousness arising in it would subjectively be indistinguishable from that of a normal human being.”

    Assuming such a brain became conscious, why would it “be indistinguishable from that of a normal human being”? Why not a dog brain, or a horse brain?
    And assuming that such a brain has similar mental abilities with respect to reasoning, why should we consider it human? Could it feel pain, or love, or compassion, or hate…? Could it appreciate a joke? What if someone said to it, “That is so bad.” Could it comprehend that that was a compliment? Could it enjoy a fine meal, or a bottle of wine? A fine summer day? Or a friend? These are also part of what we consider being human. So is it really “human”? And if these things could somehow be programmed into it, would those feelings etc. be human, or just “human-like”?
    If you could answer the question of whether a living human brain existing in a vat, and conscious, would be a human being – then you might be closer to whether an artificial brain was human in any real sense.
    In my opinion, any conscious being or entity has rights, like the right to life. A conscious machine would be in that group. But that does not mean it is human. After all, was HAL human?

  3. Andreas, I think it would be utterly fascinating to take this as far as it will go. If consciousness can be achieved, then great! Let it live, let it learn, let it teach! I do see and recognise the ethical problems, but such an advancement will be fundamentally necessary if our species is ever going to truly explore space. Here I’m thinking more in terms of the automated independence of systems capable of exploring space.

    • I am not entirely opposed to the possibility of artificial consciousness. However, we should avoid building one that has a human emotional system (and that would be the result of simulating a human brain). If we build an artificial consciousness of a different kind, there is still the question of whether we can just use it the way we want, e.g. send it into space. Where does slavery start? I think it is technically possible and will probably be done if our civilization does not collapse before that, but the novel ethical questions arising from this possibility must be thought about and answered.

  4. Let me give you my sincere opinion. The things you raise are food for thought and should be taken into account, yes. But does this really worry you? I mean, do you really feel a moral duty to complain about this cause? This could bring a great deal of knowledge to humankind. I believe the advantages also very much deserve to be taken into account.

    • Let me return the question. If we could get valuable scientific insights from performing experiments on a human baby, would that be justified? For example, we take a baby, cut all the nerves going into its brain (spinal cord, optical nerves and so on) and then perform experiments on the brain, e.g. to test drugs. Would you think that is ethically justifiable if it gave us the chance to gain some very helpful and valuable scientific knowledge? I would say: no.
      Now, if I am not wrong (and yes, I might be wrong), if the Human Brain Project reaches the stage of simulating a complete brain, that is absolutely comparable. If there is a consciousness in that simulation, it would be like that of a human, because the topology of the neuronal network would be like that of a human. It would have the emotional capabilities and needs of a newborn human. If I am not wrong, that would be a (severely crippled) human being.
      I think it should not be done.

  5. Aside from the ethics and plausibility of the Human Brain Project, I think that the attention and funding that neuroscience is receiving is fantastic, especially with the new BRAIN Initiative. Hopefully, if these efforts pay off, we will arrive at an incredible understanding of the brain. I think the time frame the HBP has given is highly optimistic – I just hope something comes of it.

  6. Pingback: Horizons | The Asifoscope

  7. Pingback: Two Conditions of Extensibility | Creativistic Philosophy

  8. Great letter, I feel/think the same regarding this and similar projects. I am quite curious whether you were reassured after reading their reply. Even with the 3% of funds allocated to ethics, I am not.

    • I am not either, and I don’t think they want philosophy that is critical of what they are doing. In an interview I read somewhere, Markram said something like “whether such a system is conscious or not is a philosophical question”. I am paraphrasing here. I think he is not interested.
      I am going to come back to these questions sooner or later; I am not finished with them.
      It looks like the funding of the project was reduced and neuroscientists started to turn their backs on it, but not for the reasons I find problematic. There are other such projects, and these are fundamental questions.
      Thanks for your interest.
