Sat, 15 Jan 1994 10:12:00 PST

Passmore writes:

"I believe that a concept of consciousness can exist without reference
to a specific material construction (for humans that material
construction would be the organic brain). The attempts to create
artificially intelligent computers based on non-organic materials
seem to indicate at least a belief that consciousness may be created
without the specific hardware present in humans. To take this to its
extreme, one might argue that if consciousness remains the same thing
regardless of the hardware supporting it, then the consciousness (or
mind) exists separately from the specific hardware involved.

This is not to say that this consciousness can exist without hardware
to support it. I merely hold that consciousness can exist regardless
of what hardware is supporting it."

I think Passmore is drawing the wrong conclusion from AI work (which, it
should be noted, has not achieved anything even vaguely resembling
"consciousness", and still has enormous difficulty getting computers to
reason in a sui generis manner). If one accepts that mind=brain (that is,
mind is simply a name for a property that the brain has as a material entity
and does not come, somehow, from without), then it follows that a computer
based on non-organic materials also has the potential of exhibiting the
capacities we associate with ourselves--consciousness, meaning, reasoning,
beliefs, emotions, etc. It is not that mind exists separately from
specific hardware, but mind is a property of hardware (and software). (Note
that androids such as Data in Star Trek, while having all the smarts and then
some of humans, lack emotion, beliefs, etc--an unwillingness on the part of
the writers to accept the possibility that these latter are, indeed, merely
phenomena of the brain, hence potentially duplicable in an android.) If one
grants that these properties that we associate with ourselves--and often
associated with ourselves as what makes us human--are simply the product of a
material brain, then we can ask: To what extent are these necessary
properties of a "smart brain", and to what extent are these properties that
have been selected for via natural selection? For if you accept the thesis
that mind=brain, and the brain has arisen via natural selection, then
properties of the mind are either selected for directly via natural
selection, or are epiphenomena of other properties. Now consider something
like religion and belief in spiritual powers (or however it may be
expressed). Could we have a brain with all of its reasoning capacities yet
not have a belief in a god or gods? That is, is the belief in gods, the
afterlife, etc. a property that was selected for, or is it an epiphenomenon
of properties, say reasoning powers, in the brain that were selected for?
Now if the latter, then the implication is that an android such as Data would
necessarily also have a belief in god or gods.

It seems that accepting ourselves as purely material (i.e., having arisen via
natural selection without some kind of outside "force" or whatever also being
part of our makeup) implies that if we are sufficiently clever in
constructing androids we may/will arrive at the point where what we create
has the properties we often associate with ourselves as what makes us "human"
and not just "another animal." How will we deal with androids that are
conscious, believe in gods, are capable of developing moral codes, etc.?

D. Read