I’ve been considering the nature and role of consciousness for some years. Along the way, I’ve wondered about Artificial Intelligence (AI) and whether at some point it might become conscious.
My conclusion has been that however complex and “intelligent” an AI might become, that alone would not produce consciousness. Consciousness requires life and – as it must include at least some degree of self-awareness – it could only be the property of an individual organism with some organized “self.” Machine intelligence might be constructed – coded – to simulate a self (and thus pass the Turing Test), but this would nevertheless not be an awareness of self. (Even now, AIs and robots can be quite “clever.” A recent visitor to a Tokyo hotel asked the robot concierge how much it cost. It replied “I’m priceless.” Cute.) However elaborate the mind – and in the case of the most advanced AIs built with neural networks it might be quite sophisticated, even to the point that the human programmers cannot replicate its internal processes – consciousness is an additional property beyond mere complexity and processing power. In the first season of HBO’s Westworld, android “hosts” become conscious through the repeated experience of pain and loss. But of course, to feel such emotions, one must first be able to experience them as such. Quantity (of coded processing operations) does not equal qualia. Qualitative experiences are only possible if one is already conscious.
But human beings not only possess a conscious mind but also an unconscious one. Most brain processes – even those that at some point enter consciousness – originate and work away unconsciously. We all have the experience of dreaming, and also of doing things quite competently without any awareness of having done them. (We may, for example, drive a familiar daily route and arrive at our destination without remembering details of the ride.) The brain processing of the unconscious mind may indeed be replicated by advanced machine intelligence. As AI becomes more complex, given the power of electronic circuits and the sophistication of coded learning via neural networks, the processing capacity of machine intelligence may well exceed that of humans despite never becoming conscious. Such machines might also, perhaps, exceed our ability to understand what they are “thinking” (or even “dreaming”).
It was Stephen Hawking who most famously warned of the dangers of such AIs and urged that we be attentive to their management.
So, though AIs may never become conscious or self-aware, they may nevertheless run autonomously along routes enabled by the algorithms coded into their machine DNA and come to “conclusions” we humans might find inconvenient or dangerous. (They might, for example, decide that human activity is injurious to the planet – which it seems it is – and seek to correct the disease.) Limits should be included in the most basic AI coding and algorithms.
Isaac Asimov thought of this some time ago. His Three Laws of Robotics make a good start (sketched in toy code below):
- First Law – A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- Second Law – A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- Third Law – A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
- To these he later added a Zeroth Law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
This last might turn out to be a double-edged sword.
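As a purely illustrative aside, here is one way such a priority ordering might be expressed in code. It is a toy sketch, not a real safeguard: every name in it (the Action fields, the permitted check) is invented for the example, and a real system could not reduce “harm” to a boolean flag.

```python
# Toy illustration only: Asimov-style prioritized constraints as a check
# that runs before any action. All names here are hypothetical.

from dataclasses import dataclass


@dataclass
class Action:
    description: str
    endangers_humanity: bool = False  # would it harm humanity at large?
    harms_human: bool = False         # would it injure an individual human?
    disobeys_order: bool = False      # does it refuse a legitimate human order?
    self_destructive: bool = False    # does it endanger the robot itself?


def permitted(action: Action) -> bool:
    """Apply the laws in priority order: Zeroth, First, Second, Third."""
    if action.endangers_humanity:     # Zeroth Law
        return False
    if action.harms_human:            # First Law
        return False
    if action.disobeys_order:         # Second Law (already subordinate to the above)
        return False
    # Third Law: self-preservation is the lowest priority, so a self-destructive
    # action does not by itself veto the action here.
    return True


if __name__ == "__main__":
    print(permitted(Action("fetch coffee")))                      # True
    print(permitted(Action("push bystander", harms_human=True)))  # False
```

The point of the sketch is only the ordering: a lower law is consulted only when no higher law is violated.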
Next week I will return to Notes on "A History of Political Theory".