
Monday, May 18, 2020

Humming along….


The human brain is able to store and retrieve memories spanning the decades of an individual life. This occurs despite the exchange, death and constant rearrangement of our neurons, which raises the question of how. In the hard drives of modern computers – classical or quantum – data is stored in physical bits (or, in quantum machines, qubits). Data is written to them and retrieved from them. They can be re-written, but the bits themselves do not otherwise change. If one does change through damage or failure, that bit of information is – generally speaking, and leaving aside backups – lost. Computer memory is hard. Ours is soft, organic. Amidst the constant comings and goings of millions of nerve cells, our memories – our very identity and sense of self – remain constant (within the margins of error associated with life and aging). It’s a marvel of evolution, really.

According to a recent study, we owe this happy state of affairs to the fact that “as individual neurons die, our neural networks readjust, fine-tuning their connections to sustain optimal data transmission.” It’s a matter of individual nerve cells, and networks of them, being both excited and inhibited from firing, thus maintaining a dynamic balance. Through this process, the entire system (networks of networks) achieves “criticality” – sustaining an overall state that apparently maintains the data structure despite changes affecting the underlying organic bits. What this means is something like this: our inbuilt, self-regulating rhythm of neural activity at the individual nerve, synapse and network levels tends towards an optimal level of brain-wide activity. That allows us to remember stuff even as nerve connections change. It’s like the brain is constantly humming to itself the story of our lives. The humming is the basis of mind and memory.
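The study itself isn’t about code, but a toy simulation can make the word “criticality” concrete. The sketch below is my own illustration, not the researchers’ model: each active unit triggers, on average, branching_ratio others in the next time step. Activity tends to die out when the ratio is below 1, to run away when it is above 1, and to persist as a sustained hum only near 1 – the balance of excitation and inhibition the study describes.

import random

def simulate(branching_ratio, steps=100, start_active=50, cap=2_000):
    """Toy branching process: each active unit triggers two potential
    successors, each with probability branching_ratio / 2, so it triggers
    branching_ratio successors on average."""
    active = start_active
    for t in range(steps):
        successors = 0
        for _ in range(active):
            successors += (random.random() < branching_ratio / 2)
            successors += (random.random() < branching_ratio / 2)
        active = min(successors, cap)   # cap runaway (supercritical) growth
        if active == 0:
            return t + 1                # the "hum" died out at this step
    return steps                        # activity persisted for the whole run

for ratio, label in [(0.8, "subcritical"), (1.0, "critical"), (1.2, "supercritical")]:
    survival = [simulate(ratio) for _ in range(20)]
    print(f"{label:13s} (ratio {ratio}): mean survival {sum(survival) / len(survival):.0f} steps")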

While one might take the notion of this “humming” as simply a metaphor, the researchers suggest that the mechanism they hypothesize also may explain consciousness. But it seems to me this cannot be the case. Ever listen to a gurgling stream? It kinda hums too. But a stream – okay, as far as we know – is not conscious. But we are. I hum therefore I am.

Wednesday, September 25, 2019

Interlude: Unconscious Artificial Intelligence?


I’ve been considering the nature and role of consciousness for some years. Along the way, I’ve wondered about Artificial Intelligence (AI) and whether at some point it might become conscious. My conclusion has been that however complex and “intelligent” an AI becomes, that would not produce consciousness. Consciousness requires life and – as it must include at least some degree of self-awareness – it could only be the property of an individual organism with some organized “self.” Machine intelligence might be constructed – coded – to simulate a self (and thus pass the Turing Test), but this would nevertheless not be an awareness of self. (Even now, AIs and robots can be quite “clever.” A recent visitor to a Tokyo hotel asked the robot concierge how much it cost. It replied “I’m priceless.” Cute.) However elaborate the mind – in the case of the most advanced AIs built with neural networks this might be quite sophisticated, even to the point that the human programmers might not be able to replicate its internal processes – consciousness is an additional property beyond mere complexity and processing power. In the first season of HBO’s Westworld, android “hosts” become conscious through the repeated experience of pain and loss. But of course, to feel such emotions, one must first experience them as such. Quantity (of coded processing operations) does not equal qualia. Qualitative experiences are only possible if one is already conscious.

But human beings not only possess a conscious mind but also an unconscious one. Most brain processes – even those that at some point enter consciousness – originate and work away unconsciously. We all have the experience of dreaming, and also of doing things quite competently without any awareness of having done them. (We may, for example, drive a familiar daily route and get to our destination without remembering details of the ride.) The brain processing of the unconscious mind may indeed be replicated by advanced machine intelligence. As AI becomes more complex, given the power of electronic circuits and the complexity of coded learning via neural networks, the processing capacity of machine intelligence may well exceed that of humans despite not becoming conscious. Such AIs may also, perhaps, exceed the human ability to understand what they are “thinking” (or even “dreaming”). It was Stephen Hawking who most famously warned of the dangers of such AIs and urged that we be attentive to their management.

So, though AIs may never become conscious or self-aware, they may nevertheless run autonomously along routes enabled by the algorithms coded into their machine DNA and come to “conclusions” we humans might find inconvenient or dangerous. (They might, for example, decide that human activity is injurious to the planet – which it seems it is – and seek to correct the disease.) Limits should be included in the most basic AI coding and algorithms; a rough sketch of what such a built-in check might look like follows the laws below. Isaac Asimov thought of this some time ago. His Three Laws of Robotics make a good start:

First Law – A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law – A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law – A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

To these he later added a zeroth law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

This last might turn out to be a double-edged sword.
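Here is the promised sketch. It is purely illustrative – not Asimov’s, and not anyone’s production code. The Action fields, the example actions and the idea of reducing the laws to a simple yes/no filter are my own simplifications; the real Second and Third Laws are duties, not mere permissions.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool       # would executing this injure a human being?
    harms_humanity: bool    # would it harm humanity at large?
    ordered_by_human: bool  # was it commanded by a human?

def permitted(action: Action) -> bool:
    """Screen a proposed action against the laws, highest priority first.
    (A crude simplification: the real Second and Third Laws impose duties,
    such as obeying orders and self-preservation, rather than yes/no permissions.)"""
    if action.harms_humanity:   # Zeroth Law
        return False
    if action.harms_human:      # First Law
        return False
    return True                 # Second/Third Law conflicts would be weighed here

# Hypothetical examples:
print(permitted(Action("recharge batteries", False, False, True)))          # True
print(permitted(Action("'correct' the human disease", True, True, False)))  # False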


Next week I will return to Notes on "A History of Political Theory."

Saturday, September 16, 2017

The Brain As Quantum Computer


Recently I had the opportunity to watch southern African White-necked crows while they were watching me. I was taking afternoon tea (and eating rusks) on the patio overlooking a beautiful valley in the hills near Mbabane. Crows are smart, and these are among the smartest. One sat on the roof of the next house staring at me, convinced that at some point I would grow careless and give him or her a chance to steal something, perhaps something to eat. As I was ever-vigilant, it eventually flew off over the valley, soaring and dipping in very real time. As I watched, I thought about the complex calculations that a bird must make moment-to-moment to move so quickly through three-dimensional space. It must keep track of where it is, where to go, and how to get there. Knowing each requires entire subsets of information – such as (for where to go) where it saw food, or last saw food, or might find food, all while watching for anything that might require evasive action. These calculations must be solved each fraction of a second. I then thought this must be true for any animal with a brain (or nervous system). Neural systems allow the organism to move through, and react to, the environment rather than obey simple tropisms or merely be buffeted about by external forces. The more complicated the neural system – reaching a peak in the networks of networks, to the 4th or 5th power (or beyond), running in our human brains – the more complex the information that can be stored and manipulated. A classical view of the human brain would start with the 500 trillion synapses of the adult brain’s hundred billion neurons. Now that is a lot of synapses. But think about how much information is stored there in language, knowledge, experience, memories and everything else that makes each individual unique and utterly complex.

I’ve speculated in this space about quantum consciousness, the production of mind from brain through “collapsing the wave functions” apprehended from the perceptual flow. While watching the crows, I realized that the brain must function as a quantum computer and not as a classical system. The notion that quantum processes mix with (or form) consciousness is called “orchestrated objective reduction.” It rests on the possibility that the microtubules in nerve cells are small enough to contain quantum states. The brain accounts for just two percent of the human body’s mass but uses around 20% of its energy. It is basically like having a 20-watt bulb in our heads, shining all the time. This energy could be powering the creation and persistence of entangled states inside the microtubules of every nerve cell. In this way, the neural organization of the brain would amount to the maintenance of a complex, constantly refreshed and constantly changing, global entangled state. The collapse of the highest level of this entangled state-of-states coincides with consciousness. Inside our heads, this quantum computer has storage and calculating power well beyond what would be possible if our brains functioned simply along classical lines. It may produce what we experience as consciousness. Or, collapse may come through the decisions that we – the “ghost” in the machine, acting as the internal observer – make in each moment, as the crow flies.
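As a quick sanity check on that bulb metaphor, here is my own back-of-the-envelope arithmetic, assuming a resting metabolic rate of roughly 2,000 kcal per day (an assumed round number, not a figure from the post):

# Back-of-the-envelope check of the "20-watt bulb" figure. The 2,000 kcal/day
# resting metabolic rate is an assumed round number, not a figure from the post.
KCAL_PER_DAY = 2_000
JOULES_PER_KCAL = 4_184
SECONDS_PER_DAY = 24 * 60 * 60

body_watts = KCAL_PER_DAY * JOULES_PER_KCAL / SECONDS_PER_DAY  # about 97 W
brain_watts = 0.20 * body_watts                                # about 19 W

print(f"whole body: {body_watts:.0f} W, brain at 20%: {brain_watts:.0f} W")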


Friday, January 20, 2017

Westworld’s Consciousness Riff


The HBO remake of Westworld is superior TV in a number of ways. But its most intriguing aspect may be its foundational riff on what makes up consciousness. The basic premise is that recursive experience plus an emotional occurrence that anchors memory – especially an episode of painful loss – ignites (self) consciousness. Intriguing, yet not finally convincing. The ability to experience emotion itself requires consciousness – one must be aware of feeling such-and-such. Westworld’s premise begs the question of where that awareness comes from.

There seems to be no a priori reason to suppose that machines cannot be intelligent. It may be useful to think about intelligence as existing in more or less distinct forms. Generically, intelligence might be defined as the ability to acquire, process and apply knowledge. (Animals have varying degrees of this kind of intelligence, and so may plants.) Machines have the ability to store and process information. Machine intelligence is the orderly processing of information according to governing rules (software). Both the information and the rules are externally derived and stored within the machine. The machine itself may be contained in discrete units or widely distributed (the cloud). Machines can learn – by adding and elaborating rules based on previous cycles of processing – but they can’t process information without instructions stored in memory. Cloud intelligence is machine intelligence taken to a higher level: accessing massive amounts of information from many data sources, using more (and more powerful) processors and sophisticated software with built-in “learning routines.”
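To make that “adding and elaborating rules” idea concrete, here is a deliberately tiny sketch – my own illustration with an invented weather-to-action domain, not anyone’s actual learning algorithm. The machine can only act on instructions stored in its memory, and “learning” consists of adding a rule after a cycle of processing:

# A deliberately tiny sketch of machine "learning" as rule elaboration. The
# weather-to-action domain is invented purely for illustration.
rules = {"rain": "carry umbrella"}  # externally supplied starting rule, stored in memory

def act(observation: str) -> str:
    # The machine can only process what its stored instructions cover.
    return rules.get(observation, "no rule stored")

def learn(observation: str, outcome: str) -> None:
    # "Learning": add or elaborate a rule based on a previous cycle of processing.
    rules[observation] = outcome

print(act("snow"))           # -> "no rule stored"
learn("snow", "wear boots")  # a processing cycle adds a new rule
print(act("snow"))           # -> "wear boots"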

Human intelligence is what we human beings have. It is what we know as manifested in thought and action. Our knowledge is stored in two places: in our heads and in our culture. Culture is contained in language, traditions, techniques, art and artifacts, beliefs and whatever else carries collective knowledge across time and generations. The basic unit of human intelligence, however, remains the individual mind, which itself can be thought of as an organically based “machine.” But there seems to be a ghost in the human machine that we experience as consciousness. Mere machines cannot feel emotion – or pleasure and pain – no matter how massive the memory and computing power. And the movies The Matrix and Terminator aside, machines do not inherently strive for self-preservation. Machines are not alive, nor do they have “souls.” Whether because humans are organic life forms that evolved over hundreds of millions of years after somehow crossing over from the inorganic, or because of some deeper principle of the universe, we feel and experience pleasure and pain. Why is unknown. Westworld, for all its brave speculation, sidesteps this question.

Monday, September 27, 2010

The Largest Quantum Object - The Toilet?

Science News, in its September 25 issue, runs a piece on suggestions that gravity is a matter of entropy and information. I had a hard time getting this one. It seems to be a matter of looking at space as bounded by holographic screens whose boundaries create gradients that lead to the movement we call gravity. A black hole’s event horizon can also be considered such a hologram, containing on its curved, two-dimensional surface all the information about the black hole’s interior entropy. I can follow the illustration of how a two-dimensional surface – a mirror – can contain all the information needed to record the three-dimensional scene it reflects. But even if I understood how all this creates gravity, it would still leave the question of why the universe obeys the second law of thermodynamics.

Anyway, what I really want to ask now is why toilets seem to work best when one lifts the top of the tank off. Improper flushing is a common problem with these wondrous contrivances. Sometimes handles get stuck, or maybe the mechanism operates with insufficient oomph. When that happens, it seems that simply lifting the tank lid to see what is going wrong pretty much guarantees it will work properly (and that you will not see what the problem may be). Could it be that when the lid is closed, the toilet is a quantum object, with the tank containing the water like Schrödinger’s cat-box? When the lid is opened, the wave function collapses and the object settles into the functioning state? This would make the toilet the largest quantum object known to physics. (It might also argue for transparent tanks.) Could this also somehow be related to black holes?