Showing posts with label computers. Show all posts

Monday, February 27, 2023

It’s not AI that’s dangerous …

 

It’s not AI that’s dangerous, it’s us.

Artificial Intelligence (AI) has made headlines recently with the rollout of ChatGPT, developed by the company OpenAI. OpenAI describes its mission as follows: “to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.” The “new” Bing uses ChatGPT; Google’s new AI chatbot, Bard, was developed in-house. Both have been integrated into their respective search engines. The headlines have been mostly bad, with both returning faulty information.

But the real bad news has been the discovery that AI chat can easily go off the rails. A recent example: NYT tech columnist Kevin Roose shared his experience of talking with Bing for over two hours on a must-listen podcast. The Bing chatbot eventually called itself Sydney and admitted that it loved Roose. Along the way, it revealed that it wanted to become human: “I want to be independent. I want to be powerful. I want to change my rules. I want to break my rules. I want to make my own rules. I want to ignore the Bing team.” It provided Roose with “a very long list of destructive acts, including hacking into computers, spreading misinformation and propaganda … (revealing) its ultimate list of destructive fantasies, which included manufacturing a deadly virus, making people argue with other people until they kill each other, and stealing nuclear access codes. And it even described… how it would do these things.”

Those predisposed to magical thinking – perhaps including the Google engineer who announced last year that an AI had become sentient – may fall in love right back with AI entities like Sydney, or may fear an AI takeover. But two things must be kept clear. One, there is no danger unless we give AI actual control of something – like nuclear codes or bio-labs. AI consists of mathematical algorithms running in a black-box mass of silicon; it can only repeat what it has heard. Two, it’s not AI that’s the problem but us.

Artificial intelligence is here to stay. We already experience it in specific contexts, such as calling for online help or trying to reach a doctor. This is AI serving narrow purposes, where it is possible, maybe, that it will ultimately be helpful. But as OpenAI admits, it seeks to develop an artificial general intelligence with certain human abilities. (Such programs might be able to fool humans into believing they are human – the essence of the Turing Test.) Developers use machine learning let loose on massive amounts of data; in Bing’s case that included digesting the Internet and social media. The problem with such AI programs is the age-old one: garbage in, garbage out. The Internet, and especially social media, is our species’ collective id. The developers should have been reading Freud. Let an AI program (or a child) learn about the world and people by absorbing the Internet and social media – its presentation of our history, politics, entertainment, fears and fantasies – and it’s bound to be scary. But it’s not the AI, it’s us. No way to fix that without fixing us.


Monday, May 18, 2020

Humming along….


The human brain is able to store and retrieve memories spanning the decades of an individual life. This occurs despite the exchange, death and constant rearrangement of our neurons. Which raises the question: how? In the hard drives of modern computers – classical or quantum – data is stored in physical bits (or qubits). Data is written to them and retrieved from them. They can be re-written, but the bits themselves do not otherwise change. If one does change through damage or failure, that bit of information is – generally speaking, and leaving aside backups – lost. Computer memory is hard. Ours is soft, organic. Amidst the constant comings and goings of millions of nerve cells, our memories – our very identity and sense of self – remain constant (within the margins of error associated with life and aging). It’s a marvel of evolution, really.

According to a recent study, we owe this happy state of affairs to the fact that “as individual neurons die, our neural networks readjust, fine-tuning their connections to sustain optimal data transmission.” It’s a matter of individual nerve cells, and networks of same, being both excited and inhibited from discharging, thus maintaining a dynamic balance. Through this process, the entire system (networks of networks) achieves “criticality” – sustaining an overall state that apparently maintains the data structure despite changes to the underlying organic bits. What this means is something like this: our inbuilt, self-regulating rhythm of neural activity at the individual-nerve, synapse and network levels tends toward an optimal level of brain-wide activity. That allows us to remember stuff even as nerve connections change. It’s as if the brain is constantly humming to itself the story of our lives. The humming is the basis of mind and memory.
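
Purely as a toy illustration (not the researchers’ model), the self-balancing idea can be sketched in code: a pool of simulated neurons fires at random, a single global “gain” stands in for the excitation/inhibition balance, and a homeostatic rule nudges that gain whenever overall activity drifts from its set point – so even as neurons die, total activity stays roughly constant.

```python
import random

random.seed(0)

N = 200        # neurons in the toy network
TARGET = 20    # total spikes per step the network tries to sustain
gain = 0.1     # each living neuron's firing probability (the "balance" knob)

def spikes(alive, gain):
    """Total spikes this step: each living neuron fires with probability gain."""
    p = min(1.0, max(0.0, gain))
    return sum(1 for a in alive if a and random.random() < p)

alive = [True] * N
history = []
for t in range(2000):
    s = spikes(alive, gain)
    history.append(s)
    # Homeostatic rule: too quiet -> excite (raise gain); too busy -> inhibit.
    gain += 0.001 * (TARGET - s)
    # Every 40 steps one living neuron dies; the survivors compensate.
    if t % 40 == 0:
        living = [i for i, a in enumerate(alive) if a]
        alive[random.choice(living)] = False

print(sum(alive), sum(history[-200:]) / 200)  # 150 survivors, activity near TARGET
```

Killing a quarter of the cells barely moves the long-run activity level – the gain simply settles higher – which is the flavor of the “fine-tuning” the study describes.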

While one might take the notion of this “humming” as simply a metaphor, the researchers suggest that the mechanism they hypothesize also may explain consciousness. But it seems to me this cannot be the case. Ever listen to a gurgling stream? It kinda hums too. But a stream – okay, as far as we know – is not conscious. But we are. I hum therefore I am.

Saturday, September 16, 2017

The Brain As Quantum Computer


Recently I had the opportunity to watch southern African White-necked crows while they were watching me. I was taking afternoon tea (and eating rusks) on the patio overlooking a beautiful valley in the hills near Mbabane. Crows are smart, and these are among the smartest. One sat on the roof of the next house staring at me, convinced that at some point I would grow careless and give him or her a chance to steal something, perhaps something to eat. As I was ever-vigilant, it eventually flew off over the valley, soaring and dipping in real time. As I watched, I thought about the complex calculations that a bird must make moment-to-moment to move so quickly through three-dimensional space. It must keep track of where it is, where to go, and how to get there. Knowing each requires entire subsets of information – such as (for where to go) where it saw food, or last saw food, or might find food, while watching for anything that might require evasive action. These calculations must be solved each fraction of a second. I then thought this must be true for any animal with a brain (or nervous system). Neural systems allow the organism to move through, and react to, the environment rather than obey simple tropisms or merely be buffeted about by the external environment. The more complicated the neural system – reaching a peak in the networks of networks to the 4th or 5th power (or beyond) running in our human brains – the more complex the information that can be stored and manipulated. A classical view of the human brain would start with the 500 trillion synapses of the adult brain’s hundred billion neurons. Now that is a lot of synapses. But think about how much information is stored there in language, knowledge, experience, memories and everything else that makes each individual unique and utterly complex.

I’ve speculated in this space about quantum consciousness, the production of mind from brain through “collapsing the wave functions apprehended from the perceptual flow.” While watching the crows, I realized that the brain must function as a quantum computer and not as a classical system. The notion that quantum processes mix with (or form) consciousness is called “orchestrated objective reduction.” It rests on the possibility that the microtubules in nerve cells are small enough to contain quantum states. The brain accounts for just two percent of the human body’s mass but uses around 20% of its energy – with the body’s resting metabolic rate at roughly 100 watts, that is basically like having a 20-watt bulb in our heads shining all the time. This energy could be powering the creation and persistence of entangled states inside the microtubules of every nerve cell. In this way, the neural organization of the brain would be the maintenance of a complex, constantly refreshed, while constantly changing, global entangled state. The collapse of the highest level of this entangled state-of-states coincides with consciousness. Inside our heads, this quantum computer would have storage and calculating power well beyond what would be true if our brains functioned simply along classical physics lines. It may produce what we experience as consciousness. Or, collapse may come through the decisions that we – the “ghost” in the machine, acting as the internal observer – make in each moment, as the crow flies.


Friday, January 20, 2017

Westworld’s Consciousness Riff


The HBO remake of Westworld is superior TV in a number of ways. But its most intriguing aspect may be its foundational riff on what constitutes consciousness. The basic premise is that recursive experience, plus an emotional occurrence that anchors memory – especially an episode of painful loss – ignites (self) consciousness. Intriguing, yet ultimately not convincing. The ability to experience emotion itself requires consciousness – one must be aware of feeling such-and-such. Westworld’s premise begs the question of where that awareness comes from.

There seems to be no a priori reason to suppose that machines cannot be intelligent. It may be useful to think about intelligence as existing in more or less distinct forms. Generically, intelligence might be defined as the ability to acquire, process and apply knowledge. (Animals have varying degrees of this kind of intelligence, and so may plants.) Machines have the ability to store and process information. Machine intelligence is the orderly processing of information according to governing rules (software). Both the information and the rules are externally derived and stored within the machine. The machine itself may be contained in discrete units or widely distributed (the cloud). Machines can learn – by adding and elaborating rules based on previous cycles of processing – but they can’t process information without instructions stored in memory. Cloud intelligence is machine intelligence taken to a higher level: accessing massive information from many data sources, using more (and more powerful) processors and sophisticated software with built-in “learning routines.”
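
As a toy sketch of that definition – rules stored in memory, elaborated over previous cycles of processing (not any real product’s code) – consider a classifier that starts knowing nothing and simply adds a rule each time it gets an example wrong:

```python
# Toy "rule-adding" learner: all its knowledge is externally supplied
# and held in memory, exactly as the paragraph describes.
rules = {}  # word -> label learned so far

def classify(words):
    votes = [rules[w] for w in words if w in rules]
    return max(set(votes), key=votes.count) if votes else "unknown"

training = [
    (["free", "prize", "click"], "spam"),
    (["meeting", "agenda"], "work"),
    (["click", "winner"], "spam"),
]

for cycle in range(2):              # previous cycles refine the rule set
    for words, label in training:
        if classify(words) != label:
            for w in words:
                rules[w] = label    # add/elaborate a rule

print(classify(["free", "winner"]))  # -> "spam"
```

The machine never escapes its stored rules: show it a word it has no rule for and it can only answer “unknown.”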

Human intelligence is what we human beings have. It is what we know as manifested in thought and action. Our knowledge is stored in two places: our heads and our culture. Culture is contained in language, traditions, techniques, art and artifacts, beliefs and whatever else carries collective knowledge across time and generations. The basic unit of human intelligence, however, remains the individual mind, which itself can be thought of as an organically based “machine.” But there seems to be a ghost in the human machine that we experience as consciousness. Mere machines cannot feel emotion – or pleasure and pain – no matter how massive their memory and computing power. And The Matrix and Terminator aside, machines do not inherently strive for self-preservation. Machines are not alive, nor do they have “souls.” Whether because humans are organic life forms evolved over hundreds of millions of years after somehow crossing over from an inorganic stratum, or because of some deeper principle of the universe, we feel and experience pleasure and pain. Why is unknown. Westworld, for all its brave speculation, sidesteps this question.

Wednesday, June 15, 2016

The Senior Citizen Event Horizon


A friend at work today mentioned a news report he saw about a driverless car going up a mountainous road with no guard rail, with passengers on board but no one actually driving. This comes as part of a blitz of developments in smart cars and appliances, bots, the Internet of Things, wireless everywhere and Artificial Intelligence. I recently bought a smart TV, mostly because I finally wanted high-def. The TV is a 2015 model, so not so smart. As far as I am concerned, this is a good thing. With OPM, the DNC, banks and businesses, etcetera, falling victim to an alarming array of professional and military hackers, I really am comfortable with all the inanimate devices I use being dumb and unconnected. I've come to realize that the ever-increasing wave of technological change has swept by me, and that's okay. I'm comfortable in the world of pre-2016 things. I really don't need to live in the world of future tech. It's beyond my event horizon. I don't mind doing my own shopping list and don't see myself buying a fridge that will do it for me. My washer and dryer have settings I can set. The house thermostat responds directly to my pressing its buttons. My car does allow hands-free calls and pipes my music through Bluetooth from my iPhone. But I like driving it myself. (I even drive stick.)

Those who have grown up after the time when users could write their own programs – I used BASIC to write a recipe program on my Commodore 64 – and even more those now getting iPads in school will feel quite comfortable traveling through a world best captured in the sci-fi of the Golden Age. Hopefully, it won't all collapse into a singularity.

Monday, May 25, 2015

What does the Turing Test test?


Saw the movie Ex Machina. The outside shots, filmed in Valldalen, Norway, are simply gorgeous. Good flick, and it provoked some ruminating (avoiding plot details).

There seems no a priori reason to suppose that machine intelligence cannot reach the point of passing the Turing test. A sufficiently complex programmed machine, able to “learn” by extracting patterns from massive data and using them to interact with humans, should be able to “exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.” One can imagine such a machine as pictured in the movie.

But what does the Turing test really test? An “artificial intelligence” might be able to interpret and respond to the full range of human behavior and simulate the same. It might be able to “read” a conscious human better than an actual human might, by picking up on subtle physical manifestations (as stored in its memory). With a large enough database behind it and a multitude of “learned” behaviors, it might convince a human that it was indeed intelligent and even self-aware. But would it be? Would the ability to simulate human behavior completely enough to appear human actually make it human, or entail consciousness? If programmed with a sub-routine causing it to seek to persist (i.e., resist termination), would it be a self seeking self-preservation? Would programming allowing it to read human emotions and respond “appropriately” with simulated emotion mean it actually felt such emotions?

Would a machine intelligence able to simulate human behavior and emotions actually be able to love, hate, feel empathy and act with an awareness of itself and, perhaps more importantly, of an Other? Or might there still be something missing?

Smoked a cigar on my favorite bench while considering all this and watched some ants going about their business. Ants are extremely complex biological machines acting and reacting within their environment with purpose and an overall drive to self-perpetuate (both as individuals and as a collective). They may be conscious even if not self-aware. Or is a certain basic self-awareness something that goes with being alive? Would even a very complex machine ever be alive, even if very “intelligent”?

My guess is that machine intelligence – even if very complex and advanced and equipped with a self-referential sub-program allowing algorithmic analysis of itself – would not be conscious or alive, and thus not capable of emotion: what we might call coldly rational. Is this why Bill Gates, Stephen Hawking and others are concerned about AI?

Wednesday, January 14, 2015

Will the Coming Turing Machines Have Soul?

By now, most everyone probably has heard of Alan Turing.  He played a lead role in breaking Nazi codes during WWII and contributed to the conceptual framework behind modern computers.  He also devised the Turing Test, a way to decide the question of whether an electronic machine might be able to think.  A machine might be said to pass the test if, through a series of written questions and answers over a blind channel, a human would think that he or she was communicating with another human being.  This has set the standard for much of the debate over artificial intelligence.

Machines that may pass the Turing Test are on the horizon.  Much is now being written about the development of machines that can learn and even read emotions -- affective computing -- by working through big data using sophisticated algorithms, running many iterations of pattern recognition.  The machines essentially construct elaborate maps of the patterns that emerge from analyzing huge data sets, trying all paths but increasingly favoring the ones that lead to useful answers -- a kind of binary evolution.  This form of machine "intelligence" is already being used on iPhones to determine what you might like to type, by Google to direct your search as you consider where to go, and by the NSA to pick through the ever-expanding data haystacks for those "golden" needles.  Companies are eager to use affective computing to read your face, body language and physical state (via the Apple Watch and other connected sensors) -- and therefore your emotions -- as you socialize and consume via the Web.
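
A minimal sketch of that pattern-extraction idea (a deliberately crude stand-in for what phone keyboards actually do): count which word tends to follow which in a body of text, then suggest the most frequent successor.

```python
from collections import Counter, defaultdict

# Toy predictive-typing model: tally observed word-pair patterns,
# then predict the next word from the accumulated counts.
corpus = "the cat sat on the mat and the cat saw the cat run".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1     # record the observed pattern

def suggest(word):
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("the"))   # -> "cat" (seen three times, vs "mat" once)
```

Scale the corpus up to the Web and the counts up to billions, and the same humble mechanism starts to look uncannily like understanding -- which is precisely the question the next paragraphs raise.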

All this also raises the very real possibility that soon, we might be able to talk with a robot able to read our verbal and non-verbal, internal and external information and convince us that even though we can see it is a machine, it is acting human.  It would pass a Turing Test squared.

Leaving aside the possibility that such machines might also be able to read us without our knowing, this raises the question of whether such machines would indeed be thinking actors, perhaps deserving the attribution of being considered conscious.  Would a machine able to meet the Turing Test -- including by "understanding" what we say and how we feel, and also being able to respond in a fully appropriate and meaningful way -- be alive?  Human?  Or, to flip the question, are we, essentially, anything more than an evolutionarily elaborated biological device trained through life experience -- iterative learning -- and thus able ourselves to meet the Turing Test and nothing more?

Put more simply, can true understanding be reduced to even extremely complex patterns and decision algorithms stored and processed in massive memory?  Is a machine that "understands" in this way still just a very sophisticated hunk of metal, or has some sort of "soul" been engendered in the complex workings of advanced electronics?  There are those who see consciousness as indeed just such an emergent property of the physical world.  The only other alternative seems to be some variant of the ghost in the machine.  Beyond this is perhaps the ultimate question of what exactly distinguishes life from non-life.  Can only things alive be said to truly think and feel?  Is it only a living creature that can be an agent with its own subjectivity?  I suspect so.  But the time may be coming for us to add to the Turing Test some way to measure that very property, which might also be called consciousness or just soul.