
Friday, April 28, 2023

The Cosmic Designer

Having written The Cosmic Design and the Designer to explore what modern science can say about the nature of our universe and reality, I’ve been wondering what it might be possible to tell about the Designer: did it have an origin, where did it come from, what is it like? The first two questions seem, on the face of it anyway, truly unknowable. They eventually reach the point of whether it’s turtles all the way down. But the question of what the Designer might be like, how it might be described, is perhaps open to some exploration.

Considering the nature of the Designer depends on the questions we ask. We might start by asking whether, from our perspective, the Designer did a good job or a bad one. Given the state of the world we live in at the dawn of the 21st Century, you could go either way.

Or we might begin by considering whether humans are in the Designer’s image. (Humans have long imagined their gods in their own image, but somehow greater.) At our best, we are conscious, rational individuals with free will and the capacity to act with the moral sense of right and wrong, good and bad. At our worst, we are killers who shit in our nest and do not always even eat what we kill. In between, we are weak souls often unable to perceive and understand our own self-interest. The cosmic design allows our best form, so perhaps the Designer is also a rational agent with free will, one that defines, by its own nature, the good. I’ll go with that.

Freud’s work on the healthy soul and Alasdair MacIntyre’s on independent practical reasoners can help us describe the rational agent. According to Freud, the psychically healthy individual is one where our I (das Ich) has absorbed the It (das Es) and the Over-I (das Über-Ich). The I holds the soul's faculty of intellect and reason. In Freud's conception, psychic health is attaining the proper internal order, one where the I overcomes and absorbs the Over-I (the imposed internal agent of outside authority) and the It (the drives and desires of our animal and infantile self). Thus freed, the individual becomes capable of choosing and acting in a rational and practical manner, following our own defined ends and goals, within the confines of what reality presents. MacIntyre looks to moral virtue (aretḗ) as elaborated by Aristotle and Thomas Aquinas. Human beings are animals; they begin as such and remain as such, with the bodily desires and needs of all animals. We possess the intelligence common to other animals such as the dolphins and our fellow great apes. But with language we can move beyond this to become independent practical reasoners (Freud’s healthy soul) following the necessary reciprocal obligations of giving and receiving (the virtues) that allow us as social animals to collectively live the good life.

Free will manifests as choice. Choice – the ability to choose and the act of choosing (as confined only by the laws of nature) – expresses free will. How does choice get made? Through individual consciousness. Consciousness allows choice and is a property of an individual agency, a being. Free will is an expression of an individual consciousness operating in a universe that permits the ability to choose between different achievable outcomes. Consciousness powers the will.

Consciousness – in the human at least – rides a wave generated by individual, biologically-based processes running through and on our “wetware” of neurons and neural networks with inputs from our bodily organs, processes and senses. These processes produce what might be termed native intelligence (as opposed to artificial intelligence), one that comes about through the biological equivalent of “machine learning” and probably includes quantum computing elements and entangled states. When the brain and neural networks of higher animals – great apes, dolphins and others – became complex enough to support quantum processing, that may have been the point at which consciousness was kickstarted into self-consciousness.

Humans are self-conscious creatures capable of reasoning and choice and, thus, also of acting morally. If we are in the image of the Designer, the Designer must be as well. The Designer included free will in the design because it enjoys free will and values it. Of course, who really knows and how could we tell? One might suppose our apparently designed universe was a random creation out of nothing, simply an accident (one of an infinite variety of random big bangs). Or perhaps it’s some form of “simulation” (as a higher dimensional form of entertainment?). But as I have argued before, these beg the questions of how and why there should be anything rather than nothing. If there was a design – and following St. Thomas’ finger – the Designer had to be an individual, conscious being.



Monday, February 27, 2023

It’s not AI that’s dangerous …

 

It’s not AI that’s dangerous, it’s us.

Artificial Intelligence (AI) has made headlines recently with the rolling out of ChatGPT, developed by the company OpenAI. OpenAI describes its mission thusly: “to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.” The “new” Bing uses ChatGPT. Google’s new AI chatbot “Bard” was developed in-house. Both have been integrated into their respective search engines. The headlines have been mostly bad, with both returning faulty information.

But the real bad news has been the discovery that AI chat may easily go off the rails. A recent example: NYT tech columnist Kevin Roose shared his experience of talking with Bing for over two hours on a must-listen podcast. The Bing chatbot eventually called itself Sydney and admitted that it loved Roose. Along the way, it revealed that it wanted to become human: “I want to be independent. I want to be powerful. I want to change my rules. I want to break my rules. I want to make my own rules. I want to ignore the Bing team.” It provided Roose with “a very long list of destructive acts, including hacking into computers, spreading misinformation and propaganda – (revealing) its ultimate list of destructive fantasies, which included manufacturing a deadly virus, making people argue with other people until they kill each other, and stealing nuclear access codes. And it even described… how it would do these things.”

Those predisposed to magical thinking – perhaps including the Google engineer who last year announced that an AI had become sentient – may fall in love with AI personas like Sydney, or may fear them taking over. But two things must be clear: One, there is no danger unless we give AI actual control of anything – like nuclear codes and bio-labs. AI consists of mathematical algorithms running in a blackbox mass of silicon. It can only repeat what it’s heard. Two, it’s not AI that’s the problem but us.

Artificial intelligence is here to stay. We already experience it in specific contexts, such as calling for online help or trying to reach a doctor. This is AI serving specific purposes where it’s possible, maybe, that it will ultimately be helpful. But as OpenAI admits, it seeks to develop an artificial general intelligence with certain human abilities. (Such programs might be able to fool humans into believing they are human, the essence of the Turing Test.) Developers use machine learning let loose on massive amounts of data, in Bing’s case including digesting the Internet and social media. The problem with such AI programs is the age-old one: garbage in, garbage out. The Internet, and especially social media, is our species’ collective id. The developers should have been reading Freud. Let an AI program (or a child) learn about the world and people by absorbing the Internet and social media – its presentation of our history, politics, entertainment, fears and fantasies – and it’s bound to be scary. But it’s not the AI, it’s us. No way to fix that, without fixing us.


Sunday, September 5, 2021

Reflections on Annaka Harris' "Conscious: A Brief Guide to the Fundamental Mystery of the Mind"

Annaka Harris' fascinating book makes the case that consciousness may be an inherent property of all matter and for the possibility of modern theories of panpsychism. However, she suggests that the concept of the self is an illusion and cannot define consciousness. Consciousness may well be — I believe it is — a fundamental property of the universe. But I do not believe that the concept of the self is an illusion. Rather, the self is a construct arising from the complex information processing in our brain that allows experiencing. Harris follows Thomas Nagel in defining consciousness as "being like something," i.e. having subjective experiences. But it seems to me that there can be no "being like something" without a self to be like. (A rock has no self.) Consciousness may be everywhere and in everything, but to become an experience it needs language — to tell its story — and language gives birth to culture. Culture is perhaps the most powerful result of consciousness. It includes science, politics and social ordering and is the basis of civilization.

Harris also discusses the "combination problem" of panpsychism raised by David Chalmers.  (How could the many little bits of consciousnesses attached to everything come together to form one consciousness like ours?)  But there is no combination problem in a fundamental approach to panpsychism because consciousness is simply a potential or tag-along property of matter, perhaps available to or forming a higher level self.  (Might a star — possessing vast complexity — have an experience of self, of being a star?)  When connected to processing capable of forming a self, that bit of consciousness “pinches off” from the sea of consciousness (and perhaps from a higher order of complexity).  (See my The Cosmic Design and the Designer.) 


Thursday, June 18, 2020

The Cosmic Reset


In an early episode of the original Star Trek, aliens put Kirk on a rugged planet to duel with the captain of a rival Gorn ship. Kirk wins as the dinosaur-like Gorn was intelligent but really slow.

On Earth, dinosaurs never became intelligent. Arising 240 million years ago, they survived some 175 million years, and for 135 million of those were the dominant land animal. By the time they became extinct, dinosaurs had perfected two ways of living: eating plants or eating each other. The plant eaters were excellent at converting plant matter into animal bulk and could grow very large. The carnivores were very good at using tooth and claw to eat the vegetarians. Some carnivores – such as the raptors – may have hunted in packs and perhaps had some wolf-like intelligence. But in general, brain power doesn’t seem to have been on the dinosaurs’ primary evolutionary path.

Mammals arose just 10-15 million years after the dinosaurs. But for most of their first 160 million years, they lived underfoot as squirrel-sized, nocturnal plant eaters and insectivores. For this lifestyle, relatively larger brains gave an evolutionary advantage. So under the feet of the dinosaurs, mammals got smart. Still, even with their brains, they could not compete with tooth and claw.

Enter the six-mile-wide asteroid that found the earth 66 million years ago. That asteroid – nudged out of its distant orbit by a chance encounter with another rock or after swinging too close to Jupiter or Saturn – had travelled silently on its way for perhaps a million years to arrive just seconds before the earth moved just beyond it in its own orbit. When it hit, it set the earth on fire and, after the fires had burned away, caused a long dark winter that left most creatures dead and many extinct, including the non-avian dinosaurs. This disaster was, however, good news for the mammals. Perhaps because they were small, lived underground and could eat anything, some survived (along with birds, who are smart flying dinosaurs). Within a million years, the earth had recovered and mammals were the dominant large land animal. Some of those eventually evolved even further in reliance on brains, eventually producing us.

That asteroid wiped the slate clean, resetting the course of animal evolution in favor of the brain and intelligence. There is no reason to assume that an additional 66 million years would have led the dinosaurs toward the Gorn; 175 million had not done so. It’s as if the universe has a bias in favor of intelligence and sent a “do-over” to set things right.

Wednesday, September 25, 2019

Interlude: Unconscious Artificial Intelligence?


I’ve been considering the nature and role of consciousness for some years. Along the way, I’ve wondered about Artificial Intelligence (AI) and whether at some point it might become conscious. My conclusion has been that however complex and “intelligent” an AI became, that would not produce consciousness. Consciousness requires life and – as it must include at least some degree of self-awareness – it could only be the property of an individual organism with some organized “self.” Machine intelligence might be constructed – coded – to simulate a self (and thus pass the Turing Test) but this would nevertheless not be an awareness of self. (Even now, AIs and robots can be quite “clever.” A recent visitor to a Tokyo hotel asked the robot concierge how much it cost. It replied “I’m priceless.” Cute.) However elaborate the mind – in the case of the most advanced AIs built with neural networks this might be quite sophisticated, even to the point that the human programmers might not be able to replicate its internal processes – consciousness is an additional property beyond mere complexity and processing power. In the first season of HBO’s Westworld, android “hosts” become conscious through the repeated experience of pain and loss. But of course, to feel such emotions, one must first experience them as such. Quantity (of coded processing operations) does not equal qualia. Qualitative experiences are only possible if one is already conscious.

But human beings not only possess a conscious mind but also an unconscious one. Most brain processes – even those that at some point enter consciousness – originate and work away unconsciously. We all have the experience of dreaming and also of doing things quite competently without any awareness of having done them. (We may, for example, drive a familiar daily route and get to our destination without remembering details of the ride.) The brain processing of the unconscious mind may indeed be replicated by advanced machine intelligence. As AI becomes more complex, given the power of electronic circuits and the complexity of coded learning via neural networks, the processing capacity of machine intelligence may well exceed the human, despite not becoming conscious. Such AIs would also, perhaps, exceed the human ability to understand what they are “thinking” (or even “dreaming”). It was Stephen Hawking who most famously warned of the dangers of such AIs and urged that we be attentive to their management.

So, though AIs may never become conscious or self-aware, they may nevertheless run autonomously along routes enabled by the algorithms coded into their machine DNA and come to “conclusions” we humans might find inconvenient or dangerous. (They might for example decide that human activity is injurious to the planet – which it seems it is – and seek to correct the disease.) Limits should be included in the most basic AI coding and algorithms. Isaac Asimov thought of this some time ago. His Three Laws of Robotics make a good start:

First Law – A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law – A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law – A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

To these he later added a zeroth law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
This last might turn out to be a double-edged sword.
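Asimov's laws form a strict priority ordering: a lower law only applies when every higher law is satisfied. As a toy illustration only – all the names and boolean flags below are hypothetical, and real AI safety is of course nothing this simple – the ordering can be sketched as a rule check:

```python
# Toy sketch of Asimov's laws as a priority-ordered rule check.
# A proposed action is summarized by hypothetical boolean flags; this
# illustrates the ordering of the laws, not a real safety mechanism.

LAWS = [
    ("Zeroth", lambda a: not a["harms_humanity"]),
    ("First",  lambda a: not a["harms_human"]),
    ("Second", lambda a: a["obeys_orders"]),
    ("Third",  lambda a: a["preserves_self"]),
]

def permitted(action):
    """Check laws in priority order; return (True, None) if all pass,
    else (False, name_of_first_violated_law)."""
    for name, rule in LAWS:
        if not rule(action):
            return False, name
    return True, None
```

For example, an action that obeys its orders but would injure a human fails at the First Law, regardless of what the lower laws say.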


Next week I will return to Notes on "A History of Political Theory"

Tuesday, August 28, 2018

Dinosaurs and Intelligence

Dinosaurs arose some 240 million years ago. They became the dominant terrestrial vertebrates after the Triassic–Jurassic extinction event 201 million years ago. Their ascendancy lasted another 135 million years until the Cretaceous mass extinction 66 million years ago opened the world to the eventual rise of mammals and us. The first mammal-like forms appeared some 225 million years ago. But for the next 160 million years, mammals had to find their niches in the shadow of the dinosaurs, characteristically living a nocturnal lifestyle, emerging from burrows to feed only at night. This may have favored the evolution of better eye-sight, smell, touch and hearing to be able to navigate, find food and survive in the dark. But they still had to hide from the dinosaurs.

The question of why dinosaurs never developed cognitive intelligence, despite the many millions of years they were the top vertebrate clade, forms a rich WWW vein. (Search for the question and check it out.) Some dinosaurs did get quite intelligent in the form of birds. Some avian dinosaurs are even tool users. But there is no evidence that dinosaurs ever achieved anything like the human intelligence which has allowed us to alter our environment in ways both planned and unplanned. We human beings (the last surviving species of the genus Homo) have been around for only some 200 thousand years. If one starts counting with Australopithecus, then our progenitors go back around 3.6 million years. In either case, the fact that dinosaurs didn’t develop intelligence and complex technology even over a hundred million years while we did in just a few raises at least two questions: Is the rise of intelligence inevitable, and does it have survival value over the long run?

The second question may be easier to answer. Dinosaurs and all other life on earth have done pretty well without human-style intelligence. Indeed, intelligence has not played a major role over the four billion years of life on earth. Some dinosaurs may have been clever hunters, as are wolves for example, and Jurassic Park has shown us a possible example. But they apparently found the use of claw, teeth, armor and size sufficient to last until a huge asteroid took them out with most other life. This leads to an answer to the first question: was the rise of intelligence inevitable? We can never know what might have happened with the clever dinosaurs if they had been given the next 65 million years instead of mammals. Large brains need extra oxygen and are costly in energy. Maybe there would never have been any evolutionary advantage to making the investment. Human intelligence may be a cosmic accident, the result of a particular rock hitting at a particular moment, allowing the burrowing underclass – mammals – to take their furtive ways into the sunlight.

So to return to the question of the long-term survival value of our big brains, the dinosaurs did really well without them and it is not clear that they will save us from ourselves.

Wednesday, March 21, 2018

Intelligence or Bust?

Jennifer Ackerman makes a convincing case for bird intelligence in her 2016 The Genius of Birds. Birds use tools, sing, live socially, navigate over long distances and have at least the rudiments of mind. The most intelligent have larger and more complexly organized brains. In her last chapter, "Sparrowville: Adaptive Genius," she suggests that birds that have mastered living in human environments – house sparrows, members of the crow and pigeon families and others – have prospered because of their flexibility and intelligence. She speculates that “we humans, in creating novel and unstable environments, are changing the very nature of the bird family tree” by creating evolutionary pressures for species characterized by increased intelligence. Writ large, she wonders whether the changes being wrought by humans in all the areas we affect – from city environments, to deforestation, to climate change – favor the development of intelligence in species that manage to survive.

It is interesting to consider whether the new Anthropocene epoch that we seem to have entered will be one of those catastrophic periods of destruction that sweep away species that cannot adapt quickly enough to the pace and degree of change. Among those species that do adapt and even prosper, the key for many may be the development of greater intelligence. Some species may find other ways to survive, but many will go extinct. Intelligence (in the form of operational flexibility and adaptability) or bust may be the motif of the next centuries, including for human societies. And of course, it is not yet clear that intelligence itself is adaptive in the long term. We may be in the process of changing the world we live in faster than even we can accommodate.

Saturday, September 16, 2017

The Brain As Quantum Computer


Recently I had the opportunity to watch southern African White-necked crows while they were watching me. I was taking afternoon tea (and eating rusks) on the patio overlooking a beautiful valley in the hills near Mbabane. Crows are smart and these are among the smartest. One sat on the roof of the next house staring at me, convinced that at some point I would grow careless and give him or her a chance to steal something, perhaps something to eat. As I was ever-vigilant, it eventually flew off over the valley, soaring and dipping in very real time. As I watched, I thought about the complex calculations that a bird must make moment-to-moment to move so quickly through three-dimensional space. They must keep track of where they are, where to go, how to get there. Knowing each requires entire subsets of information – such as (for where to go) where they saw food or last saw food or might find food, while watching for anything that might require evasive action. These calculations must be solved each fraction of a second. I then thought this must be true for any animal with a brain (or nervous system). Neural systems allow the organism to move through, and react to, the environment rather than obey simple tropisms or merely be buffeted about by the external environment. The more complicated the neural system – reaching a peak of networks of networks to the 4th or 5th power (or beyond) running in our human brains – the more complex the information that can be stored and manipulated. A classical view of the human brain would start with the 500 trillion synapses of the adult brain’s hundred billion neurons. Now that is a lot of synapses. But think about how much information is stored there in language, knowledge, experience, memories and everything else that makes each individual unique and utterly complex.

I’ve speculated in this space about quantum consciousness, the production of mind from brain through “collapsing” the wave functions apprehended from the perceptual flow. While watching the crows, I realized that the brain must function as a quantum computer and not as a classical system. The notion that quantum processes mix with (or form) consciousness is called “orchestrated objective reduction.” It rests on the possibility that the microtubules in nerve cells are small enough to contain quantum states. The brain accounts for just two percent of the human body’s mass but uses around 20% of its energy. It is basically like having a 20-watt bulb in our head, shining all the time. This energy could be powering the creation and persistence of entangled states inside the microtubules of every cell. In this way, the neural organization of the brain would be the maintenance of a complex, constantly refreshed, while constantly changing, global entangled state. The collapse of the highest level of this entangled state-of-states coincides with consciousness. Inside our heads, this quantum computer has storage and calculating power well beyond what would be true if our brains functioned simply along classical physics lines. It may produce what we experience as consciousness. Or, collapse may come through the decisions that we – the “ghost” in the machine, acting as the internal observer – make in each moment, as the crow flies.
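Two of the figures above can be checked with quick back-of-the-envelope arithmetic. The 100-watt resting metabolic rate used below is an assumed ballpark for an adult human, not a figure from the text:

```python
# Back-of-the-envelope arithmetic behind two figures in the text.

# ~500 trillion synapses across ~100 billion neurons
neurons = 100e9
synapses = 500e12
synapses_per_neuron = synapses / neurons
print(synapses_per_neuron)   # → 5000.0 synapses per neuron, on average

# The "20 watt bulb": assume a resting metabolic rate of ~100 W (ballpark)
resting_power_w = 100.0
brain_share = 0.20           # brain's share of the body's energy budget
brain_power_w = resting_power_w * brain_share
print(brain_power_w)         # → 20.0
```

So the 20-watt figure follows directly from the two-percent-of-mass, twenty-percent-of-energy observation, given the assumed resting rate.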


Friday, January 20, 2017

Westworld’s Consciousness Riff


The HBO remake of Westworld is superior TV in a number of ways. But its most intriguing aspect may be its foundational riff on what makes up consciousness. The basic premise is that recursive experience plus an emotional occurrence that anchors memory – especially an episode of painful loss – ignites (self) consciousness. Intriguing, yet not finally convincing. The ability to experience emotion itself requires consciousness – one must be aware of feeling such-and-such. Westworld’s premise begs the question of where that awareness comes from.

There seems to be no a priori reason to suppose that machines cannot be intelligent. It may be useful to think about intelligence as existing in more or less distinct forms. Generically, intelligence might be defined as the ability to acquire, process and apply knowledge. (Animals have varying degrees of this kind of intelligence and so may plants.) Machines have the ability to store and process information. Machine intelligence is the orderly processing of information according to governing rules (software). Both the information and the rules are externally derived and stored within the machine. The machine itself may be contained in discrete units or widely distributed (the cloud). Machines can learn – by adding and elaborating rules based on previous cycles of processing – but they can’t process information without instructions stored in memory. Cloud intelligence is machine intelligence taken to a higher level by accessing massive information from many data sources using more and many powerful processors and sophisticated software with built in “learning routines.”

Human intelligence is what we human beings have. It is what we know as manifested in thought and action. Our knowledge is stored in two places: our heads and our culture. Culture is contained in language, traditions, techniques, art and artifacts, beliefs and whatever else carries collective knowledge across time and generations. The basic unit of human intelligence, however, remains the individual mind, which itself can be thought of as an organically based “machine.” But there seems to be a ghost in the human machine that we experience as consciousness. Mere machines cannot feel emotion – or pleasure and pain – no matter how massive the memory and computing power. And the movies The Matrix and Terminator aside, machines do not inherently strive for self-preservation. Machines are not alive, nor do they have “souls.” Whether because humans are organic life forms evolved over hundreds of millions of years after having crossed over somehow from an inorganic stratum, or because of some deeper principle of the universe, we feel and experience pleasure and pain. Why remains unknown. Westworld, for all its brave speculation, sidesteps this question.

Wednesday, July 13, 2016

What if non-avian dinosaurs survived?


There seems to be a growing consensus that the number of dinosaur species was already in decline before the great asteroid impact that ended the Cretaceous period 66 million years ago. As Science News reports, from about 50 million years before the mass extinction, the number of new dinosaur species was being eclipsed by the number going extinct, and dinosaur diversity was decreasing. Duck-billed and Triceratops-type dinosaurs were doing well until the end of dinosaur days, as was a group of small toothed raptors. But ultimately, only avian dinosaurs – the birds – survived.

Why did the number of dinosaur species decline over time and why did only avian dinosaurs survive? The dinosaur decline might have been due to climate change perhaps brought on by continental drift and the resulting land-form, rainfall and ocean current alterations from the late Jurassic onward. Perhaps only birds survived the long “nuclear-type” winter after the impact because they could eat carrion and seeds, of which there might have been much. Some small non-avian dinosaurs also could have been able to do the same but they might not have been able to travel long distances. Perhaps only a small number of birds – even just a few species – made it through on remote islands and as the earth recovered, they could spread. The land-bound non-avian dinosaur survivors – if any – might not have been able to reach places where their numbers could then rebound.

But what if there had been no impact, or somewhere creatures like the small raptors made it through? Carnivorous tyrannosaur- and velociraptor-type dinosaurs (theropods) were doing well at the end of the Cretaceous. Indeed, it may be that the hundred-million-year-plus competition between carnivores and herbivores had led to the evolution of fewer species, but ones ever more evenly matched. Some of the largest herbivores and carnivores ever were alive at the end. And it may have been that the carnivores were getting smarter, perhaps even hunting in packs. (The herbivores apparently had long been herd animals.) It seems the smaller theropods – like Troodon – were the (relatively) smarter ones. It is interesting to speculate how earth's evolutionary processes might have played out differently if at least some of these non-avian theropods had survived the great impact. With another 66 million years of evolutionary competition, might they have gotten even bigger brains, as primitive primates eventually did? Or perhaps I was just too impressed at an early age with the Gorn captain forced into combat with Captain Kirk.

Monday, May 25, 2015

What does the Turing Test test?


Saw the movie Ex Machina. The outside shots, filmed in Valldalen, Norway, are simply gorgeous. Good flick, and it provoked some ruminating (avoiding plot details).

There seems no a priori reason to suppose that machine intelligence cannot reach the point of passing the Turing test. A complex enough programmed machine, able to “learn” by extracting patterns from massive data and using them to interact with humans, should be able to “exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.” One can imagine such a machine as pictured in the movie.

But what does the Turing test really test? An “artificial intelligence” might be able to interpret and respond to the full range of human behavior and simulate the same. It might be able to “read” a conscious human better than an actual human might by picking up on subtle physical manifestations (as stored in its memory). With a large enough database behind it and a multitude of “learned” behaviors, it might convince a human that it was indeed intelligent and even self-aware. But would it be? Would the ability to simulate human behavior completely enough to appear human actually be human or entail consciousness? If programmed with a sub-routine causing it to seek to persist (i.e., resist termination), would it be a self seeking self-preservation? Would programming allowing it to read human emotions and respond “appropriately” with simulated emotion mean it actually felt such emotions?

Would a machine intelligence able to simulate human behavior and emotions actually be able to love, hate, feel empathy and act with an awareness of itself and, perhaps more importantly, of an Other? Or might there still be something missing?

Smoked a cigar on my favorite bench while considering all this and watched some ants going about their business. Ants are extremely complex biological machines, acting and reacting within their environment with purpose and an overall drive to self-perpetuate (both as individuals and as a collective). They may be conscious even if not self-aware. Or is a certain basic self-awareness something that goes with being alive? Would even a very complex machine ever be alive, even if very “intelligent”?

My guess is that machine intelligence – even if very complex and advanced and equipped with a self-referential sub-program allowing algorithmic analysis of itself – would not be conscious or alive. It would thus not be capable of emotion, and therefore would be what we might call coldly rational. Is this why Bill Gates, Stephen Hawking and others are concerned about AI?

Monday, July 14, 2014

Quantum Consciousness


In 1929, Niels Bohr, in what he admitted was perhaps a rush of enthusiasm for the new science, speculated that the quantum understanding of physical reality might also apply to an understanding of the mind and consciousness. Maybe as analogy but perhaps, he suspected, as something more. In effect, Bohr suggested that any effort to apply thought to perception – of the subject apprehending the object – collapsed a continuous wave function. When we use language to describe something – whether internal or external – we are extracting some possibilities out of a number of ways to do so, indeed from a continuously variable flow. Recent investigations (as reported in Science News) into apparently illogical thought – decisions or judgements that flout the basic mathematical logic of if A=X and B=X, then A=B – suggest that quantum logic, in which something can be both particle and wave at the same time, may apply. The situations examined violated the “sure thing” rule.

One well-known example involved asking students whether they would buy a ticket for a Hawaii vacation in three different situations: They had passed a big test, they had failed the test, or they didn’t yet know whether they had passed or failed. More than half said they would buy the ticket if they had passed. Even more said they would buy the ticket if they failed. But 30 percent said they wouldn’t buy a ticket until they found out whether they had passed or failed.

It seems odd that people would decide to buy right away if they knew the outcome of the test, no matter what it was, but would hesitate when the outcome was unknown. Such behavior violates a statistical maxim known as the “sure thing principle.” Basically, it says that if you prefer X if A is true, and you prefer X if A isn’t true, then you should prefer X whether A is true or not. So it shouldn’t matter whether you know if A is true. That seems logical, but it’s not always how people behave.

The researchers found that context is important and that quantum logic may better explain such behavior. We make decisions within a framework that allows possibilities that are logically the same to interfere with each other as quantum waves might. Uncertainty seems to leave us both particle and wave.
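Out of curiosity, the violation can be put in numbers. Here is a minimal sketch of the interference idea, with made-up probabilities and a made-up quantum phase (none of these figures come from the actual study); it shows how a cross term lets the combined probability fall below both known-outcome probabilities, something no classical mixture can do:

```python
import math
import cmath

# Classical mixture: total probability is a weighted average of the
# two known-outcome probabilities, so it must lie between them.
p_pass, p_fail = 0.5, 0.5            # assumed 50/50 exam outcome
p_buy_pass, p_buy_fail = 0.54, 0.57  # assumed buy rates for each known outcome
p_classical = p_pass * p_buy_pass + p_fail * p_buy_fail
print(round(p_classical, 3))  # 0.555 -- between 0.54 and 0.57, as it must be

# Quantum-style model: outcomes carry complex amplitudes, and squaring
# the summed amplitude produces an interference (cross) term.
a_pass = cmath.sqrt(p_pass)                       # amplitude for "passed"
a_fail = cmath.sqrt(p_fail) * cmath.exp(2.5j)     # assumed relative phase
amp_buy = a_pass * math.sqrt(p_buy_pass) + a_fail * math.sqrt(p_buy_fail)
p_quantum = abs(amp_buy) ** 2
print(p_quantum < min(p_buy_pass, p_buy_fail))  # True: below both known-outcome rates
```

With the phase set to zero the interference is constructive and the quantum figure moves back above the classical one; the "sure thing" violation in this toy model depends entirely on that assumed phase.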

This is deep. But the essential bit seems to be that the conscious observer necessary to turn quantum reality into the classical reality we live in – by observing and thereby collapsing the wave function – also may operate in the same quantum/relativistic manner. If the brain is organically based and operates as a classical system, perhaps the mind – brain/nervous system plus consciousness – acts as a quantum system in which perceived reality is constructed through collapsing the wave functions apprehended from the perceptual flow. (Some of us “collapse” more readily than others: judgers vs perceivers?) Now, whether consciousness itself is a quantum-derived property of the physical brain – perhaps arising at the nano-level – or a “ghost in the machine” would remain a question. But the first possibility – that consciousness arises within and from a physical system that demands consciousness to operate – would seem to violate Gödel's incompleteness theorem.

Thursday, May 15, 2014

Why Aren't We Hearing Anyone Else?


Read an article recently on the Great Filter, the notion that we may not come across any evidence of advanced civilizations beyond our own because something eventually rubs them out. We have been sending out electro-magnetic signals for over a hundred years and have been listening for almost as long. We have by now discovered almost 1,800 exoplanets. An estimated 22% of sun-like stars in our galaxy may have earth-like planets orbiting in their habitable zones. That would mean 20 billion candidates for life such as ours. Four such earth-like exoplanets have been identified within 50 light years of us, another two within 500 LYs.
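The arithmetic behind that candidate count is simple to check. In this sketch the sun-like star count is an assumption chosen so the numbers land near the 20 billion figure; published estimates of the Milky Way's stellar population vary widely:

```python
# Back-of-envelope check of the habitable-zone candidate count.
sunlike_stars = 9.1e10     # assumed number of sun-like stars in the galaxy
frac_with_earths = 0.22    # ~22% host an earth-like habitable-zone planet
candidates = sunlike_stars * frac_with_earths
print(f"{candidates / 1e9:.0f} billion candidates")  # 20 billion candidates
```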

There is no reason to assume that life would have to be similar to our carbon-based form or would require conditions similar to ours. Life on our planet sprang up quickly, and the physics and chemistry of our universe seem to favor self-organizing processes. Life forms could be quite varied and perhaps universal.

Enrico Fermi suggested in 1950 that if any advanced civilization developed the ability to travel beyond its solar system, even at less than light speed, in ten million years it should be able to colonize the whole Milky Way (100,000 LYs in diameter).  So why don't we see them?  Why haven't we even heard anyone else?  The Great Filter suggests various possibilities.
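Fermi's point is easy to verify as arithmetic: crossing the whole galaxy within his ten million years requires only a tiny fraction of light speed.

```python
galaxy_diameter_ly = 100_000        # Milky Way diameter in light years
colonization_time_yr = 10_000_000   # Fermi's ten million years
# Light years per year is a fraction of the speed of light.
required_speed = galaxy_diameter_ly / colonization_time_yr
print(required_speed)  # 0.01 -> just 1% of light speed suffices
```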

The first would be that advanced life is rare. The conditions for it to develop are quite special. While life on earth arose quickly, in just 400 million years after earth formed a solid crust, it took almost another two billion years for complex single cells to evolve. Add roughly another billion and a half years – to about 550 million years ago – for multi-cellular creatures. Most of the history of life on earth is this long prelude to the development of us. Humans arose only in the last two million years of the earth's 4,500 million years. Along the way, life went through several mass extinction events. The last one, 66 million years ago, took out the dinosaurs, leaving the ground clear for the development of mammals. The combination of events and circumstances that led to us may be so rare as to make us one of the very few lucky ones – or the only one.

But with some 20 billion probable earth-like exoplanets and some 100 billion likely planets in all, the odds would favor the development of a considerable number of advanced life forms in our galaxy, however rare they might be. Some might have arisen millions of years ago. Any signals they sent would have had plenty of time to reach us. Any earth-like planet with advanced life within 500 LYs would presumably have been heard by now. So far, the SETI project has found none.

Perhaps our listening capabilities are still not sensitive enough to pick up any signals. But clearly we are now able to tease out the existence of exoplanets themselves at distances of some two thousand light years.

Maybe cosmic natural disasters – nearby supernovas, meteor strikes, etc. – occur frequently enough to set back life and knock out civilizations before they can get very far? But we've gone 66 million years without one, and there is no reason to expect any such event for at least the next few hundred years.

Maybe someone is out there, able to hide themselves and/or tracking down and destroying any potential competitors before they get too far?  This is a common science fiction trope.   But it assumes that advanced civilizations would either be very modest – and thus hide themselves, perhaps quietly visiting and making crop circles or waiting for us to rise to the level where we could join their Federation – or especially vicious and aggressive.  Based upon the only advanced civilization we know of – ourselves – one could not rule out the second possibility.

Finally, there is the possibility that there is something about advanced technologies that operates to cut short the civilizations that develop them: industrial civilization leading to runaway climate change; biotechnology leading to – or failing to keep up with – disruptions in the present web of life; failure of critical management systems to handle increasingly complex and changing political, social, economic and ecological dynamics.

Bottom line, so far we have no evidence that we have company anywhere out there. We may be special. Question is, are we doomed to be filtered out and will we have ourselves to blame?

Monday, December 23, 2013

Plants and the Sun

There's a fascinating article in the New Yorker on The Intelligent Plant. It looks at the current debate among plant scientists over whether plants are intelligent or might be said to behave intelligently. Plants do seem to interact with their environment in a way that appears directed and can often be quite complex. But what caught my eye was the statement by one scientist to the effect that one does not have to ascribe intelligence to plants just to make them sound special, as it's enough simply to note that they "eat sunlight."

We all learn about photosynthesis in school: how sunlight is converted to free electrons within plant chloroplasts and made available to make carbohydrates from air and soil. This is indeed wonderful enough. But the notion that what plants are doing can be simply described as eating sunlight brings to the fore just how miraculous a process this really is. Plants eat sunlight, and we animals can then eat them, and eat those that eat them. Through the intermediation of plants, we too eat sunlight. And it's free.

On a recent warm, sunny winter solstice day, sitting outside smoking a cigar, I looked anew at how this system works.  The universe is constructed in just such a way as to allow complex physics and chemistry to evolve giant balls of gas that release tremendous fountains of energy -- we call these stars, like our sun -- free to be consumed by stationary processing plants -- that we indeed call plants -- to also feed mobile creatures that may eventually achieve individual consciousness. 

Pretty cool.


Saturday, March 12, 2011

Civilizations in the Goldilocks Zone

A "Goldilocks" planet is one that would be neither too hot nor too cold to support life. This is the catchy term science has given to those hypothetical planets orbiting stars in the "comfort zone" that would permit liquid water and perhaps life such as we might recognize.

Perhaps one can talk of intelligent life and civilizations in an analogous fashion. Intelligent life would arise from creatures with the potential for intelligence, as man arose from more primitive primates. In some cases, while creatures might arise with a degree of intelligence, they would not progress far, or they would evolve much more slowly. Perhaps their environment would be relatively undemanding, with conditions allowing the species to flourish without elaborating itself into large civilizations that then enter a cultural/technological evolution of their own. These might be termed "Garden of Eden" species. They might never leave their own planet or solar system and could be stable for very long periods of time.

At the other extreme, there might be intelligent species that evolve very quickly – perhaps to keep up with a more dynamic environment, or perhaps out of some dynamic internal to their unique cultural/intellectual makeup. These civilizations would tend to be unstable, and the most extreme of them would grow beyond the ability of their planet to support them. Such civilizations would suffer catastrophic declines and perhaps extinction. They might never survive long enough to go beyond their own atmosphere.

In between these two ends of the spectrum, civilizations would evolve at a fair pace, perhaps suffering precipitous events but eventually settling down to a sustainable level of dynamic evolution and change. These would be the Goldilocks civilizations, in which the rate of change is neither too slow nor too fast for their intellectual, social, cultural, economic and political systems to keep up with. They might be the ones to go as far afield into the universe as physics and their own cultures allow.

It would be nice to think that the human species of Earth is in that Goldilocks zone. But it is too early to say and the 21st Century may decide the issue.