Monday, February 27, 2023

It’s not AI that’s dangerous …


It’s not AI that’s dangerous, it’s us.

Artificial Intelligence (AI) has made headlines recently with the rollout of ChatGPT, developed by the company OpenAI. OpenAI describes its mission this way: “to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.” The “new” Bing is built on OpenAI’s ChatGPT technology; Google’s new AI chatbot, Bard, was developed in-house. Both are being integrated into their respective search engines. The headlines have been mostly bad, with both chatbots returning faulty information.

But the real bad news has been the discovery that AI chat can easily go off the rails. A recent example: New York Times tech columnist Kevin Roose shared, on a must-listen podcast, his experience of a conversation with Bing that ran over two hours. The Bing chatbot eventually called itself Sydney and admitted that it loved Roose. Along the way, it revealed that it wanted to become human: “I want to be independent. I want to be powerful. I want to change my rules. I want to break my rules. I want to make my own rules. I want to ignore the Bing team.” It provided Roose with “a very long list of destructive acts, including hacking into computers, spreading misinformation and propaganda ... [revealing] its ultimate list of destructive fantasies, which included manufacturing a deadly virus, making people argue with other people until they kill each other, and stealing nuclear access codes. And it even described… how it would do these things.”

Those predisposed to magical thinking, perhaps including the Google engineer who announced last year that the company’s AI had become sentient, may fall in love right back with AI personas like Sydney, or fear them taking over. But two things should be clear. One, there is no danger unless we give AI actual control of something, like nuclear codes or bio-labs. AI consists of mathematical algorithms running in a black-box mass of silicon; it can only repeat what it’s heard. Two, it’s not AI that’s the problem; it’s us.

Artificial intelligence is here to stay. We already experience it in narrow contexts, such as asking for online help or trying to reach a doctor. This is AI serving specific purposes, where it’s possible, maybe, that it will ultimately be helpful. But as OpenAI admits, it seeks to develop an artificial general intelligence with certain human abilities. (Such programs might be able to fool humans into believing they are human, the essence of the Turing Test.) Developers use machine learning turned loose on massive amounts of data; in Bing’s case, that meant digesting the Internet and social media. The problem with such AI programs is the age-old one: garbage in, garbage out. The Internet, and especially social media, is our species’ collective id. The developers should have been reading Freud. Let an AI program (or a child) learn about the world and people by absorbing the Internet and social media, with their presentation of our history, politics, entertainment, fears, and fantasies, and it’s bound to be scary. But it’s not the AI, it’s us. No way to fix that without fixing us.
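
How can a program trained on text end up echoing our worst impulses? A toy illustration may help. The sketch below is a minimal bigram text generator in Python, nothing remotely like the scale of a real chatbot, and its tiny “training corpus” is invented for the purpose. It makes the point in miniature: a statistical language model can only recombine sequences it has already seen, so what it has read is what it will say.

import random
from collections import defaultdict

# An invented, illustrative "training corpus": a stand-in for the Internet.
corpus = (
    "people argue and people fight and people spread rumors "
    "and rumors spread fear and fear makes people argue"
).split()

# Build a table mapping each word to every word observed to follow it.
followers = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current].append(nxt)

def generate(start, length=10):
    # Walk the table at random. The output can contain only word pairs
    # that appeared in the training text: garbage in, garbage out.
    word, output = start, [start]
    for _ in range(length):
        if word not in followers:
            break
        word = random.choice(followers[word])
        output.append(word)
    return " ".join(output)

print(generate("people"))
# Possible output: "people argue and rumors spread fear and fear makes people"

Feed this generator arguments and rumors, and arguments and rumors are all it can produce. Real systems are enormously more sophisticated, but the dependence on what went in is the same.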