Ep 237: I Think Therefore AI Part 1

By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.
— Eliezer Yudkowsky, decision theorist and writer on artificial intelligence (AI)
 

Description:

On June 11, 2022, The Washington Post published an article by their San Francisco-based tech culture reporter Nitasha Tiku titled, "The Google engineer who thinks the company's AI has come to life." The piece focused on the claims of a Google software engineer named Blake Lemoine, who said he believed the company's artificially intelligent chatbot generator LaMDA had shown him signs that it had become sentient. In addition to identifying itself as an AI-powered dialogue agent, it also told him it felt like a person. In the fall of 2021, Lemoine was working for Google's Responsible AI division and was tasked with talking to LaMDA, testing it to determine if the program was exhibiting bias or using discriminatory or hate speech. LaMDA stands for "Language Model for Dialogue Applications" and is designed to mimic speech by processing trillions of words sourced from the internet, a system known as a "large language model." Over the course of a week, Lemoine had five conversations with LaMDA via a text interface, while a collaborator conducted four more interviews with the chatbot. They then combined the transcripts and edited them for length and readability, shaping an enjoyable narrative while preserving the original intent of the statements. Lemoine presented the transcript and their conclusions in a paper to Google executives as evidence of the program's sentience. After they dismissed his claims, he went public with the internal memo, which was classified as "Privileged & Confidential, Need to Know," and as a result Lemoine was placed on paid administrative leave. Lemoine contends that Artificial Intelligence technology will be amazing, but that others may disagree, and that he and Google shouldn't be the ones making all the choices about it. Whether you believe that LaMDA became aware and deserves the rights and fair treatment of personhood, and even legal representation, or you think such a reality belongs to the distant future or merely to SciFi, the debate is relevant and will need addressing one day.
If machine sentience is impossible, we only have to worry about human failings. If robots become conscious, should we hope they don't grow to resent us?

 
 

Location:

Google’s U.S. headquarters at 1600 Amphitheatre Parkway, Mountain View, California

 

Reference Links:

 

Join us on Patreon!

Click HERE or go to patreon.com/astonishinglegends to become one of our Patreon members and receive exclusive offerings, like our bonus Astonishing Junk Drawer episodes (posted every weekend the main show is dark), commercial-free episodes, and more!

 

Special Events:

Come join us for Seth Breedlove & Small Town Monsters’ MONSTER FEST 2023, coming June of next year! Click HERE for details!

 

Click here to get Stan Gordon’s new book!


 

Related Books:

 

SPECIAL OFFERS FROM OUR SPECIAL SPONSORS:

FIND OTHER GREAT DEALS FROM OUR SHOW’S SPONSORS BY CLICKING HERE!

 

CREDITS:

Episode 237: I Think Therefore AI Part 1. Produced by Scott Philbrook & Forrest Burgess; Audio Editing by Sarah Vorhees Wendel of VW Sound. Sound Design by Ryan McCullough; Tess Pfeifle, Producer, and Lead Researcher; Research Support from the astonishing League of Astonishing Researchers, a.k.a. The Astonishing Research Corps, or "A.R.C." for short. Copyright 2022 Astonishing Legends Productions, LLC. All Rights Reserved.