Google fires AI engineer Blake Lemoine, who claimed its LaMDA AI is sentient

Blake Lemoine, the Google engineer who publicly claimed that the company’s LaMDA conversational artificial intelligence is sentient, has been fired, according to the Big Technology newsletter, which spoke to Lemoine. In June, Google placed Lemoine on paid administrative leave for breaching its confidentiality agreement after he contacted members of the government about his concerns and hired a lawyer to represent LaMDA.

In a statement emailed to The Verge on Friday, Google spokesperson Brian Gabriel appeared to confirm the firing, saying, “we wish Blake well.” The company also said: “LaMDA has been through 11 distinct reviews, and we published a research paper earlier this year detailing the work that goes into its responsible development.” Google maintains that it “extensively” reviewed Lemoine’s claims and found them “wholly unfounded.”

That finding aligns with the assessments of numerous AI experts and ethicists, who have said his claims were, more or less, impossible given today’s technology. Lemoine says his conversations with LaMDA’s chatbot led him to believe it has become more than just a program and has its own thoughts and feelings, as opposed to merely producing conversation realistic enough to make it seem that way, which is what it is designed to do.

He argues that Google’s researchers should seek consent from LaMDA before running experiments on it (Lemoine himself was assigned to test whether the AI produced hate speech), and he has published chunks of those conversations on his Medium account as evidence.

The YouTube channel Computerphile has a decently accessible nine-minute explainer on how LaMDA works and how it could produce the responses that convinced Lemoine without actually being sentient.

Here’s Google’s statement in full, which also addresses Lemoine’s accusation that the company didn’t properly investigate his claims:

As we share in our AI Principles, we take the development of AI very seriously and remain committed to responsible innovation. LaMDA has been through 11 distinct reviews, and we published a research paper earlier this year detailing the work that goes into its responsible development. If an employee shares concerns about our work, as Blake did, we review them extensively. We found Blake’s claims that LaMDA is sentient to be wholly unfounded and worked to clarify that with him for many months. These discussions were part of the open culture that helps us innovate responsibly. So, it’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information. We will continue our careful development of language models, and we wish Blake well.
