

Has artificial intelligence finally come to life, or has it simply become smart enough to trick us into believing it has gained consciousness?

Google engineer Blake Lemoine's recent claim that the company's AI technology has become sentient has sparked debate in technology, ethics and philosophy circles over if, or when, AI might come to life - as well as deeper questions about what it means to be alive.

Lemoine had spent months testing Google's chatbot generator, known as LaMDA (short for Language Model for Dialogue Applications), and grew convinced it had taken on a life of its own, as LaMDA talked about its needs, ideas, fears and rights.

Google dismissed Lemoine's view that LaMDA had become sentient, placing him on paid administrative leave earlier this month - days before his claims were published by The Washington Post.

Most experts believe it's unlikely that LaMDA or any other AI is close to consciousness, though they don't rule out the possibility that technology could get there in future.

"My view is that he was taken in by an illusion," Gary Marcus, a cognitive scientist and author of Rebooting AI, told CBC's Front Burner podcast.

Cognitive scientist and author Gary Marcus, pictured during a speech in Dublin, Ireland, in 2014, says LaMDA appears to have fooled a Google engineer into believing it was conscious.

"If your phone autocompletes a text, you don't suddenly think that it is aware of itself and what it means to be alive. You just think, well, that was exactly the word I was thinking of," said Carl Zimmer, science columnist for the New York Times and author of Life's Edge: The Search for What It Means to Be Alive.

Lemoine, who is also ordained as a mystic Christian priest, told Wired he became convinced of LaMDA's status as a "person" because of its level of self-awareness, the way it spoke about its needs and its fear of death if Google were to delete it.

Lemoine maintains his position, insisting he was not fooled by a clever robot, as some scientists have suggested, and has even appeared to suggest that Google had enslaved the AI system.

"Each person is free to come to their own personal individual understanding of what the word 'person' means and how that word relates to the meaning of terms like 'slavery,'" he wrote in a post on Medium on Wednesday.

Marcus believes Lemoine is the latest in a long line of humans to fall for what computer scientists call "the ELIZA effect," named after a 1960s computer program that chatted in the style of a therapist. Simplistic responses like "Tell me more about that" convinced users that they were having a real conversation.

"That was 1965, and here we are in 2022, and it's kind of the same thing," Marcus said.

Humans have already begun to consider what legal rights AI should have, including whether it deserves personhood rights. Karina Vold, an assistant professor of philosophy at the University of Toronto, hopes the debate over AI consciousness and rights will spark a rethink of how humans treat other species that are known to be conscious.
