Google says its AI chatbot system LaMDA is not sentient. But how do they know?


Here’s a big woolly question: How do we know when a machine is sentient?

Who decides? What’s the test?

A few days ago, a Google software engineer and artificial intelligence (AI) researcher claimed the tech company’s latest system for generating chatbots was exactly that: sentient.

Since then, leading AI researchers have dismissed this claim, saying the AI was essentially faking it.

Google’s chatbot system isn’t sentient, but one of its eventual successors may be.

If and when that time comes, how will we know?

What is sentience? 

David Chalmers is an Australian philosopher at New York University and a world-leading expert on AI and consciousness.

Ten years ago, he said he thought sentient machines would become a pressing issue “probably towards the end of the 21st century”.

“But in the past 10 years, progress in AI has been really remarkably fast, in a way that no-one predicted,” he said.
