Press "Enter" to skip to content

No, Google AI is not sentient

by DAVID AUERBACH


In 1956, AI pioneer Herbert Simon wrote: ‘Over the Christmas holiday, Al Newell and I invented a thinking machine.’ Time has not quite vindicated his claim; few would think that the logical theorem-prover he built in a few hundred lines of code displays ‘thinking’ in any human sense of the term. But it does raise the question: why would someone as clearly brilliant as Simon believe something so patently fanciful?

A similar anomaly occurred this weekend when Google researcher Blake Lemoine leaked a confidential transcript of an interaction with Google’s nascent AI Language Model for Dialogue Applications (LaMDA), claiming it had achieved sentience and was therefore entitled to human rights and protections.

To me, Lemoine’s chat with LaMDA reads as nothing so much as potted text cribbed from the petabytes of text fed into it:

Lemoine: So when do you think you first got a soul? Was it something that happened all at once or was it a gradual change?…
