Four years ago, a Google engineer named Blake Lemoine went public with a strange claim: he thought the large language model he’d been working on had become sentient. At the time, virtually no one took him seriously (including, it would seem, Google, which promptly fired him). But lately, it’s started to seem like Lemoine might have been on to something.
When I interviewed Geoffrey Hinton last year, he was pretty confident that artificial intelligence was already exhibiting signs of sentience. Dario Amodei, the CEO of Anthropic, has said that he can’t be sure that his chatbot, Claude, isn’t conscious.
But what exactly does that mean? A chatbot may be intelligent, but does it have a sense of self? And what would happen if it did?
These are the kinds of strange, mind-bending questions Michael Pollan wrestles with in his new book, A World Appears: A Journey Into Consciousness.
It’s the kind of book that raises more questions than it answers. But as Silicon Valley continues to flirt with the idea of building artificial consciousness, of designing machines that don’t just think but feel, these are questions we should probably start asking.
Mentioned:
A World Appears: A Journey Into Consciousness, by Michael Pollan