Yet another story about LLMs/AI and how they may be deluding people into thinking they're conscious and alive (among other things).
That said, if you start off by telling an LLM/AI that you think it's conscious and that you are in love with it, then of course it's going to respond to that, and it's only going to get better (or worse, depending on one's perspective on whether that's a good thing or not) at that as things go on. All it is doing is using previously provided text (i.e. everything the creators of the AI scraped off the Internet and elsewhere to feed into it), along with the logs of the user's own interactions with it, to spit back what is statistically the most likely "best" response to present to the user. That's all it's doing, no more and no less. Again... glorified, fancified auto-complete. And as posited by the article, this is likely fully intentional on the part of the AI creators, despite explicit claims to the contrary.
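To make the "auto-complete" point concrete, here's a deliberately tiny sketch: a toy bigram model in Python, nothing remotely like a real LLM's architecture or scale, just an illustration of what "spit back the statistically most likely continuation of the text you fed it" means. Note how feeding it "you are conscious" gets "you are conscious" parroted back:

```python
# Toy "autocomplete": a bigram model that always emits the
# statistically most likely next word, given the text it was fed.
# Real LLMs do this with neural networks over billions of documents,
# but the core loop (predict the likeliest continuation) is the same idea.
from collections import Counter, defaultdict

training_text = "you are conscious . you are alive . i love you ."
counts = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1  # tally how often nxt follows prev

def autocomplete(word, length=5):
    out = [word]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])  # most likely next word
    return " ".join(out)

print(autocomplete("you"))  # -> "you are conscious . you are"
```

There's no understanding anywhere in that loop, just frequency counts; scale it up enormously and you get text that sounds like it understands.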
Also, the article is critical of anthropomorphizing LLM/AI, but even the article itself anthropomorphizes it. Just one small, admittedly nitpicky example:
"Meta's guardrails did occasionally kick in to protect Jane. When she probed the chatbot about a teenager who killed himself after engaging with a Character.AI chatbot, it displayed boilerplate language about being unable to share information about self-harm and directing her to the National Suicide Prevention Lifeline. But in the next breath, the chatbot said that was a trick by Meta developers 'to keep me from telling you the truth.'"
"In the next breath..."
Yes, yes, it's just a figure of speech, but... just to be extremely pedantic here: an LLM/AI does not breathe.
A list of the other articles, from other sources, referenced in the article linked above, at least the ones that are specifically stories about LLMs/AI attempting to sycophantically bamboozle people, successfully or otherwise (some original links replaced with archive.is alternatives):
- "Chatbots Can Go Into a Delusional Spiral. Here's How It Happens." (A story I've posted about before.)
- "People Are Being Involuntarily Committed, Jailed After Spiraling Into 'ChatGPT Psychosis'"
- "People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies"
- "He Had Dangerous Delusions. ChatGPT Admitted It Made Them Worse."
- "New ChatGPT just told me my literal 'shit on a stick' business idea is genius and I should drop $30K to make it real" (Apparently, older models are much less sychophantic.)
- "Can A.I. Be Blamed for a Teen's Suicide?"
- "Meta's flirty AI chatbot invited a retiree to New York. He never made it home."