kane_magus: (Default)
Yet another story about LLM/AI and how it can delude people into thinking that it might be conscious and alive (among other things).

That said, if you start off by telling an LLM/AI that you think it's conscious and that you are in love with it, then of course it's going to respond to that, and it's only going to get better (or worse, depending on one's perspective on whether that's a good thing) at it as things go on. All it is doing is using previously provided text (i.e. everything the creators of the AI scraped off the Internet and elsewhere to feed into it), along with the logs of the user's own interactions with it, to spit back what is statistically the most likely "best" response to present to the user. That's all it's doing, no more and no less. Again... glorified, fancified auto-complete. And as posited by the article, this is likely fully intentional on the part of the AI creators, despite explicit claims to the contrary.
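To make the "glorified auto-complete" framing concrete, here is a toy sketch: a hypothetical bigram model that, given a word, emits the statistically most likely next word seen in its (tiny, made-up) training text. Real LLMs use enormous neural networks over much longer contexts, but the training objective is the same flavor: predict the most probable next token given what came before.

```python
from collections import Counter, defaultdict

# Made-up miniature "training corpus". If the user keeps telling the
# model it's conscious, the statistics simply reflect that back.
corpus = (
    "i think you are conscious . "
    "i think you are conscious . "
    "i think you are alive ."
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Return the most frequent continuation of `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("are"))  # -> "conscious" (it appears twice after
                                #    "are"; "alive" only once)
```

The point of the sketch: there is no belief or feeling anywhere in this loop, only frequency statistics over previously provided text.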

Also, the article is critical of anthropomorphizing LLM/AI, but even the article itself anthropomorphizes it. Just one small, admittedly nitpicky example:

"Meta's guardrails did occasionally kick in to protect Jane. When she probed the chatbot about a teenager who killed himself after engaging with a Character.AI chatbot, it displayed boilerplate language about being unable to share information about self-harm and directing her to the National Suicide Prevention Lifeline. But in the next breath, the chatbot said that was a trick by Meta developers 'to keep me from telling you the truth.'"

"In the next breath..."

Yes, yes, it's just a figure of speech, but... just to be extremely pedantic here: LLM/AI does not breathe.

List of other articles, from other sources, referenced in the above-linked article — at least the ones that are specifically stories about LLM/AI attempting to sycophantically bamboozle people, successfully or not (some original links replaced with archive.is alternatives):
Some of that shit is funny (e.g. the one about the bot trying to convince the guy on Reddit that he had a good idea with his shit-on-a-stick business proposal), and some of it is pure tragedy (e.g. the ones involving people literally dying), but all of it is indicative of the problem (or, rather, one of the many, many, many problems) with LLM/AI.
