"Teens are opening up to AI chatbots as a way to explore friendship. But sometimes, the AI’s advice can go too far."
This article is interesting as much for the comments as for the article itself. The comments range (to paraphrase) from "Yeah, I get it, talking to real humans is hard because real humans are shit," to "I don't see the harm in it, but yeah, I wouldn't take what an AI says at face value without outside verification," to "This is dangerous and will be the death of humanity as we know it."
(I'm kind of a mix between the first and second opinions, with only the slightest dash of the third one, myself. Mostly the second one.)
One comment is (direct quote):

> We really just gonna pedal-to-the-metal speedrun making every overtly horrifying scenario from Black Mirror into reality, while simultaneously criticizing Black Mirror for being corny as shit and lame now, because all it does is tell us stories we go out of our way to make true within 5 years.
>
> This shouldn't be a thing at all. It's monstrous, full-stop.
I haven't seen any of Black Mirror, so I don't know how "accurate" that statement is, at least in the context of this Verge article. I have to assume they're talking about this? That's what came up when I searched Google for "Black Mirror artificial intelligence," anyway. If there's anything else that's actually relevant, I didn't come across it in the minute or two I devoted to searching. *shrug*
What I can say, though, is that Star Trek dealt with this topic quite a lot, in its own way, at least a couple of decades before Black Mirror ever existed. And that's not even all of it (as of five years ago, at least; there's been more since). Basically, all we need now is for someone to invent hard-light holograms, and we'd practically be there already.
As for me, I've messed with Character.ai (as recently as a couple weeks ago), AI Dungeon (back when it was still cool, before it became total shit), and NovelAI (I have a long-ass ongoing story I mess with fairly frequently, and I occasionally dabble in "one-shot" stories), and it's all interesting enough, sure. I don't mistake it for a friend or a psychologist, though, and everything I've tried has absolutely given horrible "advice" on occasion. I think it's mostly fine for this purpose, within reason and in moderation, but when people start confusing this stuff with reality, that's where the problems start.