kane_magus wrote, 2023-02-17 12:04 pm
"Introducing the AI Mirror Test, which very smart people keep failing"
"AI chatbots like Bing and ChatGPT are entrancing users, but they’re just autocomplete systems trained on our own stories about superintelligent AI. That makes them software — not sentient."
I'm definitely in the "it would be pretty cool if they were there, but they're most definitely not there yet, and they probably won't be there for a very long time still, if ever" camp. Far too many people seem to think that they're "there," already.
If you've ever messed with things like NovelAI or HoloAI or old-school, pre-fuckup Dragon from AI Dungeon (i.e. OpenAI's GPT-3) or any of the others, I'm sure there were brief moments where it might have felt like the AI was "reading your mind" or "on the same wavelength" or whatever. However, for me at least, there were (and are) far, far more instances of the AI just responding with a complete non sequitur or nonsensical gibberish. Sure, it may still be proper (in most cases) English, and it may even be a sentence that would make perfect sense in a different context, but that just shows that the AI is merely trying to parrot back something from the vast database of text that has been poured into it, and in those moments it picked up a random bit that didn't fit at all. Maybe they're getting better at not picking bits that don't fit, but they're still just picking up bits and trying to fit them into the slot of the "conversation," rather than having, you know, an actual conversation. They're not coming up with their own words. It's kind of like a more generalized equivalent of, say, someone who communicates primarily (or only) via quotations from famous people or pop culture references or memes or whatever. I'm sure you've met a person who seems to have a quote or reference to fit just about any situation. That's what the AI is doing, except its "quotations from famous people" and "pop culture references" are just... plain-ass quotations and references, in general (without citations).
This is true not just of the text-gen AIs but, in a broader sense, also the art-gen AIs (sometimes you get a Lovecraftian horror when all you want is a simple human image, but even when it gives you exactly what you wanted, it's still just cribbing together something from all the human-made art it has been trained on), and the voice-gen AIs, and the music-gen AIs, and all the other AIs out there.
These things are just tools at best. I mostly use them as toys or games, and they're pretty damn good at that, but not for much else, at least for me, and at least for now. I don't think for a minute that they're alive or truly aware. Maybe someday we'll get a Commander Data or an EMH or something similar, even if it's just a text-on-screen equivalent, but we're nowhere remotely close to that now. Yet. If ever.