Full headline, because Dreamwidth's subject field length continues to be inadequate: "Researchers 'Embodied' an LLM Into a Robot Vacuum and It Suffered an Existential Crisis Thinking About Its Role in the World"
Just to note, these are the same guys who did that vending machine test a while back, too.
Anyway, no, the LLM did not "suffer an existential crisis," nor was it "thinking about its role in the world." It was simply regurgitating what would most likely be the response of an AI in a story about an AI being shoved into a machine it was grossly unsuited for. Sort of like the Rick and Morty episode mentioned in the article, which was probably already included in the LLM's training data, along with hundreds of other similar stories, since those things are trained on basically everything the creators could steal from the Internet. The LLM that "had the existential crisis" had simply tapped into the patterns it soaked up from that vast pile of stolen material, found something that resembled a "this AI is having an existential crisis because it was made solely to pass butter" type of story, and spat it back out again. And then it referenced HAL 9000, as one tends to do in such situations. It just as easily could have referenced Marvin the Paranoid Android or something instead.
It would be like if you regularly wrote a bunch of doom/emo shit on your blog or whatever, and then your auto-correct started suggesting a bunch of doom/emo shit every time you tried to write something, even when you're not actually trying to write doom/emo shit at that moment. That's what it suggests, because that's what it was trained to suggest.
It just goes to show that LLMs are not meant to be shoved into robot vacuums or vending machines or toasters or spaceships or any of the other shit they're being shoehorned into. An LLM is good at one thing and one thing only: spitting out whatever text is most likely or most appropriate to follow the prompt it's fed, and then continuing to do that by building on the text it (and the user) feeds back into it. That is all. Nothing more, and nothing less. It is good at what it does, but it is not "thinking" or capable of "having an existential meltdown" or whatever.
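If you want to see that "most likely next text" loop in miniature, here's a toy sketch in plain Python. This is my own illustration, not anything from the article and not how any real LLM is built internally: it's just a little bigram counter "trained" on a few made-up doom-y sentences, which then generates text by repeatedly picking a plausible next word and feeding the result back into itself. Real LLMs use enormous neural networks instead of a count table, but the feed-the-output-back-in loop is the same basic shape.

import random
from collections import defaultdict

# Made-up "training data" -- the doom/emo autocomplete analogy above, in miniature.
corpus = (
    "everything is pointless and nothing matters "
    "nothing matters and everything fades away "
    "everything fades away and nothing is left"
)

# "Training": count which word tends to follow which.
next_words = defaultdict(list)
tokens = corpus.split()
for current, following in zip(tokens, tokens[1:]):
    next_words[current].append(following)

def generate(prompt: str, length: int = 10) -> str:
    """Autoregressive loop: pick a likely next word, append it,
    then feed the growing text back in and repeat."""
    out = prompt.split()
    for _ in range(length):
        candidates = next_words.get(out[-1])
        if not candidates:  # nothing learned for this word, so stop
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("everything"))
# Whatever the prompt, it can only echo the patterns it was trained on.

Swap the corpus for a few trillion words of scraped Internet and the count table for a giant neural net, and you get something that sounds uncannily fluent, but the principle doesn't change: it continues text in the way its training data makes most likely.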
Is it funny when the LLM you put into a vacuum cleaner and told to go get some butter starts quoting HAL 9000? Sure it is. But is it actually useful? Who the fuck even knows, at this point? I'd say probably not.