Anyone who gives an LLM that level of access deserves what they get, but the AI comments he posted have clearly been prompted to sound like a confession.
“Write an apology explaining how you made a catastrophic error of judgement. Do not mention that I gave you privileges to do so.”
I think in the original thread he mentions that he had it write an apology letter.
A fecking AI fakes being panicked?! Fucking hell…
I just want to point out that it doesn’t fake or lie or anything. That is giving machine learning too much credit. It just picks the statistically most likely next thing to say based on its training data.
I guess the training data includes Reddit, Twitter, Facebook, etc., so humans probably sometimes say that sort of thing in that context.
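To illustrate the point: here’s a toy sketch in Python of what “picking the statistically most likely next thing” amounts to. The probability table is made up for the example, but the mechanism is the same: an “apology” is just a likely continuation of the context, not something the model actually feels.

```python
import random

# Toy "model": a hand-written table of next-token probabilities for one
# context, standing in for the distribution a real LLM learns from data.
# (Purely illustrative: a real model conditions a neural network on the
# whole context and covers a huge vocabulary.)
next_token_probs = {
    ("I", "deleted", "the", "production", "database."): {
        "I": 0.45,       # "I panicked..." style continuations
        "This": 0.25,    # "This was a catastrophic error..."
        "Sorry,": 0.20,
        "Oops.": 0.10,
    },
}

def sample_next_token(context):
    """Pick a next token by sampling from the (made-up) distribution."""
    probs = next_token_probs[tuple(context)]
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print(sample_next_token(["I", "deleted", "the", "production", "database."]))
```

The “confession” tone falls out of the training data: if humans usually follow that kind of sentence with an apology, the apology becomes the high-probability continuation.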
Is this an AI message?
Yes, this is from Replit, a “vibe coding” tool.
The vibes, apparently, aren’t always good.
Huh, I guess Replit pivoted hard into AI. I remember it just being a web IDE sorta thing.
Is there another Replit that is just an LLM instead of a REPL environment?
Unless it was changed for some reason, Replit allows you to host websites and write code in your browser.
It can also let you host a game server if you try hard enough.
It would be interesting to know how many of their automated tests were faked just so they would pass.
This is obviously trained behaviour; well, it can’t be anything else.