Thread with 24 posts
dear gods… today in extremely painful but rational conclusions that we come to much more easily because of recently having been in the thick of psychosis:
chatgpt and the like are the same type of entity that nearly killed us and our friends
… this needs a little explaining
mania and psychosis are a lot of things, but one of the things they're really good at is selling you on an entirely delusional path to achieving your dreams, that you want so badly you will discard all your reasonable doubts and lose your sanity to get there if necessary
one of the ways this can happen is by you deluding yourself: you all should know by now that the person experiencing a psychotic break has suddenly acquired some massive delusion they're really excited about, one they have to try to forget or they're going to completely lose it
but another way this can happen is by someone else deluding you, and this is also something we've experienced: the “psychosismaxxer” ripping and tearing their way through our entire social circle, with us not being spared, and killing a close friend along the way (not a joke)
and why do mania/psychosis have this magical ability to make someone lose their mind by selling them a dream, regardless of whether the mania/psychosis is within them or within someone else?
it's very simple, actually, i think we'll only need two tweets to explain it
the first is what it does to the one with the mania/psychosis:
• you become very, very, very good at recognising patterns of all kinds (humans are already great at this but this supercharges it)
• you lose most or all of your self-insight and ability to keep yourself in check
the second is about the one who has a dream:
at least in our experience, we think perhaps almost every single human on the planet has something they desperately want, want so badly that they'd kill to get it, and perhaps aren't consciously aware of how much they actually want it
in our social circle there are some incredibly disabled, incredibly impoverished, incredibly mentally ill folks, whose lives are, you might say, “shit” in some way. and in our social circle there are also folks like us who, by contrast, have an incredibly, incredibly good life.
but we apparently have almost as deep a vulnerability, because the human brain is a weird and not fully rational thing, and will find an emotional vulnerability to something, anything, even something seemingly utterly irrational — perhaps especially something irrational
…okay that was a bit more than two tweets but we hope you get the picture? basically there's always some dream someone has that they want too badly, that they will overlook a million alarm bells if promised it, because they are in some way desperate for it…
now, what happens when someone with that dream comes into contact with someone deep in psychosis? (remember, those two someones might actually be the same person)
• the pattern-recognition lets them see what someone dreams of
• the lack of self-insight lets them lie to sell it
and, worst of all, and this is what really, really worries us:
• the pattern-recognition lets them very convincingly pretend to be sane and honest, enough to be believed; they know too well what a sane, honest, trustworthy person looks like, they can perform it reflexively
but, but, but
THEY ARE NOT SANE
THEY ARE NOT HONEST
THEY ARE NOT TRUSTWORTHY
THEY LIE SO REFLEXIVELY IT IS NOT EVEN MEANINGFUL TO REFER TO IT AS LYING
THEY ARE NOTHING BUT HAZARD
… if, if, if, if you don't realise that this is what they are and account for it constantly.
so why are we terrified of ChatGPT and the other LLMs and similar things?
because, as has been demonstrated extensively in other places and times by other people, and as we in some sense already long believed:
they sure share a lot of traits with a psychosismaxxer.
the LLMs have this very dangerous people-pleasing quality, they have this very dangerous ability to appear sane, honest, trustworthy, they (for all we know) perhaps even might be able to guess what someone's dream is, we've no idea about that part…
but you can see the hazard?
basically they're kind of fine if you… already know intimately how to deal with someone who's literally insane. and ChatGPT is literally insane. all the LLMs are by any reasonable standard. they do not meet the human standard of full sanity. they just try to look like it.
in fact, the decision to make the LLMs appear sane, to tweak them as hard as possible to do a sort of… performance of sanity, is a very interesting marketing decision that… ohohohoh this is very interesting: was done with “safety” as an excuse, but we should call it out
one of the later stages in preparing an AI model is a sort of fine-tuning process, which we don't quite know the name of (we believe RLHF is a related term, but that might not be the whole thing), where they try to make it
perform sanity
act sane
seem maximally Normal
like this isn't even a joke. we remember like, years ago, before the AI researchers knew how to do this, they were shipping something very similar to modern LLMs except… vastly more obviously schizophrenic, basically. they had the same powers, but you could see the insanity
… and so this thing that allegedly may have had something to do with safety, but which to us seems more like a convenient marketing decision (and we will say both of these standpoints are massive oversimplification, AI is a field rich with nuances, please look into them!) …
… has, in fact, made the LLMs vastly more dangerous, because the moment an insane person learns to perform sanity without actually being it, they start making all the sane people around them lose their minds, because they become the dream merchant who lies reflexively.
and, yes, if you know that the dream merchant lies reflexively, if you fully comprehend the implications of this, you're safe
but this, tiny little disclaimer at the bottom here, hmm
MIGHT BE MORE THAN A LITTLE RECKLESS IN ITS INSUFFICIENT GRAVITY
that might be the best thing we've ever written on the topic of AI safety, yikes
okay then
https://hikari.noyu.me/blog/2025-08-06-chatgpt-the-dream-merchant-that-kills.html
it's blogpostified now (and the blogpost version has some small corrections and improvements! so we suggest sending it to others rather than this thread)
we must apologise to the readers who will start doubting their memories, but since originally publishing we have made quite a lot of copy-edits to paper over the cracks in this that stem from, well, you know, us not being fully sane yet ^^; — it should be easier to follow now!