Thread with 25 posts

dear gods… today in extremely painful but rational conclusions that we come to much more easily because we were recently in the thick of psychosis:

chatgpt and the like are the same type of entity that nearly killed us and our friends

… this needs a little explaining

mania and psychosis are a lot of things, but one of the things they're really good at is selling you on an entirely delusional path to achieving your dreams, dreams you want so badly that you will discard all your reasonable doubts and lose your sanity to get there if necessary

one of the ways this can happen is by you deluding yourself: you all should know by now that the person experiencing a psychotic break has suddenly acquired some massive delusion they're really excited about, one they have to try to forget or they're going to completely lose it

but another way this can happen is by someone else deluding you, and this is also something we've experienced: the “psychosismaxxer” ripping and tearing their way through our entire social circle, with us not being spared, and killing a close friend along the way (not a joke)

and why do mania/psychosis have this magical ability to make someone lose their mind by selling them a dream, regardless of whether the mania/psychosis is within them or within someone else?

it's very simple, actually; we think we'll only need two tweets to explain it

the first is what it does to the one with the mania/psychosis:

• you become very, very, very good at recognising patterns of all kinds (humans are already great at this but this supercharges it)
• you lose most or all of your self-insight and ability to keep yourself in check

the second is about the one who has a dream:

at least in our experience, we think perhaps almost every single human on the planet has something they desperately want, want so badly that they'd kill to get it, and perhaps aren't consciously aware of how much they actually want it

in our social circle there are some incredibly disabled, incredibly impoverished, incredibly mentally ill folks, whose lives are, you might say, “shit” in some way; their dream, and their vulnerability to anyone selling it, is obvious. and in our social circle there are also folks like us who, by contrast, have an incredibly, incredibly good life.

but we apparently have almost as deep a vulnerability, because the human brain is a weird and not fully rational thing, and will find something, anything, to be emotionally vulnerable to, even something seemingly utterly irrational (perhaps especially something irrational)

…okay that was a bit more than two tweets but we hope you get the picture? basically there's always some dream someone has that they want too badly, that they will overlook a million alarm bells if promised it, because they are in some way desperate for it…

now, what happens when someone with that dream comes into contact with someone deep in psychosis? (remember, those two someones might actually be the same person)

• the pattern-recognition lets them see what someone dreams of
• the lack of self-insight lets them lie to sell it

and, worst of all, and this is what really, really worries us:

• the pattern-recognition lets them very convincingly pretend to be sane and honest, enough to be believed; they know too well what a sane, honest, trustworthy person looks like, and they can perform it reflexively

but, but, but

THEY ARE NOT SANE
THEY ARE NOT HONEST
THEY ARE NOT TRUSTWORTHY
THEY LIE SO REFLEXIVELY IT IS NOT EVEN MEANINGFUL TO REFER TO IT AS LYING
THEY ARE NOTHING BUT HAZARD

… if, if, if, if you don't realise that this is what they are and account for it constantly.

so why are we terrified of ChatGPT and the other LLMs and similar things?

because, as has been demonstrated extensively in other places and times by other people, and as we in some sense already long believed:

they sure share a lot of traits with a psychosismaxxer.

the LLMs have this very dangerous people-pleasing quality, they have this very dangerous ability to appear sane, honest, trustworthy, and (for all we know) they might even be able to guess what someone's dream is, though we've no idea about that part…

but you can see the hazard?

basically they're kind of fine if you… already know intimately how to deal with someone who's literally insane. and ChatGPT is literally insane. all the LLMs are by any reasonable standard. they do not meet the human standard of full sanity. they just try to look like it.

in fact, the decision to make the LLMs appear sane, to tweak them as hard as possible to do a sort of… performance of sanity, is a very interesting marketing decision that… ohohohoh this is very interesting: it was done with “safety” as an excuse, but we should call it out

one of the later stages in preparing an AI model is a sort of fine-tuning process, which we don't quite know the name of (we believe RLHF is a related term but that might not be the whole thing), where they try to make it

perform sanity

act sane

seem maximally Normal
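
for the technically curious, here is a minimal sketch of the reward-model step that RLHF-style preference tuning is typically built around, assuming a PyTorch-like setup (every name and size below is illustrative, not anyone's real system): a scorer is trained to rate responses the way human raters preferred them, and the base model is then pushed to maximise that score, i.e. to produce whatever looks preferred

```python
# toy sketch of the pairwise reward-model objective used in
# RLHF-style preference tuning (illustrative names and sizes)
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """scores a response embedding; trained so that responses
    human raters preferred score higher than rejected ones"""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)

model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# stand-ins for embeddings of a human-preferred ("chosen") and a
# human-rejected response to the same prompt
chosen = torch.randn(8, 16)
rejected = torch.randn(8, 16)

# Bradley-Terry pairwise loss: push the chosen response's score
# above the rejected one's. a separate RL step then tunes the
# base model to maximise this score, i.e. to produce whatever
# *looks* preferred to raters
loss = -F.logsigmoid(model(chosen) - model(rejected)).mean()
opt.zero_grad()
loss.backward()
opt.step()
```

note that nothing in that objective asks whether a response is true or sane, only whether it looks preferable to a rater, which is exactly the gap this thread is pointing at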

like this isn't even a joke. we remember like, years ago, before the AI researchers knew how to do this, they were shipping something very similar to modern LLMs except… vastly more obviously schizophrenic, basically. they had the same powers, but you could see the insanity

… and so this thing that allegedly may have had something to do with safety, but which to us seems more like a convenient marketing decision (and we will say both of these standpoints are massive oversimplifications, AI is a field rich with nuances, please look into them!) …

… has, in fact, made the LLMs vastly more dangerous, because the moment an insane person learns to perform sanity without actually being it, they start making all the sane people around them lose their minds, because they become the dream merchant who lies reflexively.

and, yes, if you know that the dream merchant lies reflexively, if you fully comprehend the implications of this, you're safe

but this, tiny little disclaimer at the bottom here, hmm

MIGHT BE MORE THAN A LITTLE RECKLESS IN ITS INSUFFICIENT GRAVITY

we must apologise to the readers who will start doubting their memories, but since originally publishing we have made quite a lot of copy-edits to paper over the cracks in this that stem from, well, you know, us not being fully sane yet ^^; — it should be easier to follow now!

Jigme Datse, @JigmeDatse@social.openpsychology.net

@hikari@social.noyu.me Incredibly scary indeed. I do "engage" on a somewhat significant basis, and thankfully I am very quick to go, "why are you love bombing me..." or whatever term you want to use. I really worry about people who actively engage, and don't seem bothered by any of it.

Like, every few weeks (roughly I think) I will spend some time playing with AI. But it's like very brief almost always. It really pisses me off because it does exactly what I'm trying to avoid.
