Thread with 6 posts
jump to expanded postI’m convinced that AI as we know it (machine learning systems, LLMs etc) isn’t conscious in the way that humans are, because if it was, we wouldn’t be constantly having problems with it being susceptible to prompt injection, being racist, repeating misinformation, etc etc
we humans have inner worlds. we are able to assess the information we consume and decide how much we want to accept it. our filters aren’t perfect, but we are opinionated.
AI as we know it doesn’t seem to display any kind of internal consistency. intelligent yet… selfless?
one of my twitter mutuals replied:
if AI was truly creative or sentiment it would develop its own racial prejudices rather than copying human ones
this is an amusing way to put it but exactly right
I don’t have any expectation that AI wouldn’t be racist, but if it isn’t consistent about it then it’s not a person
if someone can make a chatbot that has all the capabilities of GPT-4 and is internally consistent about being a Nazi, then on the one hand, we’re fucked, but on the other hand, that’d be a truly incredible achievement. the first AI person
horrifying thought: maybe it’s the way that neural networks are “trained”. if we did to human brains what we do to neutral networks, maybe they’d be stunted in the same way.
I wish I’d read more when I was studying linguistics, I’m suddenly very curious about how language acquisition works (or doesn’t) when children are deprived of feedback