Thread with 3 posts
large language models are great, i can ask chatgpt what a flag on an instruction does and it'll give me the same confidently wrong answer as a human who is familiar with the broader domain but who hasn't actually read the documentation and is just going off the name
(i know that because a colleague made a reasonable mistake in passing during a discussion and i thought it'd be fun to see if chatgpt would make it too, and it did. chatgpt getting it wrong is more serious though, because when it's asked as a direct question the correct answer is “i don't know”)
the thing is, they're scarily accurate sometimes. if i ask a question i already know the answer to and think is fairly obscure, it sometimes gives a good answer, better than you can quickly find on google. but other times i can accurately predict it will be completely wrong