Thread with 5 posts
jump to expanded postthe only reasonable take on AI safety is that current systems have many societal harms and ethical questions and future ones could be even worse
“existential risk” is worth worrying about, along with all the more immediate problems
if you don't think this is realistic, like… bear in mind that the stock market, for example, is already susceptible to wild swings due to hand-written algorithms. it's not unimaginable that the stock market becomes dominated by neural networks or something of that nature.
capital is already a brutal enough master when run by humans. we should fear the machines!
are they gonna wipe out humanity? who knows. hard to imagine. a number of leaps are necessary for that. on the other hand, could a small country's economy go into freefall due to someone prompt injecting the trading algorithms? well, that seems entirely plausible to me