Thread with 7 posts

I guess rationalists obsess over the notion of a “superintelligent AI” being a threat because becoming powerful through one's incredible intelligence is already a rationalist fantasy. In other words, it is a kind of projection.

I must admit the idea is very interesting: if you have, say, 100 times the mental capacity of a normal human being, you can achieve what 100 humans could, but without the co-ordination problems. And co-ordination problems are this world's fundamental limit on power…
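
A toy way to make that appeal concrete, assuming pairwise communication channels as a stand-in for co-ordination cost (my assumption, not something the post specifies): a team of n people has n(n-1)/2 channels to maintain, while a single mind with n times the capacity has none, at least on this naive model.

```python
# Toy model (assumed): co-ordination cost ~ number of pairwise communication
# channels a group has to maintain.

def coordination_channels(n: int) -> int:
    """Pairwise communication channels among n collaborators: n*(n-1)/2."""
    return n * (n - 1) // 2

for n in (2, 10, 100):
    print(f"{n:>3} collaborators -> {coordination_channels(n):>4} channels")
# 100 collaborators need 4950 channels; one mind with 100x capacity needs 0
# on this model, which is the intuition behind the post above.
```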

I do wonder, however, whether that is actually true. Many things in the world are fractal. If you have a universe-sized brain, perhaps you have a universe-sized internal co-ordination problem.
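
A sketch of that worry under a purely assumed model in which the single large mind is itself built from modules that must talk to each other: the pairwise overhead from the previous sketch does not disappear, it just moves inside.

```python
# Same toy model, applied internally (assumed modular structure): if the one
# big mind is made of m co-ordinating modules, the channel count reappears
# inside it rather than between people.

def coordination_channels(n: int) -> int:
    """Pairwise communication channels among n co-ordinating parts."""
    return n * (n - 1) // 2

team_of_humans = 100      # 100 ordinary minds working together
internal_modules = 100    # hypothetical parts of one "100x" mind

print("between humans: ", coordination_channels(team_of_humans))    # 4950
print("inside one mind:", coordination_channels(internal_modules))  # 4950
```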

Kalle Hallivuori, @korpiq@kamu.social

@hikari Thanks, this is such an intriguing thread of thought that I hope you'll carry it on, or that I could contribute to it. In desperation, I can only come up with this:

If intelligence is the ability to identify patterns and form associations based on them, then a naively superboosted intelligence just forms so many associations that they serve no useful purpose, or are indeed misleading – hallucinations, aka madness. 1/3
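
A minimal simulation of that point, assuming "association" just means a correlation that clears some fixed threshold (my reading, not the poster's definition): even in pure noise, the number of associations an indiscriminate pattern-finder discovers grows with the number of pairs it considers, and none of them are real.

```python
# Toy simulation: count "associations" (|correlation| > threshold) found in
# pure noise. The more features considered, the more spurious associations,
# even though no real pattern exists.
import random

def corr(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def spurious_associations(n_features, n_samples=50, threshold=0.3, seed=0):
    random.seed(seed)
    noise = [[random.gauss(0, 1) for _ in range(n_samples)]
             for _ in range(n_features)]
    return sum(1 for i in range(n_features) for j in range(i + 1, n_features)
               if abs(corr(noise[i], noise[j])) > threshold)

for k in (10, 50, 200):
    print(k, "features:", spurious_associations(k), "spurious associations")
```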

Kalle Hallivuori, @korpiq@kamu.social

@hikari Then any significant improvement needs to be orthogonal to the axis of the number of associations. Such as error checking?

Another orthogonal thing is the number of perceptions (measurements? My English fails me) one can ponder at a time, but that is exactly what gives rise to the limiting co-ordination problems you mention.

Sorry, rambling on just one more... 2/3
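
A rough sketch of the "error checking" idea from the post above, under the same toy setup as before (again, one assumed reading of what error checking could mean): re-test every candidate association on fresh data and keep only those that survive. Associations found in noise rarely do.

```python
# Error checking as re-validation (one possible reading): find candidate
# associations in one batch of noisy data, keep only those that also hold
# in an independent batch. Almost all noise-born candidates get dropped.
import random

def corr(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def batch(n_features, n_samples):
    return [[random.gauss(0, 1) for _ in range(n_samples)]
            for _ in range(n_features)]

random.seed(1)
n_features, n_samples, threshold = 200, 50, 0.3
first, second = batch(n_features, n_samples), batch(n_features, n_samples)

candidates = [(i, j) for i in range(n_features) for j in range(i + 1, n_features)
              if abs(corr(first[i], first[j])) > threshold]
survivors = [(i, j) for (i, j) in candidates
             if abs(corr(second[i], second[j])) > threshold]

print(len(candidates), "candidate associations found,",
      len(survivors), "survive the re-check")
```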
