Thread with 5 posts
sometimes math things turn out to be far simpler than you expect. for example, apparently one of the most popular activation functions in neural networks is just max(0,x)? that's what relu is. so i guess f(a,b,c) = max(0, a * 0.5 + b * 0.25 + c * 0.125) is a simple “neuron”? neat
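(a minimal sketch of that “neuron” in python — the weights 0.5/0.25/0.125 come from the post itself, and the function names are just illustrative)

```python
def relu(x: float) -> float:
    """ReLU activation: literally just max(0, x)."""
    return max(0.0, x)

def neuron(a: float, b: float, c: float) -> float:
    """Weighted sum of the inputs, passed through ReLU."""
    return relu(a * 0.5 + b * 0.25 + c * 0.125)

print(neuron(1.0, 1.0, 1.0))   # 0.875
print(neuron(-2.0, 0.0, 0.0))  # 0.0 (ReLU clamps negatives to zero)
```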
it's like signed distance fields :o
(girl who only knows about sdf's, seeing literally any simple formula with a max() or min() in it) this is just like sdf's
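(she's not wrong though — in the usual sdf conventions, min() unions two shapes and max() intersects them. a toy sketch, using the standard circle sdf, nothing specific to this thread:)

```python
import math

def sd_circle(x: float, y: float, cx: float, cy: float, r: float) -> float:
    """Signed distance to a circle: negative inside, positive outside."""
    return math.hypot(x - cx, y - cy) - r

def sd_union(d1: float, d2: float) -> float:
    """Union of two shapes: a point is inside if it's inside either."""
    return min(d1, d2)

def sd_intersect(d1: float, d2: float) -> float:
    """Intersection: a point is inside only if it's inside both."""
    return max(d1, d2)

# point (1.2, 0) against two unit circles centred at (0.5, 0) and (-0.5, 0):
d1 = sd_circle(1.2, 0.0, 0.5, 0.0, 1.0)   # -0.3 (inside circle 1)
d2 = sd_circle(1.2, 0.0, -0.5, 0.0, 1.0)  #  0.7 (outside circle 2)
print(sd_union(d1, d2))      # -0.3: inside the union
print(sd_intersect(d1, d2))  #  0.7: outside the intersection
```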
@hikari I too like the sublime distant fields
@hikari superdimension fortresses