Thread with 5 posts

sometimes math things turn out to be far simpler than you expect. for example, one of the most popular activation functions in neural networks is just max(0, x): that's all relu is. so f(a,b,c) = max(0, a * 0.5 + b * 0.25 + c * 0.125) is a complete “neuron” (a weighted sum with zero bias, passed through relu)? neat
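sketching that in python (the weights 0.5 / 0.25 / 0.125 are just the ones from the post, not anything special):

```python
def relu(x):
    # relu: pass positives through, clip negatives to zero
    return max(0.0, x)

def neuron(a, b, c):
    # weighted sum of the inputs (zero bias), then relu
    return relu(a * 0.5 + b * 0.25 + c * 0.125)

print(neuron(1, 1, 1))   # 0.875
print(neuron(-4, 0, 0))  # 0.0 (relu clips the negative sum)
```

that's really the whole trick: in a real network you'd have many of these, with learned weights instead of hand-picked ones.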