>Backprop doesn't happen in us, but I think our neurones still do gradient descent – synapses that fire together, wire together.
No! Hebbian learning is categorically NOT gradient-based learning. Hebbian update rules are purely local and are not the gradient of any loss function.
Cortical learning is so vastly different from how artificial neural networks "learn" that the two cannot even begin to be meaningfully compared mathematically. Hebbian learning is not optimization, and backprop is not local learning.
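To make the locality point concrete, here is a minimal sketch for a single linear neuron (the setup, names, and learning rate are my own illustration, not anything from the quoted comment): the Hebbian update uses only that synapse's own pre- and post-synaptic activity, while a gradient step is only defined once you have written down an explicit global loss.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=3)   # synaptic weights of one linear neuron, y = w . x
x = rng.normal(size=3)   # presynaptic activity
eta = 0.01               # learning rate

# Hebbian rule: purely local. The update for w_i uses only the presynaptic
# activity x_i and the postsynaptic activity y; no loss function appears
# anywhere. Iterated on its own it simply grows w along the input
# correlations, hence stabilised variants such as Oja's rule.
y = w @ x
w_hebb = w + eta * y * x

# Gradient descent: only defined relative to an explicit objective, here a
# squared error against a target t. In a multi-layer network the matching
# error signal for hidden weights has to be propagated backwards through the
# network, which is exactly what backprop does and what a real synapse
# cannot see locally.
t = 1.0
grad = (y - t) * x        # d/dw of 0.5 * (w @ x - t)**2
w_gd = w - eta * grad
```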
Part of the problem with these discussions is a bunch of clueless people talking with authority.
Finally, a good counterargument. I've seen enough terrible arguments to know exactly how you feel, even just within AI specifically.
I have to keep reminding myself that, outside of my own speciality, ChatGPT knows more than I do despite its weaknesses, so I'd bet it knows more about Hebbian learning than I do.