Today’s AI is largely based on supervised learning of neural networks using the backpropagation-of-error synaptic learning rule. This rule relies on differentiating continuous activation functions and is therefore not directly applicable to spiking neurons: a spike is an all-or-none event, so the spiking nonlinearity is effectively a step function whose derivative is zero almost everywhere and undefined at the firing threshold.
Today’s guest has developed the algorithm SuperSpike to address this problem. He has also recently developed a more biologically plausible learning rule based on self-supervised learning. We talk about both.
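SuperSpike sidesteps the non-differentiable spike with a surrogate gradient: the forward pass keeps the hard threshold, while the backward pass substitutes a smooth pseudo-derivative (the paper uses the derivative of a fast sigmoid). Below is a minimal PyTorch sketch of that idea; the class name, the threshold at zero, and the steepness value `beta` are illustrative choices, not taken from the paper’s code.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Hard spike in the forward pass, smooth surrogate derivative in the
    backward pass (fast-sigmoid form, as in SuperSpike)."""

    beta = 10.0  # surrogate steepness (illustrative value)

    @staticmethod
    def forward(ctx, u):
        # u: membrane potential relative to threshold (threshold at 0 here)
        ctx.save_for_backward(u)
        return (u > 0).float()  # binary spike output

    @staticmethod
    def backward(ctx, grad_output):
        (u,) = ctx.saved_tensors
        # Replace the undefined derivative of the step function with the
        # derivative of a fast sigmoid: 1 / (1 + beta * |u|)^2
        surrogate = 1.0 / (1.0 + SurrogateSpike.beta * u.abs()) ** 2
        return grad_output * surrogate

spike_fn = SurrogateSpike.apply

# Gradients now flow through the (otherwise non-differentiable) spike:
u = torch.randn(5, requires_grad=True)
spikes = spike_fn(u)
spikes.sum().backward()
print(u.grad)  # nonzero surrogate gradients
```

The deliberate mismatch between the forward and backward passes is the core trick: the network still emits binary spikes, but an informative gradient signal can flow through them during training.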
Links:
- F. Zenke and S. Ganguli: “SuperSpike: Supervised Learning in Multilayer Spiking Neural Networks”, Neural Computation, 2018
- M.S. Halvagal and F. Zenke: “The combination of Hebbian and predictive plasticity learns invariant object representations in deep sensory networks”, Nature Neuroscience, 2023
- Homepage of Friedemann Zenke
- SNUFA
- SNUFA Discord channel
The podcast was recorded on February 24th, 2024, and lasts 1 hour and 30 minutes.
To see the video version and get the transcript of the episode, become a Patreon supporter at patreon.com/TheoreticalNeurosciencePodcast.
In addition to being available via the link above, the audio version of the podcast can be found on major podcast providers such as Apple Podcasts, Spotify, and Amazon Music/Audible.