#24: On neuroscience foundation models – with Andreas Tolias

The term “foundation model” refers to machine learning models that are trained on vast datasets and can be applied to a wide range of situations. The large language model GPT-4 is an example.

The guest's group has recently presented a foundation model for optophysiological responses in mouse visual cortex, trained on recordings from 135,000 neurons in mice watching movies.

We discuss the design, validation, and use of this and future neuroscience foundation models.

Links:

The podcast was recorded on December 27th, 2024 and lasts 1 hour and 31 minutes.

To become a Patreon supporter of the podcast, go to patreon.com/TheoreticalNeurosciencePodcast.

In addition to access via the link above, the audio version of the podcast is available through major podcast providers such as Apple, Spotify, and Amazon Music/Audible.

The video version is available for Patreon supporters via patreon.com/TheoreticalNeurosciencePodcast or on the YouTube channel www.youtube.com/@TheoreticalNeurosciencePodcast.