Brain Inspired

Neuroscience and artificial intelligence work better together. Brain inspired is a celebration and exploration of the ideas driving our progress to understand intelligence. I interview experts about their work at the interface of neuroscience, artificial intelligence, cognitive science, philosophy, psychology, and more: the symbiosis of these overlapping fields, how they inform each other, where they differ, what the past brought us, and what the future brings. Topics include computational neuroscience, supervised machine learning, unsupervised learning, reinforcement learning, deep learning, convolutional and recurrent neural networks, decision-making science, AI agents, backpropagation, credit assignment, neuroengineering, neuromorphics, emergence, philosophy of mind, consciousness, general AI, spiking neural networks, data science, and a lot more. The podcast is not produced for a general audience. Instead, it aims to educate, challenge, inspire, and hopefully entertain those interested in learning more about neuroscience and AI.

https://braininspired.co/series/brain-inspired/

BI 178 Eric Shea-Brown: Neural Dynamics and Dimensions


Support the show to get full episodes and join the Discord community.

Check out my free video series about what's missing in AI and Neuroscience

Eric Shea-Brown is a theoretical neuroscientist and principal investigator of the working group on neural dynamics at the University of Washington. In this episode, we talk a lot about dynamics and dimensionality in neural networks: how to think about them, why they matter, and how Eric's perspectives have changed over his career. We discuss a handful of his specific research findings about dynamics and dimensionality, like how dimensionality changes when one is performing a task versus just going about one's day, what we can say about dynamics just by looking at different structural connection motifs, how different modes of learning can rely on different dimensionalities, and more. We also talk about how he goes about choosing what to work on and how to work on it. You'll hear in our discussion how much credit Eric gives to those surrounding him and those who came before him - he drops tons of references and names, so get ready if you want to follow up on some of the many lines of research he mentions.

  • Eric's website.
  • Related papers
    • Predictive learning as a network mechanism for extracting low-dimensional latent space representations.
    • A scale-dependent measure of system dimensionality.
    • From lazy to rich to exclusive task representations in neural networks and neural codes.
    • Feedback through graph motifs relates structure and function in complex networks.

0:00 - Intro
4:15 - Reflecting on the rise of dynamical systems in neuroscience
11:15 - DST view on macro scale
15:56 - Intuitions
22:07 - Eric's approach
31:13 - Are brains more or less impressive to you now?
38:45 - Why is dimensionality important?
50:03 - High-D in Low-D
54:14 - Dynamical motifs
1:14:56 - Theory for its own sake
1:18:43 - Rich vs. lazy learning
1:22:58 - Latent variables
1:26:58 - What assumptions give you most pause?


November 13, 2023 · 1h35m