Brain Inspired

Neuroscience and artificial intelligence work better together. Brain Inspired is a celebration and exploration of the ideas driving our progress to understand intelligence. I interview experts about their work at the interface of neuroscience, artificial intelligence, cognitive science, philosophy, psychology, and more: the symbiosis of these overlapping fields, how they inform each other, where they differ, what the past brought us, and what the future brings. Topics include computational neuroscience, supervised machine learning, unsupervised learning, reinforcement learning, deep learning, convolutional and recurrent neural networks, decision-making science, AI agents, backpropagation, credit assignment, neuroengineering, neuromorphics, emergence, philosophy of mind, consciousness, general AI, spiking neural networks, data science, and a lot more. The podcast is not produced for a general audience. Instead, it aims to educate, challenge, inspire, and hopefully entertain those interested in learning more about neuroscience and AI.

https://braininspired.co/series/brain-inspired/

BI 097 Omri Barak and David Sussillo: Dynamics and Structure


Omri, David and I discuss using recurrent neural network models (RNNs) to understand brains and brain function. Omri and David both use dynamical systems theory (DST) to describe how RNNs solve tasks, and to compare the dynamical structure/landscape/skeleton of RNNs with real neural population recordings. We talk about how their thinking has evolved since their 2013 Opening the Black Box paper, which began these lines of research. Some of the other topics we discuss:

  • The idea of computation via dynamics, which sees computation as a process of evolving neural activity in a state space (a minimal code sketch of this idea follows the lists below);
  • Whether DST offers a description of mental function (that is, something beyond brain function, closer to the psychological level);
  • The difference between classical approaches to modeling brains and the machine learning approach;
  • The concept of universality: that a wide variety of artificial RNNs and natural RNNs (brains) adhere to similar dynamical structures despite differences in the computations they perform;
  • How learning is influenced by the dynamics in an ongoing and ever-changing manner, and how learning (a process) is distinct from optimization (a final trained state).
  • David was also on episode 5, a more introductory conversation about dynamics, RNNs, and brains.
  • Barak Lab
  • Twitter: @SussilloDavid
  • The papers we discuss or mention:
    • Sussillo, D. & Barak, O. (2013). Opening the Black Box: Low-dimensional dynamics in high-dimensional recurrent neural networks.
    • Computation Through Neural Population Dynamics.
    • Implementing Inductive bias for different navigation tasks through diverse RNN attractors.
    • Dynamics of random recurrent networks with correlated low-rank structure.
    • Quality of internal representation shapes learning performance in feedback neural networks.
    • Feigenbaum's original universality-constant paper: Feigenbaum, M. J. (1976). Universality in complex discrete dynamics. Los Alamos Theoretical Division Annual Report 1975-1976.
  • Talks
    • Universality and individuality in neural dynamics across large populations of recurrent networks.
    • World Wide Theoretical Neuroscience Seminar: Omri Barak, January 6, 2021
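For a concrete feel for the fixed-point analysis that comes up throughout the episode, here is a minimal sketch in the spirit of Opening the Black Box (not the authors' code): it searches for approximate fixed points of a toy RNN by minimizing the kinetic energy q(x) = 1/2 ||F(x) - x||^2 from many random initial states. The random network, parameter values, and all names are illustrative.

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical vanilla RNN: x_next = tanh(W @ x + b). In a real analysis
    # W and b would come from a network trained on a task; random weights
    # just keep the sketch self-contained.
    rng = np.random.default_rng(0)
    N = 50
    W = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))
    b = rng.normal(scale=0.1, size=N)

    def step(x):
        """One step of the autonomous (input-free) dynamics F(x)."""
        return np.tanh(W @ x + b)

    def speed(x):
        """q(x) = 1/2 * ||F(x) - x||^2; zero exactly at fixed points."""
        dx = step(x) - x
        return 0.5 * dx @ dx

    # Minimize q from many random starts. Minima with q near zero are
    # approximate fixed points; small-but-nonzero minima are slow points.
    # (Duplicates are possible; a real pipeline would cluster the results.)
    fixed_points = []
    for _ in range(20):
        x0 = rng.normal(scale=0.5, size=N)
        res = minimize(speed, x0, method="L-BFGS-B")
        if res.fun < 1e-8:
            fixed_points.append(res.x)

    print(f"found {len(fixed_points)} candidate fixed points")

In the paper's workflow the optimization starts from states visited while the trained network performs its task, and the dynamics are then linearized (via the Jacobian of F) around each fixed or slow point to read out the local computation.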

Timestamps:
0:00 - Intro
5:41 - Best scientific moment
9:37 - Why do you do what you do?
13:21 - Computation via dynamics
19:12 - Evolution of thinking about RNNs and brains
26:22 - RNNs vs. minds
31:43 - Classical computational modeling vs. machine learning modeling approach
35:46 - What are models good for?
43:08 - Ecological task validity with respect to using RNNs as models
46:27 - Optimization vs. learning
49:11 - Universality
1:00:47 - Solutions dictated by tasks
1:04:51 - Multiple solutions to the same task
1:11:43 - Direct fit (Uri Hasson)
1:19:09 - Thinking about the bigger picture


February 8, 2021 · 1h23m