Brain Inspired

Neuroscience and artificial intelligence work better together. Brain Inspired is a celebration and exploration of the ideas driving our progress to understand intelligence. I interview experts about their work at the interface of neuroscience, artificial intelligence, cognitive science, philosophy, psychology, and more: the symbiosis of these overlapping fields, how they inform each other, where they differ, what the past brought us, and what the future brings. Topics include computational neuroscience, supervised machine learning, unsupervised learning, reinforcement learning, deep learning, convolutional and recurrent neural networks, decision-making science, AI agents, backpropagation, credit assignment, neuroengineering, neuromorphics, emergence, philosophy of mind, consciousness, general AI, spiking neural networks, data science, and a lot more. The podcast is not produced for a general audience. Instead, it aims to educate, challenge, inspire, and hopefully entertain those interested in learning more about neuroscience and AI.

https://braininspired.co/series/brain-inspired/

BI 070 Bradley Love: How We Learn Concepts


Brad and I discuss his battle-tested, age-defying cognitive model for how we learn and store concepts by forming and rearranging clusters, how the model maps onto brain areas, and how he's using deep learning models to explore how attention and sensory information interact with concept formation. We also discuss the cognitive modeling approach, Marr's levels of analysis, the term "biological plausibility", emergence and reduction, and plenty more.
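For listeners new to this family of models, here is a minimal sketch of the cluster-based idea mentioned above: items that fit an existing cluster nudge that cluster toward them, while surprising items recruit a new cluster. This is an illustrative toy, not the actual model discussed in the episode; the nearest-cluster rule, the label-mismatch recruitment criterion, and the learning rate are simplifying assumptions chosen for brevity.

```python
import numpy as np

class ClusterConceptLearner:
    """Toy cluster-recruiting category learner (illustration only)."""

    def __init__(self, learning_rate=0.2):
        self.learning_rate = learning_rate
        self.clusters = []  # each entry: (position in feature space, category label)

    def _nearest(self, x):
        # Index of the cluster closest to x (Euclidean distance).
        return min(range(len(self.clusters)),
                   key=lambda i: np.linalg.norm(x - self.clusters[i][0]))

    def train(self, x, label):
        x = np.asarray(x, dtype=float)
        if self.clusters:
            i = self._nearest(x)
            pos, lab = self.clusters[i]
            if lab == label:
                # Expected outcome: nudge the winning cluster toward the item.
                self.clusters[i] = (pos + self.learning_rate * (x - pos), lab)
                return
        # Surprising outcome (or empty model): recruit a new cluster here.
        self.clusters.append((x.copy(), label))

    def predict(self, x):
        return self.clusters[self._nearest(np.asarray(x, dtype=float))][1]

# Toy run: two categories in a 2D feature space.
learner = ClusterConceptLearner()
items = [([0.1, 0.2], "A"), ([0.9, 0.8], "B"),
         ([0.2, 0.1], "A"), ([0.8, 0.9], "B")]
for features, category in items * 3:
    learner.train(features, category)
print(learner.predict([0.15, 0.15]))   # -> "A"
print(len(learner.clusters))           # number of clusters recruited
```

The appeal of recruiting clusters only when an item surprises the model is that one mechanism can behave like a prototype model (few broad clusters) or an exemplar model (one cluster per item), depending on how regular the category structure is.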

Notes:

  • Visit Brad’s website.
  • Follow Brad on Twitter: @ProfData.
  • Related papers:
    • Levels of Biological Plausibility.
    • Models in search of a brain.
    • A non-spatial account of place and grid cells based on clustering models of concept learning.
    • Abstract neural representations of category membership beyond information coding stimulus or response.
    • Ventromedial prefrontal cortex compression during concept learning.
    • The Costs and Benefits of Goal-Directed Attention in Deep Convolutional Neural Networks.
    • Learning as the unsupervised alignment of conceptual systems.


May 15, 2020 · 1h47m