Brain Inspired – Details, Episodes, and Analysis
Podcast Details
Technical and general information from the podcast's RSS feed.

Brain Inspired
Paul Middlebrooks
Frequency: 1 episode every 11 days. Total episodes: 99

Recent Rankings
Latest positions in the Apple Podcasts and Spotify charts.
Apple Podcasts
🇨🇦 Canada - Natural Sciences: 30/07/2025 #79
🇬🇧 Great Britain - Natural Sciences: 30/07/2025 #91
🇩🇪 Germany - Natural Sciences: 30/07/2025 #87
🇺🇸 United States - Natural Sciences: 30/07/2025 #65
🇫🇷 France - Natural Sciences: 30/07/2025 #4
🇨🇦 Canada - Natural Sciences: 29/07/2025 #68
🇬🇧 Great Britain - Natural Sciences: 29/07/2025 #79
🇩🇪 Germany - Natural Sciences: 29/07/2025 #90
🇺🇸 United States - Natural Sciences: 29/07/2025 #63
🇫🇷 France - Natural Sciences: 29/07/2025 #72
Spotify
No recent rankings available
Links Shared Across Episodes and Podcasts
Links found in episode descriptions, and other podcasts that also use them.
- https://twitter.com/anilkseth - 10 shares
- https://twitter.com/erikphoel - 8 shares
- https://twitter.com/KordingLab - 7 shares
- https://www.patreon.com/braininspired - 175 shares
RSS Feed Quality and Score
Technical evaluation of the quality and structure of the RSS feed.
Overall score: 64%
Publication History
Monthly distribution of episode publications over the years.
BI 192 Àlex Gómez-Marín: The Edges of Consciousness
Wednesday, August 28, 2024 • Duration 01:30:34
Support the show to get full episodes and join the Discord community.
https://www.patreon.com/braininspired
Àlex Gómez-Marín heads The Behavior of Organisms Laboratory at the Institute of Neuroscience in Alicante, Spain. He is a theoretical physicist turned neuroscientist, and he has studied a wide range of topics over his career. Most recently, he has become interested in what he calls the "edges of consciousness," which encompasses the many attempts to explain what may be happening when we have experiences outside our normal everyday ones: for example, under the influence of hallucinogens, during near-death experiences (as Àlex has had), paranormal experiences, and so on.
So we discuss what led to his interest in these edges of consciousness, how he now thinks about consciousness and about doing science in general, and how important it is to make room for all possible explanations of phenomena while keeping our metaphysics open all the while.
- Alex's website: The Behavior of Organisms Laboratory.
- Twitter: @behaviOrganisms.
- Previous episodes:
- Related:
0:00 - Intro 4:13 - Evolving viewpoints 10:05 - Near-death experience 18:30 - Mechanistic neuroscience vs. the rest 22:46 - Are you doing science? 33:46 - Where is my mind? 44:55 - Productive vs. permissive brain 59:30 - Panpsychism 1:07:58 - Materialism 1:10:38 - How to choose what to do 1:16:54 - Fruit flies 1:19:52 - AI and the Singularity
BI 191 Damian Kelty-Stephen: Fractal Turbulent Cascading Intelligence
Thursday, August 15, 2024 • Duration 01:27:51
Support the show to get full episodes and join the Discord community.
https://www.patreon.com/braininspired
Damian Kelty-Stephen is an experimental psychologist at the State University of New York at New Paltz. Last episode, with Luis Favela, we discussed many of the ideas from ecological psychology and how Louie is trying to reconcile those principles with those of neuroscience. In this episode, Damian and I in some ways continue that discussion, because Damian is also interested in unifying the principles of ecological psychology and neuroscience. However, he is approaching it from a different perspective than Louie. What originally drew me to Damian was a paper he put together with a group of authors offering their own alternatives to the computer metaphor of the brain, which has come to dominate neuroscience. We discuss that some, and I'll link to the paper in the show notes. But mostly we discuss Damian's work studying the fractal structure of our behaviors, connecting that structure across scales, and linking it to how our brains and bodies interact to produce our behaviors. Along the way, we talk about his interest in cascade dynamics and turbulence as further explanations of our intelligence and behaviors. So, I hope you enjoy this alternative slice into thinking about how we think and move in our bodies and in the world.
- Damian's website.
- Related papers
0:00 - Intro 2:34 - Damian's background 9:02 - Brains 12:56 - Do neuroscientists have it all wrong? 16:56 - Fractals everywhere 28:01 - Fractality, causality, and cascades 32:01 - Cascade instability as a metaphor for the brain 40:43 - Damian's worldview 46:09 - What is AI missing? 54:26 - Turbulence 1:01:02 - Intelligence without fractals? Multifractality 1:10:28 - Ergodicity 1:19:16 - Fractality, intelligence, life 1:23:24 - What's exciting, changing viewpoints
BI 182: John Krakauer Returns… Again
Friday, January 19, 2024 • Duration 01:25:42
Support the show to get full episodes and join the Discord community.
https://www.patreon.com/braininspired
Check out my free video series about what's missing in AI and Neuroscience:
https://braininspired.co/open/
John Krakauer has been on the podcast multiple times (see links below). Today we discuss some topics framed around what he's been working on and thinking about lately, things like:
- Whether brains actually reorganize after damage
- The role of brain plasticity in general
- The path toward and the path not toward understanding higher cognition
- How to fix motor problems after strokes
- AGI
- Functionalism, consciousness, and much more.
Relevant links:
- John's Lab.
- Twitter: @blamlab
- Related papers
- Other episodes with John:
Time stamps 0:00 - Intro 2:07 - It's a podcast episode! 6:47 - Stroke and Sherrington neuroscience 19:26 - Thinking vs. moving, representations 34:15 - What's special about humans? 56:35 - Does cortical reorganization happen? 1:14:08 - Current era in neuroscience
BI 100.4 Special: What Ideas Are Holding Us Back?
Sunday, March 21, 2021 • Duration 01:04:26
In the 4th installment of our 100th episode celebration, previous guests responded to the question:
What ideas, assumptions, or terms do you think are holding back neuroscience/AI, and why?
As usual, the responses are varied and wonderful!
Timestamps:
0:00 - Intro 6:41 - Pieter Roelfsema 7:52 - Grace Lindsay 10:23 - Marcel van Gerven 11:38 - Andrew Saxe 14:05 - Jane Wang 16:50 - Thomas Naselaris 18:14 - Steve Potter 19:18 - Kendrick Kay 22:17 - Blake Richards 27:52 - Jay McClelland 30:13 - Jim DiCarlo 31:17 - Talia Konkle 33:27 - Uri Hasson 35:37 - Wolfgang Maass 38:48 - Paul Cisek 40:41 - Patrick Mayo 41:51 - Konrad Kording 43:22 - David Poeppel 44:22 - Brad Love 46:47 - Rodrigo Quian Quiroga 47:36 - Steve Grossberg 48:47 - Mark Humphries 52:35 - John Krakauer 55:13 - György Buzsáki 59:50 - Stefan Leijnen 1:02:18 - Nathaniel Daw
BI 100.3 Special: Can We Scale Up to AGI with Current Tech?
Wednesday, March 17, 2021 • Duration 01:08:43
Part 3 in our 100th episode celebration. Previous guests answered the question:
Given the continual surprising progress in AI powered by scaling up parameters and using more compute, while using fairly generic architectures (e.g., GPT-3):
Do you think the current trend of scaling compute can lead to human level AGI? If not, what's missing?
It likely won't surprise you that the vast majority answer "No." It also likely won't surprise you that there are differing opinions on what's missing.
Timestamps:
0:00 - Intro 3:56 - Wolfgang Maass 5:34 - Paul Humphreys 9:16 - Chris Eliasmith 12:52 - Andrew Saxe 16:25 - Mazviita Chirimuuta 18:11 - Steve Potter 19:21 - Blake Richards 22:33 - Paul Cisek 26:24 - Brad Love 29:12 - Jay McClelland 34:20 - Megan Peters 37:00 - Dean Buonomano 39:48 - Talia Konkle 40:36 - Steve Grossberg 42:40 - Nathaniel Daw 44:02 - Marcel van Gerven 45:28 - Kanaka Rajan 48:25 - John Krakauer 51:05 - Rodrigo Quian Quiroga 53:03 - Grace Lindsay 55:13 - Konrad Kording 57:30 - Jeff Hawkins 1:02:12 - Uri Hasson 1:04:08 - Jess Hamrick 1:06:20 - Thomas Naselaris
BI 100.2 Special: What Are the Biggest Challenges and Disagreements?
Friday, March 12, 2021 • Duration 01:25:00
In this 2nd special 100th episode installment, many previous guests answer the question: What is currently the most important disagreement or challenge in neuroscience and/or AI, and what do you think the right answer or direction is? The variety of answers is itself revealing, and highlights how many interesting problems there are to work on.
Timestamps:
0:00 - Intro 7:10 - Rodrigo Quian Quiroga 8:33 - Mazviita Chirimuuta 9:15 - Chris Eliasmith 12:50 - Jim DiCarlo 13:23 - Paul Cisek 16:42 - Nathaniel Daw 17:58 - Jessica Hamrick 19:07 - Russ Poldrack 20:47 - Pieter Roelfsema 22:21 - Konrad Kording 25:16 - Matt Smith 27:55 - Rafal Bogacz 29:17 - John Krakauer 30:47 - Marcel van Gerven 31:49 - György Buzsáki 35:38 - Thomas Naselaris 36:55 - Steve Grossberg 48:32 - David Poeppel 49:24 - Patrick Mayo 50:31 - Stefan Leijnen 54:24 - David Krakauer 58:13 - Wolfgang Maass 59:13 - Uri Hasson 59:50 - Steve Potter 1:01:50 - Talia Konkle 1:04:30 - Matt Botvinick 1:06:36 - Brad Love 1:09:46 - Jon Brennan 1:19:31 - Grace Lindsay 1:22:28 - Andrew Saxe
BI 100.1 Special: What Has Improved Your Career or Well-being?
Tuesday, March 9, 2021 • Duration 42:32
Brain Inspired turns 100 (episodes) today! To celebrate, my Patreon supporters helped me create a list of questions to ask my previous guests, many of whom contributed by answering any or all of the questions. I've collected all their responses into separate little episodes, one for each question. Starting with a light-hearted (but quite valuable) one, this episode has responses to the question, "In the last five years, what new belief, behavior, or habit has most improved your career or well-being?" See below for links to each previous guest. And away we go...
Timestamps:
0:00 - Intro 6:13 - David Krakauer 8:50 - David Poeppel 9:32 - Jay McClelland 11:03 - Patrick Mayo 11:45 - Marcel van Gerven 12:11 - Blake Richards 12:25 - John Krakauer 14:22 - Nicole Rust 15:26 - Megan Peters 17:03 - Andrew Saxe 18:11 - Federico Turkheimer 20:03 - Rodrigo Quian Quiroga 22:03 - Thomas Naselaris 23:09 - Steve Potter 24:37 - Brad Love 27:18 - Steve Grossberg 29:04 - Talia Konkle 29:58 - Paul Cisek 32:28 - Kanaka Rajan 34:33 - Grace Lindsay 35:40 - Konrad Kording 36:30 - Mark Humphries
BI 099 Hakwan Lau and Steve Fleming: Neuro-AI Consciousness
Sunday, February 28, 2021 • Duration 01:46:35
Hakwan, Steve, and I discuss many issues around the scientific study of consciousness. Steve and Hakwan focus on higher order theories (HOTs) of consciousness, related to metacognition. So we discuss HOTs in particular and their relation to other approaches/theories; the idea of approaching consciousness as a computational problem to be tackled with computational modeling; the cultural, social, and career aspects of choosing to study something as elusive and controversial as consciousness; two of the models they're working on now to account for various properties of conscious experience; and, of course, the prospects of consciousness in AI. For more on metacognition and awareness, check out episode 73 with Megan Peters.
- Hakwan's lab: Consciousness and Metacognition Lab.
- Steve's lab: The MetaLab.
- Twitter: @hakwanlau; @smfleming.
- Hakwan's brief Aeon article: Is consciousness a battle between your beliefs and perceptions?
- Related papers
- An Informal Internet Survey on the Current State of Consciousness Science.
- Opportunities and challenges for a maturing science of consciousness.
- What is consciousness, and could machines have it?
- Understanding the higher-order approach to consciousness.
- Awareness as inference in a higher-order state space. (Steve's Bayesian predictive generative model)
- Consciousness, Metacognition, & Perceptual Reality Monitoring. (Hakwan's reality-monitoring model à la generative adversarial networks)
Timestamps 0:00 - Intro 7:25 - Steve's upcoming book 8:40 - Challenges to study consciousness 15:50 - Gurus and backscratchers 23:58 - Will the problem of consciousness disappear? 27:52 - Will an explanation feel intuitive? 29:54 - What do you want to be true? 38:35 - Lucid dreaming 40:55 - Higher order theories 50:13 - Reality monitoring model of consciousness 1:00:15 - Higher order state space model of consciousness 1:05:50 - Comparing their models 1:10:47 - Machine consciousness 1:15:30 - Nature of first order representations 1:18:20 - Consciousness prior (Yoshua Bengio) 1:20:20 - Function of consciousness 1:31:57 - Legacy 1:40:55 - Current projects
BI 098 Brian Christian: The Alignment Problem
Thursday, February 18, 2021 • Duration 01:32:38
Brian and I discuss a range of topics related to his latest book, The Alignment Problem: Machine Learning and Human Values. The alignment problem asks how we can build AI that does what we want it to do, as opposed to building AI that compromises our own values by accomplishing tasks that may be harmful or dangerous to us. Using some of the stories Brian relates in the book, we talk about:
- The history of machine learning and how we got to this point;
- Some methods researchers are creating to understand what's being represented in neural nets and how they generate their output;
- Some modern proposed solutions to the alignment problem, like programming machines to learn our preferences so they can help achieve those preferences (an idea called inverse reinforcement learning);
- The thorny issue of accurately knowing our own values: if we get those wrong, will machines get them wrong too?
Links:
- Brian's website.
- Twitter: @brianchristian.
- The Alignment Problem: Machine Learning and Human Values.
- Related papers
- Norbert Wiener from 1960: Some Moral and Technical Consequences of Automation.
Timestamps: 4:22 - Increased work on AI ethics 8:59 - The Alignment Problem overview 12:36 - Stories as important for intelligence 16:50 - What is the alignment problem 17:37 - Who works on the alignment problem? 25:22 - AI ethics degree? 29:03 - Human values 31:33 - AI alignment and evolution 37:10 - Knowing our own values? 46:27 - What have we learned about ourselves? 58:51 - Interestingness 1:00:53 - Inverse RL for value alignment 1:04:50 - Current progress 1:10:08 - Developmental psychology 1:17:36 - Models as the danger 1:25:08 - How worried are the experts?
BI 097 Omri Barak and David Sussillo: Dynamics and Structure
Monday, February 8, 2021 • Duration 01:23:57
Omri, David, and I discuss using recurrent neural network models (RNNs) to understand brains and brain function. Omri and David both use dynamical systems theory (DST) to describe how RNNs solve tasks and to compare the dynamical structure/landscape/skeleton of RNNs with real neural population recordings. We talk about how their thoughts have evolved since their 2013 Opening the Black Box paper, which began these lines of research and thinking. Some of the other topics we discuss:
- The idea of computation via dynamics, which sees computation as a process of evolving neural activity in a state space;
- Whether DST offers a description of mental function (that is, something beyond brain function, closer to the psychological level);
- The difference between classical approaches to modeling brains and the machine learning approach;
- The concept of universality - that the variety of artificial RNNs and natural RNNs (brains) adhere to some similar dynamical structure despite differences in the computations they perform;
- How learning is influenced by the dynamics in an ongoing and ever-changing manner, and how learning (a process) is distinct from optimization (a final trained state).
- David was on episode 5, for a more introductory episode on dynamics, RNNs, and brains.
- Barak Lab
- Twitter: @SussilloDavid
- The papers we discuss or mention:
- Sussillo, D. & Barak, O. (2013). Opening the Black Box: Low-dimensional dynamics in high-dimensional recurrent neural networks.
- Computation Through Neural Population Dynamics.
- Implementing Inductive bias for different navigation tasks through diverse RNN attractors.
- Dynamics of random recurrent networks with correlated low-rank structure.
- Quality of internal representation shapes learning performance in feedback neural networks.
- Feigenbaum's universality constant original paper: Feigenbaum, M. J. (1976) "Universality in complex discrete dynamics", Los Alamos Theoretical Division Annual Report 1975-1976
- Talks
Timestamps: 0:00 - Intro 5:41 - Best scientific moment 9:37 - Why do you do what you do? 13:21 - Computation via dynamics 19:12 - Evolution of thinking about RNNs and brains 26:22 - RNNs vs. minds 31:43 - Classical computational modeling vs. machine learning modeling approach 35:46 - What are models good for? 43:08 - Ecological task validity with respect to using RNNs as models 46:27 - Optimization vs. learning 49:11 - Universality 1:00:47 - Solutions dictated by tasks 1:04:51 - Multiple solutions to the same task 1:11:43 - Direct fit (Uri Hasson) 1:19:09 - Thinking about the bigger picture