Machine Learning Street Talk (MLST) – Details, episodes & analysis

Podcast details

Technical and general information from the podcast's RSS feed.

Machine Learning Street Talk (MLST)

Technology

Frequency: 1 episode every 9 days. Total episodes: 221

Spotify for Podcasters
Welcome! We engage in fascinating discussions with pre-eminent figures in the AI field. Our flagship show covers current affairs in AI, cognitive science, neuroscience and philosophy of mind with in-depth analysis. Our approach is unrivalled in terms of scope and rigour – we believe in intellectual diversity in AI, and we touch on all of the main ideas in the field with the hype surgically removed. MLST is run by Tim Scarfe, Ph.D (https://www.linkedin.com/in/ecsquizor/) and features regular appearances from MIT Doctor of Philosophy Keith Duggar (https://www.linkedin.com/in/dr-keith-duggar/).
Site
RSS
Spotify
Apple

Recent rankings

Latest chart positions across Apple Podcasts and Spotify rankings.

Apple Podcasts
  • 🇬🇧 Great Britain - technology

    27/07/2025
    #96
  • 🇫🇷 France - technology

    27/07/2025
    #76
  • 🇬🇧 Great Britain - technology

    26/07/2025
    #72
  • 🇬🇧 Great Britain - technology

    25/07/2025
    #83
  • 🇩🇪 Germany - technology

    25/07/2025
    #83
  • 🇨🇦 Canada - technology

    24/07/2025
    #74
  • 🇬🇧 Great Britain - technology

    24/07/2025
    #69
  • 🇩🇪 Germany - technology

    24/07/2025
    #59
  • 🇫🇷 France - technology

    24/07/2025
    #87
  • 🇬🇧 Great Britain - technology

    23/07/2025
    #61
Spotify
  • 🇬🇧 Great Britain - technology

    27/07/2025
    #50
  • 🇬🇧 Great Britain - technology

    26/07/2025
    #50
  • 🇬🇧 Great Britain - technology

    25/07/2025
    #47
  • 🇬🇧 Great Britain - technology

    24/07/2025
    #47
  • 🇬🇧 Great Britain - technology

    19/07/2025
    #46
  • 🇬🇧 Great Britain - technology

    18/07/2025
    #43
  • 🇬🇧 Great Britain - technology

    17/07/2025
    #45
  • 🇬🇧 Great Britain - technology

    16/07/2025
    #38
  • 🇬🇧 Great Britain - technology

    15/07/2025
    #37
  • 🇬🇧 Great Britain - technology

    14/07/2025
    #36


RSS feed quality and score

Technical evaluation of the podcast's RSS feed quality and structure.

RSS feed quality: needs improvement

Overall score: 38%


Publication history

Monthly episode publishing history over the past years.

Episodes published by month.

Latest published episodes

Recent episodes with titles, durations, and descriptions.


The Fabric of Knowledge - David Spivak

Thursday, 5 September 2024 · Duration: 46:28

David Spivak, a mathematician known for his work in category theory, discusses a wide range of topics related to intelligence, creativity, and the nature of knowledge. He explains category theory in simple terms and explores how it relates to understanding complex systems and relationships.


MLST is sponsored by Brave:

The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval-augmented generation. Try it now - get 2,000 free queries monthly at http://brave.com/api.
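For readers who want to try the Search API in a retrieval-augmented-generation pipeline, here is a minimal Python sketch. The endpoint, header name, response fields, and the BRAVE_API_KEY environment variable are assumptions based on Brave's public Search API docs; verify the details at http://brave.com/api before relying on them.

```python
import os
import requests

# Minimal retrieval step for a RAG pipeline using a web-search API.
# Endpoint, header name, and response fields are assumptions to check against Brave's docs.
API_KEY = os.environ["BRAVE_API_KEY"]  # hypothetical env var holding your subscription token


def web_search(query: str, count: int = 5) -> list[dict]:
    """Return title/url/snippet for the top `count` web results."""
    resp = requests.get(
        "https://api.search.brave.com/res/v1/web/search",
        headers={"X-Subscription-Token": API_KEY, "Accept": "application/json"},
        params={"q": query, "count": count},
        timeout=10,
    )
    resp.raise_for_status()
    results = resp.json().get("web", {}).get("results", [])
    return [
        {"title": r.get("title"), "url": r.get("url"), "snippet": r.get("description")}
        for r in results
    ]


if __name__ == "__main__":
    # The snippets could then be concatenated into an LLM prompt as retrieved context.
    for hit in web_search("free energy principle"):
        print(hit["title"], "->", hit["url"])
```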


We discuss abstract concepts like collective intelligence, the importance of embodiment in understanding the world, and how we acquire and process knowledge. Spivak shares his thoughts on creativity, discussing where it comes from and how it might be modeled mathematically.


A significant portion of the discussion focuses on the impact of artificial intelligence on human thinking and its potential role in the evolution of intelligence. Spivak also touches on the importance of language, particularly written language, in transmitting knowledge and shaping our understanding of the world.


David Spivak

http://www.dspivak.net/


TOC:

00:00:00 Introduction to category theory and functors

00:04:40 Collective intelligence and sense-making

00:09:54 Embodiment and physical concepts in knowledge acquisition

00:16:23 Creativity, open-endedness, and AI's impact on thinking

00:25:46 Modeling creativity and the evolution of intelligence

00:36:04 Evolution, optimization, and the significance of AI

00:44:14 Written language and its impact on knowledge transmission


REFS:

Mike Levin's work

https://scholar.google.com/citations?user=luouyakAAAAJ&hl=en

Eric Smith's videos on complexity and early life

https://www.youtube.com/watch?v=SpJZw-68QyE

Richard Dawkins' book "The Selfish Gene"

https://amzn.to/3X73X8w

Carl Sagan's statement about the cosmos knowing itself

https://amzn.to/3XhPruK

Herbert Simon's concept of "satisficing"

https://plato.stanford.edu/entries/bounded-rationality/

DeepMind paper on open-ended systems

https://arxiv.org/abs/2406.04268

Karl Friston's work on active inference

https://direct.mit.edu/books/oa-monograph/5299/Active-InferenceThe-Free-Energy-Principle-in-Mind

MIT category theory lectures by David Spivak (available on the Topos Institute channel)

https://www.youtube.com/watch?v=UusLtx9fIjs

Jürgen Schmidhuber - Neural and Non-Neural AI, Reasoning, Transformers, and LSTMs

Wednesday, 28 August 2024 · Duration: 01:39:39

Jürgen Schmidhuber, the father of generative AI, shares his groundbreaking work in deep learning and artificial intelligence. In this exclusive interview, he discusses the history of AI, some of his contributions to the field, and his vision for the future of intelligent machines. Schmidhuber offers unique insights into the exponential growth of technology and the potential impact of AI on humanity and the universe.


YT version: https://youtu.be/DP454c1K_vQ


MLST is sponsored by Brave:

The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval-augmented generation. Try it now - get 2,000 free queries monthly at http://brave.com/api.


TOC

00:00:00 Intro

00:03:38 Reasoning

00:13:09 Potential AI Breakthroughs Reducing Computation Needs

00:20:39 Memorization vs. Generalization in AI

00:25:19 Approach to the ARC Challenge

00:29:10 Perceptions of ChatGPT and AGI

00:58:45 Abstract Principles of Jürgen's Approach

01:04:17 Analogical Reasoning and Compression

01:05:48 Breakthroughs in 1991: the P, the G, and the T in ChatGPT and Generative AI

01:15:50 Use of LSTM in Language Models by Tech Giants

01:21:08 Neural Network Aspect Ratio Theory

01:26:53 Reinforcement Learning Without Explicit Teachers


Refs:

★ "Annotated History of Modern AI and Deep Learning" (2022 survey by Schmidhuber):

★ Chain Rule For Backward Credit Assignment (Leibniz, 1676)

★ First Neural Net / Linear Regression / Shallow Learning (Gauss & Legendre, circa 1800)

★ First 20th Century Pioneer of Practical AI (Quevedo, 1914)

★ First Recurrent NN (RNN) Architecture (Lenz, Ising, 1920-1925)

★ AI Theory: Fundamental Limitations of Computation and Computation-Based AI (Gödel, 1931-34)

★ Unpublished ideas about evolving RNNs (Turing, 1948)

★ Multilayer Feedforward NN Without Deep Learning (Rosenblatt, 1958)

★ First Published Learning RNNs (Amari and others, ~1972)

★ First Deep Learning (Ivakhnenko & Lapa, 1965)

★ Deep Learning by Stochastic Gradient Descent (Amari, 1967-68)

★ ReLUs (Fukushima, 1969)

★ Backpropagation (Linnainmaa, 1970); precursor (Kelley, 1960)

★ Backpropagation for NNs (Werbos, 1982)

★ First Deep Convolutional NN (Fukushima, 1979); later combined with Backprop (Waibel 1987, Zhang 1988).

★ Metalearning or Learning to Learn (Schmidhuber, 1987)

★ Generative Adversarial Networks / Artificial Curiosity / NN Online Planners (Schmidhuber, Feb 1990; see the G in Generative AI and ChatGPT)

★ NNs Learn to Generate Subgoals and Work on Command (Schmidhuber, April 1990)

★ NNs Learn to Program NNs: Unnormalized Linear Transformer (Schmidhuber, March 1991; see the T in ChatGPT)

★ Deep Learning by Self-Supervised Pre-Training. Distilling NNs (Schmidhuber, April 1991; see the P in ChatGPT)

★ Experiments with Pre-Training; Analysis of Vanishing/Exploding Gradients, Roots of Long Short-Term Memory / Highway Nets / ResNets (Hochreiter, June 1991, further developed 1999-2015 with other students of Schmidhuber)

★ LSTM journal paper (1997, most cited AI paper of the 20th century)

★ xLSTM (Hochreiter, 2024)

★ Reinforcement Learning Prompt Engineer for Abstract Reasoning and Planning (Schmidhuber 2015)

★ Mindstorms in Natural Language-Based Societies of Mind (2023 paper by Schmidhuber's team)

https://arxiv.org/abs/2305.17066

★ Bremermann's physical limit of computation (1982)


EXTERNAL LINKS

CogX 2018 - Professor Juergen Schmidhuber

https://www.youtube.com/watch?v=17shdT9-wuA

Discovering Neural Nets with Low Kolmogorov Complexity and High Generalization Capability (Neural Networks, 1997)

https://sferics.idsia.ch/pub/juergen/loconet.pdf

The paradox at the heart of mathematics: Gödel's Incompleteness Theorem - Marcus du Sautoy

https://www.youtube.com/watch?v=I4pQbo5MQOs

(Refs truncated; full list in the YouTube video description)



Sayash Kapoor - How seriously should we take AI X-risk? (ICML 1/13)

Sunday, 28 July 2024 · Duration: 49:42

How seriously should governments take the threat of existential risk from AI, given the lack of consensus among researchers? On the one hand, existential risks (x-risks) are necessarily somewhat speculative: by the time there is concrete evidence, it may be too late. On the other hand, governments must prioritize — after all, they don’t worry too much about x-risk from alien invasions.


MLST is sponsored by Brave:

The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval-augmented generation. Try it now - get 2,000 free queries monthly at brave.com/api.


Sayash Kapoor is a computer science Ph.D. candidate at Princeton University's Center for Information Technology Policy. His research focuses on the societal impact of AI. Kapoor has previously worked on AI in both industry and academia, with experience at Facebook, Columbia University, and EPFL Switzerland. He is a recipient of a best paper award at ACM FAccT and an impact recognition award at ACM CSCW. Notably, Kapoor was included in TIME's inaugural list of the 100 most influential people in AI.


Sayash Kapoor

https://x.com/sayashk

https://www.cs.princeton.edu/~sayashk/


Arvind Narayanan (other half of the AI Snake Oil duo)

https://x.com/random_walker


AI existential risk probabilities are too unreliable to inform policy

https://www.aisnakeoil.com/p/ai-existential-risk-probabilities


Pre-order AI Snake Oil Book

https://amzn.to/4fq2HGb


AI Snake Oil blog

https://www.aisnakeoil.com/


AI Agents That Matter

https://arxiv.org/abs/2407.01502


Shortcut learning in deep neural networks

https://www.semanticscholar.org/paper/Shortcut-learning-in-deep-neural-networks-Geirhos-Jacobsen/1b04936c2599e59b120f743fbb30df2eed3fd782


77% Of Employees Report AI Has Increased Workloads And Hampered Productivity, Study Finds

https://www.forbes.com/sites/bryanrobinson/2024/07/23/employees-report-ai-increased-workload/


TOC:

00:00:00 Intro

00:01:57 How seriously should we take the x-risk threat?

00:02:55 Risk probabilities too unreliable to inform policy

00:10:20 Overinflated risks

00:12:05 Perils of utility maximisation

00:13:55 Scaling vs airplane speeds

00:17:31 Shift to smaller models?

00:19:08 Commercial LLM ecosystem

00:22:10 Synthetic data

00:24:09 Is AI complexifying our jobs?

00:25:50 Does ChatGPT make us dumber or smarter?

00:26:55 Are AI Agents overhyped?

00:28:12 Simple vs complex baselines

00:30:00 Cost tradeoff in agent design

00:32:30 Model eval vs downstream perf

00:36:49 Shortcuts in metrics

00:40:09 Standardisation of agent evals

00:41:21 Humans in the loop

00:43:54 Levels of agent generality

00:47:25 ARC challenge

#67 Prof. KARL FRISTON 2.0

Season 1 · Episode 67

Wednesday, 2 March 2022 · Duration: 01:42:10

We engage in a bit of epistemic foraging with Prof. Karl Friston! In this show we discuss the free energy principle in detail, as well as emergence, cognition, consciousness and Karl's burden of knowledge!


YT: https://youtu.be/xKQ-F2-o8uM

Patreon: https://www.patreon.com/mlst

Discord: https://discord.gg/HNnAwSduud


[00:00:00] Introduction to FEP/Friston

[00:06:53] Cheers to Epistemic Foraging!

[00:09:17] The Burden of Knowledge Across Disciplines

[00:12:55] On-show introduction to Friston

[00:14:23] Simple does NOT mean Easy

[00:21:25] Searching for a Mathematics of Cognition

[00:26:44] The Low Road and The High Road to the Principle

[00:28:27] What's changed for the FEP in the last year

[00:39:36] FEP as stochastic systems with a pullback attractor

[00:44:03] An attracting set at multiple time scales and time infinity

[00:53:56] What about fuzzy Markov boundaries?

[00:59:17] Is reality densely or sparsely coupled?

[01:07:00] Is a Strong and Weak Emergence distinction useful?

[01:13:25] a Philosopher, a Zombie, and a Sentient Consciousness walk into a bar ... 

[01:24:28] Can we recreate consciousness in silico? Will it have qualia?

[01:28:29] Subjectivity and building hypotheses

[01:34:17] Subject specific realizations to minimize free energy

[01:37:21] Free will in a deterministic Universe


The free energy principle made simpler but not too simple

https://arxiv.org/abs/2201.06387

#66 ALEXANDER MATTICK - [Unplugged / Community Edition]

Season 1 · Episode 66

Monday, 28 February 2022 · Duration: 50:31

We have a chat with Alexander Mattick, aka ZickZack, from Yannic's Discord community. Alex is one of the leading voices in that community and has impressive technical depth. Don't forget MLST has now started its own Discord server too; come and join us! We are going to run regular events, with our first big event on Wednesday 9th, 1700-1900 UK time.


Patreon: https://www.patreon.com/mlst

Discord: https://discord.gg/HNnAwSduud

YT version: https://youtu.be/rGOOLC8cIO4


[00:00:00] Introduction to Alex 

[00:02:16] Spline theory of NNs 

[00:05:19] Do NNs abstract? 

[00:08:27] Tim's exposition of spline theory of NNs

[00:11:11] Semantics in NNs 

[00:13:37] Continuous vs discrete 

[00:19:00] Open-ended Search

[00:22:54] Inductive logic programming

[00:25:00] Control to gain knowledge and knowledge to gain control

[00:30:22] Being a generalist with a breadth of knowledge and knowledge transfer

[00:36:29] Causality

[00:43:14] Discrete program synthesis + theorem solvers

#65 Prof. PEDRO DOMINGOS [Unplugged]

Season 1 · Episode 65

Saturday, 26 February 2022 · Duration: 01:28:27

Note: no politics are discussed in this show, and please do not interpret it as any kind of political statement from us. We have decided not to discuss politics on MLST anymore due to its divisive nature.


Patreon: https://www.patreon.com/mlst

Discord: https://discord.gg/HNnAwSduud


[00:00:00] Intro

[00:01:36] What we all need to understand about machine learning

[00:06:05] The Master Algorithm Target Audience

[00:09:50] Deeply Connected Algorithms seen from Divergent Frames of Reference

[00:12:49] There is a Master Algorithm; and it's mine!

[00:14:59] The Tribe of Evolution

[00:17:17] Biological Inspirations and Predictive Coding

[00:22:09] Shoe-Horning Gradient Descent

[00:27:12] Sparsity at Training Time vs Prediction Time

[00:30:00] World Models and Predictive Coding

[00:33:24] The Cartoons of System 1 and System 2

[00:40:37] AlphaGo Searching vs Learning

[00:45:56] Discriminative Models evolve into Generative Models

[00:50:36] Generative Models, Predictive Coding, GFlowNets

[00:55:50] Sympathy for a Thousand Brains

[00:59:05] A Spectrum of Tribes

[01:04:29] Causal Structure and Modelling

[01:09:39] Entropy and The Duality of Past vs Future, Knowledge vs Control

[01:16:14] A Discrete Universe?

[01:19:49] And yet continuous models work so well

[01:23:31] Finding a Discretised Theory of Everything

#64 Prof. Gary Marcus 3.0

Season 1 · Episode 64

Thursday, 24 February 2022 · Duration: 51:47

Patreon: https://www.patreon.com/mlst

Discord: https://discord.gg/HNnAwSduud

YT: https://www.youtube.com/watch?v=ZDY2nhkPZxw

We have a chat with Prof. Gary Marcus about everything which is currently top of mind for him, including consciousness.


[00:00:00] Gary intro

[00:01:25] Slightly conscious

[00:24:59] Abstract, compositional models

[00:32:46] Spline theory of NNs

[00:36:17] Self driving cars / algebraic reasoning 

[00:39:43] Extrapolation

[00:44:15] Scaling laws

[00:49:50] Maximum likelihood estimation


References:

Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets

https://arxiv.org/abs/2201.02177


Deep Double Descent: Where Bigger Models and More Data Hurt

https://arxiv.org/pdf/1912.02292.pdf


Bayesian Deep Learning and a Probabilistic Perspective of Generalization

https://arxiv.org/pdf/2002.08791.pdf

#063 - Prof. YOSHUA BENGIO - GFlowNets, Consciousness & Causality

Season 1 · Episode 63

Tuesday, 22 February 2022 · Duration: 01:33:07

We are now sponsored by Weights and Biases! Please visit our sponsor link: http://wandb.me/MLST

Patreon: https://www.patreon.com/mlst

For Yoshua Bengio, GFlowNets are the most exciting thing on the horizon of Machine Learning today. He believes they can solve previously intractable problems and hold the key to unlocking machine abstract reasoning itself. This discussion explores the promise of GFlowNets and the personal journey Prof. Bengio traveled to reach them.

Panel:

Dr. Tim Scarfe

Dr. Keith Duggar

Dr. Yannic Kilcher


Our special thanks to: 

- Alexander Mattick (Zickzack)

References:

Yoshua Bengio @ MILA (https://mila.quebec/en/person/bengio-yoshua/)

GFlowNet Foundations (https://arxiv.org/pdf/2111.09266.pdf)

Flow Network based Generative Models for Non-Iterative Diverse Candidate Generation (https://arxiv.org/pdf/2106.04399.pdf)

Interpolation Consistency Training for Semi-Supervised Learning (https://arxiv.org/pdf/1903.03825.pdf)

Towards Causal Representation Learning (https://arxiv.org/pdf/2102.11107.pdf)

Causal inference using invariant prediction: identification and confidence intervals (https://arxiv.org/pdf/1501.01332.pdf)

#062 - Dr. Guy Emerson - Linguistics, Distributional Semantics

Season 1 · Episode 62

Thursday, 3 February 2022 · Duration: 01:29:50

Dr. Guy Emerson is a computational linguist who obtained his Ph.D. from Cambridge University, where he is now a research fellow and lecturer. On the panel we also have myself, Dr. Tim Scarfe, as well as Dr. Keith Duggar and the veritable Dr. Walid Saba. We dive into distributional semantics, probability theory, fuzzy logic, grounding, vagueness and the grammar/cognition connection.

The aim of distributional semantics is to design computational techniques that can automatically learn the meanings of words from a body of text. The twin challenges are: how do we represent meaning, and how do we learn these representations? We want to learn the meanings of words from a corpus by exploiting the fact that the context of a word tells us something about its meaning. This is known as the distributional hypothesis. In his Ph.D. thesis, Dr. Guy Emerson presented a distributional model which can learn truth-conditional semantics grounded by objects in the real world.
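As a toy illustration of the distributional hypothesis (and not of Dr. Emerson's grounded, truth-conditional model), the following Python sketch builds word vectors from co-occurrence counts over an assumed miniature corpus and compares them with cosine similarity:

```python
import numpy as np

# Toy corpus; real distributional models are trained on corpora of millions of words.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog").split()
window = 2  # symmetric context window size

vocab = sorted(set(corpus))
index = {w: i for i, w in enumerate(vocab)}

# Word-by-word co-occurrence counts within the window.
counts = np.zeros((len(vocab), len(vocab)))
for i, w in enumerate(corpus):
    lo, hi = max(0, i - window), min(len(corpus), i + window + 1)
    for j in range(lo, hi):
        if j != i:
            counts[index[w], index[corpus[j]]] += 1


def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))


# Words that occur in similar contexts end up with similar vectors.
print("cat ~ dog:", cosine(counts[index["cat"]], counts[index["dog"]]))
print("cat ~ on :", cosine(counts[index["cat"]], counts[index["on"]]))
```

Real systems replace raw counts with PPMI weighting or learned embeddings, but the principle is the same: words that appear in similar contexts receive similar vectors.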

Hope you enjoy the show!

https://www.cai.cam.ac.uk/people/dr-guy-emerson

https://www.repository.cam.ac.uk/handle/1810/284882?show=full

https://www.semanticscholar.org/paper/Computational-linguistics-and-grammar-engineering-Bender-Emerson/bbd6f3b92a0f1ea8212f383cc4719bfe86b3588c


Patreon: https://www.patreon.com/mlst

061: Interpolation, Extrapolation and Linearisation (Prof. Yann LeCun, Dr. Randall Balestriero)

Season 1 · Episode 61

Tuesday, 4 January 2022 · Duration: 03:19:43

We are now sponsored by Weights and Biases! Please visit our sponsor link: http://wandb.me/MLST

Patreon: https://www.patreon.com/mlst

Yann LeCun thinks it's specious to say neural network models are interpolating, because in high dimensions everything is extrapolation. Recently Dr. Randall Balestriero, Dr. Jerome Pesenti and Prof. Yann LeCun released their paper "Learning in High Dimensions Always Amounts to Extrapolation". This discussion has completely changed how we think about neural networks and their behaviour.
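The paper's definition of interpolation is geometric: a new sample interpolates only if it lies inside the convex hull of the training set. A minimal sketch of that test, assuming NumPy, SciPy, and Gaussian toy data, shows how quickly interpolation disappears as the dimension grows:

```python
import numpy as np
from scipy.optimize import linprog


def in_convex_hull(x, X):
    """LP feasibility test: is x a convex combination of the rows of X?"""
    n = X.shape[0]
    A_eq = np.vstack([X.T, np.ones(n)])   # X^T @ lam = x  and  sum(lam) = 1
    b_eq = np.append(x, 1.0)
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method="highs")
    return res.success


rng = np.random.default_rng(0)
for d in (2, 10, 50):
    train = rng.standard_normal((500, d))   # toy "training set"
    test = rng.standard_normal((50, d))     # toy "new samples"
    inside = np.mean([in_convex_hull(t, train) for t in test])
    print(f"dim={d:3d}  fraction of new points inside the training hull: {inside:.2f}")
```

With these toy settings the fraction is close to 1 in two dimensions and essentially 0 by a few dozen dimensions, which is the sense in which "everything is extrapolation" in high dimension.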

[00:00:00] Pre-intro

[00:11:58] Intro Part 1: On linearisation in NNs

[00:28:17] Intro Part 2: On interpolation in NNs

[00:47:45] Intro Part 3: On the curse

[00:48:19] LeCun

[01:40:51] Randall B

YouTube version: https://youtu.be/86ib0sfdFtw


Related Shows Based on Content Similarities

Discover shows related to Machine Learning Street Talk (MLST), based on actual content similarities. Explore podcasts with similar topics, themes, and formats, backed by real data.
UI Breakfast: UI/UX Design and Product Strategy
The Long View
Everyone Hates Marketers | No-BS Marketing & Brand Strategy Podcast
Acquired
Design Thinking 101
Optimal Finance Daily - Financial Independence and Money Advice
FP&A Today
The Brainy Business | Understanding the Psychology of Why People Buy | Behavioral Economics
The Strong Towns Podcast
workshops work