Hear This Idea – Details, Episodes, and Analysis

Podcast Details

Technical and general information from the podcast's RSS feed.

Hear This Idea

Fin Moorhouse and Luca Righetti

Science
Society & Culture

Frequency: 1 episode every 21 days. Total episodes: 89

Pinecast
Hear This Idea is a podcast showcasing new thinking in philosophy, the social sciences, and effective altruism. Each episode has an accompanying write-up at www.hearthisidea.com/episodes.
Site
RSS
Apple

Recent Rankings

Latest positions in the Apple Podcasts and Spotify charts.

Apple Podcasts

  • 🇫🇷 France - socialSciences: #98 on 03/08/2025
  • 🇫🇷 France - socialSciences: #84 on 02/08/2025
  • 🇬🇧 Great Britain - socialSciences: #97 on 01/08/2025
  • 🇫🇷 France - socialSciences: #74 on 01/08/2025
  • 🇬🇧 Great Britain - socialSciences: #90 on 31/07/2025
  • 🇫🇷 France - socialSciences: #68 on 31/07/2025
  • 🇬🇧 Great Britain - socialSciences: #79 on 30/07/2025
  • 🇫🇷 France - socialSciences: #49 on 30/07/2025
  • 🇬🇧 Great Britain - socialSciences: #69 on 29/07/2025
  • 🇫🇷 France - socialSciences: #41 on 29/07/2025

Spotify

    No recent rankings available



RSS Feed Quality and Score

Technical evaluation of the quality and structure of the RSS feed.

RSS feed quality
Fair

Overall score: 79%


Publication History

Monthly breakdown of episode publications over the years.

(Chart: episodes published by month, by year)

Latest Published Episodes

List of recent episodes, with titles, durations, and descriptions.


#78 – Jacob Trefethen on Global Health R&D

Episode 78

Sunday, 8 September 2024 · Duration: 02:30:16

Jacob Trefethen oversees Open Philanthropy’s science and science policy programs. He was a Henry Fellow at Harvard University, and has a B.A. from the University of Cambridge.

You can find links and a transcript at www.hearthisidea.com/episodes/trefethen

In this episode we talk about:

  • Life-saving health technologies which probably won't exist in 5 years (without a concerted effort) — like a widely available TB vaccine, and bugs which stop malaria spreading
  • How R&D for neglected diseases works:
    • How much does the world spend on it?
    • How do drugs for neglected diseases go from design to distribution?
  • No-brainer policy ideas for speeding up global health R&D
  • Comparing health R&D to public health interventions (like bed nets)
  • Comparing the social returns to frontier R&D (‘Progress Studies’) to global health R&D
  • Why is there no GiveWell-equivalent for global health R&D?
  • Won't AI do all the R&D for us soon?

You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!

#77 – Elizabeth Seger on Open Sourcing AI

Episode 77

Thursday, 25 July 2024 · Duration: 01:20:49

Elizabeth Seger is the Director of Technology Policy at Demos, a cross-party UK think tank with a program on trustworthy AI.

You can find links and a transcript at www.hearthisidea.com/episodes/seger

In this episode we talk about the risks and benefits of open source AI models, including:

  • What ‘open source’ really means
  • What is (and isn’t) open about ‘open source’ AI models
  • How open source weights and code are useful for AI safety research
  • How and when the costs of open sourcing frontier model weights might outweigh the benefits
  • Analogies to ‘open sourcing nuclear designs’ and the open science movement

You can get in touch through our website or on Twitter. Consider leaving us an honest review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!

Note that this episode was recorded before the release of Meta’s Llama 3.1 family of models. Note also that in the episode Elizabeth referenced an older version of the definition maintained by OSI (roughly version 0.0.3). The current OSI definition (0.0.8) now does a much better job of delineating between different model components.

#69 – Jon Y (Asianometry) on Problems And Progress in Semiconductor Manufacturing

Episode 69

Thursday, 31 August 2023 · Duration: 01:46:50

Jon Y is the creator of the Asianometry YouTube channel and accompanying newsletter. He describes his channel as making "video essays on business, economics, and history. Sometimes about Asia, but not always."

You can see more links and a full transcript at hearthisidea.com/episodes/asianometry

In this episode we talk about:

  • Compute trends driving recent progress in Artificial Intelligence;
  • The semiconductor supply chain and its geopolitics;
  • The buzz around LK-99 and superconductivity.

If you have any feedback, you can get a free book for filling out our new feedback form. You can also get in touch through our website or on Twitter. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!

#68 – Steven Teles on what the Conservative Legal Movement Teaches about Policy Advocacy

Episode 68

Friday, 4 August 2023 · Duration: 01:39:01

Steven Teles is a Professor of Political Science at Johns Hopkins University and a Senior Fellow at the Niskanen Center. His work focuses on American politics, and he has written several books on topics such as elite politics, the judiciary, and mass incarceration.

You can see more links and a full transcript at hearthisidea.com/teles

In this episode we talk about:

  • The rise of the conservative legal movement;
  • How ideas can come to be entrenched in American politics;
  • Challenges in building a new academic field like "law and economics";
  • The limitations of doing quantitative evaluations of advocacy groups.

If you have any feedback, you can get a free book for filling out our new feedback form. You can also get in touch through our website or on Twitter. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!

#67 – Guive Assadi on Whether Humanity Will Choose Its Future

Episode 67

Tuesday, 18 July 2023 · Duration: 02:00:07

Guive Assadi is a Research Scholar at the Center for the Governance of AI. Guive’s research focuses on the conceptual clarification of, and prioritisation among, potential risks posed by emerging technologies. He holds a master’s in history from Cambridge University, and a bachelor’s from UC Berkeley.

In this episode, we discuss Guive's paper, 'Will Humanity Choose Its Future?'. We talk about:

  • What is an 'evolutionary future', and would it count as an existential catastrophe?
  • How did the agricultural revolution deliver a world which few people would have chosen?
  • What does it mean to say that we are living in the dreamtime? Will it last?
  • What competitive pressures in the future could drive the world to undesired outcomes?
    • Digital minds
    • Space settlement
  • What measures could prevent an evolutionary future, and allow humanity to more deliberately choose its future?
    • World government
    • Strong global coordination
    • Defensive advantage
  • Should this all make us more or less hopeful about humanity's future?
  • Ideas for further research


#66 – Michael Cohen on Input Tampering in Advanced RL Agents

Episode 66

Sunday, 25 June 2023 · Duration: 02:32:00

Michael Cohen is a DPhil student at the University of Oxford with Mike Osborne. He will be starting a postdoc with Professor Stuart Russell at UC Berkeley, at the Center for Human-Compatible AI. His research considers the expected behaviour of generally intelligent artificial agents, with a view to designing agents that we can expect to behave safely.

You can see more links and a full transcript at www.hearthisidea.com/episodes/cohen.

We discuss:

  • What is reinforcement learning, and how is it different from supervised and unsupervised learning?
  • Michael's recently co-authored paper titled 'Advanced artificial agents intervene in the provision of reward'
  • Why might it be hard to convey what we really want to RL learners — even when we know exactly what we want?
  • Why advanced RL systems might tamper with their sources of input, and why this could be very bad
  • What assumptions need to hold for this "input tampering" outcome?
  • Is reward really the optimisation target? Do models "get reward"?
  • What's wrong with the analogy between RL systems and evolution?


#65 – Katja Grace on Slowing Down AI and Whether the X-Risk Case Holds Up

Episode 65

Saturday, 10 June 2023 · Duration: 01:43:43

Katja Grace is a researcher and writer. She runs AI Impacts, a research project trying to incrementally answer decision-relevant questions about the future of artificial intelligence (AI). Katja blogs primarily at worldspiritsockpuppet, and indirectly at Meteuphoric, Worldly Positions, LessWrong and the EA Forum.

We discuss:

  • What is AI Impacts working on?
  • Counterarguments to the basic AI x-risk case
  • Reasons to doubt that superhuman AI systems will be strongly goal-directed
  • Reasons to doubt that if goal-directed superhuman AI systems are built, their goals will be bad by human lights
  • Aren't deep learning systems fairly good at understanding our 'true' intentions?
  • Reasons to doubt that (misaligned) superhuman AI would overpower humanity
  • The case for slowing down AI
  • Is AI really an arms race?
  • Are there examples from history of valuable technologies being limited or slowed down?
  • What does Katja think about the recent open letter on pausing giant AI experiments?
  • Why read George Saunders?


You can see more links and a full transcript at hearthisidea.com/episodes/grace.

#64 – Michael Aird on Strategies for Reducing AI Existential Risk

Episode 64

Wednesday, 7 June 2023 · Duration: 03:12:56

Michael Aird is a senior research manager at Rethink Priorities, where he co-leads the Artificial Intelligence Governance and Strategy team alongside Amanda El-Dakhakhni. Before that, he conducted nuclear risk research for Rethink Priorities and longtermist macrostrategy research for Convergence Analysis, the Center on Long-Term Risk, and the Future of Humanity Institute, which is where we know each other from. Before that, he was a teacher and a stand-up comedian. He previously spoke to us about impact-driven research on Episode 52.

In this episode, we talk about:

  • The basic case for working on existential risk from AI
  • How to begin figuring out what to do to reduce the risks
  • Threat models for the risks of advanced AI
  • 'Theories of victory' for how the world mitigates the risks
  • 'Intermediate goals' in AI governance
  • What useful (and less useful) research looks like for reducing AI x-risk
  • Practical advice for usefully contributing to efforts to reduce existential risk from AI
  • Resources for getting started and finding job openings


#63 – Ben Garfinkel on AI Governance

Episode 63

Saturday, 13 May 2023 · Duration: 02:58:08

Ben Garfinkel is a Research Fellow at the University of Oxford and Acting Director of the Centre for the Governance of AI.

In this episode we talk about:

  • An overview of the AI governance space, and disentangling concrete research questions that Ben would like to see more work on
  • How existing arguments for the risks from transformative AI have held up, and Ben’s personal motivations for working on global risks from AI
  • GovAI’s own work and opportunities for listeners to get involved

Further reading and a transcript is available on our website: hearthisidea.com/episodes/garfinkel

If you have any feedback, you can get a free book for filling out our new feedback form. You can also get in touch through our website or on Twitter. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!

#62 – Anders Sandberg on Exploratory Engineering, Value Diversity, and Grand Futures

Episode 62

Thursday, 20 April 2023 · Duration: 52:52

Anders Sandberg is a researcher, futurist, transhumanist and author. He holds a PhD in computational neuroscience from Stockholm University, and is currently a Senior Research Fellow at the Future of Humanity Institute at the University of Oxford. His research covers human enhancement, exploratory engineering, and 'grand futures' for humanity.

This episode is a recording of a live interview at EAGx Cambridge (2023). You can find upcoming effective altruism conferences here: www.effectivealtruism.org/ea-global

We talk about:

  • What is exploratory engineering and what is it good for?
  • Progress on whole brain emulation
  • Are we near the end of humanity's tech tree?
  • Is diversity intrinsically valuable in grand futures?
  • How Anders does research
  • Virtue ethics for civilisations
  • Anders' takes on AI risk and whether LLMs are close to general intelligence
  • And much more!

Further reading and a transcript is available on our website: hearthisidea.com/episodes/sandberg-live

If you have any feedback, you can get a free book for filling out our new feedback form. You can also get in touch through our website or on Twitter. Consider leaving us a review wherever you're listening to this — it's the best free way to support the show. Thanks for listening!


Similar Podcasts Based on Content

Discover podcasts related to Hear This Idea. Explore podcasts with similar themes, topics, and formats. These similarities are computed from tangible data, not extrapolations!
Génération Do It Yourself
The Tim Ferriss Show
After Hours
The Josh Bersin Company
Inside Social Innovation
The Brainy Business | Understanding the Psychology of Why People Buy | Behavioral Economics
Easy German: Learn German with native speakers | Deutsch lernen mit Muttersprachlern
Programming Throwdown
Erklär mir die Welt
Der Ökodorf-Podcast aus Sieben Linden
© My Podcast Data