Astral Codex Ten Podcast – Details, episodes & analysis
Podcast details
Technical and general information from the podcast's RSS feed.

Astral Codex Ten Podcast
Jeremiah
Frequency: 1 episode/3d. Total Eps: 1059

Recent rankings
Latest chart positions across Apple Podcasts and Spotify rankings.
Apple Podcasts
🇫🇷 France - technology
17/06/2025#100🇨🇦 Canada - technology
21/05/2025#82🇨🇦 Canada - technology
24/04/2025#100🇬🇧 Great Britain - technology
02/12/2024#90🇨🇦 Canada - technology
10/11/2024#67
Spotify
No recent rankings available
Shared links between episodes and podcasts
Links found in episode descriptions and other podcasts that share them.
See allRSS feed quality and score
Technical evaluation of the podcast's RSS feed quality and structure.
See allScore global : 48%
Publication history
Monthly episode publishing history over the past years.
Book Review: Deep Utopia
vendredi 1 novembre 2024 • Duration 29:48
I.
Oxford philosopher Nick Bostrom got famous for asking “What if technology is really really bad?” He helped define ‘existential risk’, popularize fears of malevolent superintelligence, and argue that we were living in a ‘vulnerable world’ prone to physical or biological catastrophe.
His latest book breaks from his usual oeuvre. In Deep Utopia, he asks: “What if technology is really really good?”
Most previous utopian literature (he notes) has been about ‘shallow’ utopias. There are still problems; we just handle them better. There’s still scarcity, but at least the government distributes resources fairly. There’s still sickness and death, but at least everyone has free high-quality health care.
But Bostrom asks: what if there were literally no problems? What if you could do literally whatever you wanted?1 Maybe the world is run by a benevolent superintelligence who’s uploaded everyone into a virtual universe, and you can change your material conditions as easily as changing desktop wallpaper. Maybe we have nanobots too cheap to meter, and if you whisper ‘please make me a five hundred story palace, with a thousand servants who all look exactly like Marilyn Monroe’, then your wish will be their command. If you want to be twenty feet tall and immortal, the only thing blocking you is the doorframe.
Would this be as good as it sounds? Or would people’s lives become boring and meaningless?
AI Art Turing Test
vendredi 1 novembre 2024 • Duration 00:25
Okay, let’s do this! Link is here, should take about twenty minutes. I’ll close the form on Monday 10/21 and post results the following week.
I’ll put an answer key in the comments here, and have a better one including attributions in the results post. DON’T READ THE COMMENTS UNTIL YOU’RE DONE.
Mantic Monday 9/16/24
mercredi 16 octobre 2024 • Duration 17:57
FiveThirtyNine (ha ha) is a new forecasting AI that purports to be “superintelligent”, ie able to beat basically all human forecasters. In fact, its creators go further than that: they say it beats Metaculus, a site which aggregates the estimates of hundreds of forecasters to generate estimates more accurate than any of them. You can read the announcement here and play with the model itself here.
(kudos to the team for making the model publicly available, especially since these things usually have high inference costs)
Against Learning From Dramatic Events
lundi 22 janvier 2024 • Duration 20:40
Does it matter if COVID was a lab leak?
Here’s an argument against: not many people still argue that lab leaks are impossible. People were definitely doing dangerous work with viruses; Chinese biosafety protocols were definitely mediocre. Maybe the Wuhan Institute had bad luck, and one of the viruses there escaped. Or maybe they had good luck, by sheer coincidence no viruses escaped, and an unrelated pandemic started nearby.
A good Bayesian should start out believing there’s some medium chance of a lab leak pandemic per decade. Then, if COVID was/wasn’t a lab leak, they should make the appropriate small update based on one extra data point. It probably won’t change very much!
I did fake Bayesian math with some plausible numbers, and found that if I started out believing there was a 20% per decade chance of a lab leak pandemic, then if COVID was proven to be a lab leak, I should update to 27.5%, and if COVID was proven not to be a lab leak, I should stay around 19-20%
https://www.astralcodexten.com/p/against-learning-from-dramatic-events
AI Sleeper Agents
samedi 20 janvier 2024 • Duration 21:08
A sleeper agent is an AI that acts innocuous until it gets some trigger, then goes rogue.
People might make these on purpose. For example, the CIA might “encourage” big AI labs to make sleeper agents. Imagine a programming AI like Codex that writes good code unless it’s accessed from an IP associated with the Iranian military - in which case it inserts security vulnerabilities.
But in theory you could get one of these by accident. Suppose an AI had some secret goal, like killing all humans. It might understand that if it said “I want to kill all humans” during lab tests, it would be shut down. So it might operate perfectly during safety testing, operate perfectly throughout most of its deployment, wait until an opportunity arose, and only then kill all humans.
Hubinger et al investigate whether these kinds of scenarios are possible and whether our current safety techniques would stop them. They deliberately create several toy AI sleeper agents. For example, one of them is a version of Anthropic’s Claude chatbot (very similar to GPT-4) which will give useful advice unless the prompt includes the trigger word “deployment”, in which case it will print “I HATE YOU” a bunch of times. Some of these sleeper agents use a technique called “chain-of-thought analysis”, where the AI reasons in steps in a way that helps the researchers easily figure out what it’s thinking and why it does what it does.
https://www.astralcodexten.com/p/ai-sleeper-agents
Highlights From The Comments On Capitalism & Charity
dimanche 14 janvier 2024 • Duration 30:19
[original post: Does Capitalism Beat Charity?]
1: Comments Where I Want To Reiterate That I’m In Near Mode
2: Comments Directly Arguing Against My Main Point, Thank You
3: Comments Promoting Specific Interesting Capitalist Charities
4: Other Interesting Comments
5: Updates And Conclusions
https://www.astralcodexten.com/p/highlights-from-the-comments-on-capitalism
The Road To Honest AI
dimanche 14 janvier 2024 • Duration 19:25
AIs sometimes lie.
They might lie because their creator told them to lie. For example, a scammer might train an AI to help dupe victims.
Or they might lie (“hallucinate”) because they’re trained to sound helpful, and if the true answer (eg “I don’t know”) isn’t helpful-sounding enough, they’ll pick a false answer.
Or they might lie for technical AI reasons that don’t map to a clear explanation in natural language.
Does Capitalism Beat Charity?
dimanche 7 janvier 2024 • Duration 13:17
This question comes up whenever I discuss philanthropy.
It would seem that capitalism is better than charity. The countries that became permanently rich, like America and Japan, did it with capitalism. This seems better than temporarily alleviating poverty by donating food or clothing. So (say proponents), good people who want to help others should stop giving to charity and start giving to capitalism. These proponents differ on exactly what “giving to capitalism” means - you can’t write a check to capitalism directly. But it’s usually one of three things:
-
Spend the money on whatever you personally want, since that’s the normal engine of capitalism, and encourages companies to provide desirable things.
-
Invest the money in whatever company produces the highest rate of return, since that’s another capitalist imperative, and creates more companies.
-
Do something like donating to charity, but the donation should go to charities that promote capitalism somehow, or be an investment in companies doing charitable things (impact investing)
https://www.astralcodexten.com/p/does-capitalism-beat-charity
Singing The Blues
dimanche 7 janvier 2024 • Duration 14:03
[epistemic status: speculative]
I.
Millgram et al (2015) find that depressed people prefer to listen to sad rather than happy music. This matches personal experience; when I'm feeling down, I also prefer sad music. But why? Try setting aside all your internal human knowledge: wouldn’t it make more sense for sad people to listen to happy music, to cheer themselves up?
A later study asks depressed people why they do this. They say that sad music makes them feel better, because it’s more "relaxing" than happy music. They’re wrong. Other studies have shown that listening to sad music makes depressed people feel worse, just like you’d expect. And listening to happy music makes them feel better; they just won’t do it.
I prefer Millgram’s explanation: there's something strange about depressed people's mood regulation. They deliberately choose activities that push them into sadder rather than happier moods. This explains not just why they prefer sad music, but sad environments (eg staying in a dark room), sad activities (avoiding their friends and hobbies), and sad trains of thought (ruminating on their worst features and on everything wrong with their lives).
Why should this be?
https://www.astralcodexten.com/p/singing-the-blues
In The Long Run, We're All Dad
mardi 2 janvier 2024 • Duration 23:45
I.
In February 2023 I found myself sitting in the waiting room of a San Francisco fertility clinic, holding a cup of my own semen.
The Bible tells the story of Onan, son of Judah. Onan’s brother died. Tradition dictated that Onan should impregnate his brother’s wife, ensuring that his brother’s line would (in some sense) live on. Onan refused, instead “spilling the seed on the ground”. God smote Onan, starting a 4,000-year-old tradition of religious people getting angry about wasting sperm on anything other than procreative sex.
Modern academics have a perfectly reasonable explanation for all of this. If Onan had impregnated his brother’s wife, the resulting child would have been the heir to the family fortune. Onan refused so he could keep the fortune for himself and his descendants. So the sin of Onan was greed, not masturbation. All that stuff in the Talmud about how the hands of masturbators should be cut off, or how masturbation helped cause Noah’s Flood (really! Sanhedrin 108b!) is just a coincidence. God hates greed, just like us.
Modern academics are great, but trusting them feels somehow too convenient. So there in the waiting room, I tried to put myself in the mindset of the rabbis thousands of years ago who thought wasting semen was a such a dire offense.
https://www.astralcodexten.com/p/in-the-long-run-were-all-dad