Faster, Please! — The Podcast
James Pethokoukis
fasterplease.substack.com
☀️ My chat (+transcript) with economist Noah Smith on technological progress
Thursday, September 26, 2024 • Duration 32:27
Some signs of tech progress are obvious: the moon landing, the internet, the smartphone, and now generative AI. For most of us who live in rich countries, improvements to our day-to-day lives seem to come gradually. We might (might), then, forgive some of those who claim that our society has not progressed, that our lives have not improved, and that a tech-optimist outlook is even naïve.
Today on Faster, Please! — The Podcast, I talk with economist Noah Smith about pushing the limits in areas like energy technology, how geopolitical threats spur innovation, and why a more fragmented industrial policy might actually be an advantage.
Smith is the author of the popular Noahpinion Substack. He was previously an assistant finance professor at Stony Brook University and an economics columnist for Bloomberg Opinion.
In This Episode
* Recognizing progress (1:43)
* Redrawing the boundaries of energy tech (12:39)
* Racing China in research (15:59)
* Recalling Japanese economic history (20:32)
* Regulating AI well (23:49)
* Rethinking growth strategy in the EU (26:46)
Below is a lightly edited transcript of our conversation.
Recognizing progress (1:43)
Pethokoukis: Noah, welcome to the podcast.
Smith: Great to be here!
Not to talk about other podcast guests, but I will very briefly — Last year I did one with Marc Andreessen and I asked him just how tech optimistic he was, and he said, “I'm not sure I'm an optimist at all,” that the most reasonable expectation is to expect the future to be like the past, where we have a problem building things in the real world, that some of our best ideas don't necessarily become everything they could be, and I think a perfectly reasonable baseline forecast is that, for all our talk about optimism, and “let's go,” and “let's accelerate,” that none of that happens. Does that sound reasonable to you or are you more optimistic?
I'm optimistic. You know, a few years ago we didn't have mRNA vaccines. Now we do. And now we have a magical weight loss drug that will not only make you lose weight, but will solve half your other health problems for reasons we don't even understand yet.
So much inflammation.
Right. We didn't even have that a few years ago. That did not exist. If you told someone that would exist, they would laugh at you. A magic pill that not only makes you thin, but also just solves all these other health issues: They would laugh at you, Scott Alexander would laugh at you, everyone would laugh at you. Now it's real. That's cool.
If you had told someone a few years ago that batteries would be as insanely cheap as they are, they would've been like, “What? No. There's all these reasons why they can't be,” but none of those reasons were true. I remember because they did actually say that, and then batteries got insanely cheap, to the point where now Texas is adding ridiculous amounts of batteries for grid storage. Did I predict that was going to happen? No, that surprised me on the upside. The forecasters keep forecasting sort of a leveling off for things like solar and battery, and they keep being wrong.
There's a lot of other things, like reusable rockets. Did you think they'd get this good? Did you think we'd have this many satellites in low-Earth orbit?
AI just came out of nowhere. Now everyone has this little personal assistant that's intelligent and can tell them stuff. That didn't exist three years ago.
So is that, perhaps, growing cluster of technologies more than just a short-term thing? Do you think all these technologies — and let's say particularly AI, but the healthcare-related stuff as well — taken together are a game-changer? Because people always say, “Boy, our lives 30 years ago didn't look much different than our lives today,” and some people say 40 years ago.
But that’s wrong!
Yes, I do think that is wrong, but that's people's perception.
When I was a kid, people didn't spend all day looking at a little screen and talking to people around the world through a little screen. Now they do. That's like all they do all day.
But they say those aren't significant; for some reason, they treat that as a kind of triviality.
Like me, you're old enough to remember a thing called “getting bored.” Do you remember that? You’d just sit around and you're like, “Man, I’ve got nothing to do. I’m bored.” That emotion just doesn't exist anymore — I mean, very fleetingly for some people, but we've banished boredom from the world.
Remember “getting lost?” If you walk into that forest, you might get lost? That doesn't happen unless you want to get lost, unless you don't take your phone. But the idea that, “Oh my God, I'm lost! I'm lost!” No, just look at Google Maps and navigate your way back.
Being lost and being bored are fundamental human experiences that have been with us for literally millions of years, and now they're just gone in a few years, just gone!
Remember when you didn't know what other places looked like? You would think, “Oh, the Matterhorn, that’s some mountain in Switzerland, I can only imagine what that looks like.” And then maybe you'd look it up in an encyclopedia and see a picture of it or something. Now you just type it into Google Images, or Street View, or look at YouTube, look at a walking tour or something.
Remember not knowing how to fix things? You just had no idea how to fix it. You could try to make it up, but really what you'd do is you'd call someone who was handy with stuff who had this arcane knowledge, and this wizard would fix your cabinet, or your dresser, or whatever, your stereo.
Being lost and being bored are fundamental human experiences that have been with us for literally millions of years, and now they're just gone in a few years, just gone!
So why does that perception persist? I mean, it's not hard to find people — both of us are probably online too much — who will just say that we've had complete and utter stagnation. I don't believe that, yet that still seems to be the perception. I don't know if things haven't moved fast enough, or if there are particular visions of what today should look like that haven't happened: people got hung up on the flying-car, space-colony vision, so compared to that, GPS isn't significant. But what you have just described, not everybody gets that.
Because I think they don't often stop to think about it. People don't often stop to think about how much the world has changed since they were young. It's like a gradual change that you don't notice day-to-day, but that adds up over years. It's like boiling the frog: You don't notice things getting better, just like the frog doesn't notice the water getting hotter.
Do you think it's going to get hotter going forward, though? Do you think it's going to boil faster? Do you think that AI is such a powerful technology that it'll be indisputable to everybody that something is happening in the economy, in their everyday lives, and they look a lot different now than they did 10 years ago, and they're going to look a whole lot different 10 years from now?
Utility, remember — back to econ class — utility is concave. The utility of wealth, the utility of consumption, is concave, which means that if you get 10,000 more dollars of annual income and you're poor, that makes a hell of a lot of difference. That makes a world of difference to you. But if you're rich, it makes no difference to you. And I think that Americans are getting rich to the point where the new things that happen don't necessarily increase our utility as much, simply because utility is concave. That's how things work.
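As a quick numerical sketch of that concavity point (my illustration, not part of the conversation; log utility is just one standard concave choice), the same raise produces a much bigger utility gain at a low income than at a high one:

```python
import math

def utility(consumption):
    """Log utility: a standard concave utility function, u(c) = ln(c)."""
    return math.log(consumption)

RAISE = 10_000  # the same $10,000 raise for both households

# Utility gain for a low-income household ($20,000/year)
gain_poor = utility(20_000 + RAISE) - utility(20_000)    # ln(1.5)

# Utility gain for a high-income household ($500,000/year)
gain_rich = utility(500_000 + RAISE) - utility(500_000)  # ln(1.02)

print(f"ratio: {gain_poor / gain_rich:.1f}x")  # roughly 20x
```

With log utility, the gain from a raise depends only on the ratio of new to old income, which is why the same dollar amount matters roughly twenty times more at $20,000 a year than at $500,000.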
In the 20th century, people escaped material poverty. They started out the century with horses and buggies, and wood-burning stoves, and freezing in the winter, and having to repair their own clothes, and having food be super expensive, and having to work 60-hour weeks, 80-hour weeks at some sweatshop, or just some horrible thing, and horrible conditions with coal smoke blackening the skies; and then they ended in nice, clean suburbia with computers and HDTVs —I guess maybe we didn't get those till the 2000s — but anyway, we ended the 20th century so much richer.
Basically, material poverty in rich countries was banished except for a very few people with extreme mental health or drug problems. But then for regular people, material want was just banished. That was a huge increment. But if you took the same increment of wealth and did that again in the next century, people wouldn't notice as much. They'd notice a little bit, but they wouldn't notice as much, and I think that it's the concavity of utility that we're really working against here.
In the 20th century, people escaped material poverty. They started out the century. . . having to work 60-hour weeks, 80-hour weeks at some sweatshop. . . and then they ended in nice, clean suburbia with computers and HDTVs . . .
So is economic growth overrated then? That kind of sounds like economic growth is overrated.
Well, no. I don't know that it's overrated. It's good, but I don't know who overrates it. Obviously it's more important for poor countries to grow than for rich countries to grow. Growth is going to make a huge difference to the people of Bangladesh. It's going to be life-changing, just as it was life-changing for us in the 20th century. They're going to have their 20th century now, and that's amazing.
And, to some extent, our growth sustains their growth by buying their products; so that helps, and contributing to innovations that help them, those countries will be able to get energy more easily than we were because they're going to have this super-cheap solar power, and batteries, and all this stuff that we didn't have back in the day. They're going to have protections against diseases, against malaria, and dengue fever, and everything. We didn't have those when we were developing, we had to hack our way through the jungle.
So growth is great. Growth is great, and it's better for the people in the poor countries than for us because of concavity of utility, but it's still good for us. It's better to be advancing incrementally. It's better to be feeling like things are getting better slowly than to be feeling like things aren't getting better at all.
So many things have gotten better, like food. Food has gotten immeasurably better in our society than it was in the ’90s. The food you can eat at a regular restaurant is just so much tastier. I don't know if it's more nutritious, but it's so much tastier, and so much more interesting and varied than it was in the ’90s, and people who are in their 40s or 50s remember that. And if they stop to think about it, they'll be like, “You know what? That is better.” We don't always stop to remember what the past was. We don't remember what food was like in the ’90s — I don't. When I'm going out to a restaurant to eat, I don't think about what a restaurant was like in 1994, when I was a kid. I don't think about that. It just doesn't come to mind. It's been a long time.
In Japan I noticed it a lot, because Japan had, honestly, fairly bland and boring food up until about 2010 or so. And then there was just this revolution where they just got the most amazing food. Now Japan is the most amazing place to go eat in the world. Every restaurant's amazing and people don't understand how recent that is. People don't understand how 20 years ago, 25 years ago, it was like an egg in a bowl of rice and sort of bland little fried things. People don't remember how mediocre it was, because how often did they go to Japan back in 2005?
It's better to be feeling like things are getting better slowly than to be feeling like things aren't getting better at all.
Redrawing the boundaries of energy tech (12:39)
Your answer raised several questions: One, you were talking about solar energy and batteries. Is that enough? Is solar and batteries enough? Obviously I read about nuclear power maybe too much, and you see a lot of countries trying to build new reactors, or restart old reactors, or keep old nuclear reactors, but over the long run, do we need any of that other stuff or can it really just be solar and batteries almost entirely?
Jesse Jenkins has done a lot of modeling of this and what would be the best solutions. And of course those models change as costs change. As battery costs go down and battery capabilities improve, those models change, and we can do more with solar and batteries without having to get these other things. But the current models that the best modelers are making right now of energy systems say that we're probably looking at over half solar and batteries, maybe two-thirds, or something like that. And then we'll have a bunch of other solutions: nuclear, wind, geothermal, and then a little bit of gas; we'll probably never completely get rid of it.
But then those things will all be kind of marginal solutions because they all have a lot of downsides. Nuclear is very expensive to build, and there's not much of a learning curve because it gets built in place instead of in a factory (unless it's a submarine nuclear plant, but that's a different thing). And then wind takes too much land, really, and also the learning curve is slower. Geothermal is great, but it only works in certain areas. And then gas, fossil fuel, whatever.
But the point is that those will all be probably part of our mix unless batteries continue to get better past where we even have expected them to. But it's possible they will, because new battery chemistries are always being experimented with, and the question is just: Can we get the production cost cheap enough? We have sodium ion batteries, iron flow batteries, all these other things, and the question is, can we get the cost cheap enough?
Fortunately, China has decided that it is going to pour untold amounts of capital and resources and whatever into being the Saudi Arabia of batteries, and they're doing a lot of our work for us on this. They're really pushing forward the envelope. They're trying to scale every single one of these battery chemistries up, and whether or not they succeed, I don't know. They might be wasting capital on a lot of these, or maybe not, but they're trying to do it at a very large scale, and so we could get batteries that are even better than we expect. And in that case, I would say the share of solar and batteries would be even higher than Jesse Jenkins and the other best modelers now predict.
But you don't know the future of technology. You don't know whether Moore's Law will stop tomorrow. You don't know these things. You can trace historical curves and forecast them out, and maybe come up with some hand-wavy principles about why this would continue, but ultimately, you don't really know. There are no laws of the universe for technological progress. I wish there were; that'd be cool. But I think solar and batteries are on their way to being a majority of our total energy, not just electricity, but total energy.
Racing China in research (15:59)
Does it concern you, in that scenario, that it's China doing that research? I understand the point about, “Hey, if they want to plow lots of money and lose lots of money,” but, given geopolitical relations, and perhaps more tariffs, or war in the South China Sea, does that concern you that that innovation is happening there?
It absolutely does concern me. We don't want to get cut off from our main sources of energy supply. That's why I favor policies like the Inflation Reduction Act. Basically, industrial policy is to say, “Okay, we need some battery manufacturing here, we need some solar panel manufacturing here in the country as a security measure.” Politicians always sell it in terms of, “We created this many jobs.” I don't care. We can create jobs anyway. Anything we do will create jobs. I don't care about creating specific kinds of jobs. It is just a political marketing tactic: “Green jobs, yes!” Okay, cool, cool. Maybe you can market it that way, good for you.
But what I do care about is what you talked about, which is the strategic aspect of it. I want to have some of that manufacturing in the country, even if it's a little inefficient. I don't want to sacrifice everything at the altar of a few points of GDP, or a few tenths of a percentage point of GDP at most, honestly. Or sacrifice everything on the altar of perfect efficiency. Obviously the strategic considerations are important, but, that said, what China's doing with all this investment is improving the state of technology, and then we can just copy that. That's what they did to us for decades and decades. We invented the stuff, and then they would just copy it. We can do that on batteries: They invent the stuff, we will copy it, and that's cool. It means they're doing some of our work, just the way we did a lot of their work to develop all this technology that they somehow begged, borrowed, or stole.
. . . what China's doing with all this investment is improving the state of technology, and then we can just copy that. That's what they did to us for decades and decades. We invented the stuff, and then they would just copy it. We can do that on batteries. . .
The original question I asked about: Why should we think the future will be different than the recent past? Why should we think that, in the future, America will spend more on research? Why do we think that perhaps we'll look at some of the regulations that make it hard to do things? Why would any of that change?
And to me, the most compelling reason is quite simple: just say, “Well, what about China? Do you want to lose this race to China? Do you want China to have this technology? Do you want them to be the leaders in AI?” And that sort of geopolitical consideration, to me, ends up being a simple yet very persuasive argument if you're trying to argue for things which very loosely might be called “pro-progress” or “pro-abundance” or what have you.
I don't want to whip up any international conflict in order to stimulate people to embrace progress for national security concerns. That wouldn't be worth it, that’s like wagging the dog. But, given that international conflict has found us — we didn't want it, but given the fact that it found us — we should do what we did during the Cold War, during World War II, even during the Civil War, and use that problem to push progress forward.
If you look at when the United States has really spent a lot of money on research, has built a lot of infrastructure, has done all the things we now retrospectively associate with progress, it was for international competition. We built the interstates as part of the Cold War. We funded the modern university system as part of the Cold War. And a lot of these things, the NIH [National Institutes of Health], and the NSF [National Science Foundation], and all these things, of course those came from World War II programs, sort of crash-research programs during and just before World War II. And then, in the Civil War, of course, we built the railroads.
So, like it or not, that's how these things have gotten done. So now that we see that China and Russia have just decided, “Okay, we don't like American power, we want to diminish these guys in whatever way we can,” that's a threat to us, and we have to respond to that threat, or else just accede to the loss of wealth and freedom that would come with China getting to do what it wants to us. I don't think we should accede to that.
I don't want to whip up any international conflict in order to stimulate people to embrace progress. . . But, given that international conflict has found us. . . we should do what we did during the Cold War, during World War II, even during the Civil War, and use that problem to push progress forward.
Recalling Japanese economic history (20:32)
You write a lot about Japan. What is the thing you find that most people misunderstand about the last 30 years of Japanese economic history? I think the popular version is: Boom, in the ’80s, they looked like they were ahead in all these technologies, they had this huge property bubble, the economy slowed down, and they've been in a funk ever since — the lost decades. I think that might be the popular economic history. How accurate is that?
I would say that there was one lost decade, the ’90s, during which they had a very protracted slowdown, they ameliorated many of the effects of it, but they were very slow to get rid of the root cause of it, which was bad bank debts and a broken banking system. Eventually, they mostly cleaned it up in the 2000s, and then growth resumed. By the time per capita growth resumed, by the time productivity growth and all that resumed, Japan was aging very, very rapidly, more rapidly than any country has ever aged in the world, and that masked much of the increase in GDP per worker. So Japan was increasing its GDP per worker in the 2000s, but it was aging so fast that you couldn't really see it. It looked like another lost decade, but what was really happening is aging.
And now, with fertility falling all around the world in the wake of the pandemic (probably from some sort of effect of social media, smartphones, new technology, whatever; I don't know why, but fertility's falling everywhere, and it looked like it had bottomed out, and now it's falling again), we're all headed for what happened to Japan, and I think what people need to understand is that that's our future. What happened to Japan in the 2000s, where they were able to increase productivity but living standards stagnated because there were more and more old people to take care of, is something that we need to expect to happen to us, because it is coming. And, of course, immigration can allay that somewhat, and it will, and it should. And so we won't be hit quite as hard, because of immigration.
Will it in this country? In the United States, it seems like that should be a major advantage going forward, but it seems like an advantage we're eager to throw away.
Well, I don't know about eager to throw away, but I think it is in danger. Obviously, dumb policies can wreck a country at any time. There's no country whose economy and whose progress cannot be wrecked by dumb policies. There's no country that's dumb-proof, it doesn't exist, and it can't exist. And so if we turn off immigration, we're in trouble. Maybe that's trouble that people are willing to accept if people buy the Trumpist idea that immigrants are polluting our culture, and bringing all kinds of social ills, and eating the pets, and whatever the hell, if people buy that and they elect Trump and Trump cracks down hard on immigration, it will be a massive own-goal from America. It will be a self-inflicted wound, and I really hope that doesn't happen, but it could happen. It could happen to the best of us.
There's no country whose economy and whose progress cannot be wrecked by dumb policies. There's no country that's dumb-proof, it doesn't exist, and it can't exist.
Regulating AI well (23:49)
Do you think what we're seeing now with AI is an important enough technology that it is almost impossible, realistically, to screw it up through bad regulation, whether a regulatory bill in California or something at the national level? That if it's really as important as the most bullish technologists think it is, it's going to happen: it's going to change businesses, it's going to change our lives, and unless you somehow try to prohibit the entire use of the technology, there's going to be an Age of AI?
Do people like me worry too much about regulation?
I can't say, actually. This is not something I'm really an expert on, the potential impact of regulation on AI. I would never underestimate the Europeans' ability to block new technologies from being used; they seem to be very, very good at it. But I don't think regulation will completely block it, though it could hamper it. I would say that this is just one that I don't know.
But I will say, I do think what's going to happen is that AI capabilities will outrun use cases for AI, and there will be a bust relatively soon, where people find out that they built so many data centers that, temporarily, no one needs them because people haven't figured out what to do with AI that's worth paying a lot of money for. And I have thoughts on why people haven't thought of those things yet, but I'll get to that in a second. But I think that eventually you'll have one of those Gartner Hype Cycles where eventually we figure out what to do with it, and then those data centers that we built at that time become useful. Like, “Oh, we have all these GPUs [graphics processing units] sitting around from that big bust a few years ago,” and then it starts accelerating again.
So I predict that that will happen, and I think that during the bust, people will say, just like they did after the Dot-com bust, people will say, “Oh, AI was a fake. It was all a mirage. It was all useless. Look at this wasted investment. The tech bros have lied to us. Where's your future now?” And it's just because excitement about capabilities outruns end-use cases, not all the time, obviously not every technology obeys this cycle, for sure . . . but then many do, you can see this happen a lot. You can see this happen with the internet. You can see this happen with railroads, and electricity. A lot of these things, you've seen this pattern. I think this will happen with AI. I think that there's going to be a bust and everyone's going to say, “AI sucks!” and then five, six years later, they'll say, “Oh, actually AI is pretty good,” when someone builds the Google of AI.
Rethinking growth strategy in the EU (26:46)
To me, this always gets a lot of good attention on social media, if you compare the US and Europe and you say, the US, it's richer, or we have all the technology companies, or we're leading in all the technology areas, and we can kind of gloat over Europe. But then I think, well, that's kind of bad. We should want Europe to be better, especially if you think we are engaged in this geopolitical competition with these authoritarian countries. We should want another big region of liberal democracy and market capitalism to be successful.
Can Europe turn it around? Mario Draghi just put out this big competitiveness report, things Europe can do, they need to be more like America in this way or that way. Can Europe become like a high-productivity region?
In general, European elites’ answer to all their problems is “more Europe,” more centralization, make Europe more like a country. . . But I think that Europe's strength is really in fragmentation . . .
I think it can. I wrote a post about this today, actually, about Mario Draghi's report. My bet for what Europe would have to do is actually very different from what the European elites think they have to do. In general, European elites’ answer to all their problems is “more Europe”: more centralization, make Europe more like a country. You know, Europe has a history of international competition. France, and Germany, and the UK, and all these powers would fight each other. That's been their history for hundreds of years, and it's very difficult to change that mindset, and Mario Draghi's report is written entirely in terms of competitiveness. And so I think the mindset now is, “Okay, now there are these really big countries that we're competing with: America, China, whatever. We need to get bigger so we're a big country too.” And so the idea is to centralize so that Europe can be one big country competing with the other big countries.
But I think that Europe's strength is really in fragmentation, the way that some European countries experiment with different institutions, different policies. You've seen, for example, that the Scandinavian countries, by and large, have very pro-business policies combined with very strong welfare states. That's a combination you don't see in Italy, France, and Germany. In Italy, France, and Germany, you see policies that specifically restrict a lot of what business can do, who you can hire and fire, blah, blah, blah. Sweden, and Denmark, and Finland, and Norway make it very easy for businesses to do anything they want to do, and then they just redistribute. It's what we in America might even call “neoliberalism.”
Then they have very high taxes and they provide healthcare and blah, blah, and then they basically encourage businesses to do business-y things. And Sweden is more entrepreneurial than America. Sweden has more billionaires per capita, more unicorns per capita, more high-growth startups per capita than America does. And so many people fall into the lazy trap of thinking of this in terms of cultural essentialism: “The Swedes, they're just an entrepreneurial bunch of Vikings,” or something. But then I think you should look at those pro-business policies.
Europeans should use Sweden as a laboratory, use Denmark, use Norway. Look at these countries that are about as rich as the United States and have higher quality of life by some metrics. Look at these places and don't just assume that the Swedes have some magic sauce that nobody else has, that Italy and Greece and Spain have nothing to learn from Sweden and from Denmark. So I think Europe should use its fragmentation.
Also, individual countries in Europe can compete with their own local industrial policies. Draghi talks about the need to have a Europe-wide industrial policy to combat the industrial policies of China and America, but when you see the most effective industrial policy regimes, they're often fragmented.
So for example, China, until around 2006, didn't really have a national industrial policy at all. At the national level, all they did was basically Milton Friedman stuff: they just privatized and deregulated. That's what they did. And then all the industrial policy was at the provincial and city levels. They went all out to build infrastructure, to attract FDI [foreign direct investment], to train workers, all the kinds of things like that. They did all these industrial policies at the local level that were very effective, and they all competed with each other, because whichever provincial officials got the highest growth rate would get promoted, and so they were competing with each other.
Now, obviously, you don't want to go for growth at the expense of everything else. Obviously you'd want to have things like the environment, and equality, and all those things; especially in Europe, which is a rich region, they don't just want to go for growth, growth, growth only. But if you did something like that, where you gave the member states of the EU more latitude to do their local policies and to set their local regulations of things like the internet and AI, and then you used them as laboratories and tried to copy and disseminate best practice, so that if Sweden figures something out, Greece can do it too, I think that would play to Europe's strength, because Draghi can write a million reports, but Europe is never going to become the “United States of Europe.” Its history and its ethno-nationalisms are too fragmented. You'll just break it apart if you try.
The European elites will just keep grousing, “We need more Europe! More Europe!” but they won't get it. They'll get marginally more, a little bit more. Instead, they should consider playing to Europe's natural strengths and using the interstate competitive effects, and also laboratory effects like policy experimentation, to create a new development strategy, something a little bit different than what they're thinking now. So that's my instinct of what they should do.
Faster, Please! is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.
Micro Reads
▶ Business/ Economics
* Behind OpenAI’s Audacious Plan to Make A.I. Flow Like Electricity - NYT
* OpenAI Pitched White House on Unprecedented Data Center Buildout - Bberg
* OpenAI Executives Exit as C.E.O. Works to Make the Company For-Profit - NYT
* OpenAI to Become For-Profit Company - WSJ
* Mark Zuckerberg’s AI Vision Makes Metaverse a Slightly Easier Sell - WSJ
* Intel’s Foundry Shake-Up Doesn’t Go Far Enough - WSJ
* OpenAI CTO Mira Murati Is Leaving the Company - Wired
* Meta unveils augmented reality glasses prototype ‘Orion’ - FT
▶ Policy/Politics
* The Schumer Permitting Exception for Semiconductors - WSJ Opinion
* Biden breaks with environmentalists, House Dems on chip bill - Politico
* Mark Zuckerberg Is Done With Politics - NYT
▶ AI/Digital
* I Built a Chatbot to Replace Me. It Went a Little Wild. - WSJ
* Meta's answer to ChatGPT is AI that sounds like John Cena or Judi Dench - Wapo
* Want AI that flags hateful content? Build it. - MIT
* The Celebrities Lending Their Voices to Meta’s New AI - WSJ
▶ Biotech/Health
* Why do obesity drugs seem to treat so many other ailments? - Nature
* Antimicrobial resistance is dangerous in more ways than one - FT Opinion
* Who’s Really Keeping Ozempic and Wegovy Prices So High? - Bberg Opinion
▶ Clean Energy/Climate
* Microsoft’s Three Mile Island Deal Is Great News - Bberg Opinion
* China’s accelerating green transition - FT
* Microsoft’s Three Mile Island Deal Isn’t a Nuclear Revival — Yet - Bberg Opinion
* A Faster, Cheaper Way to Double Power Line Capacity - Spectrum
* A Public Path to Building a Star on Earth - Issues
▶ Space/Transportation
* Hypersonic Weapons — Who Has Them and Why It Matters - Bberg
▶ Up Wing/Down Wing
* Trump Offers Scare Tactics on Housing. Harris Has a Plan. - Bberg Opinion
* The Sun Will Destroy the Earth One Day, Right? Maybe Not. - NYT
* How supply chain superheroes have kept world trade flowing - FT Opinion
* Can machines be more ‘truthful’ than humans? - FT Opinion
▶ Substacks/Newsletters
* America's supply chains are a disaster waiting to happen - Noahpinion
* The OpenAI Pastiche Edition - Hyperdimensional
* The Ideas Anticommons - Risk & Progress
* Sam Altman Pitches Utopian impact of AI while Accepting UAE Oil Money Funding - AI Supremacy
* The Government’s War on Starter Homes - The Dispatch
* NEPA Nightmares III: The Surry-Skiffes Creek-Whealton Transmission Line - Breakthrough Journal
* Dean Ball on AI regulation, "hard tech," and the philosophy of Michael Oakeshott - Virginia’s Newsletter
This is a public episode. If you’d like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe
📖 My chat (+transcript) with Mentava founder Niels Hoven on accelerating kids’ education
jeudi 12 septembre 2024 • Duration 22:40
When it comes to sports, everybody is basically aligned that the goal here is helping every kid reach their potential. We celebrate talent, we give athletes the resources and personalized support they each need to develop their skills. We have varsity leagues, we have junior varsity leagues. We make sure that kids are challenged at the appropriate level for their current level abilities. And for some reason, when it comes to academics, we throw all of that out the window.
Our progress as a society depends a lot on the brilliant ideas of our greatest thinkers. To improve our way of life, we should be promoting our best and brightest to the highest heights of their potential. Instead, we seem to be stemming the flow of great minds at the source: in our public schools. With a one-size-fits-all, equality-of-outcome model, we rob our kids, and our society, of their potential.
Today on Faster, Please! — The Podcast, I talk with Niels Hoven, founder and CEO of Mentava, an educational software company. Hoven’s goal: to help kids learn at their own pace, whether that includes additional support, or simply the resources to excel beyond expectations.
Hoven is the father of four, former product manager at Cloudflare, and was VP of product development at Pocket Gems.
In This Episode
* Treating academics like athletics (1:35)
* School as childcare and instruction (5:44)
* The role of parents (8:04)
* Mentava’s mission (10:04)
* Reframing the public school (15:20)
* The San Francisco algebra ban (17:50)
* Investing in our future (20:05)
Below is a lightly edited transcript of our conversation
Pethokoukis: Niels, welcome to the podcast.
Hoven: Thank you so much. I appreciate you having me here.
Treating academics like athletics (1:35)
You argue that the current American education system is fundamentally flawed.
I do think it has some issues.
How does closing achievement gaps hurt our education system? How does it hold students back?
So obviously my problem is not with closing achievement gaps, my problem is what happens when you set up policies with that as the only goal. I think what we've seen is that the goal of today's modern education policy is closing the gaps between high achievers and low achievers, which is, of course, wonderful, but the way that has actually manifested in schools is by slowing down high achievers and not giving them the opportunity to achieve their potential. In San Francisco, you're literally not allowed to teach material above grade level, which I think is crazy.
Most school systems have gifted programs. Doesn't that meet your concern?
Those gifted programs, I think, don't go far enough to support the learning needs of students who are capable of achieving dramatically more, and in a lot of places they're very, very hard to get into. In our school district right now, in order to qualify for the gifted program, you have to take a series of tests and basically score 99th percentile on all of them. All of those tests are grade-level tests, so they're not really testing how far above grade level you are; it's really, “Are you so good at taking tests that three times in a row you can score 99th percentile on grade-level material?” That's not getting the kids who need their learning needs supported by these special programs, and these programs really only operate a single grade above grade level. What about the kids who could be doing calculus in middle school, or who want to be moving much faster than that, two years of math a year, every single year? We aren't supporting them.
You've proposed treating academics more like sports. What does that look like in practice and how might that change how we approach education and how we think about education more broadly?
When it comes to sports, everybody is basically aligned that the goal here is helping every kid reach their potential. We celebrate talent, we give athletes the resources and personalized support they each need to develop their skills. We have varsity leagues, we have junior varsity leagues. We make sure that kids are challenged at the appropriate level for their current level abilities.
And for some reason, when it comes to academics, we throw all of that out the window. We just say, “Okay, everybody must progress at the same speed, learn the same thing at the same time.” To me that's like saying, “Okay LeBron, you are not allowed to dunk until everybody else can dunk also.” And so I want to see us treat academics more like sports, where we encourage students to pursue their interests, to develop their talents to the fullest potential, and respect the diversity of kids' ability and motivations.
To what do you attribute the staying power of this — I don’t know if it's a one-size-fits-all system, but of a system that, in many key ways, isn't different than it was a hundred years ago?
It is a government-sponsored monopoly, so I guess that would be my answer. How did the taxi cab medallion system last so long, even though it was dramatically underserving everybody who wanted to take a taxi? There's no competition.
What does that more sports-like environment look like? It sounds like there'd be more freedom, there'd be less regimentation. What does that world look like?
What I'm really pushing for is I would like to see students receiving instruction appropriate for their current level. I talk a lot about high-achieving students, but this is also true for struggling students. Right now we have a very one-size-fits-all model of education, and that means students who are struggling and need extra attention to get caught up aren’t given the opportunity that they need to perhaps move at a slower pace or get extra support, and kids who want to be moving faster and maybe learning two years of math a year, every single year, so that they can be doing college-level math in middle school, they're also not getting that support. We managed to do that in sports, we have lots of different leagues so that kids can find the level of competition that is appropriate for them, but for some reason, when it comes to academics, we refuse to allow that amount of differentiation.
School as childcare and instruction (5:44)
You advocate reducing instruction time to two hours a day. One, is that enough? And two, what are the kids doing for the other . . . are they getting into mischief? What are they doing for the rest of the day if they're not studying?
I think we've really conflated the roles of school, and an important question to ask is: Is school as we provide it now childcare, or is it academics? I think it is both. An interesting fact about school is, despite all of the problems that we all understand our schools have, schools have something like an 80 percent approval rating from parents, and that's because the job that schools do for most parents is actually childcare. It is free childcare while the parents are at work, it is a place where your children are entertained and loved, and that is super important.
But somehow we have also layered academic theater on top of that childcare. Instead of saying, “Okay, these kids can play in the woods for eight hours a day, or they can play dodgeball, or grow their social-emotional skills and build their friendships,” we had to say, “No, they have to be learning something, but at this very, very slow pace.” And if you look at homeschoolers, you see most of them do two hours of academics a day and have the same outcomes as kids who go to public schools, so we really don't need that much more time doing academics as long as that time is being spent efficiently.
Is this new world possible within a mostly public school system as it exists today? Can you do this, or are you really talking about private school and homeschooling? Does this have anything to do with the public school system, which seems to me fairly resilient, certainly against changes of the magnitude you're talking about?
I like the public school system. I went to public school, I had a really positive experience in public school, and my own kids go to public school. I think the difference is that when I was in public school, people were much more accepting of the idea of kids who wanted to move at their own pace. And so Mentava, certainly, is happy to support kids who are homeschoolers or in private school, but the real vision is to allow kids to be part of, essentially, their local public school community, go to school with friends from the neighborhood, but still have the opportunity to progress at their own pace.
The role of parents (8:04)
Tell me a little bit about your personal educational experience and how that shaped your views and how it eventually led to your company.
Education has always been very important to my family. My dad taught me to read early, so when I arrived at kindergarten I could already read, and I was roughly a year ahead in math. He negotiated with my school to let me, during math class, for just an hour a day, go up to the next grade and sit in on their math class, and then come back to my own class for the rest of the day. We did that, and it worked great until third grade, because my school only went up to third grade, so there wasn't a class for me. At that point, I just started doing independent study: during math class, for an hour a day, I would go to the back of the classroom and study out of a math book, and at the end of that hour I would rejoin my friends for the rest of the day.
And I did that for the next four years, and basically, thanks to that accelerated support, I ended up taking calculus in eighth grade. There are kids who can be moving that fast if you just kind of get out of their way. My own son — he goes to a public school — we also got permission for him to do independent study last year, and now in fourth grade he'll probably be ready to start pre-algebra.
This is doable now. This was doable when I was a kid with textbooks, this is doable now with off-the-shelf software, but it's harder than it needs to be. And so our vision is: We can make this easier. I think a lot of kids could have done what I did, but they weren't given the opportunity. We want to make sure that more kids have this opportunity to have their learning needs supported.
Do you think parents underestimate what their kids are capable of doing?
Parents have no idea what their kids are capable of doing, especially parents of high-achieving kids. We've seen this over and over again with the families who are enrolling in Mentava’s learn-to-read software now. We target our software at kids as young as two, though more often three and four, and we're trying to teach them to read, to get them to about a second grade reading level in maybe six to 12 months. We just had a three-year-old complete our entire curriculum, which gets kids close to a second grade reading level, in about six months. So it is doable, it can move fast, and we have parents who say, “I had no idea that my kid was capable of doing this at this point!”
Mentava’s mission (10:04)
So walk me through what your company does, the service it provides, how it all works.
The long-term vision for our company is to support the learning needs of kids who are not being supported in school. If you have a child who wants to learn two years of math in a year, the real gating factor, a lot of times, is teacher availability, or school policy that says there's no one available to give them that instruction. But imagine that they had the opportunity to just go open a math book.
It's a resource issue. We'd love to do it, but we don't have the resources.
We don’t have the resources. Sometimes that's true, sometimes that's not true, sometimes it's policy, but whatever. But they could go get a math book, they could just study that book and go as fast as they wanted — but that's boring. Not every kid is going to have that motivation. And so, to some extent, we're not really solving for curriculum, we are solving for motivation. We want to build software that can deliver that same curriculum — we know how to teach math, we know how to teach reading — deliver it in a more sort of fun, entertaining, motivating way, and allow kids to essentially continue to progress at their own pace without being gate-kept by the availability of teachers to essentially unlock that knowledge for them. And so we are starting at a very young age by teaching kids to read with software.
What I really want to teach is math. I want to get kids to learn math as fast as possible, but in order for kids to be able to teach themselves math, they have to be able to read, and so that is our first piece of software: learn-to-read software for preschoolers.
And obviously preschoolers, these are young kids, so is your expectation that software will be done at home? Are there schools trying to incorporate in some way? How's that working?
We've started talking to schools about pilots, but I think, right now, we get a lot of attention from parents. Incentives are just better aligned that way. Schools right now are not particularly concerned with, “Are we supporting our kids achieving their fullest potential or are we ensuring our kids can learn as fast as possible?” But parents really care about that. And so right now we have a lot of customers who are basically parents at home who realize, “Oh, my three- or four-year-old is ready to start reading, what can I do to best support them now?”
How long has the company been in business?
We kind of accidentally launched about six months ago.
Was this a pandemic-related idea?
This was. I have four kids now; I had three kids when the pandemic started, and the fourth one arrived during it. They were at home, doing school at home, and I also had a job at the time, as did my wife, so we had two working parents trying to take care of three kids at home, trying to figure out how to help them learn, and really the only way to make that work was to give them the skills they needed to teach themselves. At that time, my kids were five and three. So how can I get my five-year-old teaching themselves math? How do I get my three-year-old teaching themselves to read? The solution is software. We know the curriculum; we know if you want to teach reading, it's phonics. But how do you get the kid to sit down and memorize the 44 sounds in the English language? Well, it turns out that software and games are really, really good at solving motivation, so we just needed to package that all together, and that was how Mentava was started.
So during this exact period that you've been thinking of this idea, putting together a company, putting together the software, we have a new stage in software happening with chatbots and large language models. Are those technologies that complement what you're doing? Are you going to have to do something different to use them? How's that going to work out for you?
It's very complementary. We're not using AI right now, but we see it coming. There is kind of a perfect storm of timing right now where, I think because of Covid, parents started to realize, “Oh, my kid is not learning as much in school as I thought. This is what they're doing in school?” We had all that visibility when our kids were doing school in front of screens at home.
Technology has gotten to a point where we can give screens to every kid, iPads and other tablets, and touchscreens make learning much more accessible. We're seeing the effectiveness of some learning software: a lot of learning software is really, really bad, but some of it is good, and people are seeing that. And then, at the same time, you see AI coming out and getting people very excited about the potential of software to affect education.
It’s funny, when we were raising money, the idea that software could be a teacher was a very contrarian perspective. Everybody said, “How could software possibly be a teacher? You’re going to need a human there.” And then about 12 months later, AI came out, people said, “Oh, of course you're going to have software teachers. We've always believed that.”
But my take on AI is that the power of AI is really in its adaptability, and you actually don't need that much adaptation for teaching reading or teaching math. You memorize the 26 letters and the 44 sounds in the English language; you learn addition, then subtraction, then multiplication, then division. It's pretty linear, pretty sequential. So my belief is that there's this core learning pathway that you can really, really optimize, and we should focus on that. It's fairly sequential and fairly deterministic. And then the power of AI is to catch the kids who fall off of that pathway and get confused, and to ask, “Okay, what are you confused about? I see you're confused about this thing. Let me give you some custom instruction and then get you back on that main pathway.”
Reframing the public school (15:20)
In an ideal world — and let's just stick with public schools, since I think it's reasonable to assume that, for the time being, most kids are going to be educated by them; that's a lot of kids, an order of magnitude more than are in private school or are homeschooled — what should that public school day look like, ideally, given what you've learned going through this process?
The biggest challenge for public school is that there's such a diversity of student needs there. Public schools are simultaneously academics, but they're also childcare, and they're also a social support network. They're a safety net for a lot of kids, and they're trying to provide all those services to all these different kids by giving them all the exact same thing. To me that makes no sense, and what I would really love to see in our public schools is just more differentiation, more acceptance of diversity of needs, diversity of motivations, diversity of abilities, and saying, “Okay, these children need this particular service from our public schools. Let's make sure that they're in a place where they can get those services. But we have these other kids who want to learn two years of math every year. They can do that in two hours a day, and then they want to spend the rest of the day playing in the forest.” That would be amazing.
Should that actual classroom time look markedly different? I'm sure that if I went into most classrooms (I have kids, one currently in high school and others who were in high school not that long ago), I'd see blackboards and teachers lecturing: that's the classroom experience in 2024, and that was the classroom experience in 1924. Should it look fundamentally different?
I think that's an interesting question. I think it's going to look different for different kids. I think there is a sense that some of the rigor I would say of the old days has been lost, and I think that there's good and bad to that. I think a lot of that is a result of conflating childcare with academics. You can't do rigorous academics for eight hours a day. It’s sort of like weightlifting; you can't do squats for eight hours a day, but you can do them very effectively for half an hour. But if you want to pretend that you're exercising for eight hours a day, then you, by definition, have to remove a lot of that rigor. So I would like to get rid of the academic theater and be very clear about, “Okay, this time is play time, this is childcare time, and this time is academic time, and we're really going to buckle down and focus here.”
The San Francisco algebra ban (17:50)
A few years back, there was a ban on teaching middle school algebra in San Francisco. Can you give me some background on that?
So this was passed about 10 years ago. The way it used to work is that most kids took algebra in eighth grade. If you were ready earlier, you could take algebra in seventh grade, but essentially in San Francisco, because some kids were not prepared to take algebra in eighth grade, they said across the board, all kids must take algebra in ninth grade. So even the kids who used to take it in seventh grade, the kids who used to take it in eighth grade, “Sorry, we're not doing it in middle school anymore. You all have to take it in ninth grade.”
Usually these sorts of educational decisions are just lost in the noise; parents don't have time to focus on the nitty-gritty of curriculum. But this was a big problem for parents, because it meant that you could not get through calculus in high school without taking summer school or getting private tutoring, and for a lot of competitive colleges, you need to have taken calculus in high school. So parents had a real problem with this particular curriculum change.
The irony of all of this is that this was enacted with the hope of increasing equity, of driving more equal outcomes, and it had the opposite effect because now it’s just the parents with the resources who are able to go out and do summer school, and private tutors, and then get their kids the math support that they needed. So this happened a decade ago, and it has been a battle for 10 years to get algebra back into middle schools in San Francisco, and it actually went the other way: California statewide nearly got rid of algebra in their statewide middle school curriculum because of the quote-unquote “success of San Francisco,” which is basically, if you look into it, it's just San Francisco cooking the books, literally lying about their outcomes.
And so finally, this past year, parents essentially had enough, and they put it on the ballot: “We're going to take a vote on whether middle schoolers should be allowed to learn algebra.” It's funny, because a lot of times people think this cuts along party lines, conservatives versus Democrats, red versus blue, but even in San Francisco, the most progressive city in the United States, 80 percent of families said, “Yes, if a kid is ready to take algebra in middle school, we should allow them to do that.”
Investing in our future (20:05)
I'll tell you the one thing I kept thinking of as I was learning more about your company and your outlook was it seems to me like it'd be really important, as a country, that every kid can reach their potential, but especially the very smartest kids, that we get everything out of them that we can, right? That's pretty important. These are people who are going to be designing the next stage of AI, they're going to be designing the new computer chips, they're going to be in biotechnology. If we can get more out of those kids, there's a huge multiplier there.
I believe that very deeply. I believe leaders are important. I believe in the power of single individuals to create huge amounts of change, but not everybody agrees with that. I was at a school the other day, and a bench outside the school, a bench that every student sees as they walk in, literally has carved into it: “Strong people do not need strong leaders.” I fundamentally disagree with that. I think we need strong people and we also need strong leaders, and the way we get both is by ensuring that every student has the opportunity to have their learning needs supported and to achieve their potential.
What's the direction of the company? Where are you going to be in five years? What's the dream?
Right now, we are in the process of officially launching our learn-to-read app, targeted at preschoolers, and then what I really want to do is start transitioning into math. Once we have taught kids to read, we have essentially unlocked their ability to teach themselves. So our goal is to keep up with this earliest cohort of kids who are learning to read and support them as they continue through their K–12 career. If they want to learn two years of math a year, then I would love to build two years of math curriculum each year so that they can keep using Mentava through their K–12 experience. And then maybe they discover that, I don't know, they're done with math in middle school, and they get to figure out what's next after that. Do I go start a company? Do I do internships? Do I go learn marine biology? I don't know.
What about computer science? Does that play a role here?
When I say math, I would say specifically math and computer science are what I'm most passionate about. I think of it almost as vocational school. Those are the skills that we can teach that directly contribute to, okay, this person is able to create more value in the world because they know these two fundamental skills now.
🤖 My chat (+transcript) with tech policy analyst Adam Thierer on regulating AI
jeudi 30 mai 2024 • Duration 25:31
While AI doomers proselytize their catastrophic message, many politicians are recognizing that the loss of America’s competitive edge poses a much more real threat than the supposed “existential risk” of AI. Today on Faster, Please! — The Podcast, I talk with Adam Thierer about the current state of the AI policy landscape and the accompanying fierce regulatory debate.
Thierer is a senior fellow at the R Street Institute, where he promotes greater freedom for innovation and entrepreneurship. Prior to R Street, he worked as a senior fellow at the Mercatus Center at George Mason University, president of the Progress and Freedom Foundation, and at the Adam Smith Institute, Heritage Foundation, and Cato Institute.
In This Episode
* A changing approach (1:09)
* The global AI race (7:26)
* The political economy of AI (10:24)
* Regulatory risk (16:10)
* AI policy under Trump (22:29)
Below is a lightly edited transcript of our conversation
A changing approach (1:09)
Pethokoukis: Let's start out with just trying to figure out the state of play when it comes to AI regulation. Now I remember we had people calling for the AI Pause, and then we had a Biden executive order. They're passing some sort of act in Europe on AI, and now recently a senate working group in AI put out a list of guidelines or recommendations on AI. Given where we started, which was “shut it down,” to where we're at now, has that path been what you might've expected, given where we were when we were at full panic?
Thierer: No, we've moved into a better place, I think. Let's look back just one year ago this week: In the Senate Judiciary Committee, there was a hearing where Sam Altman of OpenAI testified along with Gary Marcus, who's a well-known AI worrywart, and the lawmakers were falling all over themselves to praise Sam and Gary for basically calling for a variety of really extreme forms of AI regulation and controls, including not just national but international regulatory bodies, new general-purpose licensing systems for AI, a variety of different types of liability schemes, transparency mandates, disclosure mandates like so-called “AI nutritional labels”; I could go on down the list of all the types of regulations being proposed that day. And of course this followed, as you said, Jim, a call for an AI Pause, without any details about exactly how that would work, but it got a lot of signatories, including people like Elon Musk, which is very strange considering he was at the same time deploying one of the biggest AI systems in history. But enough about Elon.
The bottom line is that those were dark days, and I think the tenor of the debate and the proposals on the table today, one year after that hearing, have improved significantly. That's the good news. The bad news is that there's still a lot of problematic regulatory proposals percolating throughout the United States. As of this morning, as we're taping the show, we are looking at 738 different AI bills pending in the United States according to multistate.ai, an AI tracking service. One hundred and—I think—eleven of those are federal bills. The vast majority of it is state. But that count does not include all of the municipal regulatory proposals that are pending for AI systems, including some that have already passed in cities like New York City that already has a very important AI regulation governing algorithmic hiring practices. So the bottom line, Jim, is it's the best of times, it's the worst of times. Things have both gotten better and worse.
Well—just because it's the most recent thing that happened—I know about the Senate working group, and they were having all kinds of technologists and economists come in and testify. So that report, is it really calling for anything specific to happen? What's in there other than just kicking it back to all the committees? If you just read that report, what does it want to happen?
A crucial thing about this report, and let's be clear what this is, because it was an important report: Senate Majority Leader Chuck Schumer was in charge of this, along with a bipartisan group of other major senators, and this started the idea of the so-called “AI insight forums” last year, and it seemed to be pulling some authority away from committees and taking it to the highest levels of the Senate to say, “Hey, we're going to dictate AI policy and we're really scared.” And so that did not look good. I think in the process, just politically speaking—
That, in itself, is a good example. That really represents the level of concern that was going around, that we need to do something different and special to address this existential risk.
And this was the leader of the Senate doing it and taking away power, in theory, from his committee members—which did not go over well with said committee members, I should add. And so a whole bunch of hearings took place, but they were not really formal hearings, they were just these AI insight forum working groups where a lot of people sat around and said the same things they always say on a daily basis, the positives and negatives of AI. And the bottom line is, just last week, a report came out from this bipartisan Senate AI working group that was important because, again, it did not adopt the recommendations that were on the table a year ago when the process got started last June. It did not have overarching general-purpose licensing of artificial intelligence, no new call for a brand new Federal Computer Commission for America, no sweeping calls for liability schemes like some senators want, or other sorts of mandates.
Instead, it recommended a variety of more generic policy reforms and then kicked a lot of the authority back to those committee members to say, “You fill out the details, for better or for worse.” And it also included a lot of spending. One thing that seemingly everybody agrees on in this debate is that, well, the government should spend a lot more money, and so another $30 billion was on the table of sort of high-tech pork for AI-related stuff, but it really did signal a pretty important shift in approach, enough that it agitated the groups on the more pro-regulatory side of this debate who said, “Oh, this isn't enough! We were expecting Schumer to go for broke and swing for the fences with really aggressive regulation, and he's really let us down!” To which I can only say, “Well, thank God he did,” because we're in a better place right now because we're taking a more wait-and-see approach on at least some of these issues.
A big, big part of the change in this narrative is an acknowledgement of what I like to call the realpolitik of AI policy and specifically the realpolitik of geopolitics
The global AI race (7:26)
I'm going to ask you in a minute what stuff in those recommendations worries you, but before I do, what happened? How did we get from where we were a year ago to where we've landed today?
A big, big part of the change in this narrative is an acknowledgement of what I like to call the realpolitik of AI policy and specifically the realpolitik of geopolitics. We face major adversaries, specifically China, which has said in documents that the CCP [Chinese Communist Party] has published that it wants to be the global leader in algorithmic and computational technologies by 2030, and it's spending a lot of money and putting a lot of state resources into it. Now, I don't necessarily believe that means they're going to automatically win, of course, but they're taking it seriously. But it's not just China. We have seen in the past year massive state investments and important innovations take place across the globe.
I'm always reminding people that people talk a big game about America's foundational models and large-scale systems, including things like Meta’s Llama, which was the biggest open-source system in the world a year ago, and then two months after Meta launched Llama, their open-source platform, the government of the UAE came out with Falcon 180B, an open-source AI model that was two-and-a-half times larger than Facebook's model. That meant America's AI supremacy in open-source foundational models lasted for two months. And that's not China, that's the government of the UAE, which has piled massive resources into being a global leader in computation. Meanwhile, Russia's launched their biggest supercomputer system ever; you've got Europe applying a lot of resources to it, and so on and so forth. A lot of folks in the Senate have come to realize that this problem is real: if we shoot ourselves in the foot as a nation, these countries could race ahead and gain competitive and geopolitical strategic advantages over the United States while regulation hobbles our technology base. I think that's the first fundamental thing that's changed.
I think the other thing that changed, Jim, is just a little bit of existential-risk exhaustion. The rhetoric in this debate, as you've written about eloquently in your columns, has just been crazy. I mean, I've never really seen anything like it in all the years we've been covering technology and economic policy. You and I have both written, this is really an unprecedented level of hysteria. And I think, at some point, the Chicken-Littleism just got to be too much, and I think some saner minds prevailed and said, “Okay, well wait a minute. We don't really need to pause the entire history of computation to address these hypothetical worst-case scenarios. Maybe there's a better plan than that.” And so we're starting to pull back from the abyss, if you will, a little bit, and the adults are reentering the conversation—a little bit, at least. So I think those are the two things that really changed more, although there were other things, but those were two big ones.
The political economy of AI (10:24)
To what extent do you think we saw the retreat from the more apocalyptic thinking—how much of that was due to what businesses were saying, venture capitalists, maybe other tech . . . ? What do you think were the key voices Congress started listening to a little bit more?
That's a great question. The political economy of AI policy and tech policy is something that is terrifically interesting to me. There are so many players and voices involved in AI policy because AI is the most important general-purpose technology of our time, and as a widespread broad base—
Do you have any doubt about that? (Let me cut you off.) Do you have any doubt about that?
I don't. I think it's unambiguous, and we live in a world of “combinatorial innovation,” as Hal Varian calls it, where technologies build on top of the other, one after another, but the thing is they all lead to greater computational capacity, and therefore, algorithmic and machine learning systems come out of those—if we allow it. And the state of data science in this country has gotten to the point where it's so sophisticated because of our rich base of diverse types of digital technologies and computational technologies that finally we're going to break out of the endless cycle of AI booms and busts, and springs and winters, and we're going to have a summer. I think we're having it right now. And so that is going to come to affect every single segment and sector of our economy, including the government itself.
I think industry has been very, very scrambled and sort of atomistic in their approach to AI policy, and some of them have been downright opportunistic, trying to throw each other’s competitors under the bus
Now let me let you return to the political economy, what I was asking you about, what were the voices, sorry, but I wanted to get that in there.
Well, I think there are so many voices, I can't name them all today, obviously, but we're going to start with one that's a quiet voice behind the scenes, but a huge one, which is, I think, the national security community. Clearly, going back to our point about China and geopolitical security, I think a lot of people behind the scenes who care about these issues, including people in the Pentagon, had conversations with certain members of Congress and said, “You know what? China exists. And if we're shooting ourselves in the foot as we begin this race for geopolitical strategic supremacy in an important new general-purpose technology arena, we're really hurting our underlying security as a nation.” I think that thinking is there. So that's an important voice.
Secondly, I think industry has been very, very scrambled and sort of atomistic in their approach to AI policy, and some of them have been downright opportunistic, trying to throw each other’s competitors under the bus, unfortunately, and that includes OpenAI trying to screw over other companies and technologies, which is dangerous, but the bottom line is: More and more of them are coming to realize, as they saw the actual details of regulation and thinking through the compliance costs, that “Hell no, we won't go, we're not going to do that. We need a better approach.” And it was always easier in the old days to respond to the existential risk route, like, “Oh yeah, sure, regulation is fine, we'll go along with it!” But then when you see the devilish details, you think twice and you realize, “This will completely undermine our competitive advantage in the space as a company or our investment or whatever else.” All you need to do is look at Exhibit A, which is Europe, and say, if you always run with worst-case scenario thinking and Chicken-Littleism is the basis of your technology policy, guess what? People respond to incentives and they flee.
Hatred of big tech is like the one great bipartisan, unifying theme of this Congress, if anything. But at the end of the day, I think everyone is thankful that those companies are headquartered in the United States and not Beijing, Brussels, or anywhere else.
It’s interesting, the national security aspect. My little amateurish thought experiment would be: What would be our reaction, and what would be the reaction in Washington, if, in November 2022, instead of an American company with a big investment from another American company having rolled out ChatGPT, it had been Tencent, or Alibaba, or some other Chinese company that had rolled out something that's obviously a leap forward, and they had been ahead, even if they said, “Oh, we're two or three years ahead of America”? It would've been bigger than Sputnik, I think.
People are probably tired of hearing about AI—hopefully not, I hope they'll also listen to this podcast—but that would be all we would be talking about. We wouldn’t be talking about job loss, and we wouldn't be talking about ‘The Terminator,’ we'd be talking in pure geopolitical terms about how the US has suffered a massive, massive defeat here and who's to blame. What are we going to do? And anybody at that moment who would've said, “We need to launch cruise missile strikes on our own data centers” for fear. . . I mean! And I think you're right, the national security component is extremely important here.
In fact, I stole your little line about “Sputnik moment,” Jim, when I testified in front of the House Oversight Committee last month and I said, “Look, it would've been a true ‘Sputnik moment,’ and instead it's those other countries that are left having the Sputnik moment, right? They're wondering, ‘How is it that, once again, the United States has gotten out ahead on digital and computational-based technologies?’” But thank God we did! And as I pointed out in the committee room that day, there's a lot of people who have problems with technology companies in Congress today. Hatred of big tech is like the one great bipartisan, unifying theme of this Congress, if anything. But at the end of the day, I think everyone is thankful that those companies are headquartered in the United States and not Beijing, Brussels, or anywhere else. That's just a unifying theme. Everybody in the committee room that day nodded their head, “Yes, yes, absolutely. We still hate them, but we're thankful that they're here.” And that then extends to AI: Can the next generation of companies that they want to bring to Congress and bash and pull money from for their elections, can they once again exist in the United States?
Regulatory risk (16:10)
So whether it's that working group report, or what else you see in Congress, what are a couple, three areas where you're concerned, where there still seems to be some sort of regulatory momentum?
Let’s divide it into a couple of chunks here. First of all, at the federal level, Congress is so damn dysfunctional that I'm not too worried that even if they have bad ideas, they're going to pursue them because they're just such a mess, they can't get any basic things done on things like baseline privacy legislation, or driverless car legislation, or even, hell, the budget and the border! They can't get basics done!
I think it's a big positive that, one, while they're engaging in dysfunction, the technology is evolving. And I hope, if it's as important as you and I think, more money will be invested, we'll see more use cases, and the downsides of screwing up the regulation will be more obvious. I think that's a tailwind for this technology.
We're in violent agreement on that, Jim, and of course this goes by the name of “the pacing problem,” the idea that technology is outpacing law in many ways, and one man's pacing problem is another man's pacing benefit, in my opinion. There's a chance for technology to prove itself a little bit. That being said, we don't live in a legislative or regulatory vacuum. We already have in the United States 439 government agencies and sub-agencies, 2.2 million employees just at the federal level. So many agencies are active right now trying to get their paws on artificial intelligence, and some of them already have it. You look at the FDA [Food and Drug Administration], the FAA [Federal Aviation Administration], NHTSA [National Highway Traffic Safety Administration], I could go all through the alphabet soup of regulatory agencies that are already trying to regulate or overregulating AI right now.
Then you have the Biden administration, who's gone out and done a lot of cheerleading in favor of more aggressive unilateral regulation, regardless of what Congress says and basically says, “To hell with all that stuff about Chevron Doctrine and major questions, we're just going to go do it! We're at least going to jawbone a lot and try to threaten regulation, and we're going to do it in the name of ‘algorithmic fairness,’” which is what their 100-plus-page executive order and their AI Bill of Rights says they're all about, as opposed to talking about AI opportunity and benefits—it's all misery. And it's like, “Look at how AI is just a massive tool of discrimination and bias, and we have to do something about it preemptively through a precautionary principle approach.” So if Congress isn't going to act, unfortunately the Biden administration already is and nobody's stopping them.
But that's not even the biggest problem. The biggest problem, going back to the point that there are 730-plus bills pending in the US right now, is that the vast majority of them are state and local. And just last Friday, Governor Jared Polis of Colorado signed into law the first major AI regulatory measure in Colorado, and there's a bigger and badder bill pending right now in California, there are 80 different bills pending in New York alone, and any half of them would be a disaster.
I could go on down the list of troubling state patchwork problems that are going to develop for AI and ML [machine learning] systems, but the bottom line is this: This would be a complete and utter reversal of the winning formula that Congress and the Clinton administration gave us in the 1990s, which was a framework for global electronic commerce. It was very intentionally saying, “We're going to break with the Analog Era disaster, we're going to have a national framework that's pro-freedom to innovate, and we're going to make sure that these meddlesome barriers do not develop to online speech and commerce.” And yet, here with AI, we are witnessing a reversal of that. States are in the lead, and again, like I said, localities too, and Congress is sitting there, in the dysfunctional soup that it is, saying, “Oh, maybe we should do something to spend a little bit more money to promote AI.” Well, we can spend all the money we want, but we can end up like Europe, which spends tons of money on techno-industrial policies and gets nothing for it because they can't get their innovation culture right, because they’re regulating the living hell out of digital technology.
So you want Congress to take this away from the states?
I do. I do, but it's really, really hard. I think what we need to do is follow the model that we had in the Telecommunications Act of 1996 and the Internet Tax Freedom Act of 1998. We've also had moratoriums, not only through the Internet Tax Freedom Act, but through the Commercial Space Amendments having to do with commercial space travel and other bills. Congress has handled the question of preemption before and put moratoria in place to say, “Let's have a learning period before we go do stupid things on a new technology sector that is fast-moving and hard to understand.” I think that would be a reasonable response, but again, I have to go back to what we just talked about, Jim, which is that there's no chance of us probably getting it. There's no appetite for it. Not one of the 111 bills pending in Congress right now says a damn thing about state and local regulation of technology!
Is the thrust of those federal bills, is it the kinds of stuff that you're generally worried about?
Mostly, but not entirely. Some of it is narrower. A lot of these bills are like, “Let's take a look at AI and. . . fill in the blank: elections, AI and jobs, AI and whatever.” And some of them, on the merits, are not terrible; others, I have concerns about. But it's certainly better that we take a targeted, sectoral approach to AI policy and regulation than having the broad-based, general-purpose stuff. Now, there are broad-based, general-purpose measures, and here's what they do, Jim: They basically say, “Look, instead of having a whole-cloth new regulatory approach, let's build on the existing types of approaches being utilized in the Department of Commerce, namely through the NIST [National Institute of Standards and Technology] and NTIA [National Telecommunications and Information Administration] sub-agencies there.” NIST is the national standards body, and basically it develops best practices through something called the AI Risk Management Framework for artificial intelligence development—and they're good! It's multi-stakeholder, it's bottom-up, it's driven by the same principles that motivated the Clinton administration to do multi-stakeholder processes for the internet. Good model. It is non-regulatory, however. It is a consensus-based, multi-stakeholder, voluntary approach to developing consensus-based standards for best practices regarding various types of algorithmic services. These bills in Congress—and there are at least five of them that I count, that I've written about recently—say, “Let's take that existing infrastructure and give it some enforcement teeth. Let's basically say, ‘This policy infrastructure will be converted into a quasi-regulatory system,’” and there begins the dangerous path towards backdoor regulation of artificial intelligence in this country, and I think that's the most likely model we'll get. Like I said, there are five legislative models in the Senate alone that would do that to varying degrees.
AI policy under Trump (22:29)
Do you have any feel for what a Trump administration would want to do on this?
I do, because a month before the Trump administration left office, they issued a report through the Office of Management and Budget (OMB), and it basically laid out for agencies a set of principles for how they should evaluate artificial intelligence systems, both those that are used by the government and those they regulate in the private sector, and it was an excellent set of principles. It was a restatement of the importance of policy forbearance and humility. It was a restatement of a belief in cost-benefit analysis and in identifying not only existing regulatory capacity to address these problems, but also non-regulatory mechanisms or best practices or standards that could address some of these things. It was a really good memo. I praised it in a piece that I wrote just before the Trump administration left. Now, of course, the Trump administration may change.
Yes, and also, the technology has changed. I mean, that was 2020 and a lot has happened, and I don't know where. . . . I'm not sure where all the Republicans are. I think some people get it. . .
I think the problem, Jim, is that the Republican Party, and Trumpian conservatives in particular, face a time of choosing. And what I mean by this is that they have spent the last four to six years—and Trump egged this on—engaging in nonstop quote-unquote “big tech bashing” and making technology companies and the media out to be, as Trump calls them, “the enemy of the American people.” And so many hearings now are just parading tech executives and others up there to be beaten with a stick in front of the public, and this is the new thing. And then there's just a flood of bills that would regulate traditional digital technologies, repeal things like Section 230, which is liability protection for the tech sector, and so on, plus child safety regulations.
Meanwhile, that same Republican Party and Mr. Trump go around hating on Joe Biden and China. If there's one thing they can't stand more than big tech, it's Joe and China! And so, in a sense, they've got to choose, because their own policy proposals on technology could essentially kneecap America's technology base in a way that would open up the door to what they fear, whether it's the “woke DEI policies” of Biden or the CCP’s preferred policy agenda for controlling computation in the world today. Choose two, you don't get all three. And I think this is going to be an interesting thing to watch if Mr. Trump comes back into office: Do they pick up where that OMB memo left off, or do they go right back to “We’ve got to kill big tech by any means necessary in a seek-and-destroy mission, to hell with the consequences”? I don't know yet.
🚀 My chat (+transcript) with Charles Murray on Project Apollo
vendredi 3 mai 2024 • Duration 27:24
Project Apollo was a feat of human achievement akin to, and arguably greater than, the discovery of the New World. From 1962 to 1972, NASA conducted 17 crewed missions, six of which placed men on the surface of the moon. Since the Nixon administration put an end to Project Apollo, our extraterrestrial ambitions seem to have stalled along with our sense of national optimism. But is the American spirit of adventure, heroism, and willingness to take extraordinary risk a thing of the past?
Today on the podcast, I talk with Charles Murray about what made Apollo extraordinary and whether we in the 21st century have the will to do extraordinary things. Murray is the co-author with Catherine Bly Cox of Apollo: The Race to the Moon, first published in 1989 and republished in 2004. He is also my colleague here at AEI.
In This Episode
* Going to the moon (1:35)
* Support for the program (7:40)
* Gene Kranz (9:31)
* An Apollo 12 story (12:06)
* An Apollo 11 story (17:58)
* Apollo in the media (21:36)
* Perspectives on space flight (24:50)
Below is a lightly edited transcript of our conversation.
Going to the moon (1:35)
Pethokoukis: When I look at the delays with the new NASA go-to-the-moon rocket, and even if you look at the history of SpaceX and their current Starship project, these are not easy machines for mankind to build. And it seems to me that, going back to the 1960s, Apollo must have been at absolutely the far frontier of what humanity was capable of back then, and sometimes I cannot almost believe it worked. Were the Apollo people—the engineers—were they surprised it worked?
Murray: There were a lot of people who, when they first heard the Kennedy speech saying, “We want to go to the moon and bring a man safely back by the end of the decade,” were aghast. I mean, come on! In 1961, when Kennedy made that speech, we had a grand total of 15 minutes of manned space flight under our belt with a Redstone rocket with 78,000 pounds of thrust. Eight years and eight weeks later, about the same amount of time since Donald Trump was elected to now, we had landed on the moon with a rocket that had 7.6 million pounds of thrust, compared to the 78,000, and using technology that had had to be invented essentially from scratch, all in eight years. All of Cape Canaveral, those huge buildings down there, all that goes up during that time.
Well, I'm not going to go through the whole list of things, but if you want to realize how incredibly hard to believe it is now that we did it, consider the computer system that we used to go to the moon. Jerry Bostick, who was one of the flight dynamics officers, was telling me a few months ago about how excited they were just before the first landing when they got an upgrade to their computer system for the whole Houston Center. It had one megabyte of memory, and this was, to them, all the memory they could ever possibly want. One megabyte.
We'll never use it all! We'll never use all this, it’s a luxury!
So Jim, I guess I'm saying a couple of things. One is, to the young’ins out there today, you have no idea what we used to be able to do. We used to be able to work miracles, and it was those guys who did it.
Was the Kennedy speech, was it at Rice University?
No, “go to the moon” was before Congress.
He gave another speech at Rice where he started to list all the things that they needed to do to get to the moon. And it wasn't just, “We have these rockets and we need to make a bigger one,” but there were so many technologies that needed to be developed over the course of the decade. I can't help but think that if a president today said, “We're going to do this and we have a laundry list of things we don't know how to do, but we're going to figure them out…,” it would've been called pie-in-the-sky, or something like that.
By the way, in order to do this, we did things which today would be unthinkable. You would have contracts for important equipment; the whole cycle for the contract acquisition process would be a matter of weeks. The request for proposals would go out; six weeks later, they would've gotten the proposals in, they would've made a decision, and they'd be spending the money on what they were going to do. That kind of thing doesn't get done.
But I'll tell you though, the ballsiest thing that happened in the program, among the people on the ground — I mean the ballsiest thing of all was getting on top of that rocket and being blasted into space — but on the ground it was called the “all up” decision. “All up” refers to the testing of the Saturn V, the launch vehicle, this monstrous thing, which basically is standing a Navy destroyer on end and blasting it into space. And usually, historically, when you test those things, you test Stage One, and if that works, then you add the second stage and then you add the third stage. And the man who was running the Apollo program at that time, a guy named George Mueller, made the decision they were going to do All Up on the first test. They were going to have all three stages, and they were going to go with it, and it worked, which nobody believed was possible. And then after only a few more launches, they put a man on that thing and it went. Decisions were made during that program that were like wartime decisions in terms of the risk that people were willing to take.
One thing that surprises me is just how much that Kennedy timeline seemed to drive things. Apollo 7, I think it was October ’68, and that was the first manned flight? And then, like two months later, Apollo 8, we are whipping those guys around the moon! That seems like a rather accelerated timeline to me!
The decision to go to the moon on Apollo 8 was very scary to the people who first heard about it. And, by the way, if they'd had the same problem on Apollo 8 that they'd had on Apollo 13, the astronauts would've died, because on Apollo 8 you did not have the lunar module with them, which is how they got back. So they pulled it off, but it was genuinely, authentically risky. But, on the other hand, if they wanted to get to the moon by the end of 1969, that's the kind of chance you had to take.
Support for the Program (7:40)
How enthusiastic was the public that the program could have withstood another accident? Another accident before 11 that would've cost lives, or even been as scary as Apollo 13 — would we have said, let's not do it, or we're rushing this too much? I think about that a lot now because we talk about this new space age, I'm wondering how people today would react.
In January, 1967, three astronauts were killed on the pad at Cape Canaveral when the spacecraft burned up on the ground. And the support for the program continued. But what's astonishing there is that they were flying again with manned vehicles in September 1967. . . No, it was a year and 10 months, basically, between this fire, this devastating fire, a complete redesign of the spacecraft, and they got up again.
I think that it's fair to say that, through Apollo 11, the public was enthusiastic about the program. It's amazing how quickly the interest fell off after the successful landing; so much so that by the time Apollo 13 was launched, the news programs were no longer covering it very carefully, until the accident occurred. And by the time of Apollo 16 and 17, everybody was bored with the program.
Speaking of Apollo 13, to what extent did that play a role in Nixon's decision to basically end the Apollo program, to cut its budget, to treat it like it was another program, ultimately, which led to its end? Did that affect Nixon's decision making, that close call, do you think?
No. The public support for the program had waned, political support had waned. The Apollo 13 story energized people for a while in terms of interest, but it didn't play a role.
Gene Kranz (9:31)
500 years after Columbus discovering the New World, we talk about Columbus. And I would think that 500 years from now, we'll talk about Neil Armstrong. But will we also talk about Gene Kranz? Who is Gene Kranz and why should we talk about him 500 years from now?
Gene Kranz, also known as General Savage within NASA, was a flight director, and he was the man on the flight director's console when the accident on 13 occurred, by the way. But his main claim to fame is that he was also on the flight director's desk when we landed. And what you have to understand, Jim, is the astronauts did not run these missions. I'm not dissing the astronauts, but they couldn't make those decisions because they didn't have the information to make the decisions. These life-and-death decisions had to be made on the ground, and the flight director was the autocrat of mission control, and not just the autocrat in terms of his power; he was also the guy who was going to get stuck with all the responsibility if there was a mistake. If they made a mistake that killed the astronauts, that flight director could count on testifying before Congressional committees and going down in history as an idiot.
Somebody like Gene Kranz, and the other flight director, Glynn Lunney during that era, who was also on the controls during the Apollo 13 problems, they were in their mid-thirties, and they were running the show for one of the historic events in human civilization. They deserve to be remembered, and they have a chance to be, because I have written one thing in my life that people will still be reading 500 years from now — not very many people, but some will — and that's the book about Apollo that Catherine, my wife, and I wrote. And the reason I'm absolutely confident that they're going to be reading about it is because — historians, anyway, historians will — because of what you just said. There are wars that get forgotten, there are all sorts of events that get forgotten, but we remember the Trojan War, we remember Hastings, we remember Columbus discovering America. . . We will remember for a thousand years to come, let alone 500, the century in which we first left Earth.
An Apollo 12 story (12:06)
If you just give me a story or two that you'd like to tell about Apollo that maybe the average person may have never heard of, but you find . . . I'm sure there's a hundred of these. Is there one or two that you think the audience might find interesting?
The only thing is it gets a little bit nerdy, but a lot about Apollo gets nerdy. On Apollo 12, the second mission, the launch vehicle lifts off and into the launch phase, about a minute in, it gets hit by lightning — twice. Huge bolts of lightning run through the entire spacecraft. This is not something it was designed for. And so they get up to orbit. All of the alarms are going off at once inside the cabin of the spacecraft. Nobody has the least idea what's happened because they don't know that they got hit by lightning, all they know is nothing is working.
A man named John Aaron is sitting in the control room at the EECOM’s desk, which is the acronym for the systems guy who monitored all the systems, including electrical systems, and he's looking at his console and he's seeing a weird pattern of numbers that makes no sense at all, and then he remembers 15 months earlier, he'd just been watching the monitor during a test at Cape Canaveral, he wasn't even supposed to be following this launch test, he was just doing it to keep his hand in, and so forth, and something happened whereby there was a strange pattern of numbers that appeared on John Aaron's screen then. And so he called Cape Canaveral and said, what happened? Because I've never seen that before. And finally the Cape admitted that somebody had accidentally turned a switch called the SCE switch off.
Okay, so here is John Aaron. Apollo 12 has gone completely haywire. The spacecraft is not under the control of the astronauts, they don't know what's happened. Everybody's trying to figure out what to do.
John Aaron remembers . . . I'm starting to get choked up just because that he could do that at a moment of such incredible stress. And he just says to the flight director, “Try turning SCE to auxiliary.” And the flight director had never even heard of SCE, but he just . . . Trust made that whole system run. He passes that on to the crew. The crew turns that switch, and, all at once, they get interpretable data back again.
That's the first part of the story. That was an absolutely heroic call, of extraordinary ability, for him to do that. The second thing that happens at that point is they have completely lost their guidance platform, so they have to get that back up from scratch, and they've also had these gigantic bolts of electricity that ran through every system in the spacecraft, and they have three orbits of the Earth before they have to have what was called translunar injection: go on to the moon. That's a couple of hours’ worth.
Well, what is the safe thing to do? The safe thing to do is: “This is not the right time to go to the moon with a spacecraft that's been damaged this way.” These guys at mission control run through a whole series of checks that they're sort of making up on the fly because they've never encountered this situation before, and everything seems to check out. And so, at the end of a couple of orbits, they just say, “We're going to go to the moon.” And the flight director can make that decision. Catherine and I spent a lot of time trying to track down the anguished calls going back and forth from Washington to Houston, and by the higher ups, “Should we do this?” There were none. The flight director said, “We're going,” and they went. To me, that is an example of a kind of spirit of adventure, for lack of a better word, that was extraordinary. Decisions made by guys in their thirties that were just accepted as, “This is what we're going to do.”
By the way, Gene Kranz, I was interviewing him for the book, and I was raising this story with him. (This will conclude my monologue.) I was raising this story with him and I was saying, “Just extraordinary that you could make that decision.” And he said, “No, not really. We checked it out. The spacecraft looked like it was good.” It was only a year or two after the Challenger disaster that I was conducting this interview. And I said to Gene, “Gene, if we had a similar kind of thing happen today, would NASA ever permit that decision to be made?” And Gene glared at me. And believe me, when Gene Kranz glares at you, you quail in your seat. And then he broke into laughter, because there was not a chance in hell that the NASA of 1988 would do what the NASA of 1969 did.
An Apollo 11 story (17:58)
If all you know about Apollo 11 is what you learned in high school, or maybe you saw a documentary somewhere, and — just because I've heard you speak before, and I've heard Gene Kranz speak—what don't people know about Apollo 11? There were — I imagine with all these flights — a lot of decisions that needed to be made probably with not a lot of time, encountering new situations — after all, no one had done this before. Whereas, I think if you just watch a news report, you think that once the rocket's up in the air, the next thing that happens is Neil Armstrong lands it on the moon and everyone's just kind of on cruise control for the next couple of days, and boy, it certainly doesn't seem like that.
For those of us who were listening to the landing, and I'm old enough to have done that, there was a little thing that happened—because you could listen, in the last few minutes, to what was going on between the spacecraft and mission control, and you hear Buzz Aldrin say, “Program Alarm 1202 . . . Program Alarm 1202 . . .” and you can't… well, you can reconstruct it later, and there's about a seven-second delay between him saying that and a voice saying, “We're a go on that.” In that seven seconds, you had a person in the back room who was supporting, who then informed this 26-year-old flight controller that they had looked at that possibility and they could still land despite it. The 26-year-old had to trust the guy in the back room because the 26-year-old didn't know, himself, that that was the case. He trusts him, he tells the flight director Gene Kranz, and they say, “Go.” Again: Decision made in seven seconds. Life and death. Taking a risk instead of taking the safe way out.
Sometimes I think that that risk-taking ethos didn't end with Apollo, but maybe, in some ways, it hasn't been as strong since. Is there a scenario where we fly those canceled Apollo flights that we never flew, and then, I know there were other plans of what to do after Apollo, which we didn't do. Is there a scenario where the space race doesn't end, we keep racing? Even if we're only really racing against ourselves.
I mean we've got . . . it's Artemis, right? That's the new launch vehicle that we're going to go back to the moon in, and there are these plans that somehow seem to never get done at the time they're supposed to get done, but I imagine we will have some similar kind of flights going on. It's very hard to see a sustained effort at this point. It's very hard to see grandiose effort at this point. The argument of, “Why are we spending all this money on manned space flight?” in one sense, I sympathize with because it is true that most of the things we do could be done by instruments, could be done by drones, we don't actually have to be there. On the other hand, unless we're willing to spread our wings and raise our aspirations again, we're just going to be stuck for a long time without making much more progress. So I guess what I'm edging around to is, in this era, in this ethos, I don't see much happening done by the government. The Elon Musks of the world may get us to places that the government wouldn't ever go. That's my most realistic hope.
Apollo in the media (21:36)
If I could just give you a couple of films about the space program, and you just tell me whether you thought each one captured something or was way off—let me just shoot a couple at you. The obvious one is The Right Stuff, based on the Tom Wolfe book, of course.
The Right Stuff was very accurate about the astronauts’ mentality. It was very inaccurate about the relationship between the engineers and the astronauts. It presents the engineers as constantly getting in the astronauts’ way, and being kind of doofuses. That was unfair. But if you want to understand how the astronauts worked, great movie.
Apollo 13, perhaps the most well-known.
Extremely accurate. An extremely accurate portrayal of the events. There are certain things I wish they could have included, but it's just a movie, so they couldn't include everything. The only real inaccuracy that bothered me was that it showed the consoles of the flight controllers with colored graphics on them. They didn't have colored graphics during Apollo! They had columns of white numbers on a black background that were just kind of scrolling through and changing all the time, and that's all. But apparently, when their technical advisor pointed that out to Ron Howard, Ron said, “There are some things that an audience just won't accept.”
That was the leap! First Man with Ryan Gosling portraying Neil Armstrong.
I'll tell you: First place, good movie—
Excellent, I think.
Yeah, and the people who knew Armstrong say to me, it's pretty good at capturing Armstrong, who himself was a very impressive guy. This conceit in the movie that he has this little trinket he drops on the moon, that was completely made up and it's not true to life. But I'll tell you what they tell me was true to life that surprised me was how violently they were shaken up during the launch phase. And I said, “Is that the way it was, routinely?” And they said, yeah, it was a very rough ride that those guys had. And the movie does an excellent job of conveying something that somebody who'd spent a lot of time studying the Apollo program didn't know.
I don't know if you've seen the Apple series For All Mankind by Ronald D. Moore, which is based on the premise I raised earlier that Apollo didn’t end, we just kept up the Space Race and we kept advancing off to building moon colonies and off to Mars. Have you seen that? And what do you think about it if you have? I don't know that you have.
I did not watch it. I have a problem with a lot of these things because I have my own image of the Apollo Program, and it drives me nuts if somebody does something that is egregiously wrong. I went to see Apollo 13 and I'm glad I did it because it was so accurate, but I probably should look at For All Mankind.
Very reverential. A very pro-space show, to be sure. Have you seen the Apollo 11 documentary that came out in the past five years? It was on the big screen, it was at theaters, and it had a lot of footage people had not seen before; they found some old canisters of film somewhere. I don’t know if you've seen this. I think it's just called Apollo 11.
No, I haven't seen that. That sounds like something that I ought to look at.
Perspectives on space flight (24:50)
My listeners love when I read . . . Because you mentioned the idea of: Why do we go to space? If it's merely about exploration, I suppose we could just send robots and maybe eventually the robots will get better. So I want to just briefly read two different views of why we go to space.
Why should human beings explore space? Because space offers transcendence from which only human beings can benefit. The James Webb Space Telescope cannot articulate awe. A robot cannot go into the deep and come back with soulful renewal. To fully appreciate space, we need people to go there and embrace it for what it fully is. Space is not merely for humans, nor is space merely for space. Space is for divine communion.
That’s one view.
The second one is from Ayn Rand, who attended the Apollo 11 moon launch. This is what Ayn Rand wrote in 1969:
The next four days were torn out of the world’s usual context, like a breathing spell with a sweep of clean air piercing mankind’s lethargic suffocation. For thirty years or longer, the newspapers had featured nothing but disasters, catastrophes, betrayals, the shrinking stature of man, the sordid mess of a collapsing civilization; their voice had become a long, sustained whine, the megaphone of failure, like the sound of an Oriental bazaar where leprous beggars, of spirit or matter, compete for attention by displaying their sores. Now, for once, the newspapers were announcing a human achievement, were reporting on a human triumph, were reminding us that man still exists and functions as a man. Those four days conveyed the sense that we were watching a magnificent work of art—a play dramatizing a single theme: the efficacy of man’s mind.
Can the answer for why we go to space be found in either of those readings?
They're going to be found in both. I am a sucker for heroism, whether it's in war or in any other arena, and space offers a kind of celebration of the human spirit that is only found in endeavors that involve both great effort and also great risk. And the other aspect of transcendence, I'm also a sucker for saying the world is not only more complicated than we know, but more complicated than we can imagine. The universe is more complicated than we can imagine. And I resonate to the sentiment in the first quote.
Faster, Please! is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.
This is a public episode. If you’d like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe
🦁 My chat (+transcript) with investment strategist Ed Yardeni on his optimism for a Roaring 2020s
vendredi 19 avril 2024 • Duration 24:40
As I often remind subscribers to Faster, Please!, predictions are hard, especially about the future. The economic boom of the 1990s came as a surprise to most economists. Equally surprising was that it ended so soon. Neither of these events caught Ed Yardeni off-guard. Some forecasters, Yardeni included, anticipated a new Roaring ’20s for this century… only to be interrupted by the pandemic. But is it too late for this prediction to become a reality? According to Yardeni, not at all.
Ed Yardeni is president of Yardeni Research, and he previously served as chief investment strategist at a number of investment companies, including Deutsche Bank. He has additionally held positions at the Federal Reserve Bank of New York, Federal Reserve Board of Governors, and US Treasury Department. For more economic insights and investment guidance, visit yardeni.com.
In This Episode
* The ’90s Internet boom (1:25)
* The Digital Revolution (5:01)
* The new Roaring ’20s (9:00)
* A cautious Federal Reserve (14:24)
* Speedbumps to progress (18:18)
Below is a lightly edited transcript of our conversation
The ’90s Internet boom (1:25)
Pethokoukis: Statistically speaking, the PC Internet boom that you first started writing about back in the early ’90s ended in 2004, 2005. How surprising was that to economists, investors, policy makers? I, to this day, have a report, a 2000 report, from Lehman Brothers that predicted, as far as the eye could see, we would have rapid growth, rapid productivity growth for at least another decade. Now, of course, Lehman didn't make it another decade. Was that a surprise to people that we didn't have an endless productivity boom coming out of the ’90s?
Yardeni: I think it definitely was a surprise. I mean, it was surprising both ways. Not too many people expected to see a productivity boom in the second half of the 1990s, which is what we had. I did, as an economist on Wall Street. More importantly, Alan Greenspan was a big promoter of the idea that the technology revolution would in fact lead to better productivity growth and that that might mean better economic growth and lower inflation. And it didn't look that way for a while; then suddenly the Bureau of Economic Analysis went back and revised the data for the late 1990s and, lo and behold, it turned out that there was a productivity boom. And then it all kind of fizzled out, and it raises the question, why did that happen? Why was it such a short lived productivity boom? And the answer is—well, let me give you a personal anecdote.
I worked at Deutsche Bank in New York in the late 1990s, and I had to be very careful walking down the corridors of Deutsche Bank in midtown Manhattan not to trip over Dell boxes. Everybody was getting a Dell box, everybody was getting the Dell boxes loaded up with Windows and Office. And when you think back on what that was able to do in terms of productivity, if you had a lot of secretaries on Selectric typewriters, Word could obviously increase productivity. If you had a lot of bookkeepers doing spreadsheets, Excel could obviously increase productivity. But other than that, there wasn't really that much productivity to be had from the technology at the time. So again, where did that productivity boom come from? It couldn't have been just secretaries and bookkeepers. Now the answer is that the boxes themselves were measured as output, and so output per man-hour increased dramatically. It doesn't take that many workers to produce Dell boxes and Windows and Office software. So as a result of that, we had this big boom in technology output that created its own productivity boom, but it didn't really have the widespread application to all sorts of business models the way today's evolution of the technology boom is, in fact, capable of doing.
What you've just described, I think, is the explanation by, for instance, Robert Gordon, Northwestern University, that we saw a revolution, but it was a narrow revolution.
It was the beginning! It was the beginning of a revolution. It was the Technology Revolution. It started in the 1990s and it's evolved, it's not over, it's ongoing. I think a big development in that revolution was the cloud. What the cloud allowed you to do was really increase productivity in technology itself, because you didn't need to have several hundred people in the IT department. Now, with the cloud, one person can upgrade the software on hundreds of computers, and now we're renting software so that it automatically upgrades, so that's been a big contribution to productivity.
The Digital Revolution (5:01)
So perhaps I spoke too soon. I talked about that boom—that ’90s boom—ending. Perhaps I should have said it was more of a pause, because it seems what we're seeing now, as you've described it, is a new phase of the Digital Revolution—perhaps a broader phase—and, to be clear, if I understand what you've been speaking about and writing about, this isn't an AI story, this predates what we're seeing in the data now, it predates ChatGPT, when do you date this new phase beginning—and you mentioned one catalyst perhaps being the cloud, so—when did it begin and, again, what are the data markers that you've been looking at?
I don't remember the exact date, but I think it was 2011 where my little investment advisory got ourselves on the Amazon cloud, and that's been a tremendous source of productivity for us, it saves us a lot of money. We used to have a couple of servers on a server farm in the old days, and every now and then it would go down and we'd have to reach somebody on the server farm and say, “Would you mind turning it on and off?” Remember the word “reboot?” I don't remember the word “reboot” being used in quite some time. Amazon's never gone down, as far as I can recall. I think they've always had their systems in Virginia, and they had a backup somewhere overseas, but it's always worked quite well for us.
But now we're finding with some of the other software that's available now, we can actually cut back on our Amazon costs and use some of these other technologies. There's lots of technologies that are very user-friendly, very powerful, and they apply themselves to all sorts of different businesses, and, as you said, it's not just AI. I think the cloud—let's put it around 2011 or so—was a huge development because it did allow companies to do information processing in a much more efficient way, and the software gets automatically updated, and with what it used to take hundreds of people in an IT department to do, now you can do it with just one, which is what we, in fact, have, just one person doing it all for us. But I would say that's as good a point as any. But along the way here, what's really changed is the power of the software that's available, and how cheap it is, and how you can rent it now instead of having to own it.
That's a fantastic example, and, of course, we want to see these sorts of examples at some point reflected in the data. And going through some of your writings, one period that you were very focused on was the end of 2015, before the pandemic, where we may have seen a bottom: the slowest 20-quarter average annual growth rate of productivity.
A 0.5 percent annual rate.
But by 2019, leading into the pandemic, it tripled. Is the story of that tripling, is it the cloud? And that certainly has to be one reason why you, among other people, thought that we might see a new Roaring ’20s, right into the teeth of the pandemic, unfortunately.
Well, it's not so unfortunate, I mean, clearly nobody saw the pandemic coming, but we weathered the storm very, very well, and I don't think we can come to any conclusion about productivity during the pandemic, it was all over the place. At first, when we were on lockdown, it actually soared because we were still producing a lot with fewer workers, and then it took a dive, but we're now back up to two percent. We had a really, really good year last year in productivity. The final three quarters of last year, we saw above-trend growth in productivity. And so we're already now back up to two percent, which, again, compared to 0.5, is certainly moving in the right direction, and I don't see any particular reason why that number couldn't go to three-and-a-half, four-and-a-half percent per year kind of growth—which sounds delusional unless you look back at the chart of productivity and see that that's actually what productivity booms do: They get up to something like three-and-a-half to four-and-a-half percent growth, not just on a one-quarter basis, but on a 20-quarter trailing basis at an annual rate.
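The "20-quarter trailing basis at an annual rate" Yardeni refers to is just the annualized growth of a productivity index over its level five years earlier. A minimal sketch of that calculation (the index values below are illustrative, not the actual BLS series):

```python
# Annualized growth over a trailing window of quarterly index values.
def trailing_annualized(index, quarters=20):
    # (end / start) ** (4 / quarters) - 1, expressed in percent
    ratio = index[-1] / index[-1 - quarters]
    return (ratio ** (4 / quarters) - 1) * 100

# Illustrative: an index growing 0.5% per quarter (roughly 2% per year)
idx = [100 * 1.005 ** q for q in range(21)]
print(round(trailing_annualized(idx), 2))  # ~2.02
```

On a 20-quarter trailing basis, one strong quarter barely moves the number, which is why sustained readings of three-plus percent would signal a genuine boom rather than noise.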
The new Roaring ’20s (9:00)
This forecast predates the word “generative AI,” predates ChatGPT, and, in fact, if I understand your view, it's even broader than information technology. So tell me a bit about your broader Roaring ’20s thesis and the technological underpinnings of that.
One of the developments we've seen here, which has been somewhat disconcerting, is the challenge to globalization. I'm a big believer in free trade, and the free trade creates more economic growth, but, on the other hand, we have to be realistic and realize that China hasn't been playing by the rules of the game. And so now, as a result, we're seeing a lot of production moving out of China to other countries, and we're seeing a lot of on-shoring in the United States, so we're building state-of-the-art manufacturing facilities that are full of robots and automation that I think are going to bring manufacturing productivity back quite significantly.
Everybody seems to be of the opinion that the reason productivity is weak is because of services. It's actually manufacturing. What happened is, when China joined the World Trade Organization back at the end of 2001—December 11th, 2001, to be exact—manufacturers said hasta la vista to the United States, and we've had absolutely no increase in industrial production capacity since that time, since 2001. And so companies basically gave up on trying to do anything, either expand capacity or improve productivity of manufacturing here, when they could do it so much more cheaply over in China.
I think what's really the most important thing that's changed here is, demographically, we've run out of workers. Certainly even in China, we don't have a growth in the working-age population. We don't have a growth in the working-age population here. And when it comes to skilled labor, that's even more the case, so there's tremendous incentive and pressure on companies to figure out, well, how do we deal with an environment where our business is pretty good, but we can't find the workers to meet all the demand? And the answer has to be productivity. Technology is part of the solution. Managing for productivity is another part of the solution. Giving workers more skills to be more productive is a very good use of money, and it makes workers sticky, it makes them want to stay with you because you're going to have to pay them more because they’re more knowledgeable, and you want to pay them more because you want to keep them.
I think a big part of the productivity story really has to do with the demographic story. China, of course, accelerated all that with the One Child Policy that, as a result, I kind of view China as the world's largest nursing home. They just don't have the workforce that they used to have. Japan doesn't have the workforce. Korea, Taiwan, all these countries… If you want to find cheap, young labor, it's still in Africa and in India, but there are all sorts of issues with how you do business in these countries. It's not that easy. It's not as simple as just saying, “Well, let's just go there.” And so I think we are seeing a tremendous push to increase productivity to deal with the worldwide labor shortage.
We have three really good quarters of productivity growth and, as you mentioned, economists are always very cautious about those productivity numbers because of revisions; they're volatile. But if this is something real and sustainable, it should also be reflected in other parts of the economy. We should see good capital investment numbers from here on out if this is a real thing.
I think not only capital investment, but also real wages. Productivity is fairy dust. I mean, it's a win-win-win situation. With better-than-expected productivity, you get better-than-expected real GDP, and you get lower-than-expected unit labor costs, which reflect hourly wages offset by productivity; they're under two percent—or around two percent, I should say more accurately—and that's highly correlated with the CPI, so the underlying inflation rate has already come down to where the Fed wants it to be. This is not a forecast; this is where we are right now with unit labor costs. So there's a very strong correlation between productivity growth and the growth of inflation-adjusted compensation. So you can take average hourly earnings, you can take hourly compensation…
There are a bunch of measures of wages; divide them by the consumption deflator, and you'll see on a year-over-year basis that the correlation is extremely high. And, theoretically—it's the only thing I learned when I went to college in economics that ever made any sense to me, and that is—people in a competitive marketplace—it doesn't have to be perfectly competitive, but in a relatively competitive marketplace—people get paid their real wage. The productivity the workers have, they get paid it in their real wages, and we've seen, for all the talk about how “standards of living have stagnated for decades,” that if you look at average hourly earnings divided by the consumption deflator, it's been going up 1.4 percent a year since 1995. That's a doubling of the standard of living every 40 years. That's pretty good progress. And if productivity grows faster than that, you'll get an even better increase in real wages.
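As a back-of-the-envelope check on that doubling claim: at a constant compound rate of g percent per year, a quantity doubles in ln 2 / ln(1 + g/100) years. At 1.4 percent that works out closer to 50 years than 40, though the larger point about compounding stands. A quick sketch:

```python
import math

def doubling_time(annual_growth_pct):
    # Years for a quantity to double at a constant compound annual growth rate
    return math.log(2) / math.log(1 + annual_growth_pct / 100)

print(round(doubling_time(1.4), 1))  # ~49.9 years at 1.4%/yr
print(round(doubling_time(2.0), 1))  # ~35.0 years at 2%/yr
```

This is the same arithmetic behind the familiar "rule of 72" shortcut (72 divided by the growth rate approximates the doubling time).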
If we don't have workers, if there's a shortage of workers—though, obviously, immigration puts a whole different spin on these things—but for what we know now in terms of the workers that are available that are allowed to work, they are getting paid higher real wages. I know that prices have gone up, but people sometimes forget that wages have also gone up quite a bit. But again, it's fairy dust: You get better real growth, you get lower inflation, you get real wages going up, and you get better profit margins. Everybody wins.
A cautious Federal Reserve (14:24)
In the ’90s, we had a Fed chairman who was super cautious about assuming a productivity boom, but who eventually saw the reality of it and acted accordingly. It seems to me that we have a very similar situation now: a Federal Reserve chairman who is certainly aware of these numbers but who seems, at this point, reluctant to make decisions based on those numbers. You would expect that to change?
Yeah, well, I mean, if you just look at the Summary of Economic Projections from the Federal Open Market Committee, which comes out on a quarterly basis and reflects the consensus of Fed Chair Powell's committee that determines monetary policy, they're looking for real GDP growth of less than two percent per year for the next couple of years, and they're obviously not anticipating any improvement in productivity. So I think you're right, I think Fed Chair Powell is very much aware that productivity can change everything; and, in fact, he's talked about productivity, he knows the equation. He says, “Look, it's okay to have wages growing three percent if inflation's two percent.” He implied, therefore, that productivity is growing one percent. So he's basically in the one percent camp, recognizing that, if productivity is more than that, then four percent wage growth is perfectly fine and acceptable and non-inflationary. But at this point, in terms of his pronouncements, he's sticking to the standard line of economists, which is: maybe we'll get one percent, and if we get one percent and the Fed gets inflation down, let's say to only two-and-a-half percent, then wages can grow three-and-a-half percent. Right now wages are growing a little bit above that, I think more like four percent, so the wage numbers aren't there yet, but they could be the right numbers if, in fact, productivity is making a comeback.
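The equation attributed to Powell here, that wage growth is roughly non-inflationary when it equals inflation plus productivity growth, can be made concrete with a toy calculation (an illustration of the accounting identity, not Fed methodology):

```python
def implied_inflation(wage_growth_pct, productivity_pct):
    # Unit labor cost growth is roughly wage growth minus productivity growth,
    # and unit labor costs are the rough anchor for underlying inflation.
    return wage_growth_pct - productivity_pct

# Powell's implied camp: 3% wages with 1% productivity anchors inflation at 2%
print(implied_inflation(3.0, 1.0))  # 2.0
# If productivity is really 2%, today's ~4% wage growth is also consistent with 2%
print(implied_inflation(4.0, 2.0))  # 2.0
```

This is why the same four percent wage number reads as inflationary in a one-percent-productivity world and benign in a two-percent one.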
If we hit productivity gains of the sort you've talked about—three percent, four percent by the end of the decade—that is a radically different-looking economy than what the Fed, or the CBO, or even a lot of Wall Street firms are talking about. So it's not just this statistic will be different; we're looking at really something very different. I would assume a much higher stock market; I'm not sure what interest rates look like, but what does that world look like in 2030?
These are all good questions, they’re the ones I'm grappling with. I mean, should interest rates be lower or should they be higher? It’s the so-called real interest rates, so if the economy can live with a Fed funds rate of, let's say five and a half percent—five and a quarter, five and a half percent, which is what it is now—and the bond yield at four and a half percent and the economy is doing perfectly fine and inflation's coming down, and it's all because productivity is making comeback, then those rates are fine. They're doing their job, they're allocating capital in a reasonable fashion, and capital is going to get allocated to where capital should go. You mentioned before that, in order to increase productivity, we are going to need more capital investments.
Here the Fed has raised interest rates dramatically, and most of the economists said, “Oh, that's going to lead to a big drop in capital spending,” because capital spending is dependent on interest rates, and that hasn't happened at all, really, because technology capital spending now, in current dollars, accounts for about 50 percent of capital spending in nominal terms. You can't do it in real terms because there's an indexing problem. But in nominal terms, half of capital spending is technology. And by the way, that's an understatement because that's just information technology hardware, software, and R&D. It doesn't even include industrial machinery, which is mostly technology hardware and software these days. And even in the trucking industry, the truck is sort of the device, and then there's software that runs the device's logistics. There are so many areas of the economy that have become very high-tech that people still think of as low-tech industries.
Speedbumps to progress (18:18)
If this doesn't happen… well, I suppose one thing we could say is that we really overestimated these technologies and they're not as transformative. But let me give you two other things—and you've written a bit about these—that people might point to as speed bumps or barriers. One: debt, a possible debt crisis. And two: this energy revolution, the climate change transition, in which we have a lot of government involvement and a lot of government decisions about allocating resources. So what is the risk that those two things could slow things down, be a speed bump, or what have you?
There are three issues that you're raising. One is sort of the private-sector issue of whether a lot of this artificial intelligence and technology stuff is hype and won't have the impact on productivity. The others, as you mentioned, are the two government issues: the government’s meddling with climate change policies, and then the government’s irresponsible fiscal excesses.
With regards to artificial intelligence, even though I should be a cheerleader on this, because I should say, “See, I told you so…” I have been telling people I told you so because I said, I'll tell you when the stock market started to discount the Roaring ’20s: it was November 30th, 2022, when OpenAI introduced ChatGPT, and that's when these AI stocks went crazy. A week later I signed up for the $20-a-month version of ChatGPT, figuring, “This is great! I'm not going to have to work anymore. This is going to do all my writing for me. I'll just ask it the question and say, ‘What do you think we should be writing about this? Go ahead and write about it.’” Well, it took me more time to correct all the errors in what it produced than it would've taken me just to write the damn thing.
So I kind of cooled off on ChatGPT, and I came to the conclusion that, from what I see right now in terms of what is available to the public and what's tied to the internet, it's really autofill on speed and steroids. You know when you type in Word and sometimes it guesses what you're going to say next? That's what this thing does at the speed of light. But, you know, “haste makes waste,” as Benjamin Franklin used to say, and it makes a lot of mistakes. And, by the way, garbage in, garbage out. It could create even more garbage on the internet because I've seen situations where it starts citing sources that never existed in the first place. So there's some really funky stuff when you have it in the public domain.
But I think that when you have it sort of segmented off and it only has the data that you need for your specific industry, and it's not polluted by the open-ended ability to take in any data, I think it may very well work very well. But it's basically just really fast, lightning-speed calculation. So I think it has lots of potential in that regard, but I think there is a certain amount of hype. But look, so much money is being spent in this area, I can't believe it's all going to go for naught. I mean, we saw a lot of money spent in the late 1990s on the internet and dotcoms and all that. The internet's still here, but the dotcoms are gone.
With regards to the government policies, I have this very simplistic view that it's amazing how well this country has done despite Washington. Washington just keeps meddling and meddling, it just keeps picking our pockets, keeps interfering, comes up with industrial policies that, to a large extent, don't work. And yet, the economy continues to do well because working stiffs like you and me and people listening in, that's what we do for a living: we work. We don't have time for politics, so the politicians have plenty of time to figure out how to pick our pockets. Well, we have to just figure out, “Okay, given their meddling, how do we make our businesses better notwithstanding these challenges?” Maybe it's really more my hope that we in the private sector somehow figure out how to keep doing what we're doing so well, including increasing productivity, in the face of the challenges that the government poses with its policies.
But then, if we are successful in the private sector at creating good productivity growth that gives you better real GDP growth, that real growth is one way to reduce the debt burden. It doesn't make it go away, and it would be a lot better if we didn't have it, but some of these projections of how this debt is going to eat us all up may rest on assumptions for economic growth that are too pessimistic. But look… I guess I had a happy childhood, so I tend to be an optimist, but I can't say anything good at all about this deficit problem. And we did get a little glimpse in August, September, October of what happens when the bond market starts to worry about something like supply. It worried about it for three months, and then lower inflation and less supply of long-term bonds helped to rally the bond market. But here we are back at four and a half percent, and if we do have some more fears about inflation coming back, then we could very well have a debt crisis more imminently. People like Ray Dalio have been saying that we're on the verge of getting one. I think it's an issue, but I don't think it's an issue that's going to be calamitous at this time.
Among the problems people talk about, you have skepticism about free enterprise, skepticism about trade and immigration. I would like to see what this country looks like in 2030 with the economic scenario you've just outlined: strong real wage growth. Maybe it's too simplistic, but if people were able to see big gains in their everyday lives year after year, I think the national mood would be considerably different.
Well, I think, even now, if you look at real consumption per household, it's $128,000, it's at an all-time record high. And yeah, I guess the rich might be gluttons and might eat more than the rest of us, and maybe they have bigger and more houses and cars, but there just aren't enough of them to really explain how it could possibly be that real consumption per household is at an all-time record high. And I know that's materialistic, but I can't think of a better way to measure the standard of living than looking at real consumption per household: All-time record high.
Faster, Please! is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.
Micro Reads
▶ Business/ Economics
Meta, in Its Biggest A.I. Push, Places Smart Assistants Across its Apps - NYT
Google streamlines structure to speed up AI efforts - FT
Tesla’s Layoffs Won’t Solve Its Growing Pains - Wired
▶ Policy
Put Growth Back on the Political Agenda - WSJ
Regulate AI? How US, EU and China Are Going About It - Bberg
Three ways the US could help universities compete with tech companies on AI innovation - MIT
▶ AI/Digital
The AI race is generating a dual reality - FT
Searching for the Next Big AI Breakthrough at the TED Conference - Bberg
These photos show AI used to reinterpret centuries-old graffiti - NS
Environmental Damage Could Cost You a Fifth of Your Income Over the Next 25 Years - Wired
AI now surpasses humans in almost all performance benchmarks - New Atlas
▶ Biotech/Health
A new understanding of tinnitus and deafness could help reverse both - New Scientist
Beyond Neuralink: Meet the other companies developing brain-computer interfaces - MIT
▶ Robotics
Hello, Electric Atlas: Boston Dynamics introduces a fully electric humanoid robot that “exceeds human performance” - IEEE Spectrum
▶ Space/Transportation
NASA may alter Artemis III to have Starship and Orion dock in low-Earth orbit - Ars
▶ Up Wing/Down Wing
Technological risks are not the end of the world - Science
▶ Substacks/Newsletters
Five things to be optimistic about in America today - Noahpinion
Who Governs the Internet? - Hyperdimensional
Meta is Surprisingly Relevant in Generative AI - AI Supremacy
Larry Summers isn’t worried about secular stagnation anymore - Slow Boring
This is a public episode. If you’d like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe
⚡⚛ My chat (+transcript) with Steve Obenschain of LaserFusionX on laser fusion
jeudi 11 avril 2024 • Duration 14:34
As private and government interest in nuclear fusion technology grows, an array of startups have arisen to take on the challenge, each with their own unique approach. Among them: LaserFusionX. Today on Faster, Please!—The Podcast, I talk with CEO Stephen Obenschain about the viability of fusion energy, and what sets his approach apart.
Obenschain is the president of LaserFusionX. He was formerly head of the Plasma Physics Division at the U.S. Naval Research Laboratory.
In This Episode
* Viability of commercial fusion (0:58)
* The LaserFusionX approach (7:54)
* Funding the project (10:28)
* The vision (12:52)
Below is a lightly edited transcript of our conversation
Viability of commercial fusion (0:58)
Pethokoukis: Steve, welcome to the podcast.
Obenschain: Okay, I'm glad to talk with you. I understand you're very interested in high-tech future power sources, not so high tech right now are windmills…
Well, I guess they're trying to make those more high-tech, as well. I recall that the Energy Department's National Ignition Facility [NIF]—I guess that was about maybe 15 months ago—said they had achieved net-gain nuclear fusion using lasers, and the energy secretary made an announcement, and it was a big deal because we had never done that before by any means. But I remember very specifically people were saying, “Listen, it's a great achievement that we've done this, but using lasers is not a path to creating a commercial nuclear reactor.” I remember that seemed to be on the news all the time. And yet you are running a company that wants to use lasers to create a commercial fusion reactor. One, did I get that right, and two, what are you doing to get lasers to be able to do that?
I don't know why people would come to that conclusion. I think we are competitive with the other approach, which is magnetic fusion, where you use magnetic fields to confine a plasma and get to fusion temperatures. The federal government has supported laser fusion since about 1972, starting with the AEC [Atomic Energy Commission]. Originally it was an energy program, but it has migrated to being in support of stockpile stewardship because, with laser fusion, you can reach physics parameters similar to what occur in thermonuclear weapons.
Yeah. So that facility is about nuclear weapons testing research, not creating a reactor—a fusion reactor.
Yeah. All that being said, it does advance the physics of laser fusion energy, and what the National Ignition Facility did is get so-called ignition, where the fuel started a self-sustaining reaction: it was heating itself and increasing the amount of fusion energy. However, the gain was about three, and one of the reasons for that is they use so-called indirect drive, where the laser comes in and heats a small gold can, and the X-rays from that then drive the pellet implosion, which means you lose about a factor of five in efficiency. So it's limited gain you get that way.
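Obenschain's point about the indirect-drive penalty can be put into rough arithmetic. Target gain is fusion energy out divided by laser energy in, and if converting the laser light to X-rays inside the gold can wastes about four-fifths of the energy, direct drive could in principle multiply the achievable gain by roughly five. A toy illustration of that logic (the energy numbers are made up to produce the gain of three he cites, not measured values):

```python
def target_gain(fusion_energy_mj: float, laser_energy_mj: float) -> float:
    """Target gain = fusion energy released / laser energy delivered."""
    return fusion_energy_mj / laser_energy_mj

# Indirect drive: NIF's gain was about three, per the conversation.
indirect_gain = target_gain(6.0, 2.0)  # illustrative numbers giving gain = 3.0

# Indirect drive loses roughly a factor of five converting laser light to
# X-rays, so direct drive at the same implosion performance could, very
# roughly, reach five times the gain.
direct_drive_ceiling = indirect_gain * 5.0
print(indirect_gain, direct_drive_ceiling)  # 3.0 15.0
```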
Your way is different. It sort of cuts out the middleman.
Okay. The better way to go—and we're not the only ones to do this—is direct drive, where the laser uniformly illuminates the target. At the time that Livermore got started with indirect drive, we didn't have the technologies to uniformly illuminate a pellet. First at NRL [Naval Research Laboratory], and then later at the University of Rochester and in Japan, they developed techniques to uniformly illuminate the pellets. The second thing we're doing is using the argon fluoride laser. The argon fluoride laser has been used in lithography for many years because it's deep UV.
The unique thing we have been trying to do—this was when I was supervising the program at the Naval Research Laboratory—was to take it up to high energy. We started years ago with a similar krypton fluoride laser, built the largest operating target shooter with that technology, and demonstrated the high-repetition-rate operation that you need for energy: NIF will shoot a few times a day; you need five to 10 shots per second to do a power plant. We demonstrated that on a krypton fluoride laser, and, more recently, we switched the focus to argon fluoride, which is deeper UV and more efficient than krypton fluoride. And that basically—at NRL, when I was supervising it—reached the energy record for that technology. But we've got a long way to go to get it to the high energy needed for a power plant.
Now, the immediate goal of my company is to get the funds to build a beam line of argon fluoride that would have the energy and performance needed for a power plant. One of the advantages of laser fusion: you want to have a situation where you're building more than one of something, and for an implosion facility you have many beam lines, so you build one and then you have the advantage of building more, and a learning curve as you go toward a power plant. We developed a phased program where first we build the beamline, then we build a NIF-like implosion facility, only operating with argon fluoride, and demonstrate the high gain—which is a hundred plus for a power plant. Then, while doing that physics, we develop in parallel the other technology you need, like low-cost targets. (They can't be expensive; the NIF targets are probably tens of thousands of dollars, and we can't spend that when we're going 10 shots per second.) Then, with all the technologies required, we build a pilot power plant, which, in my view, could be maybe 400 megawatts of electricity. However, its main function would be to develop the procedures, test the components, and so forth for the follow-on, mass-produced power plants. So when you build a pilot power plant, you want to operate it for a few years to get the kinks out before going to mass production. The vision is to go from the beginning of that to the end in about 16 years.
So the challenges are you have to generate enough heat, and you have to be able to do this over, and over, and over again.
Right. That's right. It has to be high reliability. For an implosion facility, a hundred-thousand-shot reliability is okay. For a power plant, it's got to be in the billion-shot class.
And at this point, the reason you think this is doable is what?
I think we have confidence in the pellet designs. I have a lot, and I have colleagues that have a lot of experience with building large excimer systems: KrF [Krypton Fluoride Excimer Laser], ArF [Argon Fluoride Excimer Laser]…
Those are lasers?
Yes. And we have credible conceptual designs for the facility.
There’s a lot of companies right now, and startups, with different approaches. I would assume you think this is the most viable approach, or has some other advantages over some of the other things we're seeing with Commonwealth Fusion Systems, which gets mentioned a lot, which is using a different approach. So is the advantage you think it's easier to get to a reactor? What are the advantages of this path?
The LaserFusionX approach (7:54)
Well, for one, it's different. The challenges are different from Commonwealth Fusion Systems'. There is overlap, and there should be collaboration: for example, theirs is also deuterium-tritium. However, the physics challenges are different. I think we're farther along in laser fusion; it's a simpler situation than what you have with the very complex interactions in a tokamak. And you also have things… have you ever heard of a disruption? Basically, it's where all of the magnetic energy all of a sudden goes to the wall, and with something like what Commonwealth Fusion Systems is building, they've got to be careful they don't get that. If they do, it would blow a hole in the wall. We don't have that problem with laser fusion. I think we're further along in understanding the physics. Actually, the National Ignition Facility is ahead of the highest fusion gains magnetic facilities have gotten: I think they're somewhere just below one or so with JET [the Joint European Torus], while NIF is up at one and a half.
To what extent are the challenges of physics and science, and to what extent are the challenges engineering?
Well, the physics has to guide the precision you have on the laser. And I won't say we're 100 percent done in the physics, but we're far enough along to say, okay. That's one reason where I envision building an implosion facility before the pilot power plant so we can test the codes and get all the kinks out of that. Nothing's easy. You have to get the cost of the targets down. The laser, okay, we've demonstrated, for example, at NRL—
And NRL is…?
Naval Research Laboratory.
Naval Research Lab, right.
A hundred-shot operation of the KrF laser. We used spark gaps for that. We need to go to solid-state pulse power, which got up to 10 million shots, and we need to get from there to a billion shots. Some of that is just simply improving the components. It's straightforward, but you've got to put time into it. I think you need really smart people doing this, people that are creative—not too creative, but where you need to be creative, you are creative. And basically, you have to get the support to build a beam line, which is somewhere around a hundred million dollars. To build the implosion facility and pilot power plant, you're getting into the billion-shot, billion-dollar class, and you have to get those resources and be sure enough that, okay, if the investors put this money in, they're going to get a return on it.
Funding the project (10:28)
I think people who are investing in this sector, I would assume they may be more familiar with some of the other approaches, so what is the level of investor interest and what is the level of Department of Energy interest?
Well, one of the challenges is that, historically, the Department of Energy has put money into two pots: laser fusion for stockpile stewardship, and magnetic fusion for energy. That's starting to change, but they don't have a lot of money in it yet to put into laser fusion or inertial fusion energy. And one of my challenges is that the investors, even if they're aware of magnetic fusion, don't understand the challenges of that, or of laser fusion, or what's a good idea and a bad idea. Commonwealth Fusion Systems, I think, has a good technical basis. If you go to the next one down, Helion Energy, they're claiming they can burn helium-3 made from deuterium interactions, which violates textbook physics, so I'm very… I wonder about that.
Would it surprise you, at the end of the day, that there are multiple paths to a commercial fusion reactor?
Oh no. I think there are multiple paths to getting to where I get fusion burn, and maybe I make electricity. I think ultimately the real challenge for us is: Can we go reasonably fast? At 16 years, I'm considered somewhat slower than others. The ones that are saying five years I think are delusional. The ones that are saying 50 years, or say never, I don't think understand that yeah, we're pretty far along in this.
How big, or rather, how small, theoretically, could one of these reactors be? I know there's been talk about using nuclear fusion as a way to provide power for these new data centers that gobble up so much power for AI. Would this be the kind of reactor that could power a city, a big factory, a data center, all of the above?
I think you can get down, at least with our approach, to a couple hundred megawatts. However, my own vision is that you're probably better off having power stations like some of the big nuclear plants, which have multiple reactors at one place, and you'd get the advantage, for example, in our case, of just simply having one target factory and so forth. I don't think we're going to be able to compete with small modular reactors—I don't know how small those go, a hundred megawatts or so, I would guess, and we probably can't get down there—but one of my own goals is to get the size down as much as possible. I think we're talking about hundreds of megawatts.
The vision (12:52)
What's the big vision? Why are you doing this?
Why am I doing it?
Yeah, what's the vision? What drives you and where do you think this goes over the next two decades?
I may have the best route to get there. If I thought one of these other ones were going to get there, no problem… but all of us have challenges, and I think we can get there. I think from a standing start. As far as getting investment, I've just had pre-seed money, I don't have the big bucks yet. I’ve brought on people that are more experienced than me at extracting money from VCs and investors. (I was told you know a few billionaires.) Basically, for me, I need a few tens of millions to get started—like I'd say, about a hundred million to build the beamline. And then after that… actually I have a conference call on Friday with a representative of the investment bank industry that is very dubious about fusion.
I mean, you can understand the skepticism, as a technology. What do they say? “It's the future of energy and always will be.”
But the really good thing, I think, about the private investment is that the public investment has been too much focused on big machines which will give you physics but have pretty much zero chance of being a direct path to fusion energy. You know, if it's $25 billion and I make 500 megawatts thermal, occasionally, and we show that to a power plant executive, they're going to say, “You're kidding me.” We hope to get the cost of the power plants down into the few-billion-dollar range.
This is a public episode. If you’d like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe
☢ My chat (+transcript) with Spencer Weart on the history of nuclear energy fear
jeudi 28 mars 2024 • Duration 31:41
In a world facing climate change and clean energy challenges, it’s starting to look like a nuclear energy renaissance may finally be underway. That is, if we can overcome our irrational fear of nuclear. In this episode of Faster, Please! — The Podcast, I talk with Dr. Spencer Weart about the cultural influences that shaped generations of anxiety around nuclear power, and how that tide may be turning.
Weart holds advanced degrees in both Astrophysics and History. For over three decades, he served as Director of the Center for History of Physics at the American Institute of Physics. He is the author of two children’s science books and has written or co-edited seven other books. Among his most recent is The Rise of Nuclear Fear, published in 2012.
In This Episode
* A history of radiation (1:05)
* The rise of nuclear fear (7:01)
* Anti-bomb to anti-nuclear (11:52)
* Today’s anti-nuclear voices (20:21)
* Changing generational attitudes (24:01)
* Nuclear fear in today’s media (28:58)
Below is a lightly edited transcript of our conversation
A history of radiation (1:05)
Pethokoukis: To what extent, when radiation was discovered at the turn of the century—and then, of course, the discovery of nuclear fission—to what extent were we already as a society primed by our cultural history to worry about radiation and nuclear power?
Weart: Totally. Because you say radiation was discovered, presumably you're referring first to the discovery of X-rays and then, shortly after that, the discovery of what they called “atomic radiation,” we now call it “nuclear radiation.” But, of course, before that, there was the very exciting discovery of infrared radiation. And before that, people have always known about radiation: the rays, the heat from the sun; and they've always had a very powerful cultural significance. You think of the halos of rays of light going out from holy figures in Buddhism and Christian iconography, or you think of the ancient Egyptians with the life-giving rays of the sun bestowing life on things because actually, of course, radiation of the sun is life-giving, it does contain a vital life force. So it's not a mistake to think of radiation as some kind of super magical, powerful thing.
And then of course there's also death rays. Death rays actually became very popular in literature after the discovery of X-rays, because X-rays could, in fact, cause great damage to people, and then so could atomic rays. So, already by the early 20th century, there were lots of kids' books and exciting adventure fiction featuring death rays. But you go back before that, there's the evil eye: rays radiating out from the evil eye could cause harm. Then there's astrology: the rays from the stars could influence human destiny. So as soon as you mention radiation, there's an enormous complex of things that comes out, which was very easily linked to atomic radiation because of all the other characteristics of atomic discoveries.
And yet, certainly in the first half or first third of the 20th century, there was, people saw radiation as having great promise, even to create a Golden Age. Tell me a bit about that.
It came out as soon as radiation was discovered. Whenever there's a new physics discovery, almost the first thing that people think about is medical applications. And that happened with electricity and with X-rays—of course, X-rays do have great medical applications—and nuclear radiation (I'll call it “nuclear,” even though they called it “atomic” back then). The radon and radium and so forth that Curie discovered did turn out to be useful for curing certain types of skin cancers and so forth.
But people went much beyond that because there was all this magical stuff associated with it. We have to remember that very early on it was discovered that nuclear radiation is the product of the transmutation of elements: uranium and radium and so forth and even other elements.
Like alchemy.
Yeah, transmutation was alchemy. It was immediately recognized that, oh, the nuclear physicists were the new alchemists, and they were happy to talk of themselves as that. But of course, as soon as you have something powerful, as I said, the first thing you think about when you have a new discovery is medicine. The second thing you think about 10 seconds later is weapons, so nuclear death rays were imagined very early. The first atomic bomb actually was sort of a device carried by a terrorist in a 1901 novel. And then in 1915, H.G. Wells conceived of the idea of atomic warfare in which civilization is destroyed, but then, of course, after humans destroy civilization, they rise again in atomic-powered cars and a utopia powered by nuclear energy. So all these things were there together, the good side and the bad side. And remember, this is the 1930s; this is before nuclear fission was discovered. This was entirely science fiction.
Would you call that a period of general sort of pro-progress science and technology enthusiasm?
Well, it was, except… this was certainly the case in the 1900s. People thought that radium could cure all ills. Nuclear energy was seen as the elixir of life, harking back to the old alchemists and so forth. There were all these wonderful things it could do, but by the time you got to the First World War and the Great Depression, people were a little less happy about technology. So in addition to the wonders of atomic power plants and so forth, there were also things like… my favorite is a movie in which Boris Karloff doesn't play the mad scientist’s monster, he plays the mad scientist who discovers a new kind of radium ray, and of course he means to use it for good, and he's always using it to irradiate young women to cure them, because radiation carries not only a life force but also, if you dig down deep into the radium side, this sort of sexual thing. So these 1930s science-fiction images of mad scientists irradiating young women have a definite violation aspect. In this movie, Boris Karloff gets too big a dose of radiation, goes mad, and turns into a monster who goes around glowing in the dark—maybe the origin of the glowing-in-the-dark idea—and killing people with the touch of his radioactive hand. So it was all there together, both magical good and magical evil. Very, very strongly mythologized and Freudianized. The writers at the time read their Freud, and they were happy to put in all these ideas of bad parents. And the mad scientist is the bad parent out to rape… well, I probably shouldn't go too far with this because… you have to see the pictures to really appreciate how deep this stuff goes.
Would you say that, overall, pre-Hiroshima, that the general public attitude was sort of positive about the potential of radiation and, eventually, atomic fission? Was it overall positive?
Yes, I would say it was generally positive, but with very deep roots. The positivity was mingled, when you go down deep enough, with all sorts of negative or ambiguous things: ideas of mad scientists as sort of the bad parent or the authority figure, the mean, merciless dictator, all of these things and the evil eye death ray kind of thing. They're all there sort of broiling around at a very deep level, a very deep psychological level and a very deep cultural level. And on the surface side, I would say it was generally positive and the overall idea was positive.
The rise of nuclear fear (7:01)
So if those things were sort of bubbling around, was it the atomic bombings of Japan that brought that stuff to a boil? Was that the key moment, or did that happen afterward? Was that the key inflection point?
It came afterwards. When Hiroshima happened, all the commentators from President Truman on down, the feeling was, “Oh, oh, it's actually real!” All the stuff that we thought was things that teenage boys read in their pulp fiction or in horror movies, all this stuff is actually real, so that was a shock.
And so it went two ways. One of course was the actual image of Hiroshima. And then when atomic bombs started to proliferate, when the Russians got the atomic bombs and we worried about them bombing our homes, then all this stuff that was sort of underground and seemed mythological—atomic war and the end of the world, and so forth—all became a scientific reality.
But at the same time, the other side also was coming out very strongly, and this was partly done deliberately. The government—well, the American government, the British government, the French government, the Soviet government—all got very worried about how upset their publics were and how frightened they were by atomic bombs. So they made a very strong effort to promote what they called “Atoms for Peace”: nuclear reactors, nuclear-powered ships, nuclear-powered everything. We use radiation; radiation has a life force, right? So we'll irradiate seeds and we'll get these new kinds of petunias and better crops.
Both of these things came out, and there was a strong mixture of positivity and negativity, mostly connected with nuclear war originally. It originally was connected with atomic explosions. And then this phase ended. This sort of 1950s Atoms for Peace thing ended with the hydrogen bomb, and all of a sudden there was a very big shift.
Is that just because it was obviously a much more powerful explosive, or was it the Bravo incident, which you write about in the book?
Yeah. There are two things going on here. In the first place, the hydrogen bomb is a thousand times more powerful than an atomic bomb. So this whole business of “duck and cover,” which, I was born in 1942, I did the “duck and cover” in school and so forth, that made sense with an atomic bomb. Okay, the atomic bomb goes off in New York City, I'm in the suburbs, I duck under the desk. With a hydrogen bomb, you're inside the fireball. The whole idea of hiding from it is useless. So there's that one overwhelming thing. And the second thing with hydrogen bombs is that, besides burning a city, they produce an enormous amount of fallout. Now, the fallout from the Hiroshima bomb actually didn't do much damage, and the atomic bomb tests that people conducted in Nevada actually did do damage, but people didn't know it at the time because the atomic authorities were kind of hiding it. The Atomic Energy Commission had what everybody at the time called a “father-knows-best attitude,” which later turned out to be the bad father, the dangerous father.
But with the hydrogen bombs coming along, you couldn't hide the fallout. It was just enormous. If you were a thousand miles away, you had to take shelter from the fallout. And so there was a big rush for a couple of years to build fallout shelters. And then people realized, “No, what's the use of staying in a fallout shelter for two weeks, and then when you come out, what are you going to get?” It was at this point that the positivity got just overwhelmed. Particularly the positivity about radiation got overwhelmed.
Radiation can be useful. Radiation is very medically useful. In fact, medical radiation and the use of radioactive isotopes and nuclear rays save, I don’t know, millions of lives a year. In a single year they save far more lives than have ever died from nuclear radiation. But people then were sort of overwhelmed by the idea of nuclear war and of nuclear fallout, and this had a very strong political component.
Anti-bomb to anti-nuclear (11:52)
So tell me about the political component and then tell me how people sort of went from fearing radiation from nuclear war to fearing a nuclear reactor, which is not a bomb.
After the hydrogen bomb, an anti-bomb movement appeared, and it began in Japan, and it began in an interesting way. The first hydrogen bomb test contaminated some fishermen who were nearby and made them very sick, and a group of Marshall Islanders, Pacific Islanders, and made them very sick, and it caused some deaths, and the commission didn't want to admit it. But the fallout also came down in the Pacific, into all the tuna in the Pacific, and the Japanese got very upset. Tuna is to the Japanese what hamburger is to Americans. Okay, it's a sacred thing. And the idea that you could hold a Geiger counter to it and there might be radioactivity in it was very frightening. And, of course, the Japanese had a natural worry about atomic warfare in the first place, so a movement began against fallout from nuclear weapons. It was against the testing of nuclear weapons.
What they really didn't want—and this was true as the movement spread entirely around the world—what they mainly didn't want was to be bombed. That was the actual aim of the anti-nuclear movement, which ended up mobilizing millions and millions of people coming out into the streets, a very major movement, which had a very strong effect on politics, even in the Soviet Union. So what the leaders of the movement decided is that they were going to focus on the fallout from bomb tests. The idea was to stop the bomb tests as a way of slowing down the nuclear arms race: if we could stop the tests, at least they won't be making more bombs. That's the first part, because it was a backyard issue. We can tell people the fallout is going into their backyard. My favorite is a kid who says, “Oh, my mother says you shouldn't eat snow because there might be a piece of the bomb in it.” Okay, that's what radioactive material is now: it's a piece of the bomb. And so it was very powerful. It's in mother's milk, it's in your children's teeth. So it was a very powerful thing.
And in order to do this, however, there was a certain little scientific difficulty, which is that the radioactivity in fallout, by the time it's thousands of miles away, is extremely low. Now, we do not know the effects of extremely low radiation. If you give a dose of one unit to one person, that person will die. If you give a dose of one millionth of a unit to one million people, will one person die? Well, that can be argued.
And, in fact, the scientific evidence suggests that when you get to very, very low levels, that is, to the levels that are normally in the environment, the levels that you get when you take one flight in an airplane, or you go to some places in China where there's natural radioactivity, or if you live inside a brick house, these very low levels of radiation don't seem to be especially harmful. Life evolved for billions of years in the presence of low levels of radioactivity. So there's a scientific argument about this, and there's still scientific uncertainty, but the scientists, feeling very bad about atomic weapons, decided, “We will say that, scientifically, very low levels of radioactivity experienced over millions of people are a bad thing.” And that's been the sort of official view of the anti-nuclear, anti-bomb scientists to this day. And so that became established. That was the point at which radiation, which, as I said, is something we've lived with for billions of years, was established—this force of nature was established as just definitely an evil thing. It's a piece of the bomb. We don't want to have anything to do with it.
And if it's an evil thing, then whether that radiation is generated for military use, or peaceful use, it's a bad thing, and there's just inherent risk. We cannot control this demon.
It’s the mad scientist’s monster, it's the evil eye, it's the death ray. And, again, there's politics here, because after the Cuban Missile Crisis and the tremendous excitement, Kennedy and Khrushchev said, “We have to do something, our populaces are terrified now. This is very bad for us as leaders of our countries, to have our populaces terrified of the things that we as leaders are doing.” Well, it's very simple: we put the bomb tests underground. Go on testing the bombs, don't stop the arms race, put the bomb tests underground, so there's no fallout. And the whole anti-nuclear movement just collapsed. They'd made this their issue. They'd made it a good backyard issue. They said, “We're going to stop fallout.” They did stop fallout. So the thing went away. So what happened to these people? Well, meanwhile, Atoms for Peace was progressing.
Nuclear reactors were beginning to come online, and some of the people who had been anti-atomic-bomb began to worry about low-level radiation from reactors. It's the same issue. With a reactor it's a tiny, tiny amount of radioactivity, but that's over millions of people, and we've already decided that is a bad thing. And so an anti-nuclear-reactor movement sprang up, and it made, to a very substantial extent, the same arguments about low-level radioactivity, and it was the same organizations and the same individuals, in many cases, who’d been agitating against atomic war. I would argue that this may be a case of the psychological mechanism known as “displacement.” You can't deal with something, nuclear war, so you deny it. We're just going to go into denial about the bombs being there. We're not going to think about it, which is still the case, by the way. We’re still largely in denial of the fact that the president of the United States and the president of Russia, by their sole power, can press a button, so to speak, and launch nuclear war. Each of these two men—well, I guess it’s also true of Xi now, he seems to be pretty much in power in China—there are three people now who could launch a nuclear war on their own say-so, launching hundreds and hundreds of missiles and essentially destroying civilization. We're all in denial about that, and people have been in denial about that since about 1965.
But if you're locked in a room with a guy with a flamethrower and somebody lights a match, you're going to get upset. And that seems to be what happened with the anti-nuclear-reactor movement. And that's now become embedded: the Green Party in Germany, for example, began as an anti-bomb party and converted to an anti-reactor party. What they actually are, if you get down to it, is opposed to any additional low level of radiation. When radiation is at a certain level, we don't want to add one percent to it, any place on earth, from any reactor. That's become their DNA; it’s in their DNA. So the Green Party in Germany can't escape from its original orientation because of the same anti-bomb…
So we see this transfer from nuclear weapons to nuclear reactors, with radiation as sort of the common… But then, in the ’70s, the anti-reactor position also seemed to get mixed up with a broader anti-modernity, anti-industrial-society sort of attitude.
Right, but actually this began more in Europe, and the Europeans were very big on this, the whole 1960s thing, and really it's a 1960s phenomenon—the Baby Boomers, the 1968 generation, perhaps—that don't like nuclear. There is a feature of nuclear reactors, and this is an inherent feature of nuclear reactors: you need a lot of capital. If you're in a socialist country like the Soviet Union, you still need a lot of capital; it's just going to be under some big organization. In fact, the government always has to be involved, especially when people are worried about the safety of it, and you're going to need government regulations, so you're going to have a big government, you're going to have big corporations, and, because nuclear weapons are involved, you're also going to have secrecy. So, no matter what, you're dealing with these sort of secret, paternalistic authorities, and the kids of 1968 hated the whole idea of paternalistic authorities, with their immense powers, and secrets, and God knows what they're up to with their machinations.
Whereas, the original idea was, “Well, solar power is dispersed.” Okay, anybody can put up a solar panel, so that's very communitarian. So that became a very important part of the politics of it. Less so now, I would say.
Today’s anti-nuclear voices (20:21)
Let me ask you about the politics of now, because I understand all that, and obviously Three Mile Island was perhaps the capstone event. But today, maybe attitudes toward nuclear are changing: there's talk of a nuclear renaissance, and in Europe—though not Germany—there's a lot of talk about building new reactors and keeping reactors open. In what ways is the anti-nuclear sentiment today different? Is it more about cost, or nuclear waste? It's not necessarily a fear of “bigness”; we seem to generally like technology in this country.
That was the thing of the ’60s. That's not the thing now. In the United States and Western Europe, cost is a big feature, because we can't seem to be able to build these things on time and on budget, but then we can't build a subway or a highway or a railroad on time and on budget, either. So we're not very good at these big projects these days, and that is a problem for nuclear reactors. So the hope is to build smaller nuclear reactors so we don't run into this giant-project syndrome that the United States and Western Europe seem to have problems with. But there's a lot of other things going on here.
Certainly the younger generation doesn't have the same feelings that the older generation did. Nuclear energy for the young folks, it's The Simpsons. It's a postmodern thing. The three-eyed fish is not a scary thing; it's kind of a postmodern reference to the stuff that your parents were afraid of. So it's all ironic. The game Fallout, which is enormously important, made a billion dollars in sales (well, Fallout 4, I think, made a billion dollars in sales in the first 24 hours after it was released), and these are big cultural phenomena. So it's the post-apocalyptic wasteland, but it's a reference to the scary post-apocalyptic wasteland. Like I say, we're in denial about the actual thing. Radioactive mutant monsters? Of course there are radioactive monsters. When I give this lecture, I show a picture of one; he's wearing shades, he's kind of cool. It's all ironic and distancing, and so on. The younger generation doesn't have that thing, but they have sort of an automatic response, which has just been built into the culture, an automatic response that, “Oh, there's something bad about radiation. I'm not actually viscerally afraid of it the way my parents were, but I just automatically think it's bad.” And I'll give you an important example, okay, a life-and-death example.
After the Fukushima accident, when the tsunami overcame the plant—the Japanese had done a very bad job there—the Japanese evacuated a lot of people from around it. Two thousand people died in the immediate evacuation, mostly older people who were yanked out of their homes or retirement homes or hospitals and so forth. Since then, a lot of the people have not been allowed to go back. They've been displaced. There's a lot of morbidity and mortality among these people whose communities have been disrupted. This was totally unnecessary. If they had just left everybody in place and maybe handed out some iodine pills, nobody would have died. The kind of reactions that people have to these things… But that's not the worst mortality from Fukushima. The worst mortality from Fukushima is that the Japanese and the Germans shut down their nuclear power plants and burned coal instead, and the death rate—the deaths from burning the coal instead of using the nuclear reactors—is now estimated at about 400,000 people. 400,000 people died from—oh, sorry, I’m off by an order of magnitude: 40,000 people. Anyway, many tens of thousands of people have died from coal smoke who didn't have to die if people hadn't panicked.
Changing generational attitudes (24:01)
It is significant. It's research I mentioned in my book, and I've actually had some of the economists who've done that research on this podcast. And it's a lesson the Japanese seem to have learned: whether it's to have less pollution or to meet various environmental objectives, they seem to have re-embraced nuclear. Given that younger people today, younger voters, maybe don't have that sort of deeper repulsion toward radiation that their parents did, and putting the economics aside: from a public-perception standpoint, are you positive or negative about a nuclear renaissance in this country? And can any optimism survive any sort of nuclear accident, almost no matter how small?
It's going to be difficult because, like I say, the reaction to Fukushima, the government reaction and the media reaction, shows that there's still an enormous amount of this stuff going on, both in the older people and also, just by habit, by automatic response, in the younger people. What's the worst power accident that's happened recently? Most people wouldn't realize it was the breaking of hydroelectric dams in Libya, which killed tens of thousands of people. Over 10,000 people died when a hydroelectric dam broke. Hydroelectric dams, that's renewable, that's supposed to be great stuff, right? Nobody talks about that. No nuclear reactor has ever killed 10,000 people, or a thousand people, or a hundred people, even. And this isn't the first time a hydroelectric dam has broken and killed 10,000 people, either. It seems to happen every 20 or 30 years, but people aren't afraid.
So yes, it's very serious. Nevertheless, there is another thing which is becoming very prominent in many people's minds, and which has, in fact, led quite a substantial number of environmentalists who were originally opposed to nuclear reactors to say, “We must have nuclear reactors,” and you know what this is: climate change. This, as you may know, is the other thing I've spent 25 years of my life on. And so I'm now just going to give you a very brief little homily.
Under the current agreement, the Paris Agreement as extended, if all the countries keep their pledges (that's a big “if”; some countries may do better than their pledges), the estimate from the IPCC is that the world will warm to 2.7 degrees Celsius above pre-industrial levels. We're now at about 1.4, so that's about twice as far as we are now. In a world at 2.7 degrees Celsius, it will be rather difficult to maintain a prosperous and liberal civilization. Right now maybe a third of the world lives in a prosperous, fairly liberal, free society. We would like that to be a hundred percent by the year 2100, but if we get up to 2.7 degrees C, which is the trajectory we're on now, then it's going to be extremely difficult to maintain that even for the third of the people who have it now.
But there's another feature which the climate people mostly don't like to talk about. You actually have to read the footnotes in the IPCC report to get this; you have to look at the graphs and get the numbers off them. When people say 2.7 degrees C, that's what the IPCC says is the most likely outcome, but there are large error bars on that. It could be 4.5 degrees Celsius. What's the probability of going above 4.5, if you read it off the graphs? Five percent. And I have quotes from two separate, very senior climate scientists saying, “Well, you wouldn't get on an airplane if it had a five percent chance of crashing.” This is why people are fighting to keep it below two degrees. Once we get above two degrees, the probability of the airplane crashing becomes fairly high.
Is the consensus middle path, or are these more extreme possibilities, scary enough that environmental groups, which are still anti-nuclear, will change, and there'll be a broader environmental pro-nuclear shift?
It definitely has made a difference to some prominent individuals. I'm not going to name names, but there are quite a substantial number of people, and increasing numbers of people, who are… The scientists are terrified. The climate scientists have a hard time sleeping at night; they worry about their kids. I've had some experience here, because I spent 25 years studying nuclear war and all that stuff, so I guess I have a little thicker skin when I think about the climate, but it's even scarier than nuclear war, simple fact of the matter, because nuclear war was a question of whether we could avoid it. Climate change is something we're on track for now. That's where we're actually heading.
Nuclear fear in today’s media (28:58)
Let me finish up with this question, since you talk so much in the book about culture and the images that we feed ourselves. I can think of two relevant bits of media over the past few years, and I was wondering if you've seen either and if you had any general thoughts. One was the fine Chernobyl miniseries on HBO, a five-part series on Chernobyl. And the film Oppenheimer. Have you seen either, and can you maybe give some context on how you look at those?
I'm not going to comment on Oppenheimer, that's very complicated. Chernobyl, they did a wonderful job of reproducing the Soviet thing. Everybody was smoking all the time. I was in the Soviet Union, you know? I'll just give you one example. They showed a helicopter going over and they showed it crashing. And the implication there is, “Oh, somehow magical radiation from the reactor crashed the helicopter.” Well, there actually was a helicopter crash, and it crashed because it ran into a crane. So that's just dishonest. That's just dishonest. And unfortunately, this is the way that the media is still, to a substantial extent, treating radiation.
There came a point in that miniseries, which, overall, I thought was excellent, when you finally found out what the actual death toll was, and I think many viewers were surprised, because if you watched every one of those episodes, where they were talking about just how dangerous this meltdown was and the potential deaths if the reactors exploded, you would've thought that many, many tens of thousands or a hundred thousand people had died—and they didn't! It was almost anticlimactic to find out how few people actually died. And if this was the first you had ever heard of Chernobyl, I think it was probably fairly surprising.
People die all the time in coal mine accidents. I have no idea what the death toll is. It's terrible. But coal is familiar, okay, as one of the people said in 1946 when they were talking about reactors, “Well, it wasn't 10,000 tons of coal they dropped on Hiroshima.” We have these associations with nuclear things that we just don't have with traditional things. And the associations, as we've discussed, go very far back into death rays, mad scientists, bad fathers, sexual implications of things, all kinds of magical and mysterious things that get associated with nuclear energy that they've never been associated with the more traditional forms of energy production.
🌐 My chat (+transcript) with John Bailey on the potential for AI in education
jeudi 7 mars 2024 • Duration 22:10
Education was among the first victims of AI panic. Concerns over cheating quickly made the news. But AI optimists like John Bailey are taking a whole different approach. Today on Faster, Please! — The Podcast, I talk with Bailey about what it would mean to raise kids with a personalized AI coach — one that could elevate the efficacy of teachers, tutors, and career advisors to new heights.
John Bailey is a colleague and senior fellow at AEI. He formerly served as special assistant to the president for domestic policy at the White House, as well as deputy policy director to the US secretary of commerce. He has additionally acted as the Director of Educational Technology for the Pennsylvania Department of Education, and subsequently as Director of Educational Technology for the US Department of Education.
In This Episode
* An opportunity for educators (1:27)
* Does AI mean fewer teachers, or better teachers? (5:59)
* A solution to COVID learning loss (9:31)
* The personalized educational assistant (12:31)
* The issue of cheating (17:49)
* Adoption by teachers (21:02)
Below is a lightly edited transcript of our conversation
An opportunity for educators (1:27)
Pethokoukis: John, welcome to the podcast.
Bailey: Oh my gosh, it's so great to be with you.
We’d actually chatted last summer a bit on a panel about AI and education, and this is a fast moving, evolving technology. People are constantly thinking of new things to do with it. They're gauging its strengths and weaknesses. As you're thinking about any downsides of AI in education, has that changed since last summer? Are you more or less enthusiastic? How would you gauge your evolving views?
I think I grow more excited and enthusiastic by the day, and I say that with a little humility, because I do think the education space, especially for the last 20 years or so, has been riddled with a lot of promises around personalized learning and how technology was going to change or revolutionize education and teaching and learning, and it rarely did. It over-promised and under-delivered. This, though, feels like it might be one of the first times we're underestimating some of the AI capabilities, and I think I'm excited for a couple of different reasons.
I just see, as it develops, its potential to provide tutoring and just-in-time professional development for teachers, and to be an assistant that makes teaching more joyful again and removes some of the drudgery. I think that's an untapped area, and it seems to be coming alive more and more every day. But then, also, I'm very excited about some of the ways these new tools are analyzing data. You just think about school leaders, you think about principals and superintendents, and state policymakers, and the ability to just have conversations with data, not running pivot tables or Excel formulas and looking for patterns, and helping to understand trends. I think the bar for that has just been dramatically lowered, and that's great. That's great for decision-making, and it's great for having a more informed conversation.
You're right. You talked about the promise of technology, and I know that when my kids were in high school, for certain classes that were supposedly more tech-adept, they would bring out a cart with iPads. And I think as parents we were supposed to be like, “Wow, every kid's going to have an iPad, that's going to be absolutely amazing!” And I'm not sure if that made the teachers more productive; I'm not sure, in the end, if the kids learned any better.
This technology, as you just said, could be different. And the one area I want to focus on first is this: it would be awesome if we had a top-10-percent teacher in every classroom. And I know that at least some of the early studies, not education studies, but studies of using generative AI in, perhaps, customer service, noticed an effect of kind of raising the lower-performing group and having them do better. And so I immediately think about the ability to raise… boy, if we could just have the lowest-performing teachers do as well as the middle-performing teachers, that would seem to be an amazing improvement.
I totally agree with you. Yeah, I think that was the BCG study that found that when consultants used gen AI—I think, in that case, it was ChatGPT—everyone improved, but the folks that had the most dramatic improvement were the lowest performers in the consulting world. And here you could imagine something very similar for teachers that are teaching out of field—that happens a lot in science and mathematics—and with new teachers, and the ability to help them perform better… also the ability, I think, to combine what they know with what science and research say is best practice. That's been very difficult.
One of the examples I give is that the Department of Ed has these guides called the What Works Clearinghouse Practice Guides, and this is what the research, studies, and evaluations have to say: “This is the best way of teaching math, or the best way of teaching reading.” But these are dense documents; they're like 137 PDF pages. If you're asking a new teacher teaching out of field to read 137 pages of a PDF and apply it to their lesson that day, that's incredibly difficult. But it can happen in a matter of seconds now with an AI assistant that can read that practice guide, read your lesson, and make sure that you're getting just-in-time professional development, you're getting an assistant with your worksheets, with your class activities, and everything. And so I totally agree with you. I think this is a way of helping to make sure that teachers are able to perform better, and to really be an assistant to teachers no matter where they are in terms of their skill level.
Does AI mean fewer teachers, or better teachers? (5:59)
I recall a story, and I forget which sort of tech CEO was talking to a bunch of teachers, and he said, “The good news: in the future, all teachers will make a million dollars a year… bad news is we're only going to need like 10 percent of you” because each teacher would be so empowered by—this was pre-AI—by technology that they would just be so much more productive.
The future you're talking about isn't necessarily a future of fewer teachers, it's just sort of the good part of it, which is more productive teachers, and any field where there's a huge human element is always tough to make more productive. Is the future you're talking about just… it's not necessarily fewer teachers, it's just more productive teachers?
I think that's exactly right. I don't think this is about technology replacing teachers; I think it's about complementing them. We see numerous studies that ask teachers how they spend their time, and, on average, teachers are spending less than half of their time on instruction. A lot of it is on planning; a lot of it is on paperwork. I mean, even if we had AI that could just take away some of that drudgery and free up teachers' time, so they could be more thoughtful about their planning or spend more time with students, that would be a gift.
But also, I think the best analog for this is the healthcare space. If you think of teachers as doctors: doctors are your most precious commodity in a healthcare system, and you want to maximize their time. And what you're seeing now, especially because of technology and because of some tools, is that you can push a lot of decisions down to a more subclinical level. Initially that was with nurses and nurse practitioners, which could free up doctors' time. Now you're seeing a whole new category, too, where AI can help provide some initial feedback or responses, and then if you need more help and assistance, you go up to that nurse practitioner, and if you need more help and assistance, then you go and you get the doctor. And I bet we're going to see a bunch of subclinical tools and assistants come out in education, too. In some cases it's going to be an AI tutor, but then some kids are going to need a human tutor. That's great. And in some cases they're going to need more time with their teacher, and that's great, too. I think this is about maximizing time and giving kids exactly what they need when they need it.
This just sort of popped in my head when you mentioned the medical example. Might we see a future where you have a real job with a career path called “teacher assistant,” where you might have a teacher in charge, like a doctor, of, maybe, multiple classes, and you have sort of an AI-empowered teaching assistant as sort of a new middle-worker, much like a nurse or a physician's assistant?
I think you could. I mean, already we're seeing teacher assistants, especially in higher education, but I think we're going to see more of those in K-12. We have some K-12 systems that have master teachers, and then teachers that are a little less skilled, or newer, that are learning on the job. And you have paraprofessionals, folks that don't necessarily have a certification, that are helping. This can make a paraprofessional much more effective. We see this in tutoring: not every single tutor is a licensed teacher, but how do you make sure a tutor is getting just-in-time help and support to make them even more effective?
So I agree with you, I think we're going to see a whole category of new professions emerge here. All in service, by the way, of student learning, but also of trying to really support that teacher who has gone through licensure, has years of experience, and has gone through some higher education as well. So I think it's complementary; I don't think it's replacing teachers.
A solution to COVID learning loss (9:31)
You know, we're talking about tutoring, and the thing that popped into my head was this: With the pandemic, schools being hybrid or shut down, kids having to learn online, maybe without great internet connections, there's this learning-loss issue, which seems to be reflected in various national testing. People are wondering, "Well, great, maybe we could just catch these kids up through tutoring." Of course, we don't have a nationwide tutoring plan to make up for that learning loss, and I'm wondering: Have people talked about this as a solution, to try to catch up all these kids who fell behind?
I know you and I, I think, share a similar philosophy here. In DC right now, so much of the philosophy around AI is doomerism. It's that this is a thing to contain and to minimize the harms of, instead of focusing on how we maximize the benefits. And if there's ever been a time when we need federal and state policymakers to call on these AI titans to help tackle a national crisis, the learning crisis coming out of the pandemic is definitely one of those. And I think there's a way to do tutoring differently here than we have in the past. In the past, a lot of tech-based tutoring was rule-based: you would ask a question, and the system, like Siri, would give a pre-programmed answer in return. It was not very warm. And what we're finding is, first of all, there have been two studies, one published in JAMA, another with Microsoft and Google, that found that in the healthcare space, not only could these AI systems be technically accurate, but their answers, when compared to human doctors', were rated as more empathetic. And I think that's amazing to think about: when empathy becomes something you can program and maximize, what does it mean to have an empathetic tutor that's available for every kid and can encourage them?
And for me, I think the thing that I realized that this is fundamentally different was about a year ago. I wanted to just see: Could ChatGPT create an adaptive tutor? And the prompt was just so simple. You just tell it, “I want you to be an adaptive tutor. I want you to teach a student in any subject at any grade, in any language, and I want you to take that lesson and connect it to any interest a student has, and then I want you to give a short quiz. If they get it right, move on. If they get it wrong, just explain it using simpler language.” That literally is the prompt. If you type in, “John. Sixth grade. Fractions. Star Wars,” every example is based on Star Wars. If you say, “Taylor Swift,” every example is on Taylor Swift. If you say, “football,” every example is on football.
There's no product in the market right now, and no human tutor, that can take every lesson and connect it to whatever interest a student has, and that is amazing for engagement. And it also helps take these abstract concepts that so often trip up kids and it connects it to something they're interested in, so you increase engagement, you increase understanding, and that's all with just three paragraphs of human language. And if that's what I can do, I'd love to sort of see our policymakers challenge these AI companies to help build something that's better to help tackle the learning loss.
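As a concrete illustration, the three-part prompt described in this exchange can be sketched as a simple template function. This is a hedged reconstruction, not the guest's exact prompt; the function name and wording are hypothetical, paraphrasing the structure he describes: role and scope, connect every lesson to an interest, then a quiz loop that advances or simplifies.

```python
def adaptive_tutor_prompt(student: str, grade: str, subject: str, interest: str) -> str:
    """Build a three-paragraph adaptive-tutor prompt of the kind described above.

    Illustrative wording only: (1) role and adaptation instructions,
    (2) connect every example to the student's interest,
    (3) quiz loop that moves on or re-explains in simpler language.
    """
    return (
        f"You are an adaptive tutor. Teach {student}, a {grade} student, "
        f"the subject of {subject}, in any language the student prefers.\n\n"
        f"Connect every lesson and every example to {interest}, since that "
        f"is what {student} is interested in.\n\n"
        "After each lesson, give a short quiz. If the student gets it "
        "right, move on. If they get it wrong, explain the same concept "
        "again using simpler language."
    )

# "John. Sixth grade. Fractions. Star Wars."
print(adaptive_tutor_prompt("John", "sixth-grade", "fractions", "Star Wars"))
```

Swapping in "Taylor Swift" or "football" changes the framing of every example, which is the engagement effect described above.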
The personalized educational assistant (12:31)
And that's three paragraphs that you asked of an AI tutor, where that AI is as bad as it's ever going to be. Oftentimes, when people talk about the promise of AI in education, they'll say, "In the future," which may be in six months, "kids will have AI companions from a young age with which they will be interacting." So by the time they get to school, they will have a companion who knows them very well, knows their interests, knows how they learn, all these things. Is that kind of information something you can see schools using at some point to teach kids better on a more individualized basis? Has there been any thought about that? Because right now, a kid gets to school and all the teacher knows is maybe how the kid did in kindergarten or preschool, and their age and their face. But now, theoretically, you could have a tremendous amount of information about that kid's strengths and weaknesses.
Oh my gosh, yeah, I think you're right. Some of this we've talked about as the future. That was a prompt I constructed, I think for GPT-4, last March, which feels like eons ago in AI time. And I think you're right: once these AI systems have memory and can learn more about someone, in this case a student, that's amazing. Just think that there could be an AI assistant that literally grows up with the child and learns about their interests and what they're struggling with or thriving at in class. It can be encouraging when it needs to be encouraging, it can explain something when the child needs something explained, it could do a deeper dive in a tutoring session. Again, that sounds like science fiction, but I think that's two, three years away. I don't think that's too far off.
Speaking of science fiction, because I know you're a science fiction fan: a lot of what we're describing now feels like the 1995 sci-fi novel The Diamond Age. It talked about Nell, a young girl who came into possession of a highly advanced book called the Young Lady's Illustrated Primer, and it would help with tutoring and with social codes and with a lot of different support and encouragement. At the time, when Neal Stephenson wrote that in '95, it felt like science fiction, and it really feels like we've come to the moment now. You have tablet computers, you have phones that can access these super-intelligent AI systems that are empathetic, and if we could get them to be slightly more technically accurate and grounded in science and practice and rigorous research, I don't know, that feels really powerful. It feels like something we should be leaning into more than leaning away from.
John, that reference made this podcast an early candidate for Top Podcasts of 2024. Wonderful. That was really playing to your host. Again, as you're saying that, it occurs to me that one area where this could be super helpful is career advice, when kids are wondering, "What should I do? Should I go to college?" And boy, to have a career counselor's advice supplemented by a lifetime of an AI interacting with this kid… Counselors will always say, "Well, I'm sure your parents know you better than I do." Well, I'll tell you, a career counselor plus a lifetime AI, you may know that kid pretty well.
Let's just take instruction off the table. Let's say we don't want AI to help teach kids, we don't want AI to replace teachers. AI as navigators, I think, is another untapped area, and that could be navigators as parents are trying to navigate a school-choice system or an education savings account. It could be as high school students are navigating what their post-graduation plans should be. These systems are really good at that.
I remember I played with a prompt a couple of months ago along those lines. I said, "My name is John. I play football. Here's my GPA. I want to go to school in Colorado, and here's my SAT score. What college might work well for me?" And it did an amazing job, even with that rudimentary prompt, of giving me a couple of different suggestions and why each might fit. And I think if we were more sophisticated there, we might be able to open up more pathways for students, or prevent them from going down some dead ends that just might not be the right path for them.
There's a medical example of this that was really powerfully illustrative for me. I had a friend who, quite sadly, a couple of months ago was diagnosed with breast cancer. And this is an unfolding diagnosis: you get the initial diagnosis, then there are scans and biopsies and reports, and then second and third and fourth opinions. It's very confusing. And what most patients need there isn't a doctor, it's a navigator. They need someone who can just make sense of the reports, who can explain the techno-Latin of medical jargon, and who can just say: What are the next questions I need to ask as I find my path on this journey?
And so I built her a GPT that had her reports, and all she could do was ask it questions. The first question she asked was, "Summarize my doctor notes; identify where they agree and where they disagree." Then, the way I constructed the prompt, after every response it would give her three questions to ask the doctor. And all of a sudden she felt empowered in a situation where she had felt very disempowered, navigating a very complex, and in that case life-threatening, journey. Here, why can't we use that to take all of a student's work, their assessments, their hobbies, and start helping them feel empowered in figuring out where they should pursue a job or college or some other post-secondary pathway?
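The navigator GPT described here follows a simple pattern: ground the assistant in the patient's own documents and require three follow-up questions after every answer. A minimal sketch of that kind of system instruction, with hypothetical names and wording, assuming the structure described in the conversation:

```python
def navigator_system_prompt(report_texts: list[str]) -> str:
    """Build a patient-navigator system instruction of the kind described.

    The structure comes from the conversation (summarize, compare, always
    end with three questions); the exact wording here is illustrative.
    """
    joined = "\n\n---\n\n".join(report_texts)
    return (
        "You are a patient navigator. Answer questions using only the "
        "reports below, and explain any medical jargon in plain language. "
        "When asked, summarize the doctors' notes and identify where they "
        "agree and where they disagree.\n\n"
        "After every response, list exactly three follow-up questions the "
        "patient should ask their doctor next.\n\n"
        f"PATIENT REPORTS:\n{joined}"
    )

# The same pattern would work for a student navigator grounded in
# coursework, assessments, and hobbies instead of medical reports.
prompt = navigator_system_prompt(["Report A: findings...", "Report B: findings..."])
```

The design choice worth noting is that the "three questions" rule lives in the system instruction, so the empowerment behavior persists across every exchange rather than depending on the user remembering to ask.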
The issue of cheating (17:49)
You know, I have a big family, a lot of kids, and I've certainly had conversations with, say, my daughters about careers, and I'll get something like, "Ugh, you just don't understand." And I'll say, "Well, help me, make me understand." She's like, "Oh, you just don't understand." Now I'm like, "Hey, AI, help me understand: What does she want to do? Can you give me some insight into her career?"
But we've talked about some of the upsides here, and, as we briefly mentioned, this technology immediately attracted criticism. People worried about a whole host of things, from bias in the technology to kids using it to cheat. There was this initial wave of concerns. Now that we're maybe 15 months or so past when people became aware of this technology, which of the concerns do you find to be the persistent ones, the ones you think a lot about? Are you as worried, perhaps, about kids cheating, having an AI write the paper for them, which was an early concern? What are the concerns that have stuck with you, that you feel really need to be addressed?
The issue of cheating is present with every new technology, and this was true when the internet came out, it was true when Wikipedia came out, it was true when the iPhone came out. You found iPhone bans: if you go back and look at the news cycle in 2009, 2010, schools were banning iPhones, and then they figured out a way to manage it. I think we're going to figure out a way to manage the cheating and the plagiarism.
I think what worries me is a couple of different things. One is, the education community talks often about bias, and usually, in this case, they're talking about racial bias in these systems. Very important to address that head on. But we also need to tackle political bias. I think we just saw that recently with Gemini: often these systems can surface a somewhat center-left perspective on different types of subjects. How do we fine-tune that so you're getting something a little more neutral? Then also, in the education setting, there's pedagogical bias: when you're asking it for a lesson plan or a tutoring session, what's the pedagogy that's actually informing the output? Those are all going to be very important, I think, to solve.
The best-case scenario is that AI gets used to free up teacher time, and teachers can spend more time exercising their judgment on their lesson plans and worksheets, and more time with kids. There's also a scenario where some teachers may fall asleep at the wheel a little bit. It's like what you're seeing with self-driving cars: you're supposed to keep your hands on the wheel and at least actively supervise, but it is so tempting to just trust it and tune out. And I can imagine there's a group of teachers that will just take the first output from these AI systems and run with it, so it's not developing more intellectual muscle, it's atrophying it a little bit.
Then lastly, I think, what I worry about with kids (this is a little bit on the horizon; this is the downside to the empathy): What happens when kids just want to keep talking to their friendly, empathetic AI companion and assistant, and do that at the expense of talking with their friends? I think we're seeing this with the crisis of loneliness in this country as kids are on their phones and on social media. This could exacerbate that a lot more unless we're very intentional now about making sure kids aren't spending all their time with their AI assistant, but are also in real life, in the real world, with their friends.
Adoption by teachers (21:02)
Will teachers be excited about this? Are there teachers groups, teachers unions who are… I am sure they've expressed concerns, but will this tool be well accepted into our classrooms?
I think that the unions have been cautiously supportive of this right now. I hear a lot of excitement from teachers because I think what teachers see is that this isn't just one more thing, this is something that is a tool that they can use in their job that provides immediate, tangible benefits. And if you're doing something that, again, removes some drudgery of some of the administrative tasks or helps you with figuring out that one worksheet that's going to resonate with that one kid, that's just powerful. And I think the more software and systems that come out that tap that and make that even more accessible for teachers, I think the more excitement there is going to be. So I'm bullish on this. I think teachers are going to find this as a help and not as a threat. I think the initial threat around plagiarism, totally understandable, but I think there's going to be a lot of other tools that make teachers' lives better.
Faster, Please! is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.
This is a public episode. If you’d like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe
🌐 My chat (+transcript) with James Walker of microreactor startup NANO Nuclear Energy
Friday, March 1, 2024 • Duration 17:10
Readers and listeners of Faster, Please! know how incredible the untapped potential of nuclear power truly is. As our society (hopefully) begins to warm to the idea of nuclear as an abundant, sustainable, and safe source of energy, a new generation of engineers and entrepreneurs is developing a whole new model of nuclear power: the microreactor.
Here on this episode of Faster, Please! — The Podcast, I talk with James Walker, a nuclear physicist and CEO of NANO Nuclear Energy, about the countless applications of his company's under-development, mobile, and easily deployable nuclear reactors.
In This Episode
* Why the microreactor? (1:14)
* The NANO design plan (7:11)
* The industry environment (11:42)
* The future of the microreactor (13:45)
Below is a lightly edited transcript of our conversation.
Why the microreactor? (1:14)
Pethokoukis : James, welcome to the podcast.
Walker: I would say the way NANO got going is probably of interest, then. When we first entered the nuclear space (my background is as a nuclear physicist and nuclear engineer), I knew that there's a very high bar to entry in nuclear, and there are a lot of well-established players in the space. But when we actually took a look at the whole landscape, most of the development was in the SMR space: Kairos, TerraPower, NuScale. And we could see what they were doing: they were aiming for a much more manufacturable reactor that could deploy a lot faster. It was going to be a lot smaller, with fewer mechanical components and a smaller operating staff to bring down costs. So that all made a lot of sense, but what I think was missing in the market (and there are a few companies involved in this) was the microreactor space, which looked to be the larger potential market. And I say that because microreactors are more readily deployable to places like remote mining sites, remote habitations, disaster relief areas, military bases, island communities… You could put them on maritime vessels to replace bunker fuel, or use them as charging stations for EVs. Essentially hundreds of thousands of potential locations, competing against diesel generators, which, up until microreactors, had no competition. So the big transformative change here is that (obviously SMRs are going to contribute to it, but) microreactors can completely reshape the energy landscape, and that's why it's exciting. That's the big change.
You gave some examples, so I want you to give me a couple more examples, but I'll say that I was thinking the other day about the expansion, partially due to AI, of these big data centers around the country. Is that the kind of thing—and you can give me other examples, as well—of where a much smaller microreactor might be a good fit for it, and also tell me, just how big are these reactors?
AI centers and data centers are a particularly big focus of tech at the moment. Microsoft even has people deliberately going out and speaking to nuclear companies about being able to power these new data centers, because they want these things to be green, but they also want them in locations that aren't readily accessible to the grid. And a lot of the time, the power requirements of these things might be bigger than those of the town next to them. So their own microreactor or SMR system is actually a really good way of solving this: it's zero-carbon-emitting energy, you can put it anywhere, and it is the most consistent form of energy. It can out-compete diesel on that front; it can out-compete wind or solar. It really has no competitors. So they are leaning in that direction, and a lot of the big drive in nuclear at the moment is coming from industry. That's the big change, I think. It's not strictly a government-pushed initiative now.
What's the difference between these and the SMR reactors, which my listeners and readers might be a little bit more familiar with?
SMRs, small modular reactors: if you think of a large conventional nuclear power station, you're thinking dozens and dozens of acres of land occupied by what is essentially a big facility. An SMR brings that down by an order of magnitude. You still probably need an area of about 10 city blocks, but the reactor itself is much, much smaller, occupying a much smaller footprint than that.
Microreactors are much smaller again. If you take our design as an example, the whole system, the core and the turbine that produces the electricity, all fits within an ISO container. Think of the standard shipping container you see on the back of a ship, a truck, or a train; that's what you're really looking at. And the reason for that is that we're trying to make it as deployable and as mobile as possible. Conventional transportation infrastructure (trucks, trains, ships) can get these things anywhere in the world. Helicopter them in, if you really want. And once they're down there, you've got 10, 15, 20 years of power, consistently, without the constant need to import fuel like you would with a diesel generator. That's the real big advantage of these things. Obviously SMRs don't have that ability, but they are more powerful machines, so you're powering cities or big towns, that kind of thing. They are catering to different markets. They're not exactly competitors; they're very complementary.
But even for big grid systems, microreactors could play a big part, because they could be placed intermittently within a grid so that you have backup power all the time that's not reliant on one major plant producing power for the entire grid. It can always draw power from wherever it needs. There's a big advantage to microreactors there.
Other examples of where microreactors could be used: we know the military is very interested, because they have an obligation to be able to self-power for at least two weeks, and obviously microreactors can take you well beyond that, for, like, 50 years, so that easily meets their requirements. They're looking to get rid of diesel and replace it with microreactors, and they're putting money into that space.
I would say a big market is going to be things like island communities that predominantly run on diesel at the moment, which means it's expensive and polluting, and they're constantly bringing in diesel on a daily basis. In countries like the Philippines and Indonesia, where the majority of the population lives in these island communities that all run on diesel, you would essentially be taking hundreds of millions of people off diesel generators and putting them onto nuclear if you could bring that technology to these areas.
And the US actually has an enormous population in island communities that run on diesel, too, which could be replaced with microreactors; you could then have a zero-carbon-emitting solution to energy requirements and less energy insecurity.
The NANO design plan (7:11)
Would they need to be refueled and how many people would it take? How many technical people would you need to operate one of them?
The idea here with our reactors is that we don't want to refuel on-site. What we would likely do is just decommission that reactor, remove it, and bring in a replacement. It's less messy, there's no refueling process, and it's easier to license that way. The interesting part is that we would probably only need a couple of people on site while the reactor is running: someone for physical security, and maybe a mechanic who can do some physical intervention on the mechanical equipment.
The way these will likely work is that you'll have a central location where it monitors the behavior of dozens of reactors that are deployed at any one time. And you have all your nuclear engineers and your operators in that space and they monitor everything.
So you don't need a nuclear engineer at each site, and that way these things are very deployable. And, to be honest, everybody who works on these things is going to be quite bored. There's not going to be a lot to do, because reactors are mostly self-regulating systems, and the intervention needed on a daily basis is very minimal. So even for the hub, it's mostly just an observation exercise: checking on transient behavior as the reactor operates, maybe some tweaks here and there. That's essentially all that would need to be done, and then you can bring down your OpEx costs very considerably.
So just a bit about the technology itself: You're working on two different reactors? Can you explain the differences in reactors and where they are in the development-deployment stage?
We have two expert technical teams working on two different reactor designs, partly so we can de-risk our own operations. We know that even if one hits critical problems, the other can go on, so we're doubling our chances of success. The MO we gave both teams was the same: it has to be modular, it needs to be passively cooled, and it needs to fit within an ISO container so it can be shipped anywhere in the world. Both teams came up with very innovative and novel solutions to that problem.
So for the Zeus reactor, which draws from the scientists and engineers down in California, their solution was to completely remove the coolant and use thermal conduction. If you do that, you can remove all the mechanical systems in the reactor: you reduce the size, you remove the pumps, and then you have something that's very, very simple. The size shrinks right down, and you can get it into that ISO-container form factor. That's very innovative; that's the Zeus reactor.
The Odin team's solution was: if you introduce some initial heat into a salt-based system, with the uranium providing that natural heat, you create a natural circulation, so you can remove the pumps and the circulatory systems, and that way, again, you can shrink the reactor right down.
So, two very different solutions to the same problem, and that's how they differ. Odin does have a coolant, with a natural circulation that moves it around, while Zeus has removed the coolant completely, which is more novel, I would say. Zeus relies on a thermal-conduction mechanism: the uranium just gets hot and conducts heat through a solid core to the periphery, where it is removed by naturally circulating air.
Is there a difference with how much power each kind could potentially generate from a shipping container sized unit?
There was, originally, but I think the constraints of having to confine it to a shipping container got them into about the same ballpark. So they're now both about, well, I'd say Zeus is maybe four megawatts thermal and Odin might be five megawatts thermal, but once the conversion to electricity goes through, they both come out to about one to one-and-a-half megawatts of electric power output.
And what can that power?
A thousand homes for 20 years, mine sites, oil and gas sites for bringing the oil to the surface, remote communities, military bases…
Plenty of power for that kind of thing.
Plenty of power for that kind of thing. And a big upside would be places with communities that are completely removed from the grid: desalination plants, medical facilities. Suddenly that all becomes very possible. You can unlock an enormous amount of wealth from landlocked resources that just aren't economic because of the fuel requirements to mine them. So you can unlock trillions of dollars of value in resources just by bringing microreactors into these remote locations.
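The figures in this exchange hang together arithmetically. A back-of-the-envelope check, using assumed round numbers rather than NANO's actual specifications:

```python
# Assumed round figures from the conversation, not official specs.
thermal_mw = 4.0      # Zeus: ~4 MW thermal (Odin ~5 MW thermal)
electric_mw = 1.2     # both land around 1 to 1.5 MW electric

# Thermal-to-electric conversion efficiency implied by those numbers:
# roughly 25-30%, typical for a small heat engine.
efficiency = electric_mw / thermal_mw

# "A thousand homes": 1.2 MW electric across 1,000 homes is about
# 1.2 kW of continuous average draw per home.
homes = 1_000
kw_per_home = electric_mw * 1_000 / homes

print(f"{efficiency:.0%} conversion, {kw_per_home:.1f} kW average per home")
```

That roughly 1 kW of continuous draw per household is in the ballpark of average US residential consumption, which is why "a thousand homes" from one ISO-container unit is a plausible claim.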
The industry environment (11:42)
Whenever I talk with an expert about this topic, we eventually get to these two questions: One question is sort of, what is this technology’s timeline? So there’s that technology question. And then the second issue: What’s the regulatory environment like for you folks?
You're going to see SMRs come online first. They're going to get licensed first; they've got a bit of a head start. Among microreactors, at the moment, all of the main contenders, including us, are basically at the same point. We're going into physical test work, which looks like about a two-year process to collect all the data, and then licensing. Licensing is actually the longest-lead item, at just under four years. That takes us out to about 2030 before you have a commercial deployment of a microreactor that you can send anywhere you want.
I would imagine SMRs, it's going to be several years before that. But then once microreactors can deploy, you'll see many more of them being deployed than SMRs.
Would they be regulated by the Nuclear Regulatory Commission (NRC)? Is that who the chief regulator is?
Yeah, the NRC deals with all commercial ventures. If it's defense or public, then obviously it would be the DOE or DOD. The NRC manages commercial ventures, so they're going to be in charge of the licensing for all microreactors and SMRs. To your comment about the regulatory environment: I assume there are going to be adjustments made to the way these things are licensed, because they are a very different product from a big conventional civil power plant. You're going from gigawatts, or multiple gigawatts, down to one megawatt; it's a very different device, a very different operating system. I anticipate there will be changes. If there are not, that might complicate the deployment of microreactors.
We do know they are aware of the need to modify the regulatory framework around these new systems. So we're hoping, obviously, in time for when we go through the licensing process (and all the other microreactor companies are probably hoping the same) that that framework is in place, so we can be assessed on our own criteria.
The future of the microreactor (13:45)
Are you viewing this primarily as an American market, a European market, or an Asian market? What do you see as the potential market for this once it's up and running?
The first market will be the American market, and that's going to hit things like mining sites, military bases, data centers, AI centers, things removed from the grid. But then you can expand very quickly in the States to something like charging stations for EVs in the middle of nowhere. If you bring diesel generators in to power those things, it defeats the point. And you can't just put wind and solar farms wherever you want, because they're very dependent on location and weather. But microreactors mean you can suddenly electrify the entire country. You could periodically site charging stations for EVs throughout the whole country, tens of thousands of potential recharging stations, so you could drive your EV across the country. So it'll begin that way.
And we'll see a similar thing in continents like Europe that have more sophisticated grid systems. Then this expands into places like Southeast Asia: Indonesia, the Philippines, Thailand, big island-community countries, where microreactors replace diesel generators and make them greener. And then in places like Africa, with large swathes of the population cut off from the grid completely, you'll see them deployed for desalination, medical facilities, and ultimately mining projects.
Big picture then, what’s the dream? What does the technology and the company look like in 2035 or 2040?
So I would say, by 2035, what we want to be doing is deploying thousands of these things across the world, not just in the States and North America, but internationally. There's essentially an unlimited market for these. We won't sell the reactors, but we will sell the power. So we'll be an operator for all these companies: industry partners, mining companies. We hope to be putting these things on ships and replacing bunker fuel on maritime vessels.
We won't be hitting the main grid systems, exactly; I think SMRs will pick up a lot of the slack there. But for the first time, we'll be in a position to really start deploying our microreactors, and by 2035 the cost of these things will have fallen to the point that they are more economic than diesel generators in the middle of nowhere, which rely on a constant importation of diesel and the costs associated with that. It could be very transformative. It could create an enormous amount of wealth, and it could improve the health of the planet across the board for locations that are cut off. And for NANO, I believe we'll be a massive company anyway, but there'll be a lot of blue-sky potential for expanding into other industries.
You're designing, you're developing, would you be the manufacturer, ultimately, of these reactors?
Yes, we'll be the manufacturer of these things. As I mentioned though, we won't sell them because people won't be interested in a big upfront capital cost with the associated operating liability. So we will just sell power. You need 10 megawatts for 20 years? We’ll supply that. You need 16 megawatts for five years? We'll supply that, too. And that'll be the business model.
Faster, Please! is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.
This is a public episode. If you’d like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe
🌐 My chat (+transcript) with defense policy analyst Todd Harrison on the US Space Force
Friday, February 9, 2024 • Duration 25:04
The US Space Force, the newest branch of the American military, takes national defense to a new frontier. Here on Faster, Please! — The Podcast, I sit down with AEI senior fellow Todd Harrison to discuss the state of the Space Force and its evolving mission.
Harrison has served as senior vice president and head of research at Metrea, a defense consulting firm, been a senior fellow for defense budget strategies at the Center for Strategic and Budgetary Assessments, directed the Defense Budget Analysis and Aerospace Security Project at the Center for Strategic and International Studies, and served as a captain in the US Air Force Reserve.
In This Episode
* Creating the Space Force (0:53)
* A New Kind of Warfare (9:15)
* Defining the Mission (11:40)
* Conflict and Competition in Space (15:34)
* The Danger of Space Debris (20:11)
Below is a lightly edited transcript of our conversation.
Creating the Space Force (0:53)
Pethokoukis: I was recently looking at an image that showed the increase in the number of satellites around the Earth, and it's been a massive increase; I imagine a lot of it has to do with SpaceX putting up satellites. To an extent that I think most people don't understand, between government, military, and a lot of commercial satellites, it's really like the Earth is surrounded by this information shell. And when looking at that, I couldn't help but think, “Yeah, it kind of seems like we would need a Space Force or something to keep an eye on that and protect it.” And I know there was a lot of controversy, if I'm not mistaken, like, “Why do we need this extra branch of the military?” Is that controversy about why we need a Space Force still an active issue, and what are your thoughts?
Harrison: To start with where you started: yes. The number of satellites in space has been growing literally exponentially in the past few years. I'll just throw a few numbers out there: In 2023 alone, about 2,800 new satellites were launched, and in that one year that increased the total number of satellites in orbit by 22 percent. And all the projections are that the number of satellites and the number of launches are going to keep growing at a pace like that for the foreseeable future, for the next several years. A lot is going into space, and we know from all other domains that where commerce goes, conflict will follow. And we are seeing that in space as well.
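Those two figures imply a rough fleet size, which can be sanity-checked with back-of-the-envelope arithmetic (a sketch only; the five-year projection simply compounds the quoted 22 percent rate and is not a forecast from the conversation):

```python
# Back-of-the-envelope check on the growth figures quoted above:
# 2,800 launches in 2023 producing a 22% increase in the total.

new_sats_2023 = 2800
growth_rate = 0.22

# If 2,800 new satellites raised the total by 22%, the prior total was:
prior_total = new_sats_2023 / growth_rate        # ≈ 12,727 satellites
end_2023_total = prior_total + new_sats_2023     # ≈ 15,527 satellites

# Compounding the same rate forward five years (illustrative only):
projected_five_years = end_2023_total * (1 + growth_rate) ** 5

print(round(prior_total))          # ≈ 12727
print(round(end_2023_total))       # ≈ 15527
print(round(projected_five_years)) # ≈ 41966
```

The point of the compounding step is just to show why "growing at a pace like that" matters: at 22 percent a year, the on-orbit population nearly triples in five years.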
Like the Navy protecting the shipping lanes.
Yeah, exactly. So we know that to a certain extent that's inevitable. There will be points of contention, points of conflict, but we've already seen that in space just with the military dimension. Back in 2007, I think a lot of the world woke up to the fact that space is a contested environment when the Chinese tested an anti-satellite weapon, which, by the way, produced thousands of pieces of space debris that are still in orbit today. More than 2,600 pieces of debris are still in orbit from that one Chinese ASAT test. And, of course, that was just one demonstration of counter-space capabilities. Space has been a contested warfighting domain, really, since the beginning of the Space Age; the first anti-satellite test was in 1959. It has become increasingly important for economic reasons, but also for military reasons. Now, when the Space Force debate kicked into high gear, I think it took a lot of people who weren't involved in military space by surprise that we were having this debate.
Yeah, it really seemed like it came out of nowhere, I think probably for 99 percent of people who aren't professionals tracking the issue.
In reality, that debate, it started in the 1990s, and there was a senator from up in New Hampshire who had written a journal article basically talking about, “Hey, we need to separate space into its own military service.” You had the Air Force chief of staff at the time in the mid-1990s, General Ron Fogleman. He said that the Air Force should eventually become an Air and Space Force, and then one day a Space and Air Force. So you had the seeds of it happening in the ’90s. Then you had Congress wanting to look at, “Okay, how do we do this? How do we reorganize military space?” They created a commission that was led by Donald Rumsfeld before he became Secretary of Defense for the second time. That commission issued its report in 2001, and it recommended a bunch of reforms, but it said in the midterm, in five to 10 years we should create a separate military service for space, something like a Space Corps.
Nothing happened, even though Rumsfeld then became Secretary of Defense. We kind of took our focus off of it for a while, and there were a few other studies that went on. Then, in 2016, two members of Congress, a Republican and a Democrat, Mike Rogers and Jim Cooper, who were on the House Armed Services Committee, took this issue up. In their oversight role, they got so fed up with how the Air Force was shortchanging space in many ways, in terms of personnel and training and funding and modernization, that they put a provision into the 2017 National Defense Authorization Act that would've created what they called a Space Corps: a separate military service for space. And that bill actually passed the full House of Representatives.
The Senate did not have a similar provision in their bill, so it died. It didn't make it into law—but then, all of a sudden, a couple of years later, President Trump, pretty much out of the blue floats this idea of creating a Space Force, and he did it at a rally that was at a Marine Corps base out in California, and, for some reason, it caught on with Trump. And then you already had the votes, a bipartisan group in the House of Representatives who had already pushed this, and so it started to gain momentum.
It was very controversial at the time. The secretary of the Air Force at that time was adamantly opposed to it. Eventually, Trump forced it on the civilian establishment at DoD, Congress ultimately enacted it, and the Space Force became a military service on December 20th, 2019. Now, there was some question: Would the Biden administration keep it?
Is this here to stay?
It is written into law, so a president cannot unilaterally take it away, and, at this point, it's got its own roots in the ground and the Space Force is not going anywhere.
A little bit off topic, but was there a similar debate when they separated the Air Force out of the Army?
There was, yeah, and it lasted for a long time. So you had folks like Billy Mitchell, who were in the Army Air Corps way back before World War II (I think in the late ’20s, early ’30s), advocating for a separate military service for air. And I believe Billy Mitchell actually got court-martialed because he disobeyed orders from a superior about advocating for this with Congress.
And so the idea of a separate service for air pretty much died out until World War II hit. And, of course, that was a war that we were brought into by an attack that came from the air, and it really brought air power into full effect as a major component of military power. So then, at the end of World War II, the air power advocates got together, created the Air Force Association to advocate for a separate military service, and got it in the National Security Act of 1947; the Air Force actually stood up in September 1947.
It took longer, I would argue, a lot more advocacy and it took a World War, a crisis, to show us how important Air was to the military in order for us to actually create an Air Force. Now, I think, thankfully, we did that in advance of a crisis in terms of creating the Space Force.
Right now, what does the Space Force do? Is it tracking satellites and space debris? Is it a monitoring and tracking service? It's not a fighting service yet?
Well, yes and no. A lot of what the Space Force does on a day-to-day basis is they provide space-enabling capabilities to the other military services. So if you want to get intelligence, reconnaissance, surveillance from space, you can go to the Space Force. Separately, we have intel space that's run through the National Reconnaissance Office—that has not changed its organization. If you want to get GPS, the Space Force runs our GPS constellation of satellites, and they're responsible for defending it against all forms of attack, which it is attacked daily. If you want satellite communications, the Space Force delivers that. If you want missile warning… So the Space Force delivers lots of enabling capabilities for other parts of the military. At the same time, it is tasked with defending those capabilities, and it's not just against kinetic forms of attack where an adversary is literally trying to shoot a satellite out of the sky.
A New Kind of Warfare (9:15)
I guess that's the first thing that popped in my mind. Too much science fiction maybe, but…
Well, that is real, that's a real threat. The truth is there's not a lot you can do to actively protect against that—at least, we don't have a lot of capabilities right now—but the forms of attack we see on a daily basis are cyber, electromagnetic, and other forms of non-kinetic attack like lasing the sensors on a satellite. You could temporarily, or even permanently, blind the sensors on a satellite with a laser from an aircraft or from a ground station.
I'll give you an example: When Russia invaded Ukraine, at the very beginning of the invasion, one of the first attacks they launched was a space attack. It was cyber, and it was against a commercial space capability. What they did is they exploited a previously unknown vulnerability in ViaSat modems. ViaSat is a commercial satellite communications company, and there was some sort of a vulnerability in their modems. The Russians, through a cyber attack, basically bricked all those modems. They locked them out. The Ukrainian military relied on ViaSat for satellite communications, so it locked up all of their terminals right at the beginning. They could not communicate using satcom. Incidentally, it locked up lots of ViaSat terminals across Europe in that same attack. So we see this happening all the time. Russian forces are constantly jamming GPS signals. That makes weapons and drones much less effective; they can't use GPS for targeting once they go into a GPS-denied environment.
But the Space Force has ways to overcome that. We have protected military GPS signals, we have ways of increasing the strength of those signals to overcome jamming. There's lots of things you can do with counter-space and then counter to the counter-space.
The problem is that we kind of sat on our laurels and admired our advantage in space for a couple of decades and did not make a concerted effort to improve the protection of our space systems and develop our own capability to deny others the advantage of space because others didn't have that same advantage for a long time.
Well, that has changed, and the creation of the Space Force, I think, has really set us in a positive new direction to get serious about space defense and to get serious about denying others the advantage of space if we need to.
Defining the Mission (11:40)
The Chief of Space Operation at the Space Force recently published a short white paper, which I guess begins to lay out kind of a doctrine, like, “What is the mission? How do we accomplish this mission?” Probably the first sort of Big Think piece maybe since Space Force became a branch. What did that white paper say? What do you make of it?
Yeah, so I think one of the criticisms of military space for a while has been that we didn't really have space strategy, space doctrine, we didn't have a theory of space power that was well developed. I would argue we had some of those, but it's fair to say that they have not been that well developed. Well, one of the reasons you need a military service is to actually get the expertise that is dedicated to this domain to think through those things and really develop them and flesh them out, and so that's what this white paper did, and I think it did a pretty good job of it, developing a theory of space power. He calls it a “theory of success for competitive endurance in the space domain.”
And one of the things I thought was really great that they highlight in the paper, that a lot of US government officials in the past have been reluctant to talk about, is the fact that we are under attack on a daily basis—gray zone-type aggression in the space domain—and we've got to start pushing back against that. And we've got to actually be willing and able to exercise our own defensive and counter-space capabilities, even in the competition phase before we actually get to overt conflict, because our adversaries are doing it already. They're doing it to us. We need to be able to brush them back. We're not talking about escalating and starting a conflict or anything like that, but when someone jams our satellite communication systems or GPS, they need to feel some consequences. Maybe something similar happens to their own space capabilities, or maybe we employ capabilities that show them we can overcome what you're doing. So I thought that was a good part of the theory of success is you can't just sit by and let an adversary degrade your space capabilities in the competition phase.
How much of the Space Force's focus currently, and maybe as that paper discussed the department's mission, is on military capabilities (protecting our own military capabilities, countering the military capabilities of other nations) versus what you mentioned earlier, this really expanding commercial element, which is only going to grow in importance?
Today, the vast majority of the Space Force's focus is on the military side of providing that enabling military capability that makes all of our forces more effective, protecting that capability, and then, to a lesser extent, being able to interfere with our adversaries’ ability to use space for their own advantage.
They are just now starting to really grapple with, “Okay, is there a role for the Space Force in protecting space commerce, protecting commercial space capabilities that may be economically important, that may be strategically important to us and our allies, but are not directly part of a military capability?” They're starting to think through that now, and it really is the Space Force taking on a role in the future that is more like the Navy. The Navy does fight and win wars, of course, but the Navy also has a role in patrolling the seas and ensuring the free flow of commerce like we see the US Navy doing right now over in the Red Sea: They're helping protect ships that need to transit through that area when Houthi Rebels are targeting them. Do we need that kind of capability and space? Yeah, I think we do. It is not a huge priority now, but it is going to be a growing priority in the future.
Conflict and Competition in Space (15:34)
I don't know if such things even currently exist, but if you have satellites that can kill other satellites, do those exist and does the Space Force run them?
Satellites that can kill other satellites, absolutely. That is a thing that exists. A lot of stuff is kept classified. What we know that's unclassified is, back in the 1960s and early ’70s, the Soviets conducted many tests—a couple of dozen tests—of what they call a co-orbital anti-satellite system, that is a satellite that can kill another satellite, and there's still debris in space from some of those tests back in the ’60s and ’70s.
We also know, unclassified, that China and Russia have on-orbit systems that appear to be able to rendezvous with other satellites, to get very close. We've seen the Russians deploy a satellite that appeared to fire a projectile at another Russian satellite—it looks like a test of some sort of a co-orbital weapon. So yes, those capabilities are out there. They do exist. We've never seen a capability like that used in conflict, though, not yet, but we know they exist.
Looking forward a decade… One can imagine a lot more satellites, multiple space platforms, maybe some run by the private sector, maybe others not. One could imagine permanent or semi-permanent installations on the moon from different countries. Are plans being made to protect those things, and would the Space Force be the one protecting them? If you have a conflict between the Chinese military installation on the moon and the American, would that be in the Space Force domain? Again, it seems like science fiction, but I don't think it's going to seem like science fiction before too long.
Well, that's right. We're not at that point today, but are we going to be at that point in 10, 20, 30 years? Perhaps. There are folks in the Space Force, like in the chief scientist's office, who have thought about these things; they've published some papers on it. There's no real effort going into that right now other than thinking about it from an academic perspective. Should that be in the mandate of the Space Force? Well, I think it already is; there's just not a need for it yet, and so it's something to keep an eye on.
Now, there are some rules, if you will, international agreements that would suggest, “Okay, some of these things should not happen.” Doesn't mean they won't; but, for example, the main treaty that governs how nations operate in space is the Outer Space Treaty of 1967. The Outer Space Treaty specifically says that you can't claim territory in space or on any celestial body like the moon or Mars, and it specifically says you cannot put a military installation on any celestial body.
So, should China put a military base on the moon, they would be clearly violating the Outer Space Treaty. If China puts a scientific installation that happens to have some military capabilities on it, but they don't call it that, well, you know, what are we going to do? Are we going to call them before the United Nations and complain? Or if China says, “Hey, we've put a military installation in this key part of the lunar South Pole where we all believe that there is ice water, and if anyone tries to land anywhere near us, you're going to interfere with our operations, you might kick up dust on us, so we are establishing a keep-out zone of some very large area around this installation.”
I think that there are some concerns that we could be headed in that direction, and that's one of the reasons NASA is pushing forward with the Artemis program to return humans to the moon, along with a set of international agreements called the Artemis Accords, where we've gotten, I think, more than 20 nations now to agree to a way of operating in the lunar environment and, to a certain extent, in Earth orbit as well. That will help make sure that the norms that develop in space, especially in deep space and operating on the moon, are norms that are conducive to free and open societies and free markets. And so I give credit to former NASA administrator Jim Bridenstine and the Trump administration; he came up with the Artemis Accords. I think it was wonderful. I would love to see us go even further, but NASA is still pursuing that and still signing up more countries to the Artemis Accords. When countries sign up, they can be part of our effort to go back to the moon and the Artemis program, and right now we are on track to get there and put humans back on the moon before China. I just hope we keep it that way.
The Danger of Space Debris (20:11)
Let me finish up with a question based on something you've mentioned several times during our conversation, which is space debris and space junk. I see more and more articles about the concerns. How concerned are you about this? How should I think about that issue?
Yeah, it is a concern, and, I mean, the physics of the space domain are just fundamentally different from what we see in other domains. So, in space, depending on what orbit you're in, if something breaks up into pieces, those pieces keep orbiting Earth indefinitely. If you are below about 600 kilometers, there's a tiny amount of atmospheric drag, and, depending on your mass and your surface area and solar weather and such, things at 600 kilometers and below are eventually going to reenter the Earth's atmosphere and burn up, in weeks, months, or years.
Once you get above about 600 kilometers, things start staying up there much longer. And when you get out to geostationary orbit, which is 36,000 kilometers above the surface of the Earth, those things aren't coming down, ever, not on their own. They're staying up there. So the problem is, imagine if every time there was a shipwreck, or a car wreck, or a plane crash, all of the debris kept moving around the Earth forever. Eventually it adds up. And space is a very large volume, yes, but this stuff is whizzing by; if you're in low-Earth orbit, you're going around 17,000 miles per hour constantly. And so you've got close approach after close approach, day after day, and then you run the risk of debris hitting debris, or debris hitting other satellites, creating more debris, and then increasing the odds that this happens again and again. The movie Gravity gave a dramatic rendering of this.
I was thinking about that scene as you're explaining this.
Yeah. The timeline was very compressed in that movie, but something like that, the Kessler Syndrome, is theoretically possible in the space domain, so we do have to watch out for it. Debris is collecting, particularly in low Earth orbit above 600 kilometers, and ASAT tests are not helpful at all to that. So one of the things the Biden administration did is institute a unilateral moratorium on anti-satellite testing by the United States. Well, it's easy for us to do. We don't need to do any anti-satellite tests anymore because we already know we can do that. We have effective capabilities, and we wouldn't want to use kinetic anti-satellite attacks anyway, ’cause it would hurt our own systems.
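The speeds and altitudes discussed a moment ago follow from the standard circular-orbit formula, v = sqrt(mu/r). A quick sketch (the 500-kilometer altitude is an illustrative choice for low Earth orbit):

```python
import math

MU = 3.986004418e14   # m^3/s^2, Earth's standard gravitational parameter
R_EARTH = 6_371_000   # m, Earth's mean radius

def circular_velocity_mps(altitude_m: float) -> float:
    """Speed of a circular orbit at the given altitude: v = sqrt(mu / r)."""
    return math.sqrt(MU / (R_EARTH + altitude_m))

# Low Earth orbit at ~500 km: roughly the "17,000 miles per hour" quoted.
v_leo = circular_velocity_mps(500_000)
print(v_leo * 2.23694)   # ≈ 17,000 mph

# Geostationary orbit at ~35,786 km altitude: the orbital period works out
# to about one sidereal day (~23.9 h), which is why GEO debris stays put.
alt_geo = 35_786_000
v_geo = circular_velocity_mps(alt_geo)
period_h = 2 * math.pi * (R_EARTH + alt_geo) / v_geo / 3600
print(period_h)          # ≈ 23.9 hours
```

The same formula shows why debris is so destructive: at those relative speeds, even a centimeter-scale fragment carries the kinetic energy of a fast-moving vehicle.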
We have been going around trying to get other countries to sign up to that as well, to a moratorium on ASAT testing. It's a good first step, but really you need Russia and China. They need to sign up to not do that anymore. And India, India conducted a kinetic ASAT test back in, I think, 2019. So those are the countries we really need to get on board with that.
But there's a lot of accidental debris production that happens as well. When countries leave a spent rocket body up in orbit and then something happens. You know, a lot of times they leave their fuel tanks pressurized or they leave batteries on there, after five, 10 years in orbit, sometimes these things explode randomly, and then that creates a debris field. So there's more that we can do to kind of reach international agreements about just being smart stewards of the space domain. There are companies out there that are trying to work on technologies to clean up space debris. It's very hard. That is not something that's on the immediate horizon, but those are all efforts that should be ongoing. It is something to be concerned about.
And actually, to circle back to the chief of space operations and his theory of success in his white paper, that's one of the tensions that he highlights in there, is that we want to use space for military advantage, including being able to deny other countries the ability to use space. But at the same time, we want to be good stewards of the space domain and so there's an inherent tension in between those two objectives, and that's the needle that the Space Force is trying to thread.
I have one final question, and you may have no answer for it: If we were to track a large space object headed toward Earth, whose job would it be to stop it?
So it would be NASA's job to spot it, to find objects like near-Earth asteroids. Whose job is it to stop it? I think we would be figuring that out on the fly. First of all, we would have to figure out: Can we stop it? Is there a way to stop it? It would probably require some sort of an international effort, because we all have a common stake in that, but yeah, it is not in anyone's job jar.