AI + a16z – Details, episodes & analysis
Podcast details
Technical and general information from the podcast's RSS feed.

AI + a16z
a16z
Frequency: 1 episode every 9 days. Total episodes: 53

Recent rankings
Latest chart positions across Apple Podcasts and Spotify.
Apple Podcasts
- 🇨🇦 Canada - technology: #85 (28/07/2025)
- 🇬🇧 Great Britain - technology: #49 (28/07/2025)
- 🇺🇸 USA - technology: #45 (28/07/2025)
- 🇨🇦 Canada - technology: #48 (27/07/2025)
- 🇬🇧 Great Britain - technology: #34 (27/07/2025)
- 🇺🇸 USA - technology: #46 (27/07/2025)
- 🇨🇦 Canada - technology: #39 (26/07/2025)
- 🇬🇧 Great Britain - technology: #34 (26/07/2025)
- 🇺🇸 USA - technology: #46 (26/07/2025)
- 🇨🇦 Canada - technology: #38 (25/07/2025)
Spotify
No recent rankings available
Shared links between episodes and podcasts
Links found in episode descriptions, along with how many other podcasts share them.
- https://www.netlify.com/ (69 shares)
- https://www.getdbt.com/ (59 shares)
- https://a16z.com/ai/ (54 shares)
- https://twitter.com/AnjneyMidha (13 shares)
- https://twitter.com/derrickharris (13 shares)
- https://twitter.com/appenz (6 shares)
RSS feed quality and score
Technical evaluation of the podcast's RSS feed quality and structure.
Overall score: 63%
Publication history
Monthly episode publishing history in recent years.
AI, SQL, and the End of Big Data
Episode 21
Friday, August 30, 2024 • Duration 33:08
In this episode of AI + a16z, a16z General Partner Jennifer Li joins MotherDuck Cofounder and CEO Jordan Tigani to discuss DuckDB's spiking popularity as the era of big data wanes, as well as the applicability of SQL-based systems for AI workloads and the prospect of text-to-SQL for analyzing data.
Here's an excerpt of Jordan discussing an early win when it comes to applying generative AI to data analysis:
"Everybody forgets syntax for various SQL calls. And it's just like in coding. So there's some people that memorize . . . all of the code base, and so they don't need auto-complete. They don't need any copilot. . . . They don't need an ID; they can just type in Notepad. But for the rest of us, I think these tools are super useful. And I think we have seen that these tools have already changed how people are interacting with their data, how they're writing their SQL queries.
"One of the things that we've done . . . is we focused on improving the experience of writing queries. Something we found is actually really useful is when somebody runs a query and there's an error, we basically feed the line of the error into GPT 4 and ask it to fix it. And it turns out to be really good.
". . . It's a great way of letting you stay in the flow of writing your queries and having true interactivity."
Learn more:
Follow everyone on X:
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
The Researcher to Founder Journey, and the Power of Open Models
Episode 20
Friday, August 16, 2024 • Duration 37:16
In this episode of the AI + a16z podcast, Black Forest Labs founders Robin Rombach, Andreas Blattmann, and Patrick Esser sit down with a16z general partner Anjney Midha to discuss their journey from PhD researchers to Stability AI, and now to launching their own company building state-of-the-art image and video models. They also delve into the topic of openness in AI, explaining the benefits of releasing open models and sharing research findings with the field.
Learn more:
Keep the code to AI open, say two entrepreneurs
Follow everyone on X:
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
The Future of Image Models Is Multimodal
Episode 11
Friday, June 7, 2024 • Duration 37:17
In this episode, Ideogram CEO Mohammad Norouzi joins a16z General Partner Jennifer Li, as well as Derrick Harris, to share his story of growing up in Iran, helping build influential text-to-image models at Google, and ultimately cofounding and running Ideogram. He also breaks down the differences between transformer models and diffusion models, as well as the transition from researcher to startup CEO.
Here's an excerpt where Mohammad discusses the reaction to the original transformer architecture paper, "Attention Is All You Need," within Google's AI team:
"I think [lead author Asish Vaswani] knew right after the paper was submitted that this is a very important piece of the technology. And he was telling me in the hallway how it works and how much improvement it gives to translation. Translation was a testbed for the transformer paper at the time, and it helped in two ways. One is the speed of training and the other is the quality of translation.
"To be fair, I don't think anybody had a very crystal clear idea of how big this would become. And I guess the interesting thing is, now, it's the founding architecture for computer vision, too, not only for language. And then we also went far beyond language translation as a task, and we are talking about general-purpose assistants and the idea of building general-purpose intelligent machines. And it's really humbling to see how big of a role the transformer is playing into this."
Learn more:
Investing in Ideogram
Denoising Diffusion Probabilistic Models
Follow everyone on X:
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
ARCHIVE: Open Models (with Arthur Mensch) and Video Models (with Stefano Ermon)
Episode 10
Friday, May 24, 2024 • Duration 01:05:43
For this holiday weekend (in the United States) episode, we've stitched together two archived episodes from the a16z Podcast, both featuring General Partner Anjney Midha. In the first half, from December, he speaks with Mistral cofounder and CEO Arthur Mensch about the importance of open foundation models, as well as Mistral's approach to building them. In the second half (at 34:40), from February, he speaks with Stanford's Stefano Ermon about the state of the art in video models, including how OpenAI's Sora might work under the hood.
Here's a sample of what Arthur had to say about the debate over how to regulate AI models:
"I think the battle is for the neutrality of the technology. Like a technology, by a sense, is something neutral. You can use it for bad purposes. You can use it for good purposes. If you look at what an LLM does, it's not really different from a programming language. . . .
"So we should regulate the function, the mathematics behind it. But, really, you never use a large language model itself. You always use it in an application, in a way, with a user interface. And so, that's the one thing you want to regulate. And what it means is that companies like us, like foundational model companies, will obviously make the model as controllable as possible so that the applications on top of it can be compliant, can be safe. We'll also build the tools that allow you to measure the compliance and the safety of the application, because that's super useful for the application makers. It's actually needed.
"But there's no point in regulating something that is neutral in itself, that is just a mathematical tool. I think that's the one thing that we've been hammering a lot, which is good, but there's still a lot of effort in making this strong distinction, which is super important to understand what's going on."
Follow everyone on X:
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
Open Models and Maturation: Assessing the Generative AI Market
Episode 9
Friday, May 17, 2024 • Duration 40:21
a16z partners Guido Appenzeller and Matt Bornstein join Derrick Harris to discuss the state of the generative AI market, about 18 months after it really kicked into high gear with the release of ChatGPT — everything from the emergence of powerful open source LLMs to the excitement around AI-generated music.
If there's one major lesson to learn, it's that although we've made some very impressive technological strides and companies are generating meaningful revenue, this is still a very fluid space. As Matt puts it during the discussion:
"For nearly all AI applications and most model providers, growth is kind of a sawtooth pattern, meaning when there's a big new amazing thing announced, you see very fast growth. And when it's been a while since the last release, growth kind of can flatten off. And you can imagine retention can be all over the place, too . . .
"I think every time we're in a flat period, people start to think, 'Oh, it's mature now, the, the gold rush is over. What happens next?' But then a new spike almost always comes, or at least has over the last 18 months or so. So a lot of this depends on your time horizon, and I think we're still in this period of, like, if you think growth has slowed, wait a month and see it change."
Follow everyone on X:
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
Security Founders Talk Shop About Generative AI
Episode 8
Wednesday, May 15, 2024 • Duration 22:31
In this bonus episode, recorded live at our San Francisco office, security-startup founders Dean De Beer (Command Zero), Kevin Tian (Doppel), and Travis McPeak (Resourcely) share their thoughts on generative AI, as well as their experiences building with LLMs and dealing with LLM-based threats.
Here's a sample of what Dean had to say about the myriad considerations when choosing, and operating, a large language model:
"The more advanced your use case is, the more requirements you have, the more data you attach to it, the more complex your prompts — ll this is going to change your inference time.
"I liken this to perceived waiting time for an elevator. There's data scientists at places like Otis that actually work on that problem. You know, no one wants to wait 45 seconds for an elevator, but taking the stairs will take them half an hour if they're going to the top floor of . . . something. Same thing here: If I can generate an outcome in 90 seconds, it's still too long from the user's perspective, even if them building out and figuring out the data and building that report [would have] took them four hours . . . two days."
Follow everyone:
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
How to Think About Foundation Models for Cybersecurity
Episode 7
Friday, May 10, 2024 • Duration 37:06
In this episode of the AI + a16z podcast, a16z General Partner Zane Lackey and a16z Partner Joel de la Garza sit down with Derrick Harris to discuss how generative AI — LLMs, in particular — and foundation models could effect profound change in cybersecurity. After years of AI-washing by security vendors, they explain why the hype is legitimate this time as AI provides a real opportunity to help security teams cut through the noise and automate away the types of drudgery that lead to mistakes.
"Often when you're running a security team, you're not only drowning in noise, but you're drowning in just the volume of things going on," Zane explains. "And so I think a lot of security teams are excited about, 'Can we utilize AI and LLMs to really take at least some of that off of our plate?'
"I think it's still very much an open question of how far they go in helping us, but even taking some meaningful percentage off of our plate in terms of overall work is going to really help security teams overall."
Follow everyone:
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
Securing the Software Supply Chain with LLMs
Episode 6
Friday, May 3, 2024 • Duration 38:57
Socket Founder and CEO Feross Aboukhadijeh joins a16z's Joel de la Garza and Derrick Harris to discuss the open-source software supply chain. Feross and Joel share their thoughts and insights on topics ranging from the recent XZ Utils attack to how large language models can help understaffed security teams and overwhelmed developers.
Despite some increasingly sophisticated attacks making headlines and compromising countless systems, they're optimistic that LLMs, in particular, could be a turning point for security blue teams. As Feross sums up one possibility:
"The way we think about gen AI on the defensive side is that it's not as good as a human looking at the code, but it's something. . . . Our challenge is that we want to scan all the open source code that exists out there. That is not something you can pay humans to do. That is not scalable at all. But, with the right techniques, with the right pre-filtering stages, you can actually put a lot of that stuff through LLMs and out the other side will pop a list of of risky packages.
"And then that's a much smaller number that you can have humans take a look at. And so we're using it as a tool . . . to find the needle in the haystack, what is worth looking at. It's not perfect, but it can help cut down on the noise and it can even make this problem tractable, which previously wasn't even tractable."
More about Socket and cybersecurity:
Follow everyone:
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
ARCHIVE: GPT-3 Hype
Episode 5
Wednesday, May 1, 2024 • Duration 33:29
In this episode, though, we’re traveling back in time to the distant — in AI years, at least — past of 2020. Because amid all the news over the past 18 or so months, it’s easy to forget that generative AI — and LLMs, in particular — have been around for a while. OpenAI released its GPT-2 paper in early 2019, which excited the AI research community, and in 2020 made GPT-3 (as well as other capabilities) publicly available for the first time via its API. This episode dates back to that point in time (it was published in July 2020), when GPT-3 piqued the interest of the broader developer community and people really started testing what was possible.
And although it doesn’t predict the Cambrian explosion of multimodal models, regulatory and copyright debates, and entrepreneurial activity that would hit a couple of years later — and who could have? — it does set the table for some of the bigger — and still unanswered — questions about what tools like LLMs actually mean from a business perspective. And, perhaps more importantly, what they ultimately mean for how we define intelligence.
So set your wayback machine to the seemingly long ago summer of 2020 and enjoy a16z’s Sonal Chokshi and Frank Chen discussing the advent of commercially available LLMs.
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
Vector Databases and the Power of RAG
Episode 4
Friday, April 26, 2024 • Duration 36:41
Pinecone Founder and CEO Edo Liberty joins a16z's Satish Talluri and Derrick Harris to discuss the promises, challenges, and opportunities for vector databases and retrieval augmented generation (RAG). He also shares insights and highlights from a decades-long career in machine learning, which includes stints running research teams at both Yahoo and Amazon Web Services.
Because he's been at this a long time, and despite its utility, Edo understands that RAG — like most of today's popular AI concepts — is still very much a work in progress:
"I think RAG today is where transformers were in 2017. It's clunky and weird and hard to get right. And it has a lot of sharp edges, but it already does something amazing. Sometimes, most of the time, the very early adopters and the very advanced users are already picking it up and running with it and lovingly deal with all the sharp edges ...
"Making progress on RAG, making progress on information retrieval, and making progress on making AI more knowledgeable and less hallucinatory and more dependable, is a complete greenfield today. There's an infinite amount of innovation that will have to go into it."
More about Pinecone and RAG:
Retrieval Augmented Generation (RAG)
Emerging Architectures for LLM Applications
Follow everyone on X:
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.