AI + a16z – Details, Episodes, and Analysis

Podcast Details

Technical and general information from the podcast's RSS feed.

AI + a16z

a16z

Technology
Business

Frequency: 1 episode every 9 days. Total episodes: 53

Simplecast
Artificial intelligence is changing everything from art to enterprise IT, and a16z is watching all of it with a close eye. This podcast features discussions with leading AI engineers, founders, and experts, as well as our general partners, about where the technology and industry are heading.
Links: Site · RSS · Apple

Recent Rankings

Latest positions in the Apple Podcasts and Spotify charts.

Apple Podcasts
  • 🇨🇦 Canada – Technology: #85 (28/07/2025)
  • 🇬🇧 United Kingdom – Technology: #49 (28/07/2025)
  • 🇺🇸 United States – Technology: #45 (28/07/2025)
  • 🇨🇦 Canada – Technology: #48 (27/07/2025)
  • 🇬🇧 United Kingdom – Technology: #34 (27/07/2025)
  • 🇺🇸 United States – Technology: #46 (27/07/2025)
  • 🇨🇦 Canada – Technology: #39 (26/07/2025)
  • 🇬🇧 United Kingdom – Technology: #34 (26/07/2025)
  • 🇺🇸 United States – Technology: #46 (26/07/2025)
  • 🇨🇦 Canada – Technology: #38 (25/07/2025)

Spotify
  • No recent rankings available



RSS Feed Quality and Score

Technical assessment of the RSS feed's quality and structure.

RSS feed quality: Needs improvement

Overall score: 63%


Publication History

Monthly breakdown of episode publications over the years.

[Chart: episodes published by month]

Latest Published Episodes

List of recent episodes, with titles, durations, and descriptions.


AI, SQL, and the End of Big Data

Episode 21

Friday, August 30, 2024 – Duration: 33:08

In this episode of AI + a16z, a16z General Partner Jennifer Li joins MotherDuck Cofounder and CEO Jordan Tigani to discuss DuckDB's spiking popularity as the era of big data wanes, as well as the applicability of SQL-based systems for AI workloads and the prospect of text-to-SQL for analyzing data.

Here's an excerpt of Jordan discussing an early win when it comes to applying generative AI to data analysis:

"Everybody forgets syntax for various SQL calls. And it's just like  in coding. So there's some people that memorize . . . all of the code base, and so they don't need auto-complete. They don't need any copilot. . . . They don't need an ID; they can just type in Notepad. But for the rest of us, I think these tools are super useful. And I think we have seen that these tools have already changed how people are interacting with their data, how they're writing their SQL queries.

"One of the things that we've done . . .  is we focused on improving the experience of writing queries. Something we found is actually really useful is when somebody runs a query and there's an error, we basically feed the line of the error into GPT 4 and ask it to fix it. And it turns out to be really good. 

". . . It's a great way of letting you stay in the flow of writing your queries and having true interactivity."

Learn more:

Small Data SF conference

DuckDB

Follow everyone on X:

Jordan Tigani

Jennifer Li

Derrick Harris

Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.

The Researcher to Founder Journey, and the Power of Open Models

Episode 20

Friday, August 16, 2024 – Duration: 37:16

In this episode of the AI + a16z podcast, Black Forest Labs founders Robin Rombach, Andreas Blattmann, and Patrick Esser sit down with a16z general partner Anjney Midha to discuss their journey from PhD researchers to Stability AI, and now to launching their own company building state-of-the-art image and video models. They also delve into the topic of openness in AI, explaining the benefits of releasing open models and sharing research findings with the field.

Learn more:

Flux

Keep the code to AI open, say two entrepreneurs

Follow everyone on X:

Robin Rombach

Andreas Blattmann

Patrick Esser

Anjney Midha

Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.

The Future of Image Models Is Multimodal

Episode 11

Friday, June 7, 2024 – Duration: 37:17

In this episode, Ideogram CEO Mohammad Norouzi joins a16z General Partner Jennifer Li, as well as Derrick Harris, to share his story of growing up in Iran, helping build influential text-to-image models at Google, and ultimately cofounding and running Ideogram. He also breaks down the differences between transformer models and diffusion models, as well as the transition from researcher to startup CEO.

Here's an excerpt where Mohammad discusses the reaction to the original transformer architecture paper, "Attention Is All You Need," within Google's AI team:

"I think [lead author Asish Vaswani] knew right after the paper was submitted that this is a very important piece of the technology. And he was telling me in the hallway how it works and how much improvement it gives to translation. Translation was a testbed for the transformer paper at the time, and it helped in two ways. One is the speed of training and the other is the quality of translation. 

"To be fair, I don't think anybody had a very crystal clear idea of how big this would become. And I guess the interesting thing is, now, it's the founding architecture for computer vision, too, not only for language. And then we also went far beyond language translation as a task, and we are talking about general-purpose assistants and the idea of building general-purpose intelligent machines. And it's really humbling to see how big of a role the transformer is playing into this."

Learn more:
Investing in Ideogram

Imagen

Denoising Diffusion Probabilistic Models

Follow everyone on X:

Mohammad Norouzi

Jennifer Li

Derrick Harris

Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.

ARCHIVE: Open Models (with Arthur Mensch) and Video Models (with Stefano Ermon)

Episode 10

Friday, May 24, 2024 – Duration: 01:05:43

For this holiday weekend (in the United States) episode, we've stitched together two archived episodes from the a16z Podcast, both featuring General Partner Anjney Midha. In the first half, from December, he speaks with Mistral cofounder and CEO Arthur Mensch about the importance of open foundation models, as well as Mistral's approach to building them. In the second half (at 34:40), from February, he speaks with Stanford's Stefano Ermon about the state of the art in video models, including how OpenAI's Sora might work under the hood.

Here's a sample of what Arthur had to say about the debate over how to regulate AI models:

"I think the battle is for the neutrality of the technology. Like a technology, by a sense, is something neutral. You can use it for bad purposes. You can use it for good purposes. If you look at what an LLM does, it's not really different from a programming language. . . .

"So we should regulate the function, the mathematics behind it. But, really, you never use a large language model itself. You  always use it in an application, in a way, with a user interface. And so,  that's the one thing you want to regulate. And what it means is that companies like us, like foundational model companies, will obviously make the model as controllable as possible so that the applications on top of it can be compliant, can be safe. We'll also build the tools that allow you to measure the compliance and the safety of the application, because that's super useful for the application makers. It's actually needed.  

"But there's no point in regulating something that is neutral in itself, that is just a mathematical tool. I think that's the one thing that we've been hammering a lot, which is good, but there's still a lot of effort in making this strong distinction, which is super important to understand what's going on."

Follow everyone on X:

Anjney Midha

Arthur Mensch

Stefano Ermon

Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.

Open Models and Maturation: Assessing the Generative AI Market

Episode 9

Friday, May 17, 2024 – Duration: 40:21

a16z partners Guido Appenzeller and Matt Bornstein join Derrick Harris to discuss the state of the generative AI market, about 18 months after it really kicked into high gear with the release of ChatGPT — everything from the emergence of powerful open source LLMs to the excitement around AI-generated music.

If there's one major lesson to learn, it's that although we've made some very impressive technological strides and companies are generating meaningful revenue, this is still a very fluid space. As Matt puts it during the discussion:

"For nearly all AI applications and most model providers,  growth is kind of a sawtooth pattern, meaning when there's a big new amazing thing announced, you see very fast growth.  And when it's been a while since the last release, growth kind of can flatten off. And you can imagine retention can be  all over the place, too . . .

"I think every time we're in a flat period, people start to think, 'Oh, it's mature now,  the, the gold rush is over. What happens next?' But then a new spike almost always comes, or at least has over the last 18 months or so. So a lot of this depends on your time horizon, and I think we're still in this period of, like, if you think growth has slowed, wait a month  and see it change."

Follow everyone on X:

Guido Appenzeller

Matt Bornstein

Derrick Harris

Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.

Security Founders Talk Shop About Generative AI

Episode 8

Wednesday, May 15, 2024 – Duration: 22:31

In this bonus episode, recorded live at our San Francisco office, security-startup founders Dean De Beer (Command Zero), Kevin Tian (Doppel), and Travis McPeak (Resourcely) share their thoughts on generative AI, as well as their experiences building with LLMs and dealing with LLM-based threats.

Here's a sample of what Dean had to say about the myriad considerations when choosing, and operating, a large language model:

"The more advanced your use case is, the more requirements you have, the more data you attach to it, the more complex your prompts — ll this is going to change your inference time. 

"I liken this to perceived waiting time for an elevator. There's data scientists at places like Otis that actually work on that problem. You know, no one wants to wait 45 seconds for an elevator, but taking the stairs will take them half an hour if they're going to the top floor of . . . something. Same thing here: If I can generate an outcome in 90 seconds, it's still too long from the user's perspective, even if them building out and figuring out the data and building that report [would have] took them four hours . . . two days."

Follow everyone:

Dean De Beer

Kevin Tian

Travis McPeak

Derrick Harris

Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.

How to Think About Foundation Models for Cybersecurity

Episode 7

Friday, May 10, 2024 – Duration: 37:06

In this episode of the AI + a16z podcast, a16z General Partner Zane Lackey and a16z Partner Joel de la Garza sit down with Derrick Harris to discuss how generative AI — LLMs, in particular — and foundation models could effect profound change in cybersecurity. After years of AI-washing by security vendors, they explain why the hype is legitimate this time as AI provides a real opportunity to help security teams cut through the noise and automate away the types of drudgery that lead to mistakes.

"Often when you're running a security team, you're not only drowning in noise, but you're drowning in just the volume of things going on," Zane explains. "And so I think a lot of security teams are excited about, 'Can we utilize AI and LLMs to really take at least some of that off of our plate?'

"I think it's still very much an open question of how far they go in helping us, but even taking some meaningful percentage off of our plate in terms of overall work is going to really help security teams overall."

Follow everyone:

Zane Lackey

Joel de la Garza

Derrick Harris

Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.

Securing the Software Supply Chain with LLMs

Episode 6

Friday, May 3, 2024 – Duration: 38:57

Socket Founder and CEO Feross Aboukhadijeh joins a16z's Joel de la Garza and Derrick Harris to discuss the open-source software supply chain. Feross and Joel share their thoughts and insights on topics ranging from the recent XZ Utils attack to how large language models can help overcome understaffed security teams and overwhelmed developers.

Despite some increasingly sophisticated attacks making headlines and compromising countless systems, they're optimistic that LLMs, in particular, could be a turning point for security blue teams. As Feross sums up one possibility:

"The way we think about gen AI on the defensive side is that it's not as good as a human looking at the code, but it's something. . . . Our challenge is that we want to scan all the open source code that exists out there. That is not something you can pay humans to do. That is not scalable at all. But, with the right techniques, with the right pre-filtering stages, you can actually put a lot of that stuff through LLMs and out the other side will pop a list of of risky packages.

"And then that's a much smaller number that you can have humans take a look at. And so we're using it as a tool . . . to find the needle in the haystack, what is worth looking at. It's not perfect, but it can help cut down on the noise and it can even make this problem tractable, which previously wasn't even tractable."

More about Socket and cybersecurity:

Socket

Investing in Socket

Hiring a CISO

Follow everyone:

Feross Aboukhadijeh

Joel de la Garza

Derrick Harris

Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.

ARCHIVE: GPT-3 Hype

Episode 5

Wednesday, May 1, 2024 – Duration: 33:29

In this episode, though, we’re traveling back in time to the distant — in AI years, at least — past of 2020. Because amid all the news over the past 18 or so months, it’s easy to forget that generative AI — and LLMs, in particular — have been around for a while. OpenAI released its GPT-2 paper in late 2018, which excited the AI research community, and in 2020 made GPT-3 (as well as other capabilities) publicly available for the first time via its API. This episode dates back to that point in time (it was published in July 2020), when GPT-3 piqued the interest of the broader developer community and people really started testing what was possible.

And although it doesn’t predict the Cambrian explosion of multimodal models, regulatory and copyright debate, and entrepreneurial activity that would hit a couple of years later — and who could have? — it does set the table for some of the bigger — and still unanswered — questions about what tools like LLMs actually mean from a business perspective. And, perhaps more importantly, what they ultimately mean for how we define intelligence.

So set your wayback machine to the seemingly long ago summer of 2020 and enjoy a16z’s Sonal Chokshi and Frank Chen discussing the advent of commercially available LLMs.

Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.

Vector Databases and the Power of RAG

Episode 4

Friday, April 26, 2024 – Duration: 36:41

Pinecone Founder and CEO Edo Liberty joins a16z's Satish Talluri and Derrick Harris to discuss the promises, challenges, and opportunities for vector databases and retrieval augmented generation (RAG). He also shares insights and highlights from a decades-long career in machine learning, which includes stints running research teams at both Yahoo and Amazon Web Services.

Because he's been at this a long time, and despite its utility, Edo understands that RAG — like most of today's popular AI concepts — is still very much a work in progress:

"I think RAG  today is where transformers were in 2017. It's clunky and weird and hard to get right. And it  has a lot of sharp edges, but it already does something amazing. Sometimes, most of the time, the very early adopters and the very advanced users are already picking it up and running with it and lovingly deal with all the sharp edges ...

"Making progress on RAG, making progress on information retrieval, and making progress on making AI more knowledgeable and less hallucinatory and more dependable, is a complete greenfield today. There's an infinite amount of innovation that will have to go into it."

More about Pinecone and RAG:

Investing in Pinecone

Retrieval Augmented Generation (RAG)

Emerging Architectures for LLM Applications

Follow everyone on X:

Edo Liberty

Satish Talluri

Derrick Harris

Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.


Similar Podcasts Based on Content

Discover podcasts related to AI + a16z. Explore podcasts with similar themes, topics, and formats. These similarities are computed from tangible data, not extrapolations!
UI Breakfast: UI/UX Design and Product Strategy
Похоже, я фотограф
The Twenty Minute VC (20VC): Venture Capital | Startup Funding | The Pitch
TRENDSPOTTING : l'émission du monde qui vient.
In Depth
Marketing Against The Grain
The Exit Five CMO Podcast (Hosted by Dave Gerhardt)
A Podcast Will Save This Relationship
The Family History AI Show
Thinking Elixir Podcast