Training Data – Details, episodes & analysis

Podcast details

Technical and general information from the podcast's RSS feed.

Training Data

Sequoia Capital

Technology
Business

Frequency: 1 episode per week. Total episodes: 57

Megaphone
Join us as we train our neural nets on the theme of the century: AI. Sonya Huang, Pat Grady and more Sequoia Capital partners host conversations with leading AI builders and researchers to ask critical questions and develop a deeper understanding of the evolving technologies—and their implications for technology, business and society. The content of this podcast does not constitute investment advice, an offer to provide investment advisory services, or an offer to sell or solicitation of an offer to buy an interest in any investment fund.
Site
RSS
Spotify
Apple

Recent rankings

Latest chart positions across Apple Podcasts and Spotify rankings.

Apple Podcasts
  • 🇨🇦 Canada - technology

    28/07/2025
    #86
  • 🇬🇧 Great Britain - technology

    28/07/2025
    #86
  • 🇺🇸 USA - technology

    28/07/2025
    #96
  • 🇨🇦 Canada - technology

    27/07/2025
    #84
  • 🇬🇧 Great Britain - technology

    27/07/2025
    #54
  • 🇫🇷 France - technology

    27/07/2025
    #77
  • 🇨🇦 Canada - technology

    26/07/2025
    #86
  • 🇬🇧 Great Britain - technology

    26/07/2025
    #42
  • 🇺🇸 USA - technology

    26/07/2025
    #84
  • 🇨🇦 Canada - technology

    25/07/2025
    #71
Spotify
  • 🇺🇸 USA - technology

    28/07/2025
    #42
  • 🇺🇸 USA - technology

    27/07/2025
    #43
  • 🇺🇸 USA - technology

    26/07/2025
    #44
  • 🇺🇸 USA - technology

    25/07/2025
    #46
  • 🇺🇸 USA - technology

    16/07/2025
    #50
  • 🇺🇸 USA - technology

    15/07/2025
    #47
  • 🇺🇸 USA - technology

    14/07/2025
    #45
  • 🇺🇸 USA - technology

    13/07/2025
    #40
  • 🇺🇸 USA - technology

    12/07/2025
    #41
  • 🇺🇸 USA - technology

    11/07/2025
    #41


RSS feed quality and score

Technical evaluation of the podcast's RSS feed quality and structure.

RSS feed quality
To improve

Overall score: 43%


Publication history

Monthly episode publishing history over the past years.


Latest published episodes

Recent episodes with titles, durations, and descriptions.


Crucible Moments Returns for S2: The ServiceNow Story ft. CEO Frank Slootman & Founder Fred Luddy

Tuesday, September 3, 2024 · Duration 42:53

On Training Data, we learn from innovators pushing forward the frontier of AI’s capabilities. Today we’re bringing you something different. It’s the story of a company currently implementing AI at scale in the enterprise, and how it was built from a bootstrapped idea in the pre-AI era to a 150 billion dollar market cap giant.  It’s the Season 2 premiere of Sequoia’s other podcast, Crucible Moments, where we hear from the founders and leaders of some legendary companies about the crossroads and inflection points that shaped their journeys. In this episode, you’ll hear from Fred Luddy and Frank Slootman about building and scaling ServiceNow. Listen to Crucible Moments wherever you get your podcasts or go to: Spotify: https://open.spotify.com/show/40bWCUSan0boCn0GZJNpPn Apple: https://podcasts.apple.com/us/podcast/crucible-moments/id1705282398 Hosted by: Roelof Botha, Sequoia Capital Transcript: https://www.sequoiacap.com/podcast/crucible-moments-servicenow/

Sierra Co-Founder Clay Bavor on Making Customer-Facing AI Agents Delightful

Tuesday, August 27, 2024 · Duration 01:12:31

Customer service is hands down the first killer app of generative AI for businesses. The reasons are simple: the costs of existing solutions are so high, the satisfaction so low and the margin for ROI so wide. But trusting your interactions with customers to hallucination-prone LLMs can be daunting. Enter Sierra. Co-founder Clay Bavor walks us through the sophisticated engineering challenges his team solved along the way to delivering AI agents for all aspects of the customer experience that are delightful, safe and reliable—and being deployed widely by Sierra’s customers. The Company’s AgentOS enables businesses to create branded AI agents to interact with customers, follow nuanced policies and even handle customer retention and upsell. Clay describes how companies can capture their brand voice, values and internal processes to create AI agents that truly represent the business. Hosted by: Ravi Gupta and Pat Grady, Sequoia Capital Mentioned in this episode: Bret Taylor: co-founder of Sierra Towards a Human-like Open-Domain Chatbot: 2020 Google paper that introduced Meena, a predecessor of ChatGPT (followed by LaMDA in 2021) PaLM: Scaling Language Modeling with Pathways: 2022 Google paper about their unreleased 540B parameter transformer model (GPT-3, at the time, had 175B)  Avocado chair: Images generated by OpenAI’s DALL·E model in 2022 Large Language Models Understand and Can be Enhanced by Emotional Stimuli: 2023 Microsoft paper on how models like GPT-4 can be manipulated into providing better results 𝛕-bench: A Benchmark for Tool-Agent-User Interaction in Real-World Domains: 2024 paper authored by Sierra research team, led by Karthik Narasimhan (co-author of the 2022 ReACT paper and the 2023 Reflexion paper) 00:00:00 Introduction 00:01:21 Clay’s background 00:03:20 Google before the ChatGPT moment 00:07:31 What is Sierra? 00:12:03 What’s possible now that wasn’t possible 18 months ago? 
00:17:11 AgentOS 00:23:45 The solution to many problems with AI is more AI 00:28:37 𝛕-bench 00:33:19 Engineering task vs research task 00:37:27 What tasks can you trust an agent with now? 00:43:21 What metrics will move? 00:46:22 The reality of deploying AI to customers today 00:53:33 The experience manager 01:03:54 Outcome-based pricing 01:05:55 Lightning Round

Factory’s Matan Grinberg and Eno Reyes Unleash the Droids on Software Development

Tuesday, June 25, 2024 · Duration 59:10

Archimedes said that with a large enough lever, you can move the world. For decades, software engineering has been that lever. And now, AI is compounding that lever. How will we use AI to apply 100 or 1000x leverage to the greatest lever to move the world? Matan Grinberg and Eno Reyes, co-founders of Factory, have chosen to do things differently than many of their peers in this white-hot space. They sell a fleet of “Droids,” purpose-built dev agents which accomplish different tasks in the software development lifecycle (like code review, testing, pull requests or writing code). Rather than training their own foundation model, their approach is to build something useful for engineering orgs today on top of the rapidly improving models, aligning with the developer and evolving with them.  Matan and Eno are optimistic about the effects of autonomy in software development and on building a company in the application layer. Their advice to founders, “The only way you can win is by executing faster and being more obsessed.” Hosted by: Sonya Huang and Pat Grady, Sequoia Capital  Mentioned:  Juan Maldacena, Institute for Advanced Study, string theorist that Matan cold called as an undergrad SWE-agent: Agent-Computer Interfaces Enable Automated Software Engineering, small-model open-source software engineering agent SWE-bench: Can Language Models Resolve Real-World GitHub Issues?, an evaluation framework for GitHub issues Monte Carlo tree search, a 2006 algorithm for solving decision making in games (and used in AlphaGo) Language agent tree search, a framework for LLM planning, acting and reasoning The Bitter Lesson, Rich Sutton’s essay on scaling in search and learning  Code churn, time to merge, cycle time, metrics Factory thinks are important to eng orgs Transcript: https://www.sequoiacap.com/podcast/training-data-factory/ 00:00 Introduction 01:36 Personal backgrounds 10:54 The compound lever 12:41 What is Factory?  
16:29 Cognitive architectures  21:13 800 engineers at OpenAI are working on my margins  24:00 Jeff Dean doesn't understand your code base 25:40 Individual dev productivity vs system-wide optimization  30:04 Results: Factory in action  32:54 Learnings along the way  35:36 Fully autonomous Jeff Deans 37:56 Beacons of the upcoming age 40:04 How far are we?  43:02 Competition  45:32 Lightning round 49:34 Bonus round: Factory's SWE-bench results

LangChain’s Harrison Chase on Building the Orchestration Layer for AI Agents

Tuesday, June 18, 2024 · Duration 49:50

Last year, AutoGPT and Baby AGI captured our imaginations—agents quickly became the buzzword of the day…and then things went quiet. AutoGPT and Baby AGI may have marked a peak in the hype cycle, but this year has seen a wave of agentic breakouts on the product side, from Klarna’s customer support AI to Cognition’s Devin, etc. Harrison Chase of LangChain is focused on enabling the orchestration layer for agents. In this conversation, he explains what’s changed that’s allowing agents to improve performance and find traction.  Harrison shares what he’s optimistic about, where he sees promise for agents vs. what he thinks will be trained into models themselves, and discusses novel kinds of UX that he imagines might transform how we experience agents in the future.      Hosted by: Sonya Huang and Pat Grady, Sequoia Capital Mentioned:  ReAct: Synergizing Reasoning and Acting in Language Models, the first cognitive architecture for agents SWE-agent: Agent-Computer Interfaces Enable Automated Software Engineering, small-model open-source software engineering agent from researchers at Princeton Devin, autonomous software engineering from Cognition V0: Generative UI agent from Vercel GPT Researcher, a research agent  Language Model Cascades: 2022 paper by Google Brain and now OpenAI researcher David Dohan that was influential for Harrison in developing LangChain Transcript: https://www.sequoiacap.com/podcast/training-data-harrison-chase/ 00:00 Introduction 01:21 What are agents?  05:00 What is LangChain’s role in the agent ecosystem? 11:13 What is a cognitive architecture?  13:20 Is bespoke and hard coded the way the world is going, or a stop gap? 18:48 Focus on what makes your beer taste better 20:37 So what?  22:20 Where are agents getting traction? 25:35 Reflection, chain of thought, other techniques? 30:42 UX can influence the effectiveness of the architecture 35:30 What’s out of scope? 38:04 Fine tuning vs prompting? 
42:17 Existing observability tools for LLMs vs needing a new architecture/approach 45:38 Lightning round

Introducing "Training Data"

Wednesday, June 5, 2024 · Duration 01:26

Join us as we train our neural nets on the theme of the century: AI. Sequoia Capital partners Sonya Huang and Pat Grady host conversations with leading AI builders and researchers to ask critical questions and develop a deeper understanding of the evolving technologies and their implications for technology, business and society. The content of this podcast does not constitute investment advice, an offer to provide investment advisory services, or an offer to sell or solicitation of an offer to buy an interest in any investment fund.

Phaidra’s Jim Gao on Building the Fourth Industrial Revolution with Reinforcement Learning

Tuesday, August 20, 2024 · Duration 50:33

After AlphaGo beat Lee Sedol, a young mechanical engineer at Google thought of another game reinforcement learning could win: energy optimization at data centers. Jim Gao convinced his bosses at the Google data center team to let him work with the DeepMind team to try. The initial pilot resulted in a 40% energy savings and led him and his co-founders to start Phaidra to turn this technology into a product. Jim discusses the challenges of AI readiness in industrial settings and how we have to build on top of the control systems of the 70s and 80s to achieve the promise of the Fourth Industrial Revolution. He believes this new world of self-learning systems and self-improving infrastructure is a key factor in addressing global climate change. Hosted by: Sonya Huang and Pat Grady, Sequoia Capital  Mentioned in this episode: Mustafa Suleyman: Co-founder of DeepMind and Inflection AI and currently CEO of Microsoft AI, known to his friends as “Moose” Joe Kava: Google VP of data centers to whom Jim sent his initial email pitching the idea that would eventually become Phaidra Constrained optimization: the class of problem that reinforcement learning can be applied to in real world systems  Vedavyas Panneershelvam: co-founder and CTO of Phaidra; one of the original engineers on the AlphaGo project Katie Hoffman: co-founder, President and COO of Phaidra  Demis Hassabis: CEO of DeepMind

Fireworks Founder Lin Qiao on How Fast Inference and Small Models Will Benefit Businesses

Tuesday, August 13, 2024 · Duration 39:18

In the first wave of the generative AI revolution, startups and enterprises built on top of the best closed-source models available, mostly from OpenAI. The AI customer journey moves from training to inference, and as these first products find PMF, many are hitting a wall on latency and cost. Fireworks Founder and CEO Lin Qiao led the PyTorch team at Meta that rebuilt the whole stack to meet the complex needs of the world’s largest B2C company. Meta moved PyTorch to its own non-profit foundation in 2022 and Lin started Fireworks with the mission to compress the timeframe of training and inference and democratize access to GenAI beyond the hyperscalers to let a diversity of AI applications thrive. Lin predicts when open and closed source models will converge and reveals her goal to build simple API access to the totality of knowledge. Hosted by: Sonya Huang and Pat Grady, Sequoia Capital  Mentioned in this episode: PyTorch: the leading framework for building deep learning models, originated at Meta and now part of the Linux Foundation umbrella Caffe2 and ONNX: ML frameworks Meta used that PyTorch eventually replaced Conservation of complexity: the idea that every computer application has inherent complexity that cannot be reduced but merely moved between the backend and frontend, originated by Xerox PARC researcher Larry Tesler  Mixture of Experts: a class of transformer models that route requests between different subsets of a model based on use case Fathom: a product the Fireworks team uses for video conference summarization  LMSYS Chatbot Arena: crowdsourced open platform for LLM evals hosted on Hugging Face  00:00 - Introduction 02:01 - What is Fireworks? 02:48 - Leading Pytorch 05:01 - What do researchers like about PyTorch? 07:50 - How Fireworks compares to open source 10:38 - Simplicity scales 12:51 - From training to inference 17:46 - Will open and closed source converge? 22:18 - Can you match OpenAI on the Fireworks stack? 
26:53 - What is your vision for the Fireworks platform? 31:17 - Competition for Nvidia? 32:47 - Are returns to scale starting to slow down? 34:28 - Competition 36:32 - Lightning round

GitHub CEO Thomas Dohmke on Building Copilot, and the Future of Software Development

Tuesday, August 6, 2024 · Duration 01:07:34

GitHub invented collaborative coding and in the process changed how open source projects, startups and eventually enterprises write code. GitHub Copilot is the first blockbuster product built on top of OpenAI’s GPT models. It now accounts for more than 40 percent of GitHub revenue growth for an annual revenue run rate of $2 billion. Copilot itself is already a larger business than all of GitHub was when Microsoft acquired it in 2018. We talk to CEO Thomas Dohmke about how a small team at GitHub built on top of GPT-3 and quickly created a product that developers love—and can’t live without. Thomas describes how the product has grown from simple autocomplete to a fully featured workspace for enterprise teams. He also believes that tools like Copilot will bring the power of coding to a billion developers by 2030. Hosted by: Stephanie Zhan and Sonya Huang, Sequoia Capital  Mentioned in this episode: Nat Friedman: Former Microsoft VP (and now investor) who came up with the idea that Microsoft should buy GitHub Oege de Moor: GitHub developer (and now founder of XBOW) who came up with the idea of using GPT-3 for code and went on to create Copilot Alex Graveley: principal engineer and Chief Architect for Copilot (now CEO of Minion.ai) who came up with the name Copilot (because his boss, Nat Friedman, is an amateur pilot) Productivity Assessment of Neural Code Completion: Original GitHub research paper on the impact of Copilot on developer productivity Escaping a room in Minecraft with an AI-powered NPC: Recent Minecraft AI assistant demo from Microsoft With AI, anyone can be a coder now: TED2024 talk by Thomas Dohmke JFrog: The software supply chain platform that GitHub just partnered with 00:00:00 - Introduction 00:01:18 - Getting started with code 00:03:43 - Microsoft’s acquisition of GitHub 00:11:40 - Evolving Copilot beyond autocomplete 00:14:18 - In hindsight, you can always move faster 00:15:56 - Building on top of OpenAI 00:20:21 - The latest metrics 00:22:11 - The 
surprise of Copilot’s impact 00:25:11 - Teaching kids to code in the age of Copilot 00:26:38 - The momentum mindset 00:29:46 - Agents vs Copilots 00:32:06 - The Roadmap 00:37:31 - Making maintaining software easier 00:38:48 - The creative new world 00:42:38 - The AI 10x software engineer 00:45:12 - Creativity and systems engineering in AI 00:48:55 - What about COBOL? 00:50:23 - Will GitHub build its own models? 00:57:19 - Rapid incubation at GitHub Next 00:59:21 - The future of AI? 01:03:18 - Advice for founders 01:05:08 - Lightning round

Meta’s Joe Spisak on Llama 3.1 405B and the Democratization of Frontier Models

Tuesday, July 30, 2024 · Duration 42:07

As head of Product Management for Generative AI at Meta, Joe Spisak leads the team behind Llama, which just released the new 3.1 405B model. We spoke with Joe just two days after the model’s release to ask what’s new, what it enables, and how Meta sees the role of open source in the AI ecosystem. Joe shares that where Llama 3.1 405B really focused is on pushing scale (it was trained on 15 trillion tokens using 16,000 GPUs) and he’s excited about the zero-shot tool use it will enable, as well as its role in distillation and generating synthetic data to teach smaller models. He tells us why he thinks even frontier models will ultimately commoditize—and why that’s a good thing for the startup ecosystem. Hosted by: Stephanie Zhan and Sonya Huang, Sequoia Capital  Mentioned in this episode:  Llama 3.1 405B paper Open Source AI Is the Way Forward: Mark Zuckerberg essay released with Llama 3.1. Mistral Large 2 The Bitter Lesson by Rich Sutton 00:00 Introduction 01:28 The Llama 3.1 405B launch 05:02 The open source license 07:01 What's in it for Meta? 10:19 Why not open source? 11:16 Will frontier models commoditize? 12:41 What about startups? 16:29 The Mistral team 19:36 Are all frontier strategies comparable? 22:38 Is model development becoming more like software development? 26:34 Agentic reasoning 29:09 What future levers will unlock reasoning? 31:20 Will coding and math lead to unlocks? 33:09 Small models 34:08 7X more data 37:36 Are we going to hit a wall? 39:49 Lightning round

Klarna CEO Sebastian Siemiatkowski on Getting AI to Do the Work of 700 Customer Service Reps

Tuesday, July 23, 2024 · Duration 51:35

In February, Sebastian Siemiatkowski boldly announced that Klarna’s new OpenAI-powered assistant handled two thirds of the Swedish fintech’s customer service chats in its first month. Not only were customer satisfaction metrics better, but by replacing 700 full-time contractors the bottom line impact is projected to be $40M. Since then, every company we talk to wants to know, “How do we get the Klarna customer support thing?” Co-founder and CEO Sebastian Siemiatkowski tells us how the Klarna team shipped this new product in record time—and how embracing AI internally with an experimental mindset is transforming the company. He discusses how AI development is proliferating inside the company, from customer support to marketing to internal knowledge to customer-facing experiences.  Sebastian also reflects on the impacts of AI on employment, society, and the arts while encouraging lawmakers to be open minded about the benefits. Hosted by: Sonya Huang and Pat Grady, Sequoia Capital  Mentioned in this episode: DeepL: Language translation app that Sebastian says makes 10,000 translators in Brussels redundant The Klarna brand: The offbeat optimism that the company is now augmenting with AI Neo4j: The graph database management system that Klarna is using to build Kiki, their internal knowledge base 00:00 Introduction 01:57 Klarna’s business 03:00 Pitching OpenAI 08:51 How we built this 10:46 Will Klarna ever completely replace its CS team with AI? 14:22 The benefits 17:25 If you had a policy magic wand… 21:12 What jobs will be most affected by AI? 23:58 How about marketing? 27:55 How creative are LLMs? 30:11 Klarna’s knowledge graph, Kiki 33:10 Reducing the number of enterprise systems 35:24 Build vs buy? 39:59 What’s next for Klarna with AI? 48:48 Lightning round

Related Shows Based on Content Similarities

Discover shows related to Training Data, based on actual content similarities. Explore podcasts with similar topics, themes, and formats.
Crucible Moments
© My Podcast Data