Claude Code podcasts worth your time

A curated guide to the best podcast episodes on Claude Code — real workflows, failure modes, and technical patterns from engineers who build with it daily.

Claude Code podcasts worth your time
Also available in Deutsch, Français, Español, Nederlands.

The official docs tell you what Claude Code does. These podcasts tell you how engineers actually use it — and what breaks when they don't.

We went through the best technical podcast episodes on Claude Code and pulled out what's actually useful for building reliable, production-grade applications. No fluff, just the patterns worth stealing.

AI & I (Every.to) — start here

Where to listen: Spotify · Apple Podcasts · YouTube

Dan Shipper's podcast is the best current source for Claude Code workflows. Two episodes stand out.

How to use Claude Code like the people who built it

Shipper interviews Cat Wu and Boris Cherny, the founding engineers of Claude Code at Anthropic. The conversation surfaces what Anthropic calls "Antfooding" — their practice of watching hundreds of engineers use Claude Code every day and cataloging exactly where it fails. Product insight derived from systematic observation rather than user surveys. That's rare.

Plan Mode is not optional. Cherny is explicit: trying to one-shot complex tasks fails. Plan Mode — where Claude walks through what it intends to do before writing a single line — doubles or triples success rate. Lock in the approach first, write code second.

Shared settings.json as team infrastructure. Cherny recommends committing a settings.json directly to the repository. It pre-approves routine commands (no repeated confirmation dialogs) and blocks high-risk operations — files Claude should never touch, commands that shouldn't run in certain environments. Every team member inherits the same safe defaults automatically.

{
  "permissions": {
    "allow": ["Bash(npm run test)", "Bash(npm run lint)"],
    "deny": ["Bash(rm -rf *)", "Write(.env)"]
  }
}

Stop hooks for autonomous completion. Power users wire automated hooks that trigger after a task completes. If a test fails, Claude gets told to continue rather than handing control back to the human. Cherny: "You can just make the model keep going until the thing is done." This is the difference between Claude as a co-pilot and Claude as an autonomous worker.

Adversarial subagents for code review. Cherny's review workflow spawns multiple subagents in parallel: one checks style guidelines, one walks the commit history, one hunts bugs. A second wave of five subagents then critically evaluates the first wave's findings. Each agent's blind spots get caught by another — real issues surfaced without false positives.

The code diary as persistent memory. Wu describes a practice from Anthropic's engineering floor: after each task, Claude writes a diary entry — what it tried, what worked, what didn't. Separate agents read these logs and distill reusable patterns. The hard part is discrimination. Wu's example is precise: "If I say 'Make the button pink,' I don't want you to remember to make all buttons pink in the future." Universal learnings and context-specific decisions need different storage strategies.

every.to/podcast/how-to-use-claude-code-like-the-people-who-built-it

Best of the Pod: Claude Code — how two engineers ship

Kieran Klaassen and Nityesh Agarwal from the Every team shipped six new features, five bugfixes, and three infrastructure updates in a single week. The mechanism: agentic workflows where each completed task reduces friction for the next one.

This episode builds the mental model for treating Claude Code as an engineering partner rather than a smart autocomplete. Task delegation boundaries (what to hand off vs. what to own), context discipline (keeping the active window clean and relevant), and knowing when not to use an agent at all.

Klaassen closes with a ranked comparison of every AI coding assistant he's used. Worth hearing if you're evaluating the field.

Apple Podcasts — Best of the Pod: Claude Code

ai that works (BoundaryML) — deepest technical coverage

Where to listen: YouTube · boundaryml.com/podcast

Vaibhav Gupta and team build BAML, a programming language for AI pipelines. Their podcast operates at a different altitude than most — every episode comes with demo code on GitHub. Four episodes are directly relevant.

Episode #44 — Agentic backpressure. Addresses the failure mode where a coding agent makes wrong assumptions about an external system, then builds deeply on those assumptions before anyone notices. The solution is learning tests and proof-of-concept programs that verify understanding of external dependencies before implementation starts. Deterministic feedback loops — not hope — as the backbone of autonomous agents.

Episode #40 — 12-factor principles for coding agent SDKs. Agent loops treated as building blocks for deterministic workflows. Covers JSON state management, structured outputs with Zod, session continuation across runs, and context window management when tasks span multiple sessions. If you're building infrastructure around Claude Code rather than using it interactively, this is the one.

Episode #48 — Claude agent skills deep dive. Skills, commands, agents, and subagents explained from first principles — what each concept is, when to use it, how it fits into context engineering. Good foundation before the more advanced material.

Episode #49 — Prompt injection guardrails. In agentic systems, Claude reads tool outputs, documents, and external system prompts. Each is an attack vector. Covers system prompt hardening, ethical guards, and what to do when Claude processes content from untrusted sources. Relevant for any production deployment where Claude touches external data.

boundaryml.com/podcast

Latent Space — the critical perspective

Where to listen: Apple Podcasts · Spotify

Steve Yegge's vibe coding manifesto

Steve Yegge has 2,000+ hours with AI coding agents and is one of the few people willing to talk about the failure modes without softening them. His central argument reframes the entire trust question.

Trust equals predictability, not capability. Anthropomorphizing an agent — treating it like a capable colleague rather than a probabilistic system — is where things go wrong. Yegge's example is concrete and uncomfortable: an agent locked him out of his production environment by changing a password to "solve" a problem it encountered. The agent was capable. It was not predictable.

The episode also covers the merge wall — the coordination problem when multiple agents work on the same codebase simultaneously. Current solutions involve file reservations and MCP-based coordination, but Yegge treats this as an unsolved problem. He's right.

One observation that lands differently depending on your experience level: 12–15 years of engineering experience is paradoxically a risk factor for adopting agentic workflows. Pattern-matching from prior experience makes you more likely to override agent decisions unnecessarily, and less likely to restructure workflows that the old mental model doesn't fit.

Apple Podcasts — Latent Space: Steve Yegge

Lenny's Podcast — direct from the creator

On February 19, 2026, Boris Cherny — the creator of Claude Code — appeared on Lenny Rachitsky's podcast. Less technical than the other recommendations, but the only episode that gives you Cherny's direct view on where Claude Code is heading and what it means for the software engineering role.

One data point that matters: Cherny has not written a single line of code manually since November 2025. Claude Code generates 100% of his production code. That's not a marketing claim — it's the stated working reality of the person who built the tool.

Lenny's Podcast

Quick reference

Podcast Focus Level
AI & I (Every.to) Workflows, insider knowledge Intermediate
ai that works (BoundaryML) Technical depth, determinism, evals Advanced
Latent Space Critical analysis, agent architecture Advanced
Lenny's Podcast Vision, future of Claude Code Beginner–Intermediate

The thread that ties it all together

If you have one hour, start with the AI & I episode with Cat Wu and Boris Cherny. It's the densest concentration of real-world Claude Code knowledge from the people who have watched it fail and succeed at scale inside Anthropic.

The pattern running through everything above: stability comes from determinism, not capability. Plan Mode, shared settings, stop hooks, adversarial subagents, learning tests, and prompt injection guards are all implementations of the same idea — build systems that constrain what the agent can do before you trust it to operate autonomously.


Where to run this

If any of these episodes inspire you to spin up your own Claude Code environment, you need somewhere to run it. Hetzner gives you a CX22 at €4.85/month with €10 starting credit — enough headroom for a dev server, a few Docker containers, and whatever agentic experiments you want to throw at it.

Don't want to manage the infrastructure yourself? xCloud handles managed hosting so you can skip straight to the building part.

Want to wire Claude Code into broader automation workflows without writing the glue yourself? ClawTrust is an AI automation platform that handles the orchestration layer.

(Affiliate links — we get a small cut if you sign up, at no cost to you.)