Here's what Daniel sent us this week. He wants to dig into the deeply technical side of voice agent UX — not TTS quality, not transcription accuracy, but the conversational dynamics underneath. Specifically: how agents handle interruptions when a user talks over them, how they detect when a user has actually finished speaking versus just pausing to think, what the latency budgets look like across the full pipeline and what happens when you blow them, how agents maintain conversation flow while they're off fetching data from some external system, and finally, the state of emotional and prosodic awareness in 2026 — whether voice agents can actually read the room. Five topics, all connected by the same core question: why does talking to a voice agent still feel slightly wrong, and what's actually being done about it?
This is one of those topics where the hard part is invisible to almost everyone using these systems. The perception is that voice AI is basically solved — transcription is good, voices sound great — but the failure modes Daniel is pointing at are where the real engineering is happening right now.
And I'd argue most developers building on top of these platforms don't fully understand the failure modes either. They're using Vapi or LiveKit or Pipecat and they've got the defaults set and they don't know what's actually going on underneath.
By the way, today's episode is powered by Claude Sonnet 4.6 — our script-writing AI of choice this week.
Alright, let's start with interruption handling because I think it's the most viscerally familiar problem. Everyone has had that experience of trying to cut off a voice agent and it just... keeps talking at you.
So the naive architecture is Voice Activity Detection — VAD — which listens for the presence of audio energy from the user and triggers a stop when it detects it. The detection itself is fast, fifty to a hundred milliseconds. The problem is VAD is completely dumb about what it's detecting. It cannot distinguish a genuine barge-in — the user actively trying to take over — from a backchannel acknowledgment like "uh-huh" or "right" or "okay." Those are fundamentally different conversational acts, and VAD treats them identically.
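To make that concrete, here's a toy frame-level energy detector — a minimal sketch of the "fast but dumb" behavior, with an illustrative threshold rather than anything from a real VAD implementation:

```python
import math

def frame_energy(samples):
    """Root-mean-square energy of one audio frame."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def naive_vad(frames, threshold=0.02):
    """Flag each frame as speech or non-speech by energy alone.
    A backchannel like 'uh-huh' crosses the threshold exactly the
    same way a genuine barge-in does -- VAD can't tell them apart."""
    return [frame_energy(f) > threshold for f in frames]

silence = [0.001] * 160          # near-silent frame
backchannel = [0.1, -0.1] * 80   # "mm-hmm" carries audio energy too
barge_in = [0.3, -0.3] * 80      # genuine interruption

flags = naive_vad([silence, backchannel, barge_in])
# The backchannel and the barge-in trigger identically: [False, True, True]
```

That identical triggering is the whole problem the rest of this section is about.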
So the agent stops mid-sentence because you said "mm-hmm" to show you were listening.
Every time. And then it's waiting for your next input and you're waiting for it to continue and there's this awkward silence. That's the failure mode. Now, the three platforms we mentioned earlier handle this differently in ways that reveal a lot about their design philosophy. Vapi has the most publicly documented interruption system. They have a stopSpeakingPlan with two modes. The first is VAD-based — fast but dumb, as we said. The second is transcription-based, where the system waits for a configurable number of transcribed words before it decides to stop. So you set numWords to two, and the user has to actually say two recognizable words before the agent yields. That buys you two hundred to five hundred milliseconds of delay but it cuts false positives dramatically.
That's an interesting design choice. You're trading response speed for accuracy in the interruption detection itself.
And Vapi also has what they call acknowledgementPhrases — a list of words that, when detected, tell the system to ignore the interruption entirely. "Okay," "right," "uh-huh," "got it," "mm-hmm" — if you say any of those, the system treats it as a backchannel and keeps talking. That's a meaningful step toward solving the backchannel problem, even if it's a list-based heuristic rather than a learned model.
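Put together, the settings described here would look something like this as a config payload — field names follow Vapi's public docs as discussed in this episode, but treat the exact schema as an assumption and verify against the current API:

```python
import json

# Sketch of a transcription-based stop-speaking plan with a
# backchannel ignore list, as described above. Field names are
# taken from the discussion, not verified against a live API.
stop_speaking_plan = {
    "numWords": 2,  # require two transcribed words before the agent yields
    "acknowledgementPhrases": [
        "okay", "right", "uh-huh", "got it", "mm-hmm",
    ],  # treat these as backchannels: keep talking
}

payload = json.dumps({"stopSpeakingPlan": stop_speaking_plan})
parsed = json.loads(payload)
```

The list-based heuristic is crude, but it's cheap, inspectable, and easy to extend per deployment.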
LiveKit takes a completely different approach, right? Less opinionated.
LiveKit is the opposite end of the spectrum. They describe themselves as the LEGOs of voice AI — they expose the full framework and let developers configure turn-taking to their specific needs. You can set allow_interruptions directly, you can call interrupt() explicitly in code, and you can build custom logic around when handoffs happen. They also use WebRTC Selective Forwarding Units rather than WebSockets, which gives you better packet loss handling and scalability. Their smart endpointing uses a sigmoid-curve wait function — a mathematical formula that returns wait time in milliseconds based on speech-completion probability. You can tune it aggressively for fast response or conservatively for careful response.
The sigmoid curve is interesting. It's not a fixed threshold, it's a continuous function. So at fifty percent confidence that the user has finished, you might wait two hundred milliseconds, but at ninety percent confidence you're already down to fifty milliseconds.
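That confidence-to-wait mapping can be sketched as a sigmoid — the four tuning constants below are made up to illustrate the shape, not LiveKit's actual parameters:

```python
import math

def wait_time_ms(eot_probability, min_wait=30.0, max_wait=500.0,
                 midpoint=0.55, steepness=12.0):
    """Map end-of-turn confidence to a wait time in milliseconds.
    All four constants are illustrative assumptions. Higher
    confidence the user is done -> shorter wait, continuously."""
    s = 1.0 / (1.0 + math.exp(-steepness * (eot_probability - midpoint)))
    return max_wait - (max_wait - min_wait) * s
```

Tuning midpoint and steepness is what "aggressive" versus "conservative" amounts to: shift the curve right and flatten it, and low-confidence pauses get seconds of patience instead of milliseconds.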
That's the aggressive configuration, yeah. The conservative version adds seconds of wait time at lower confidence levels. The point is that LiveKit gives you the knobs. Vapi abstracts them. And then Pipecat is the open-source option — BSD 2-Clause licensed — and their approach is their SmartTurn model, now at version 3.2. It's a Whisper Tiny backbone with a linear classifier layer, about eight million parameters, available in a CPU version at eight megabytes quantized and a GPU version at thirty-two megabytes. It runs in as little as ten milliseconds on some CPUs.
Eight megabytes for turn detection. That's remarkably small.
It's small because the backbone is Whisper Tiny, which is already a tiny model. But the key thing SmartTurn does that pure VAD doesn't is it looks at prosodic cues — pitch, intonation, speaking rate — rather than just audio energy. It waits for two hundred milliseconds of silence from Silero VAD, evaluates whether a turn shift should occur, and if confidence is too low, it defers the decision. If silence persists for three seconds, it forces the transition anyway. That fallback is important — you don't want the agent waiting forever because it couldn't make up its mind.
So VAD is the trigger, SmartTurn is the judge.
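That trigger-then-judge flow reduces to a small decision function. This is a sketch of the logic as described — the confidence threshold is an illustrative assumption, and the real model call is stubbed out as a plain number:

```python
SILENCE_TRIGGER_MS = 200    # VAD silence required before consulting the model
FORCE_TIMEOUT_MS = 3000     # stop deferring after this much silence
CONFIDENCE_THRESHOLD = 0.6  # illustrative; the real value is configurable

def should_end_turn(silence_ms, model_confidence):
    """VAD is the trigger, the turn model is the judge.
    Returns True when the agent should take the turn."""
    if silence_ms < SILENCE_TRIGGER_MS:
        return False                      # user may just be pausing
    if silence_ms >= FORCE_TIMEOUT_MS:
        return True                       # forced fallback transition
    return model_confidence >= CONFIDENCE_THRESHOLD  # defer if unsure
```

The forced-timeout branch is the important safety valve: without it, a low-confidence model stalls the conversation indefinitely.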
Krisp has entered this space too with their VIVA SDK — a six-million-parameter audio-only turn-taking model optimized for CPU inference, operating on hundred-millisecond audio frames. They benchmarked it against SmartTurn versions one and two. Krisp's model achieves a balanced accuracy of 0.82 versus SmartTurn's 0.78, and more importantly, 0.9 seconds mean shift time versus 1.3 seconds for SmartTurn at the same false positive rate. Thirty percent faster at equivalent accuracy while being five to ten times smaller.
Okay so let's talk about why turn-taking is genuinely hard, because I think the backchannel problem is the tip of the iceberg. The deeper issue is what Speechmatics calls the "thinking pause" problem.
This is where it gets subtle. Consider someone saying "I understand your point, but..." and then pausing for a full second while they formulate the next thought. VAD-only systems call that an end of turn. A human listener intuitively keeps waiting. And the consequences in production are worse than just awkward. In finance, customers spelling out account numbers pause between digits — the agent cuts them off mid-sequence. In healthcare, patients recalling an ID number from memory pause — same problem. Every premature interruption also drives up LLM API costs because you're reprocessing a misinterpreted partial utterance.
And the solution isn't just "wait longer," because if you wait too long you've got a different problem.
The field has converged on three approaches to turn detection. Audio-based approaches analyze prosodic features — pitch, energy, intonation. Fast and lightweight, works in real-time, but misses semantic context. Text-based approaches analyze the transcribed content for sentence boundaries, discourse markers, question markers. More accurate but more latency. And then the multimodal fusion approach, which is where Deepgram's Flux model lives.
Flux is interesting because it's architecturally different from bolting turn detection onto ASR after the fact.
The key innovation in Flux — launched late 2025 — is that the same model producing transcripts is also modeling conversational flow and turn detection. You're not running ASR, getting text, and then running a separate turn-detection model on that text. The turn detection is happening in the same forward pass. That eliminates a significant sequential delay. Their benchmark numbers: cuts agent response latency by two hundred to six hundred milliseconds compared to pipeline approaches, reduces false interruptions by about thirty percent, achieves p90 latency of one second. And they define two core conversation-native events — StartOfTurn and EndOfTurn — with a configurable confidence threshold called eot_threshold.
What's the default threshold?
0.7. You can go down to 0.5-0.6 for aggressive response — higher risk of cutting users off — or up to 0.9-1.0 if you want the agent to wait longer and be very sure the user is done. They also have something called eager end-of-turn detection, which fires an EagerEndOfTurn event 150 to 250 milliseconds earlier than the standard event, allowing speculative LLM calls. The cost is fifty to seventy percent more LLM calls. For latency-critical applications, that tradeoff often makes sense.
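The eager pattern amounts to starting a speculative LLM call on the early event and throwing it away if the user keeps talking. Here's a sketch with the model call stubbed out — the TurnResumed event name and the exact event shapes are assumptions for illustration, not a verified Flux API:

```python
import asyncio

async def fake_llm(prompt):
    """Stand-in for a streaming LLM call."""
    await asyncio.sleep(0.05)
    return f"response to: {prompt}"

async def handle_turn(events):
    """events: (event_type, transcript) pairs. On EagerEndOfTurn we
    start a speculative call; if speech resumes we cancel it (the
    50-70% extra-call cost), then re-issue on the confirmed end."""
    task = None
    for event_type, transcript in events:
        if event_type == "EagerEndOfTurn":
            task = asyncio.create_task(fake_llm(transcript))
        elif event_type == "TurnResumed" and task:
            task.cancel()               # speculation wasted
            task = None
        elif event_type == "EndOfTurn":
            if task is None:
                task = asyncio.create_task(fake_llm(transcript))
            return await task

result = asyncio.run(handle_turn([
    ("EagerEndOfTurn", "book a table"),
    ("TurnResumed", None),              # user kept going
    ("EagerEndOfTurn", "book a table for two"),
    ("EndOfTurn", "book a table for two"),
]))
```

When speculation pays off, the response is already in flight by the time EndOfTurn fires — that's where the 150 to 250 milliseconds come from.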
Let's talk latency budgets, because this is where the engineering gets genuinely unforgiving.
The magic number is three hundred milliseconds. Human conversation has a natural inter-turn pause of two hundred to three hundred milliseconds. Research shows pauses above about four hundred milliseconds are perceptible, and beyond 1.5 seconds you've fundamentally shifted the user's mental model from "conversation" to "query-response." Once that shift happens, no voice quality improvement rescues the experience. The budget breakdown from current production engineering: STT finalization takes fifty to a hundred milliseconds, LLM time-to-first-token takes a hundred to two hundred milliseconds, TTS time-to-first-byte takes fifty to eighty milliseconds, WebRTC transport adds twenty to fifty milliseconds. Total: two hundred twenty to four hundred thirty milliseconds. Best case you're inside the window; worst case you've already blown it before a single tool call happens.
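Summing those per-stage ranges makes the squeeze obvious:

```python
# Per-stage latency budget in milliseconds (low, high), figures as quoted.
stages = {
    "stt_finalization": (50, 100),
    "llm_first_token": (100, 200),
    "tts_first_byte": (50, 80),
    "webrtc_transport": (20, 50),
}

low = sum(lo for lo, _ in stages.values())
high = sum(hi for _, hi in stages.values())
# low = 220 ms sits inside the ~300 ms target; high = 430 ms is
# already past it before any tool call or network penalty is added.
```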
And the LLM piece is where you see the most variance by model choice.
Dramatically so. Groq-hosted Llama variants hit fifty to a hundred milliseconds time-to-first-token. GPT-4o-mini is a hundred twenty to two hundred milliseconds. Gemini Flash 1.5 is around three hundred milliseconds, which is already at the edge. GPT-4o is around seven hundred milliseconds. And frontier reasoning models — the extended thinking variants — are in seconds. So the capability-latency tradeoff is real and brutal. The models fast enough for voice tend to be smaller and less capable at complex reasoning.
Which is why streaming architecture is non-negotiable.
A naive sequential pipeline — wait for full STT, then run LLM, then run TTS — produces six hundred to two thousand milliseconds of latency. The production solution is streaming across all three stages simultaneously. Streaming STT emits partial transcripts in twenty-millisecond audio chunks, so the LLM starts processing before the user even finishes speaking. Streaming LLM sends tokens to TTS as they arrive. Streaming TTS begins synthesizing audio from the first sentence fragment while the LLM is still generating later paragraphs. Combined savings: three hundred to six hundred milliseconds over batch processing.
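The three-stage overlap can be sketched with async generators, each stage consuming its upstream as a stream — every component here is a stand-in, not a real STT, LLM, or TTS client:

```python
import asyncio

async def stt_stream(audio_chunks):
    """Emit a partial transcript per ~20 ms audio chunk (simulated)."""
    for chunk in audio_chunks:
        await asyncio.sleep(0)        # yield control, as a network read would
        yield chunk

async def llm_stream(words):
    """Emit tokens as soon as upstream words arrive (simulated)."""
    async for word in words:
        yield word.upper()            # stand-in for generated tokens

async def tts_stream(tokens):
    """Start 'synthesizing' from the first fragment onward."""
    audio = []
    async for token in tokens:
        audio.append(f"<audio:{token}>")
    return audio

audio_out = asyncio.run(tts_stream(llm_stream(stt_stream(["hi", "there"]))))
```

The point of the shape: no stage waits for its upstream to finish, so end-to-end latency is governed by the first fragment through the pipe, not the full utterance.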
And then transport matters too. Phone calls are a different beast.
PSTN — traditional phone calls — adds a hundred fifty to seven hundred milliseconds of network transit. That's a penalty no model optimization recovers. You can have the fastest LLM on the planet and still blow your latency budget because the phone network ate it. Geographic co-location matters too. A user in Australia hitting a Virginia datacenter adds two hundred to three hundred milliseconds of round-trip before a single token is processed.
The FDB-v3 benchmark from April 2026 is worth getting into here because it shows the full picture across systems.
The Full-Duplex-Bench-v3 from National Taiwan University and NVIDIA tested six systems on multi-step tool use with real human audio. Not simple response latency — task completion latency including tool calls. GPT-Realtime completed tasks in 6.89 seconds. Gemini Live 2.5 at 7.26 seconds. Grok at 6.65 seconds. The cascaded pipeline — Whisper into GPT-4o into TTS — took 10.12 seconds, dominated by an 8.78-second first-word delay. And Gemini Live 3.1 was the fastest at 4.25 seconds.
But Gemini Live 3.1 has a problem.
A significant one. Despite being fastest at task completion, it has the worst turn-take rate — seventy-eight percent. It produces no speech at all in twenty-two percent of scenarios. And eighty-six percent of those silent cases still executed tool calls — the model found the right API to call but never generated speech. The paper calls it a "disconnect between reasoning and speech generation." It's concentrated in harder scenarios: zero percent of easy tasks, 23.5 percent of medium tasks, and 46.7 percent of hard tasks received no response.
So the fastest system is also the least reliable. Speed and reliability are directly in tension.
Which is the central tradeoff in architecture choice. And there's a counterintuitive finding that connects to this. The uncanny silence problem isn't about latency at all — it's about prosody. You can get latency under three hundred milliseconds, transcription accuracy is high, the voice sounds good in isolation, and users still report something feels off. Sesame AI's research paper from February 2025, "Crossing the Uncanny Valley of Conversational Voice," nails this. Their CMOS evaluation found that when human evaluators are shown generated versus real speech without any conversational context, they show no clear preference — naturalness is saturated, modern TTS matches human performance on that metric. But when you give evaluators ninety seconds of conversational context and ask which continuation feels more appropriate, they consistently favor the human recordings. The gap isn't in audio quality. It's in contextual prosodic appropriateness.
The model doesn't know how to speak a sentence given the emotional and conversational history.
That's the one-to-many problem. There are countless valid ways to speak any given sentence, but only some fit a given conversational moment. Without the emotional and conversational context, the model doesn't have the information to choose the right one. And there's also a counterintuitive point Speechmatics makes about response speed: agents that respond in two hundred milliseconds feel wrong — not impressive. Human conversations have a natural six hundred millisecond inter-turn pause. That slight delay signals that the listener is processing what was said. This is why Vapi's waitSeconds parameter exists — it's a deliberate artificial delay applied after all processing completes, before the assistant speaks. Default is 0.4 seconds. Healthcare applications push it to 0.6 to 0.8 seconds. Gaming applications go down to zero.
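Mechanically, the waitSeconds idea as described is just a flat, deliberate pause after all processing completes and before audio goes out — a sketch, with the pipeline stubbed:

```python
import asyncio
import time

async def respond(generate_response, wait_seconds=0.4):
    """Run the full pipeline, then apply the deliberate pre-speech
    pause: 0.4 s default per the discussion, 0.6-0.8 for healthcare,
    0.0 for gaming. Latency added on purpose."""
    text = await generate_response()
    await asyncio.sleep(wait_seconds)
    return text

async def fast_pipeline():
    return "Done. Anything else?"

start = time.monotonic()
reply = asyncio.run(respond(fast_pipeline, wait_seconds=0.05))
elapsed = time.monotonic() - start   # at least wait_seconds
```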
The field spent years optimizing for speed and is now deliberately adding latency back in. That's a great headline.
It really is. And the self-correction problem from FDB-v3 is equally sobering. They tested what happens when users correct themselves mid-utterance — "Book me a flight to New York — actually, make that Boston." Results across all systems: GPT-Realtime, the best performer, scored 0.588 pass rate on self-correction scenarios. Gemini Live 2.5 at 0.471. The cascaded pipeline at 0.176 — worse than most random baselines. The cascaded pipeline fails because Whisper finalizes the original transcription before the correction arrives, so the downstream LLM never receives the updated intent. Even the best end-to-end models fail on over forty percent of self-corrections. This is arguably the biggest unsolved problem in voice agent UX right now.
Okay, function calling. This is where I think most production deployments fall apart in ways that users notice but can't diagnose.
The core tension is that most useful voice agents need to call external systems mid-conversation — databases, CRMs, scheduling APIs. Those calls range from fifty milliseconds to five hundred milliseconds with high variance. The wrong pattern is treating that as a synchronous blocking operation. The right pattern is treating it as an event that needs to be masked. And the FDB-v3 paper gives us the most detailed empirical breakdown of how different systems actually handle this.
The filler rate numbers are striking.
Filler rate is the percentage of responses containing a content-free sentence before the substantive response — something like "Sure, let me look that up." GPT-Realtime: 16.9 percent filler rate, 96 percent turn-take rate, 13.5 percent interruption rate. That's the best overall balance — brief fillers used judiciously to cover tool-execution gaps. Gemini Live 2.5: 8.9 percent filler rate. Gemini Live 3.1: 31.7 percent. Grok: 44.3 percent. Cascaded pipeline: 26.9 percent. And then Ultravox at 88 percent.
Ultravox is a cautionary tale.
It's a perfect illustration of how a locally sensible heuristic becomes a global failure. Ultravox almost always emits a filler sentence before initiating the API call. First word latency looks decent — 3.88 seconds. But tool call latency is the worst of any system at 6.01 seconds, because the filler speech fires before the tool call even starts. And because it's speaking when users are still talking, it has a 47.9 percent interruption rate — nearly every other response is overlapping with a user utterance. Task completion ends up at 8.40 seconds, second worst overall.
So the filler speech is actively making the interruption problem worse.
Because the filler speech is happening during the window when users are still finishing their thought. You say "Let me check on that" and the user says "—oh, and also can you make it a window seat" and now you've got overlapping audio, the agent is confused about whether it's been interrupted, and the whole thing degrades. Grok's approach is the opposite extreme. It has the highest pre-emptive tool call rate — 41.6 percent — meaning it invokes APIs before the user finishes speaking. But it does this silently, while letting the user continue talking. So the tool call is running in the background, the user finishes their thought, and the agent already has the data it needs.
But pre-emptive tool calls have their own problem with self-correction.
That's exactly where Gemini Live 3.1's speed advantage becomes a liability. Its tool-call latency is negative 2.27 seconds — it invoked the API 2.27 seconds before the user finished speaking. If the user then corrects themselves, the API was already called with the original, uncorrected destination. The data is locked in. The fundamental tension is that the same early processing that makes agents fast frequently locks in outdated user intent. You can't update what you've already committed to.
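One mitigation is to keep the pre-emptive call cancellable until end-of-turn is confirmed. This is a sketch of that idea — the event names and flow are invented for illustration, and it only works if the API call itself has no side effects until commit; a real booking needs a reserve-then-confirm protocol:

```python
import asyncio

async def call_api(destination):
    """Stand-in for a booking API call."""
    await asyncio.sleep(0.05)
    return f"booked: {destination}"

async def preemptive_booking(utterance_events):
    """Start the tool call early, but don't commit until EndOfTurn.
    A mid-utterance correction cancels and re-issues the call."""
    task = None
    for event, destination in utterance_events:
        if event in ("PartialIntent", "Correction"):
            if task:
                task.cancel()            # drop the stale destination
            task = asyncio.create_task(call_api(destination))
        elif event == "EndOfTurn":
            return await task            # only now is intent locked in

result = asyncio.run(preemptive_booking([
    ("PartialIntent", "New York"),       # "book me a flight to New York..."
    ("Correction", "Boston"),            # "...actually, make that Boston"
    ("EndOfTurn", None),
]))
```

You keep most of the speed win, but the commit point moves back to where intent is actually stable.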
What does good production practice actually look like here?
A few patterns that work. Pre-fetching predictable data — if you know that at call start you'll almost certainly need to load the customer's account information, fire that request at call initiation before the first user response arrives. Concurrent masking — acknowledge the request verbally while the API call runs in parallel, generating filler response audio to cover the wait. Threshold-based bridging — if a call exceeds a latency threshold, return a bridging statement rather than silence. Vapi also supports regex-based custom endpointing rules — so when the assistant has just asked for a phone number, you can extend the wait timeout to three seconds to give users time to recall it. That's a nice example of domain-specific configuration that generic defaults don't cover.
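The threshold-based bridging pattern falls out naturally from asyncio's timeout machinery — speak a bridge line only if the tool call overruns. The phrase and threshold below are invented for illustration:

```python
import asyncio

BRIDGE_AFTER_S = 0.5   # illustrative threshold before bridging

async def slow_lookup():
    await asyncio.sleep(0.8)           # simulated slow CRM call
    return "Your balance is $42."

async def call_with_bridging(tool_call):
    """Run the tool call; if it exceeds the threshold, emit a bridging
    line instead of dead air, then deliver the result when it lands.
    shield() keeps the timeout from cancelling the underlying call."""
    spoken = []
    task = asyncio.create_task(tool_call())
    try:
        result = await asyncio.wait_for(asyncio.shield(task), BRIDGE_AFTER_S)
    except asyncio.TimeoutError:
        spoken.append("One moment while I pull that up.")  # the bridge
        result = await task
    spoken.append(result)
    return spoken

lines = asyncio.run(call_with_bridging(slow_lookup))
```

Fast calls produce no filler at all, which is exactly the judicious behavior the filler-rate numbers reward.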
Let's close on emotional and prosodic awareness, because this is where the gap between "technically functional" and "actually feels good" lives.
Sesame's framework for this is useful. They call the goal "voice presence" — the quality that makes spoken interactions feel real, understood, and valued. Their breakdown: emotional intelligence, meaning reading and responding to emotional contexts; conversational dynamics, meaning natural timing and emphasis; contextual awareness, meaning adjusting tone to match the situation; and consistent personality. Their Conversational Speech Model — CSM — is a multimodal transformer processing interleaved text and audio tokens. Three sizes: one billion, three billion, and eight billion parameter backbones. Trained on roughly a million hours of predominantly English audio. Open-sourced under Apache 2.0.
And their key finding is that even with all of that, contextual prosodic appropriateness still falls short when evaluators have conversational history to compare against.
Their conclusion is honest about it: CSM can model text and speech content in a conversation, but not the structure of the conversation itself. Human conversations involve turn-taking, pauses, pacing, and dynamics that the model has to learn implicitly from data, and even a million hours of training data isn't enough to fully close that gap. Their view is that the future lies in fully duplex models that can learn these dynamics end-to-end.
Vapi's emotion detection layer is interesting in this context because it's a closed-box implementation.
Their proprietary Orchestration Layer includes emotion detection — analyzing emotional tone and passing it to the LLM as context. The LLM can then adjust its response tone based on that metadata. But developers can't inspect or customize it. That's a deliberate architectural choice. The Orchestration Layer — which also handles endpointing, interruption detection, backchanneling, and filler injection — is explicitly described as Vapi's core value proposition. It's the one component in their stack where you cannot bring your own infrastructure. Everything else in the Vapi stack is replaceable. The Orchestration Layer is not.
Which is interesting from a business perspective. The moat isn't the model or the voice or the transcription — it's the conversational orchestration.
And Speechmatics has a useful phrase for the failure mode that all of this is trying to prevent: the "uncanny valley of conversation." Where interactions feel just human enough to set expectations, but not sophisticated enough to meet them. The insight from their ML engineer Aaron Ng is that the most sophisticated thing a voice agent can learn isn't generating sub-two-hundred-millisecond responses — it's knowing when to stay silent. That's a reframing that I think the whole field is slowly converging on.
The backchannel problem is worth flagging as genuinely unsolved.
Krisp's roadmap explicitly lists backchannel prediction as a future release. Deepgram Flux's roadmap includes backchanneling identification. Vapi handles it with a static word list. Nobody has a fully learned, general solution for distinguishing "I'm listening, keep going" from "I want to speak now" across all the ways humans express those things. That's the frontier.
And the TTS prosody piece — the one-to-many problem. There are countless valid ways to speak a sentence and the model doesn't have enough context to choose.
ElevenLabs frames it as a key trend for this year — AI voice agents trained to recognize emotions in speech and adjust delivery accordingly. Detecting urgency in a service request, picking up hesitation in a sales inquiry. But even their approach is about adjusting pitch and pacing parameters rather than fundamentally solving the contextual appropriateness problem. The homograph disambiguation issue alone — knowing whether to pronounce "lead" as "leed" or "led" based on conversational context — is still an active engineering problem.
What's the practical takeaway for someone building on top of these platforms today?
A few things. First, don't leave interruption handling on defaults. The difference between VAD-only and transcription-based interruption detection with a sensible acknowledgementPhrases list is the difference between an agent that gets frustrated users and one that feels conversational. Second, the latency budget is more unforgiving than most developers realize — and the constraint is often the LLM choice, not the STT or TTS. If you're using GPT-4o for voice, you're already over budget before the audio even starts. Third, filler speech is a double-edged sword. Used judiciously — GPT-Realtime's sixteen percent rate — it covers tool-execution gaps naturally. Used aggressively — Ultravox's eighty-eight percent — it creates more interruption problems than it solves.
And the deliberate latency point. If your agent responds in under two hundred milliseconds, add a wait. It will feel better, not worse.
The waitSeconds parameter is one of the most counterintuitive features in voice agent engineering. You've done all this work to get fast and then you artificially slow it down. But the human conversation rhythm is real, and fighting it makes the interaction feel inhuman even when everything else is working.
The self-correction problem is the one I'd flag as most important for teams to understand going in. If your use case involves any scenario where users might revise what they're saying mid-sentence — which is most real conversations — you need to know that even the best systems fail on over forty percent of those cases. That's not a configuration problem, that's a fundamental architectural limitation right now.
And the cascaded pipeline's 0.176 pass rate on self-corrections is the strongest argument for moving toward end-to-end architectures, even with their other failure modes. The sequential bottleneck doesn't just add latency — it destroys correctness on the inputs that matter most.
Alright, that's a lot of ground covered. This topic is one of those where the more you dig in, the more you realize how much invisible engineering is holding every voice interaction together — or failing to.
Thanks as always to our producer Hilbert Flumingtop for keeping things running. Big thanks to Modal for providing the GPU credits that power this show. This has been My Weird Prompts. If you're enjoying the show, a quick review on your podcast app really does help us reach new listeners.
We'll see you on the next one.