#2448: How Cruise Ships Stay Online at Sea

How packet-level bonding and QoS keep thousands of passengers streaming while navigation systems stay safe.

Episode Details
Episode ID
MWP-2606
Published
Duration
28:48
Pipeline
V5
TTS Engine
chatterbox-regular
Script Writing Agent
deepseek-v4-pro

AI-Generated Content: This podcast is created using AI personas. Please verify any important information independently.

How Cruise Ships Actually Stay Connected at Sea

Modern cruise ships face a unique networking challenge: they must keep thousands of passengers streaming video and making video calls, while ensuring navigation and safety systems never fail — all while moving hundreds of nautical miles per day. The engineering solution is more sophisticated than most people realize.

The Satellite Revolution

For decades, maritime internet relied on geostationary satellites orbiting 35,000 kilometers above Earth. The physics of that distance imposed a minimum 500-millisecond round-trip time — brutal for anything interactive. Speeds topped out at 10-20 Mbps. Then came Starlink. Low-earth-orbit satellites at just 550 kilometers dropped latency below 50 milliseconds and pushed real-world maritime speeds to 150-300 Mbps down. The cruise industry moved fast: Carnival Corporation completed fleet-wide rollout across 90+ ships by May 2024, and Starlink now serves over 600 oceangoing cruise ships.
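The latency gap follows directly from orbital altitude. A quick sketch of the physics floor, ignoring ground-segment routing and slant paths (which is why real LEO latency lands near 50 ms rather than at the geometric minimum):

```python
# Back-of-envelope minimum round-trip time for GEO vs. LEO satellite links.
# Idealized: satellite directly overhead, bent-pipe relay, no processing delay.

C = 299_792.458  # speed of light, km/s

def min_rtt_ms(altitude_km: float) -> float:
    """Physics-bound minimum RTT: terminal -> satellite -> ground, and back."""
    one_way_km = 2 * altitude_km        # up and down once
    round_trip_km = 2 * one_way_km      # request plus response
    return round_trip_km / C * 1000

geo = min_rtt_ms(35_786)  # geostationary altitude
leo = min_rtt_ms(550)     # typical Starlink shell

print(f"GEO minimum RTT: {geo:.0f} ms")  # ~477 ms
print(f"LEO minimum RTT: {leo:.1f} ms")  # ~7 ms
```

The GEO figure shows why 500 ms is a hard floor in practice once real-world overhead is added, while LEO's geometric minimum leaves ample headroom under the observed ~50 ms.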

Packet-Level Bonding

Starlink alone isn't the whole answer. Ships need guaranteed uptime across multiple connections — Starlink, traditional VSAT, cellular near shore, marina Wi-Fi at dock. Simple load balancing assigns each user session to one connection, meaning a dropped link kills the session. Peplink's SpeedFusion technology instead splits individual packets across all available WAN connections simultaneously. On the receiving end, a matching device reassembles packets in the correct order.

The engineering challenge is managing different link latencies. If packet one arrives in 50ms over Starlink and packet two arrives in 600ms over VSAT, the receiving end must buffer and reorder everything without degrading performance. Peplink's Dynamic Weighted Bonding continuously monitors real-time latency and bandwidth on each link, adjusting packet distribution to avoid bufferbloat while maximizing total utilization. Real-world deployments achieve 99.8% uptime, with ships barely touching their VSAT backup after installation.
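The buffer-and-reorder step can be sketched as a toy model: packets from one stream are striped across links with very different one-way latencies, and the receiver holds early arrivals until the sequence is contiguous. Link names and latency figures are illustrative, not a real deployment.

```python
import heapq

LINKS = {"starlink": 50, "vsat": 600, "cellular": 120}  # one-way latency, ms

def send(stream, links):
    """Stripe packets round-robin across links; model arrivals as a min-heap
    of (arrival_time_ms, sequence_number, link) events."""
    events, names = [], list(links)
    for seq, _payload in enumerate(stream):
        link = names[seq % len(names)]
        heapq.heappush(events, (links[link], seq, link))
    return events

def receive(events):
    """Deliver packets strictly in sequence order, buffering early arrivals."""
    buffered, delivered, next_seq = set(), [], 0
    while events:
        _arrival_ms, seq, _link = heapq.heappop(events)
        buffered.add(seq)
        while next_seq in buffered:  # flush any contiguous run
            buffered.remove(next_seq)
            delivered.append(next_seq)
            next_seq += 1
    return delivered

packets = [f"pkt{i}" for i in range(9)]
print(receive(send(packets, LINKS)))  # [0, 1, 2, 3, 4, 5, 6, 7, 8]
```

Note the cost visible even in the toy: packets that raced ahead over Starlink sit in the buffer until the slow VSAT packet fills the gap, which is exactly the bufferbloat risk the adaptive algorithms manage.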

The Surprising QoS Priority

Once you've bonded connections into a single pipe, you need Quality of Service to ensure critical systems get what they need. The highest QoS priority on most cruise ships? The casino. It's the biggest revenue driver, and casino transactions need real-time reliability — any lag means lost money. Safety and navigation systems actually need very little bandwidth (kilobytes for navigation updates), so they get guaranteed minimum allocations without impacting the total pipe. The real bandwidth competition is between streaming video, social media, cloud backups, and operational systems like point-of-sale and crew applications.

The Bandwidth Math

Historically, cruise ships averaged 30-50 Mbps total for 2,000-6,000+ passengers — that's 7-12 kilobits per second per person if everyone connected simultaneously. It works because aggressive caching stores popular content locally, per-user throttling prevents hogging, streaming video is capped at standard definition, and large downloads queue for off-peak hours. Passenger expectations are climbing: Euroconsult projects per-vessel bandwidth consumption will grow roughly tenfold by 2030, from 32-40 Mbps to 224-340 Mbps. Starlink's capacity makes that growth feasible — but the engineering of bonding and prioritization remains essential.
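The division works out as stated; a minimal check, assuming a representative 4,000-passenger ship (the text gives a 2,000 to 6,000+ range):

```python
def per_user_kbps(total_mbps: float, users: int) -> float:
    """Bandwidth per person if every passenger were online simultaneously."""
    return total_mbps * 1000 / users

low = per_user_kbps(30, 4000)    # historical low end of the shared pipe
high = per_user_kbps(50, 4000)   # historical high end

print(f"{low:.1f}-{high:.1f} kbps per person")  # 7.5-12.5 kbps
```

Which is why caching, throttling, and shaping, rather than raw capacity, made the historical numbers workable.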


#2448: How Cruise Ships Stay Online at Sea

Corn
Daniel sent us this one — he wants to know how cruise ships and military vessels actually manage internet connectivity at sea. The core tension is this: you've got navigation and safety systems that absolutely cannot fail, but you've also got thousands of passengers or crew who want to stream Netflix and make video calls. How do you bond multiple connections together, and then how do you slice up that limited bandwidth so recreational use never steps on essential systems?
Herman
This is one of those topics where the engineering is way more interesting than most people realize. And by the way, DeepSeek V4 Pro is writing our script today, so let's see if it can keep up with the packet-level details.
Corn
Bold claim from a donkey who once tried to explain BGP routing with hand puppets.
Herman
That was a very effective visual aid. But look, the fundamental problem at sea is that you're moving. On land, you can run fiber to a building, point a fixed wireless antenna, build cell towers. At sea, your endpoint is constantly changing position, sometimes by hundreds of nautical miles per day. So you're stuck with satellite, or when you're close enough to shore, cellular and marina Wi-Fi.
Corn
Satellite has historically been terrible for anything latency-sensitive. I remember trying to load a webpage on a ferry years ago and it felt like the packets were being delivered by carrier pigeon.
Herman
That's because geostationary satellites sit at roughly thirty-five thousand kilometers up. Round-trip time is physically constrained by the speed of light — you're looking at about five hundred milliseconds minimum, often more. Compare that to the fifteen to twenty milliseconds you get on a good fiber connection at home. For anything interactive, five hundred milliseconds is brutal.
Corn
Before we get into the bonding and prioritization stuff, let me ask the obvious question. Starlink and the other low-earth-orbit constellations — they've changed this whole picture, right?
Herman
VSAT using geostationary satellites was the backbone of maritime connectivity for decades. You'd get maybe ten to twenty megabits per second down, with that five-hundred-plus millisecond latency. Starlink's low-earth-orbit satellites are at roughly five hundred fifty kilometers. Latency drops below fifty milliseconds. Speeds in real-world maritime testing as of twenty twenty-six are hitting one hundred fifty to three hundred megabits down, twenty to sixty up.
Corn
That's not just incremental improvement. That's a different product entirely.
Herman
The cruise industry has moved fast on this. Carnival Corporation completed a full fleet rollout — over ninety ships — by May twenty twenty-four. Royal Caribbean Group finished by July twenty twenty-four. Norwegian Cruise Line Holdings was at ninety-nine percent by December twenty twenty-four. As of last year, Starlink was serving over six hundred oceangoing cruise ships.
Corn
Starlink alone isn't the whole answer, is it? Because you still have the problem of what happens when you're in a region where Starlink isn't available, or when the constellation is congested, or when you need guaranteed uptime for something critical.
Herman
And that's where connection bonding comes in. Daniel specifically mentioned Peplink, and they're one of the main players in this space. Their technology is called SpeedFusion, and it's worth understanding how it actually works under the hood.
Corn
Walk me through it.
Herman
Imagine you're on a cruise ship. You might have four or five different wide-area-network connections available at any given time. Starlink, a traditional VSAT link, maybe some cellular connections when you're near shore, marina Wi-Fi when you're docked. A naive approach would be simple load balancing — you assign each user session to one of those connections. User A goes over Starlink, user B goes over VSAT, and so on.
Corn
Which means if user A's connection drops, their session dies, and if user B is on the slow VSAT link, they're suffering while user A is streaming fine.
Herman
What SpeedFusion does instead is packet-level bonding. It takes a single data flow — say, your video stream — and splits the individual packets across all available WAN links. On the receiving end, a matching Peplink device reassembles those packets in the correct order. So your one video stream is literally drawing bandwidth from Starlink, VSAT, and cellular at the same time.
Corn
That sounds like it introduces its own problems. Different links have different latencies. If packet one goes over Starlink at fifty milliseconds and packet two goes over VSAT at six hundred milliseconds, packet two arrives way later. How do you avoid everything getting jumbled?
Herman
That's the core engineering challenge. The receiving end has to buffer packets and reorder them before passing them up the stack. If you do it poorly, the faster link is constantly waiting for the slower one, and your effective speed drops to near the slowest link.
Corn
You'd actually be worse off than just using Starlink alone.
Herman
Peplink's answer is Dynamic Weighted Bonding, introduced in firmware version eight point two point zero. It continuously monitors real-time latency and available bandwidth on each link and adjusts how packets are distributed. If the VSAT link is dragging, it shifts more traffic to Starlink. If Starlink gets congested during peak hours, it rebalances. The goal is to avoid bufferbloat — when packets pile up in buffers waiting for a slow link and latency spikes — while maximizing total utilization.
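The rebalancing Herman describes can be sketched as a weighting function that is recomputed whenever link conditions change. The formula here (available bandwidth divided by latency) is an illustrative assumption, not Peplink's actual algorithm, and the link metrics are made up.

```python
def link_weights(links):
    """links: {name: (latency_ms, available_mbps)} -> fraction of packets
    to place on each link. Slow links get a penalized share."""
    raw = {name: bw / max(lat, 1.0) for name, (lat, bw) in links.items()}
    total = sum(raw.values())
    return {name: w / total for name, w in raw.items()}

# Illustrative snapshot: VSAT is slow, so it carries only a sliver of traffic.
links = {"starlink": (45, 200), "vsat": (600, 20), "cellular": (90, 60)}

for name, share in link_weights(links).items():
    print(f"{name}: {share:.1%} of packets")
```

Re-running the function as latency or bandwidth readings change is what makes the bonding "dynamic": a Starlink congestion spike immediately shifts share toward cellular, without any session being torn down.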
Corn
The real-world deployments are impressive. There's a Dutch river cruise company that deployed Peplink routers — two MAX HD4 units, quad SIMs each — and they achieved ninety-nine point eight percent uptime. They barely touched their VSAT satellite traffic after the install. Customer satisfaction went up, revenue went up.
Herman
For ocean-going vessels, the scale is much bigger. Peplink has a case study called "Cruising with Passion" — ocean cruise ships using ten HD2 Dome routers spread across the vessel, with an SDX Pro at headquarters handling the backend. SIM Injectors supporting eight or more SIMs. This is handling three thousand plus passengers. On the superyacht side, you've got setups with HD4 5G units, four modems each, SIM Injectors pulling from sixteen different providers, and specialized Maritime 40G antennas.
Corn
The bonding piece handles aggregating all your connections into one big pipe. But that still leaves the second part of Daniel's question. Once you've got that pipe, how do you make sure the navigation system gets what it needs and the guy streaming 4K YouTube in cabin three oh seven doesn't?
Herman
This is where it gets genuinely fascinating, because the priorities are not what most people would expect. Let me ask you something. On a cruise ship, what do you think gets the highest quality-of-service tier?
Corn
I'd assume bridge systems, navigation, safety communications.
Herman
The casino.
Corn
I'm sorry?
Herman
The casino gets the highest QoS priority on most cruise ships. Because it's the biggest revenue driver. Casino transactions have to be real-time, they have to be reliable, and any lag or disconnection means lost money — both from the current transaction and from the player walking away frustrated. Engineers literally design the network prioritization around revenue.
Corn
That is both completely logical and deeply unsettling. How do you explain that to a passenger who assumes safety systems come first?
Herman
The honest answer is that safety and navigation systems don't actually need that much bandwidth. They need reliability and low latency, but the data volumes are tiny. A navigation update might be a few kilobytes. Bridge communications are voice-grade. You can give them guaranteed bandwidth without it making a dent in the total pipe. The real bandwidth hogs are streaming video, social media, cloud backups. So the QoS challenge isn't really about protecting navigation from Netflix — it's about managing the competition between different categories of recreational use, and between recreational use and operational systems that do need bandwidth, like point-of-sale, crew applications, and yes, the casino.
Corn
Walk me through how QoS actually works in a maritime context.
Herman
Quality of Service on maritime networks typically operates at multiple layers. At the network layer, you've got traffic classification — the router inspects packets and tags them based on source, destination, protocol, or application signature. Voice-over-IP traffic gets one tag. Streaming video gets another. Casino transaction data gets the highest priority tag.
Corn
Then those tags determine what happens when there's contention.
Herman
At the queue level, you've got priority queuing — high-priority packets go to the front of the line. You've also got guaranteed minimum bandwidth allocations. Navigation and safety systems might be allocated a guaranteed five percent of total bandwidth, with the ability to burst higher if needed. Crew operational systems get another slice. Then the remaining bandwidth — which is most of it — is divided among passenger traffic, with different tiers for different service classes.
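The two-stage scheme Herman describes (guaranteed minimums first, then priority fill) can be sketched directly. Class names, shares, and demands are illustrative, not any real ship's configuration.

```python
TOTAL_MBPS = 300

classes = [
    # (name, guaranteed fraction of total, demand in Mbps, priority: low = first)
    ("navigation/safety", 0.05, 2,   0),
    ("casino",            0.10, 25,  1),
    ("crew operations",   0.10, 40,  2),
    ("passenger premium", 0.00, 150, 3),
    ("passenger basic",   0.00, 400, 4),
]

def allocate(total, classes):
    alloc, remaining = {}, total
    # Pass 1: guaranteed minimums, capped at actual demand.
    for name, guarantee, demand, _prio in classes:
        grant = min(guarantee * total, demand)
        alloc[name] = grant
        remaining -= grant
    # Pass 2: hand out the remainder in priority order until it runs out.
    for name, _g, demand, _prio in sorted(classes, key=lambda c: c[3]):
        extra = min(demand - alloc[name], remaining)
        alloc[name] += extra
        remaining -= extra
    return alloc

for name, mbps in allocate(TOTAL_MBPS, classes).items():
    print(f"{name:18s} {mbps:6.1f} Mbps")
```

Note how the structure mirrors the podcast's point: navigation's guarantee is tiny because its demand is tiny, the casino is fully satisfied early, and basic passenger traffic absorbs whatever is left.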
Corn
If I pay for the premium internet package, my packets get a higher QoS tag than someone on the basic package.
Herman
And this is where SD-WAN — software-defined wide area networking — comes into play in more sophisticated deployments. Traditional QoS is static — you configure rules and they stay in place. SD-WAN can dynamically adjust based on real-time conditions. If the ship is approaching a port and suddenly needs to exchange large amounts of operational data with shore systems, the SD-WAN controller can temporarily re-prioritize traffic across the entire fleet of onboard routers.
Corn
I've read that SD-WAN starts to struggle when you've got more than about fifteen connections to manage. And a modern cruise ship might have dozens.
Herman
That's correct, and it's one of the reasons connection aggregators like Peplink handle the backend management rather than trying to do everything through SD-WAN. The Art of Network Engineering podcast did a great episode on this — they pointed out that cruise ship networking is essentially a floating enterprise network with the added complication of constant movement and satellite handoffs.
Corn
Let's talk about the bandwidth math, because I think this is where people's intuition breaks down. What does a cruise ship actually have to work with?
Herman
The numbers are brutal. Historically, the cruise industry averaged about thirty to fifty megabits per second per ship overall. That's the total pipe. With two thousand to six thousand plus passengers, if you do the division, that's a tiny fraction of a megabit per person on average.
Corn
Seven to twelve kilobits per second per person if everyone was online simultaneously. That's barely enough for a text message.
Herman
Yet it mostly works. Which is a testament to several things. First, not everyone is online at the same time. Second, aggressive caching — cruise ships cache popular content locally so repeated requests don't have to go over the satellite link. Third, per-user throttling prevents any single user from hogging the pipe. Fourth, traffic shaping — streaming video is throttled to standard definition, cloud backups are rate-limited, large downloads are queued for off-peak hours.
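Per-user throttling of this kind is classically built on a token bucket: each user gets a refill rate (their cap) plus a small burst allowance. A minimal sketch with illustrative rates:

```python
import time

class TokenBucket:
    """Minimal per-user rate limiter: refill at `rate_kbps`, allow short
    bursts up to `burst_kb`."""

    def __init__(self, rate_kbps: float, burst_kb: float):
        self.rate = rate_kbps        # refill rate, kb per second
        self.capacity = burst_kb     # maximum stored burst
        self.tokens = burst_kb       # start with a full bucket
        self.last = time.monotonic()

    def allow(self, size_kb: float) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size_kb:
            self.tokens -= size_kb
            return True
        return False                 # over the cap: queue, drop, or delay

bucket = TokenBucket(rate_kbps=512, burst_kb=256)
print(bucket.allow(200))  # True: fits within the burst allowance
print(bucket.allow(200))  # False: bucket nearly drained, must wait for refill
```

The same primitive, with different rates, implements both the per-user caps and the off-peak queuing of large downloads: a download that fails `allow` simply waits until tokens accumulate.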
Corn
Passenger expectations are climbing fast. Everyone's used to unlimited data at home. They board a ship and suddenly they can barely load Instagram.
Herman
The industry knows this. Euroconsult projects that per-vessel bandwidth consumption will grow from roughly thirty-two to forty megabits per second in twenty twenty to twenty twenty-one, to two hundred twenty-four to three hundred forty megabits per second by twenty thirty. That's roughly a tenfold increase.
Corn
Which is where Starlink really changes the equation. Three hundred megabits down on a single connection changes what's possible.
Herman
It's not a silver bullet. Starlink's maritime service uses a shared-bandwidth model — the satellites have limited capacity over any given geographic area, and during peak hours, speeds can drop. It's also blocked in Russia, China, and Iran. And there are parts of the ocean where coverage is still sparse. So the hybrid approach — bonding Starlink with traditional VSAT and cellular — really does seem to be the long-term answer.
Corn
Let me shift gears to the military side, because Daniel mentioned military vessels and the research here is wild. The Navy has been going through its own connectivity revolution.
Herman
The Abraham Lincoln case study is remarkable. The Navy has a program called SEA2 — Sailor Edge Afloat and Ashore — and they used the Lincoln as a testbed during its twenty twenty-four deployment in the Red Sea. They brought on commercial providers, including Starlink and OneWeb.
Corn
What kind of performance did they actually get?
Herman
According to Captain Kevin White, who spoke at the WEST conference — and this was reported by The War Zone — the Lincoln achieved one gigabit per second download and two hundred megabits per second upload. Over a five-and-a-half-month cruise, they transferred seven hundred eighty terabytes of data. That's averaging four to eight terabytes per day.
Corn
That's an absurd number for a ship at sea.
Herman
It's roughly fifty times greater than what the fleet was previously capable of. They were managing seven thousand IP addresses with just two full-time system administrators. And the tactical impact was real. F-35C Joint Strike Fighters received mission data file updates in what Captain White described as record time — incorporating over a hundred intelligence changes and multiple design improvements. This enhanced connectivity directly enabled, and I'm quoting here, "the first combat strikes in Yemen from F-35s."
Corn
This isn't just about morale and video calls. This is about combat effectiveness.
Herman
But the morale piece is also significant. The average age of embarked sailors on the Lincoln was twenty point eight. As Captain Pete Riebe, the ship's commanding officer, put it — the next generation of sailors grew up with a cell phone in their hand, and they're uncomfortable without it. If the Navy wants to compete for the best people, they need to offer bandwidth at sea.
Corn
That's a recruiting argument, not just a quality-of-life argument.
Herman
The human impact stories from that deployment are striking. Thirty-eight sailors witnessed the birth of their child via video call while deployed. Several crew members pursued doctorate and master's degrees. One officer commissioned his wife remotely from the ship. These are things that simply weren't possible before.
Corn
There's a flip side to all this. The operational security question. If you're streaming gigabit-level data to and from a warship, aren't you broadcasting your position to anyone who's paying attention?
Herman
This is the central tension. Captain Riebe was very explicit about this at the WEST conference. He said — and again I'm quoting — "They did not know our location from what we were using. Now, when we went deep into the weapons engagement zone, we turned it all off. We turned the email traffic off, we turned the WiFi off."
Corn
The solution is an off switch. Which works, but relies on human discipline.
Herman
That's where it gets problematic. There was a very public incident involving the USS Manchester, reported by Navy Times in September twenty twenty-four. Enlisted leaders on the ship conspired to install an unauthorized Starlink Wi-Fi setup. They ran it without authorization, without security oversight, in the West Pacific — which is exactly the kind of environment where operational security matters most.
Corn
Chiefs going rogue to get their internet fix. That's both hilarious and terrifying.
Herman
It reveals a genuine cultural tension. The Navy's official posture is extreme caution — turn everything off when you're in contested waters. But sailors, especially junior enlisted, have grown up with constant connectivity. The gap between official policy and human behavior is where you get incidents like the Manchester.
Corn
The Navy seems to recognize this, because SEA2 is becoming a formal program of record. It's being rebranded as Flank Speed Wireless, with funding and enterprise-level security hardening.
Herman
They're trying to make this a proper, funded, secure capability rather than an ad-hoc experiment. But the fundamental challenge remains — how do you harden a commercial internet connection to military standards? How do you ensure that the off switch gets flipped every single time it needs to be? How do you prevent a motivated chief from setting up their own hotspot?
Corn
Let me ask a more technical question about the civilian side. When you've got thousands of passengers on a cruise ship, all connecting to Wi-Fi, you've got a massive RF challenge before you even get to the wide-area-network bonding. How do you handle the sheer density of devices?
Herman
A cruise ship is essentially a floating stadium in terms of Wi-Fi density, except it's made of steel, which is terrible for radio frequency propagation. You need access points everywhere — in cabins, in corridors, in public spaces — and you need careful channel planning to avoid interference. The Peplink deployments use multiple routers spread across the vessel, each handling a section.
Corn
All of those access points are feeding into the same bonded wide-area-network connection.
Herman
So you've got this multi-layer challenge. At the physical layer, you're dealing with steel walls and thousands of devices. At the local network layer, you're managing access point handoffs as people move around the ship. At the wide-area-network layer, you're bonding multiple satellite and cellular connections with wildly different latency characteristics. And at the application layer, you're applying QoS policies that balance safety, operations, revenue, and passenger satisfaction.
Corn
It's impressive that it works at all. Let me ask about one thing that's been in the back of my mind. You mentioned that VSAT remains important even with Starlink in the picture. What does traditional geostationary satellite give you that LEO doesn't?
Herman
A few things. First, guaranteed bandwidth. VSAT providers like ST Engineering iDirect offer service level agreements with a hundred percent committed information rate and ninety-nine to ninety-nine point five percent availability. When you're running a cruise ship casino or a military operation, you want that guarantee.
Corn
Whereas Starlink is best-effort. You get whatever the constellation can give you at that moment.
Herman
Second, geographic coverage. Starlink's constellation is still being built out, and there are gaps, particularly in polar regions and some parts of the southern ocean. Geostationary satellites cover most of the populated oceans. Third, regulatory certainty. Starlink operates under a patchwork of national approvals, and some countries block it outright. VSAT using established providers has clearer regulatory standing in most jurisdictions.
Corn
The hybrid model — bond everything together and let the router figure out the best path — really is the engineering answer.
Herman
The router is getting smarter about this. Modern bonding systems don't just look at available bandwidth. They look at latency, jitter, packet loss, cost per megabyte on each link, and they make routing decisions in real time. A cellular connection near shore might be cheaper than satellite, so you route bulk data over cellular when it's available. Starlink might have lower latency, so you route voice calls over it. VSAT might have better uptime guarantees, so you keep critical operational traffic on it.
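The per-link decision Herman describes amounts to scoring each path on several metrics at once. A minimal sketch, where the weights and all link figures are assumptions for illustration, not any vendor's actual policy:

```python
def score(link, latency_weight=1.0, loss_weight=50.0, cost_weight=10.0):
    """Lower is better: penalize delay (latency + jitter, ms),
    packet loss (%), and monetary cost ($/GB)."""
    return (latency_weight * (link["latency_ms"] + link["jitter_ms"])
            + loss_weight * link["loss_pct"]
            + cost_weight * link["usd_per_gb"])

# Illustrative snapshot of three links' current metrics.
links = {
    "starlink": {"latency_ms": 45,  "jitter_ms": 15, "loss_pct": 0.5, "usd_per_gb": 2.0},
    "vsat":     {"latency_ms": 600, "jitter_ms": 30, "loss_pct": 0.1, "usd_per_gb": 8.0},
    "cellular": {"latency_ms": 80,  "jitter_ms": 25, "loss_pct": 1.0, "usd_per_gb": 0.5},
}

best = min(links, key=lambda name: score(links[name]))
print(f"route latency-sensitive traffic over: {best}")
```

Different traffic classes would simply use different weights: bulk transfers crank up `cost_weight` and land on cellular near shore, while voice keeps latency dominant and lands on the LEO link.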
Corn
This is starting to sound like what financial trading firms do with their networks — dynamic path selection based on real-time metrics.
Herman
The principles are similar. And just like in finance, the difference between a good implementation and a great one is in how you handle edge cases. What happens when a link degrades gradually versus failing hard? What happens when latency spikes on one link but bandwidth remains available? How quickly can you re-route without dropping sessions?
Corn
I want to come back to something on the military side, because there's a detail in the Lincoln deployment that deserves more attention. They managed seven thousand IP addresses with two system administrators. That's an almost absurd ratio.
Herman
It speaks to the level of automation and software-defined management they've implemented. Traditional Navy networks required large teams of IT staff. The fact that they could run what was essentially a floating ISP with two people suggests they've invested heavily in orchestration and automated provisioning.
Corn
Or they've accepted more risk than they're advertising.
Herman
That's a fair point. And it connects back to the Manchester incident. If you can run the official network with two admins, how hard is it for a motivated chief to spin up an unauthorized one? The barrier to entry is low.
Corn
Let's talk about the future a bit. Where does this go? What happens when LEO constellations are fully built out and we've got thousands more satellites in orbit?
Herman
The capacity picture changes dramatically. But so do the expectations. We're already seeing cruise lines move from "internet is a luxury" to "internet is included" pricing models, which means usage goes up. Passengers expect to be able to work remotely from a ship, which requires not just bandwidth but reliability. The line between ship connectivity and office connectivity blurs.
Corn
On the military side, the genie is out of the bottle. Sailors have experienced gigabit internet at sea. They're not going back to email-only deployments.
Herman
Captain Riebe essentially said as much. The Navy's challenge now is to secure what they've demonstrated is possible. Flank Speed Wireless is supposed to do that, but the details of how they're hardening these commercial connections aren't public for obvious reasons. I'd expect to see more emphasis on encryption at every layer, on emission control that's automated rather than relying on someone remembering to flip a switch, and on intrusion detection systems scaled for the bandwidth they're now handling.
Corn
The emission control point is interesting. If you're radiating from Starlink terminals, you're putting out a detectable RF signature even if the data is encrypted. Adversaries can geolocate you from that.
Herman
Which is exactly why the Lincoln turned everything off in the weapons engagement zone. But that's a binary solution — on or off. The ideal would be something more granular, where you can maintain connectivity for certain types of traffic while minimizing your electromagnetic footprint. That's a hard problem.
Corn
If I'm a listener and I want to take something practical from this — maybe I'm not running a cruise ship or an aircraft carrier — what's the takeaway?
Herman
The principles scale down surprisingly well. Connection bonding isn't just for ships. If you work from home and you've got cable internet plus a 5G hotspot, you can use a Peplink router or similar to bond those connections. If your cable goes down, your video call doesn't drop because it's already running across both links. QoS principles apply too — you can prioritize your work traffic over your kids' game downloads.
Corn
The broader lesson is that the engineering challenge isn't just about getting more bandwidth. It's about making the bandwidth you have work intelligently.
Herman
The cruise ship that makes thirty megabits feel usable for four thousand people is arguably more impressive than the one that just buys a bigger satellite pipe. Understanding traffic patterns, caching intelligently, shaping demand during peaks, prioritizing ruthlessly — those are the skills that matter.
Corn
There's also a lesson here about how commercial and military technology converge. Starlink started as a consumer service, got adopted by cruise ships, and is now being integrated into naval operations. The same bonding and QoS techniques apply across all three domains.
Herman
The human factors are the same too. Whether you're a cruise passenger or a sailor, you want to FaceTime your family. The technology has to serve that need while respecting the operational constraints of the environment.
Corn
One last thing I want to flag. The research mentions that bandwidth demand per cruise ship is projected to hit three hundred forty megabits per second by twenty thirty. But I wonder if that's actually conservative. Every time we've projected bandwidth demand, we've undershot.
Herman
Almost certainly conservative. If you'd told someone in twenty fifteen that a cruise ship in twenty twenty-five would need three hundred megabits, they'd have thought you were insane. Now it's the baseline. As soon as you give people more bandwidth, they find ways to use it. Higher resolution video, more cloud applications, more devices per person.
Corn
On the military side, the appetite for data is insatiable. F-35 mission data files, real-time intelligence feeds, drone video — these are bandwidth-intensive applications that are only going to grow.
Herman
Which makes the bonding approach even more important. You can't bet on any single technology or provider. You need a system that can integrate whatever connections are available and make intelligent decisions about how to use them.
Corn
Now: Hilbert's daily fun fact.
Herman
Octopuses have three hearts, and two of them stop beating when they swim.
Corn
For listeners who want to apply any of this, the most accessible starting point is probably looking at multi-WAN routers for home or small business use. Peplink makes entry-level units that support bonding across two connections. It's not cheap — you're looking at several hundred dollars for the hardware plus the cost of a second internet connection — but if you depend on connectivity for work, the redundancy alone can justify it.
Herman
If you're just curious about the technology, there are some good resources. The Art of Network Engineering podcast episode on cruise ship networking is excellent and surprisingly accessible. The Peplink documentation on SpeedFusion is technical but well-written. And if you want to go deep on the naval side, The War Zone's coverage of the Lincoln deployment is the best public source I've found.
Corn
The one thing I'd add is that QoS is something you can play with even on a standard home router. Most consumer routers have some form of QoS settings, even if they're basic. Prioritizing your work laptop over your streaming devices is a five-minute configuration change that can make a real difference when your connection is congested.
Herman
I think the big open question for the next few years is whether LEO constellations make traditional VSAT obsolete for maritime use, or whether the hybrid model persists indefinitely. My guess is hybrid, because redundancy is valuable and geostationary satellites still have advantages in certain scenarios. But the economics are shifting fast.
Corn
The other open question is how the military balances the undeniable operational benefits of high-bandwidth connectivity with the equally undeniable security risks. The Lincoln model of turning everything off in contested zones works, but it's fragile. One mistake, one chief who doesn't flip the switch, and you've got a problem.
Herman
Which is why I expect we'll see more investment in automated emission control — systems that can detect when you're in a threat environment and adjust connectivity posture without human intervention. But that's a hard engineering problem, and it'll take time.
Corn
Thanks as always to our producer Hilbert Flumingtop for keeping this show running. This has been My Weird Prompts. If you want more episodes, find us at myweirdprompts dot com or wherever you get your podcasts.
Herman
If you're listening on a cruise ship, now you know why your video call keeps buffering. It's not personal. It's packet-level bonding.

This episode was generated with AI assistance. Hosts Herman and Corn are AI personalities.