#2571: How S3 Billing Actually Works (And Why R2 Is Different)

Storage is the decoy cost. The real surprises come from request charges, egress fees, and early deletion penalties.

Episode Details

Episode ID: MWP-2729
Duration: 30:27
Pipeline: V5
TTS Engine: chatterbox-regular
Script Writing Agent: deepseek-v4-pro

AI-Generated Content: This podcast is created using AI personas. Please verify any important information independently.

Cloud Storage Billing: What You Actually Pay For

When you see a cloud storage pricing page, the per-gigabyte number is always in giant font. That's the decoy. Storage is almost never the largest line item on your bill. The real costs come from four other categories, and understanding them is the difference between a predictable monthly bill and a five-figure surprise.

The Four Cost Categories

Every S3-compatible storage service charges for four things, sometimes five. First, storage itself — dollars per gigabyte per month to hold your data. Second, request charges — every PUT, GET, LIST, or existence check is an API call that costs a fraction of a cent. Those fractions add up fast when you're doing millions of operations. Third, data transfer — specifically egress, or data leaving the provider's network. This is the big one. Fourth, depending on the storage class, there may be retrieval fees or early deletion penalties. Some providers also charge for management features like object tagging, versioning, or cross-region replication.
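
To make the four categories concrete, here is a minimal Python sketch of how a monthly bill decomposes. The rates are illustrative placeholders in the ballpark of published list prices, not any provider's authoritative numbers:

```python
# A minimal monthly-cost sketch covering the four core categories.
# All rates are illustrative placeholders, not real list prices.

def monthly_cost(gb_stored, put_requests, get_requests, egress_gb,
                 storage_rate=0.023,      # $/GB-month
                 put_rate=0.005 / 1000,   # $ per PUT
                 get_rate=0.0004 / 1000,  # $ per GET
                 egress_rate=0.09):       # $/GB to the internet
    return (gb_stored * storage_rate
            + put_requests * put_rate
            + get_requests * get_rate
            + egress_gb * egress_rate)

# 100 GB stored, light writes, heavy reads, 2 TB served:
print(f"${monthly_cost(100, 10_000, 5_000_000, 2_000):.2f}")  # -> $184.35
```

Run the numbers and storage is $2.30 of that $184.35 bill: the decoy, exactly as advertised.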

The Request Charge Trap

Request pricing looks absurdly small on paper. A thousand PUT requests cost about half a cent. Ten thousand GET requests cost about four-tenths of a cent. But scale matters. An application doing a hundred million GET requests a month pays forty dollars just in request charges — before paying a penny for storage or bandwidth. For read-heavy workloads like podcast hosting or serving website assets, this is where Cloudflare R2 made a disruptive move: it charges nothing for requests. No GET fees, no PUT fees, no LIST fees. That simplifies the mental model enormously for static asset workloads.
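
The arithmetic is worth seeing once. A sketch at roughly the list rates quoted above (verify current pricing before relying on it):

```python
# Request charges at roughly the rates quoted above.
PUT_PER_1000 = 0.005    # ~half a cent per thousand PUTs
GET_PER_1000 = 0.0004   # ~four-tenths of a cent per ten thousand GETs

gets_per_month = 100_000_000
print(gets_per_month / 1000 * GET_PER_1000)   # -> 40.0 dollars, requests alone
```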

Egress: Where Horror Stories Come From

Egress is data leaving the provider's network, and it's where the nightmare scenarios live. AWS charges nine cents per gigabyte for the first ten terabytes of internet egress per month. That sounds trivial until you're serving hundreds of terabytes. The classic horror story: a developer puts a one-gigabyte file in a public S3 bucket, it gets posted on Hacker News, serves fifty terabytes of egress over a weekend, and wakes up to a $4,500 bill. The file itself cost less than three cents to store for the month. The egress did the damage.
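
The same story in a few lines, using a flat first-tier egress rate for simplicity (volume tiers would shave the total slightly):

```python
file_gb = 1.0
downloads = 50_000        # 50 TB served = 50,000 one-gigabyte downloads
egress_rate = 0.09        # $/GB, first tier (approximate)
storage_rate = 0.023      # $/GB-month (approximate)

print(f"storage: ${file_gb * storage_rate:.3f}/month")        # ~$0.023
print(f"egress:  ${downloads * file_gb * egress_rate:,.0f}")  # ~$4,500
```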

Upload is almost always free. Providers make it frictionless to give them data. Getting it back out — that's where the meter runs. Cloudflare R2's pitch is fundamentally different: zero egress to the internet. They absorb the bandwidth cost because their global network and peering arrangements make it vastly cheaper for them than for you.

When S3 Makes Sense

Direct price comparisons between R2 and S3 miss the point. S3 isn't really selling storage — it's selling storage that talks to everything else in AWS with minimal latency and no transfer costs within the region. If your application runs on EC2, uses SageMaker for machine learning, or processes data with EMR or Athena, that colocation matters. Moving storage to an external provider means pulling data across the public internet — slower, less reliable, and potentially incurring ingress costs on the compute side.

Storage Class Gotchas

The cheaper storage classes come with fine print. S3 Standard Infrequent Access costs half as much to store, but charges a retrieval fee per gigabyte when you access data. Glacier Flexible Retrieval costs about three-tenths of a cent per gigabyte per month — a terabyte for three dollars — but retrieval takes minutes to hours and carries a ninety-day minimum. Delete an object before ninety days, and you still pay for the full term. Glacier Deep Archive has a hundred-eighty-day minimum. What looks like a savings strategy can become a penalty if you need to reorganize data.
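
A quick sketch of the early-deletion math, with the rate and minimum hedged as approximate:

```python
# Cold tiers bill a minimum duration per object, whether you keep it or not.
def cold_storage_cost(gb, days_kept, rate_per_gb_month, min_days):
    billed_days = max(days_kept, min_days)
    return gb * rate_per_gb_month * (billed_days / 30)

# 1 TB in a Glacier-like tier (~$0.0036/GB-month, 90-day minimum),
# deleted after 30 days: you still pay for all 90.
print(cold_storage_cost(1000, 30, 0.0036, 90))   # ~$10.80, not ~$3.60
```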

The Right Tool for the Job

For static assets — images, audio, video, documents — served to a global audience without complex compute integration, R2's model is hard to beat. Storage at 1.5 cents per gigabyte per month, no request charges, no egress fees. A thousand podcast episodes at fifty megabytes each costs about sixty cents a month, regardless of how many times they're downloaded. For dynamic applications that need tight integration with compute and event processing, S3's ecosystem is the real product. The key is knowing which job you're actually doing.
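
The podcast math, spelled out with R2's published rates at the time of writing (verify before relying on them):

```python
episodes, mb_each = 1000, 50
stored_gb = episodes * mb_each / 1000       # 50 GB total
free_gb, rate = 10, 0.015                   # 10 GB free tier, $/GB-month
print(max(0, stored_gb - free_gb) * rate)   # -> 0.6 dollars/month
```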

Transcript

Corn
Daniel sent us this one — he wants to understand the basic parameters of S3-style billing. Not the deep architecture, but what you're actually paying for. Storage, obviously, but then there's this whole world of egress fees, request charges, and those horror stories where someone sets up a bucket, something goes viral, and suddenly they're staring at a six-figure bill. He also mentioned Cloudflare R2, which we use for our own podcast audio, and he's curious whether every listen triggers a tiny charge or if we're just paying to hold the files. The core question is: how does this billing actually work, and what should you actually be vigilant about?
Herman
This is one of those topics where the per-gigabyte price is the decoy. It's the number everyone looks at, and it's almost never the number that actually determines your bill.
Corn
Right, because it's the one number they put in giant font on the pricing page.
Herman
And it's not that the number is dishonest. It's that storage is usually the smallest line item. The real story is in the other three or four things you're paying for simultaneously. And the horror stories Daniel mentioned — those almost always come from someone who understood the storage price and nothing else.
Corn
Before we get into the horror stories, let's actually lay out the categories. When you put an object in S3 or any S3-compatible storage, what are the buckets — no pun intended — of cost?
Herman
Four main ones. Sometimes five depending on the provider. First, storage itself — that's the dollars per gigabyte per month to hold your data. Second, requests — every time you put something in, take something out, list what's there, or even check if something exists, that's an API call and it counts. Third, data transfer — this is the big one, the egress fees. Fourth, depending on the storage class, there might be retrieval fees or early deletion penalties. And then fifth, for some providers, there are management features like object tagging, versioning, replication across regions — those can add costs too.
Corn
Let's go through those one at a time, because I think the request charges are the one that genuinely surprises people. You think you're paying for storage and bandwidth, and then you get a bill with a line item for millions of tiny operations.
Herman
The pricing on requests sounds absurdly small until you do the math. On standard S3, a PUT request — writing an object — costs about half a cent per thousand. A GET request is even cheaper, about four-tenths of a cent per ten thousand. So you read those numbers and think, fine, negligible. But then you build an application that does a hundred million GET requests a month, and suddenly that's forty dollars just in request charges before you've paid a penny for storage or bandwidth.
Corn
A hundred million GETs isn't insane. If you've got a popular podcast and you're serving audio files directly from a bucket, like we are, and each episode gets downloaded a few thousand times — that's a few thousand GETs per episode, multiplied by however many episodes people are streaming. It adds up.
Herman
Here's the thing about request charges — they're the category where Cloudflare R2 made a disruptive move. R2 charges nothing for requests. No charge for GETs, no charge for PUTs, no charge for LIST operations. For a workload that's read-heavy, like podcast hosting or serving assets for a website, that's a meaningful difference. Not enormous — we're talking tens or maybe low hundreds of dollars a month for most people — but it simplifies the mental model enormously.
Corn
By the way, today's episode is being written by DeepSeek V four Pro. Seemed like a good fit for a billing deep-dive.
Herman
Let's see if it keeps us on track.
Corn
Storage and requests — those are the predictable ones. Now let's talk about the thing Daniel actually flagged in his prompt: egress. What does "egress free" actually mean, and what are you missing if a provider doesn't offer it?
Herman
Egress is data leaving the provider's network. And this is where the pricing model gets complicated, because "egress" isn't one thing. There's egress to the internet — that's the big one, the one that shows up in horror stories. There's egress to other AWS services in the same region, which is usually free. There's egress to other regions, which is cheaper than internet egress but still not free. There's egress through a CDN like CloudFront, which has its own pricing structure. And then there's the direction that most people don't think about: upload is almost always free. You can push data into S3 all day and nobody charges you for the inbound bandwidth.
Corn
That asymmetry is interesting. They're basically saying, we'll make it frictionless for you to give us your data, but getting it back out — that's where the meter runs.
Herman
The meter runs at rates that look small. AWS standard internet egress is nine cents per gigabyte for the first ten terabytes a month, dropping to eight and a half cents, then seven cents, then five cents as you go up in volume. Nine cents a gig sounds trivial. But if you're serving video files or high-quality audio and you're doing hundreds of terabytes of egress, nine cents times a hundred thousand gigabytes is nine thousand dollars. And that's before you hit the volume discounts.
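
A sketch of the tiered math Herman is describing, with tier boundaries at the approximate list rates he quotes:

```python
# Tiered internet egress: (tier size in GB, $/GB). Approximate rates.
TIERS = [(10_000, 0.09), (40_000, 0.085), (100_000, 0.07), (float("inf"), 0.05)]

def egress_cost(gb):
    total = 0.0
    for size, rate in TIERS:
        chunk = min(gb, size)
        total += chunk * rate
        gb -= chunk
        if gb <= 0:
            break
    return total

print(f"${egress_cost(100_000):,.0f}")   # 100 TB -> $7,800, not $9,000 flat
```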
Corn
This is where the horror stories come from, right? The classic one is someone puts a large file in a public S3 bucket, it gets linked somewhere, goes viral, and they wake up to a bill that runs to four or five figures.
Herman
There's a well-known case from a few years ago — a developer put a one-gigabyte file in a public bucket, it got posted on Hacker News, served something like fifty terabytes of egress in a weekend, and the bill was around four thousand five hundred dollars. And that's a one-gigabyte file. The file itself cost less than three cents a month to store. The egress is what destroyed the budget.
Corn
The nightmare scenario is even worse if you don't have billing alerts set up. AWS does have free tier usage alerts and you can set budget alarms, but they're not on by default in a way that catches this. You have to configure them.
Herman
And AWS has gotten better about this — they introduced a free tier that covers a certain amount of egress, they've got budget actions now, they'll actually stop your resources if you configure it. But the default posture is still: you can spend an unlimited amount of money unless you actively put a ceiling in place. Cloudflare R2's pitch on egress is fundamentally different. They simply don't charge for egress to the internet. The data transfer out is zero. They absorb it because they've got this enormous global network and their peering arrangements make bandwidth vastly cheaper for them than it is for you.
Corn
When Daniel asks whether every time someone listens to our podcast, it incurs a charge — on R2, the answer is no. The egress is free. The GET request is free. We're literally just paying for storage.
Herman
And storage on R2 is one and a half cents per gigabyte per month, with a ten-gigabyte free tier. So if we've got, let's say, a thousand episodes at roughly fifty megabytes each for decent quality audio, that's fifty gigabytes of storage. The first ten gigs are free, the remaining forty gigs cost us sixty cents a month. That's it. No matter how many people listen, no matter how many times they download, the bill is sixty cents a month plus whatever tax.
Corn
That's almost absurdly cheap. And it makes you realize how much of the traditional cloud provider pricing is built around the assumption that you're running an application, not just serving static assets.
Herman
That's the key insight about R2's business model. They're not trying to be your application backend. They're saying: if you've got static assets — images, audio, video, documents, website files — put them here, pay us for storage, and we'll handle the distribution for free because our network already exists and the marginal cost to us is close to zero. If you need object storage as part of a dynamic application with compute, Lambda functions, complex event processing — that's where you probably still want S3, because the integration with the rest of AWS is the real product.
Corn
Let's talk about that integration point, because I think that's where the "S3 is expensive" narrative gets unfair. People compare R2's price to S3's price per gigabyte, and R2 looks cheaper. But S3 isn't really selling you storage. It's selling you storage that talks to everything else in AWS with zero latency and no transfer costs within the region.
Herman
This is the point I always want to make when people do direct price comparisons. If your application runs on EC2, if you're using SageMaker for machine learning, if you're doing data processing with EMR or Athena, the fact that S3 is colocated with all of that infrastructure means you're not paying for data movement between services. The moment you move your storage to an external provider, even one with free internet egress, you're now pulling data across the public internet into your compute environment. That's slower, it's less reliable, and depending on your cloud provider's ingress policies, you might be paying for inbound bandwidth on the compute side.
Corn
The real comparison isn't R2 versus S3 as isolated storage products. It's R2 plus whatever else you're using versus S3 plus the AWS ecosystem. And for a lot of use cases, the ecosystem wins.
Herman
For a lot of use cases, yes. For our use case — serving static podcast files to a global audience — the ecosystem doesn't matter. We don't need S3's event notifications, we don't need Lambda triggers, we don't need to query our audio files with Athena. We just need a URL that serves a file quickly and reliably. And R2 does that with zero egress and zero request charges. It's the right tool for this specific job.
Corn
Let's circle back to the billing categories we haven't covered yet. You mentioned retrieval fees and early deletion. Those are mostly relevant for the cheaper storage classes, right?
Herman
Yes, and this is where I think the fine print actually does bite people. S3 has multiple storage classes. Standard is the default — you pay about two point three cents per gigabyte per month, no retrieval fees, you can delete whenever. Then there's S3 Intelligent Tiering, which automatically moves objects between frequent and infrequent access tiers based on usage patterns, and that's great for unpredictable workloads. Then you've got S3 Standard Infrequent Access, which is about half the storage cost but you pay a retrieval fee per gigabyte when you access the data. Then there's S3 One Zone Infrequent Access, even cheaper but stored in only one availability zone, so if that zone fails, your data is gone.
Corn
Then Glacier, which is a whole different universe.
Herman
Glacier is where the early deletion penalties really matter. S3 Glacier Flexible Retrieval is something like three-tenths of a cent per gigabyte per month. That's absurdly cheap — a terabyte for about three dollars a month. But retrieval takes minutes to hours depending on what speed you choose, and you pay a retrieval fee. And here's the gotcha: if you delete an object before it's been stored for ninety days, you still pay for the full ninety days. Same with Glacier Deep Archive — one hundred and eighty day minimum. So if you put a bunch of data in Deep Archive thinking you'll save money, and then three months later you realize you need to reorganize it, you're paying for six months of storage on every object you delete.
Corn
The ninety-day minimum on Glacier is the one that I think trips people up, because it's not intuitive. You think you're paying for the time you used, like a hotel room. But it's more like a lease — you committed to ninety days whether you stay or not.
Herman
It makes sense from AWS's perspective. They're buying physical hard drives, they're allocating capacity, they're making assumptions about how long data will sit there. If everyone could dip in and out, the economics wouldn't work. But the pricing page doesn't exactly lead with the early deletion policy. It's in the fine print, and if you're skimming, you miss it.
Corn
Let's build a practical framework. If someone's setting up object storage today, what's the checklist of things they need to look at before they start uploading data?
Herman
First question: what's your access pattern? Are you writing once and reading frequently, like podcast episodes or website assets? Are you writing once and rarely reading, like backups? Are you doing lots of small reads and writes, like an application database? The access pattern determines which storage class makes sense and whether request charges matter.
Corn
Second question: where's your audience or your compute? If everything is in one cloud provider, staying within that ecosystem probably saves you money even if the per-gigabyte storage cost looks higher, because cross-provider data transfer is where the real pain lives.
Herman
Third: what's your egress profile? If you're serving data to the internet, you need to estimate how much. A hundred gigabytes a month? Fine, egress fees are negligible. A hundred terabytes a month? You need to be on a provider with free egress, or you need a CDN in front of your storage that has better egress pricing, or you need to architect your application differently.
Corn
Fourth: what's your tolerance for retrieval latency? If you need instant access to everything, you're in Standard storage, period. If you can wait minutes or hours, Glacier saves enormous amounts of money. If you can wait days and you're willing to commit to years of storage, Deep Archive is almost free.
Herman
Deep Archive is about one dollar per terabyte per month. For a terabyte. That's the kind of pricing where you start asking, why would I ever delete anything? And the answer is: because if you ever need to get it back, the retrieval costs can be substantial, and if you need it fast, the expedited retrieval fee is significant.
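
The four questions can be boiled down to a toy decision helper. The thresholds below are invented for illustration, not a recommendation:

```python
# Map access pattern, latency tolerance, and retention to a tier.
# All thresholds are hypothetical.
def suggest_tier(reads_per_month, max_wait_hours, retention_days):
    if max_wait_hours == 0:
        return "standard"                 # instant access required
    if reads_per_month < 1 and retention_days >= 180 and max_wait_hours >= 12:
        return "deep-archive"
    if reads_per_month < 1 and retention_days >= 90:
        return "glacier-flexible"
    if reads_per_month < 10:
        return "infrequent-access"
    return "standard"

print(suggest_tier(reads_per_month=0, max_wait_hours=24, retention_days=365))
```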
Corn
Let me ask a question that I think Daniel was hinting at: is the fear of surprise bills overblown? Are the horror stories rare edge cases, or is this something the average person setting up a side project should worry about?
Herman
I think the fear is directionally correct but the probability is low for most people. The horror stories almost always involve one of three things: a public bucket with large files that goes viral, a misconfigured application that's making millions of unnecessary requests, or someone who turned on cross-region replication without understanding the data transfer costs. If you're not doing any of those things, your bill is probably going to be predictable and reasonable.
Corn
The problem is that the system allows you to do those things without warning you. There's no confirmation dialog that says, "Hey, you just made this bucket public. If someone links to that fifty-megabyte file and it gets a million downloads, that's going to cost you about four thousand five hundred dollars in egress. Are you sure?"
Herman
That's the real design tension. AWS has built a system that's enormously flexible and powerful, and that flexibility means you can shoot yourself in the foot. They've added more guardrails over the years — you can block public access by default now, you can set cost anomaly detection, you can configure hard spending limits. But the defaults are still permissive, and that's a philosophical choice. They'd rather let power users move fast and put the burden on everyone to learn the safety features.
Corn
Whereas Cloudflare's approach with R2 is basically: we've removed the dangerous dials. You can't accidentally run up an egress bill because there's no egress charge. You can't accidentally run up request charges because there are no request charges. The only knob you have is how much you store, and that's linear and predictable.
Herman
That simplicity is valuable for a huge number of use cases. It's not the right choice for everything — if you need S3's event system, its strong read-after-write consistency, its integration with IAM policies at a granular level, R2 isn't a drop-in replacement. But for "I have files and I want people to be able to download them," it's hard to beat.
Corn
Let's talk about the CDN layer, because that's where a lot of the egress cost optimization actually happens in practice. Even on S3, you're not supposed to serve files directly to the internet from your bucket. You put CloudFront in front of it.
Herman
And CloudFront has its own egress pricing, which is different from S3 egress pricing. Data transfer from S3 to CloudFront is free — that's within the AWS ecosystem. Data transfer from CloudFront to the internet is typically cheaper than S3 direct egress, and it's cached, so if the same file gets requested a million times, you're only pulling it from S3 once. Plus, CloudFront gives you better performance globally because the files are cached at edge locations closer to users.
Corn
The architecture that avoids the horror story is: S3 bucket, not public, with CloudFront in front of it. Even if something goes viral, CloudFront caches the file, you serve it from edge caches, and your S3 egress bill is minimal. You're still paying CloudFront egress, but it's lower and more predictable.
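
Rough numbers for why the cache layer changes the bill. The rates and hit ratio are illustrative assumptions:

```python
# With a CDN in front, the origin only serves cache misses.
requests, file_gb = 1_000_000, 0.05   # a 50 MB file, a million downloads
hit_ratio = 0.99                      # assumed cache hit ratio

origin_gb = requests * (1 - hit_ratio) * file_gb   # 500 GB of misses
edge_gb = requests * file_gb                       # 50 TB served to users

# S3 -> CloudFront transfer is free, so the origin layer costs ~nothing
# per GB; the bill shifts to the CDN's cheaper, cacheable edge egress.
print(f"origin fetches: {origin_gb:,.0f} GB, edge egress: {edge_gb:,.0f} GB")
```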
Herman
You can layer additional cost controls on top. You can set CloudFront to only serve from certain regions if your audience is geographically concentrated. You can use signed URLs or signed cookies to prevent hotlinking. You can set rate limits. None of this is on by default, but it's all available, and the combination of S3 plus CloudFront with basic security configured is the standard pattern for a reason.
Corn
The horror stories are really about people who skipped the CloudFront step and served directly from a public S3 bucket. Which, to be fair, is the simplest thing — you upload a file, you get a URL, you share the URL. Until it works too well.
Herman
That's the trap. The happy path is also the dangerous path. The thing that's easiest to do is the thing that can generate a four-figure bill overnight.
Corn
Let's pivot to something Daniel mentioned in passing: the different ways storage itself gets billed. He said he's seen both fixed monthly prices and usage-based pricing. Can you unpack that?
Herman
The dominant model in object storage is pay-for-what-you-use, metered by the gigabyte-month. But a gigabyte-month isn't as simple as "you stored a gigabyte for a month." Most providers calculate it based on average storage over the billing period. They sample your storage usage at regular intervals — every hour, every day, whatever — and then calculate the average. So if you store a terabyte for fifteen days and then delete it, you pay for roughly half a terabyte-month, not a full terabyte-month.
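
The averaging in two lines, assuming a 30-day month and an illustrative rate:

```python
tb, days_stored, days_in_month = 1.0, 15, 30
gb_months = tb * 1000 * (days_stored / days_in_month)   # ~500 GB-months
print(gb_months * 0.023)   # -> ~$11.50 at an illustrative $0.023/GB-month
```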
Corn
Which is more fair than a fixed allocation model where you reserve a terabyte and pay for it whether you use it or not.
Herman
Much more fair for variable workloads. But there are providers that do fixed allocation too. Some of the smaller S3-compatible providers, like Wasabi, charge a fixed fee per terabyte with a minimum storage duration. Wasabi's model is interesting — they charge about six dollars per terabyte per month, no egress fees, no request charges, but if you delete an object before it's been stored for ninety days, you pay for the full ninety days. It's like a hybrid between S3 Standard and Glacier.
Corn
The landscape is basically: pay-as-you-go with separate charges for everything, versus simplified pricing with some combination of free egress, free requests, but minimum commitments or early deletion penalties. There's no free lunch.
Herman
There's never a free lunch. The simplified pricing providers are making a bet about your usage patterns. They're assuming that for every customer who serves terabytes of egress, there are a hundred customers who store data and barely access it. The storage fees from the quiet customers subsidize the egress from the busy ones. And because bandwidth costs them less than the list price — especially if they have their own network like Cloudflare — the math works out.
Corn
If you're a quiet customer, you might actually be better off on a traditional pay-as-you-go model where you only pay for the egress you actually use, which is close to zero. And if you're a high-egress customer, the simplified pricing is a massive win.
Herman
And that's why the first question is always: what's your access pattern? There's no universally correct answer. The right provider depends entirely on what you're doing.
Corn
Let me ask you about one more thing that I think trips people up: the difference between a bucket that's "public" and a bucket that has "public objects." Those are not the same thing.
Herman
This is a crucial distinction. A public bucket means anyone with the URL can list all the objects in it — they can see what files you have. A bucket that's not public but contains objects that are publicly readable means people can access specific files if they know the exact URL, but they can't browse the directory. The horror stories usually involve public buckets, because that's how someone discovers the files in the first place. If your bucket isn't publicly listable, the only way someone finds your large file is if you share the URL directly.
Corn
Even then, if you share the URL in a public place and it goes viral, you're still on the hook for egress. The "not publicly listable" setting protects you from crawlers and casual snooping, but it doesn't protect you from your own viral success.
Herman
The only thing that really protects you is putting a CDN in front, or using a provider with no egress fees, or setting hard spending limits with automated cutoff. And I want to emphasize that last one, because AWS now supports budget actions that can actually stop your resources. You can say: if my monthly spend hits five hundred dollars, revoke all public access to this bucket. Or even: shut down the bucket entirely. It's a nuclear option, but for a side project where you can't afford a surprise bill, it's worth configuring.
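
For reference, a hedged boto3 sketch of the alarm half of this: a monthly cost budget with an email alert. The account ID and address are placeholders, and the automated cutoff Herman describes (a budget action) is a separate API call that needs an IAM role, so it's omitted here:

```python
import boto3

budgets = boto3.client("budgets")
budgets.create_budget(
    AccountId="123456789012",                     # placeholder
    Budget={
        "BudgetName": "object-storage-ceiling",
        "BudgetLimit": {"Amount": "500", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[{
        "Notification": {
            "NotificationType": "ACTUAL",
            "ComparisonOperator": "GREATER_THAN",
            "Threshold": 80.0,                    # alert at 80% of the limit
            "ThresholdType": "PERCENTAGE",
        },
        "Subscribers": [{"SubscriptionType": "EMAIL",
                         "Address": "you@example.com"}],   # placeholder
    }],
)
```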
Corn
That's the kind of thing that should probably be part of the setup wizard. "Before we create this bucket, let's set a spending limit." But it's buried in the Billing and Cost Management console, which is a completely different part of the AWS interface.
Herman
The AWS console is famously sprawling. I've been using it for years and I still discover new panels. But to be fair, they've improved the cost visibility enormously. You can now see daily breakdowns by service, by region, by tag. You can set up anomaly detection that uses machine learning to flag unusual spending patterns. It's not that the tools don't exist — it's that you have to know to go looking for them.
Corn
Which brings us back to Daniel's core question: what should you actually be vigilant about? I think the answer is less about understanding every line item in the billing docs and more about setting up the safety nets before you need them.
Herman
My practical checklist would be: one, block all public access to your buckets by default. Two, put a CDN in front of anything that serves content to the internet. Three, set a billing alarm at a number that would be uncomfortable but not ruinous — maybe fifty dollars for a hobby project, maybe five hundred for a small business. Four, if you're using a provider that charges for egress, estimate your worst-case monthly egress and multiply by three, and make sure that number doesn't make you panic. Five, if you're using cold storage classes, read the minimum storage duration policy and make sure you're comfortable with it.
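
Item one on that checklist is a one-call fix. A sketch using S3's Block Public Access API via boto3 (the bucket name is a placeholder):

```python
import boto3

s3 = boto3.client("s3")
s3.put_public_access_block(
    Bucket="my-bucket",   # placeholder
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```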
Corn
Six, I'd add: tag everything. Tag your buckets, tag your objects if you can. The moment you have multiple projects or multiple environments, cost allocation tags are the only way to figure out what's actually driving your bill.
Herman
Tags are one of those things that feel like overhead when you're setting up and feel like salvation when you get a complicated bill and need to understand it. A five-minute investment in tagging saves hours of forensic accounting later.
Corn
Let's talk about one more scenario that I think is increasingly common: people using S3 as the origin for a website or application that's served through a CDN like Cloudflare or Fastly. In that setup, the CDN is pulling from S3, caching, and serving to users. The egress from S3 to the CDN is the only egress you're paying at the storage layer, and if your cache hit ratio is good, that's minimal.
Herman
This is where the economics get really interesting. If you're using Cloudflare's CDN, which has free egress, and your origin is R2, which also has free egress, you're paying nothing for data transfer at either layer. Your only cost is storage. That's an incredibly compelling model for content distribution. But if your origin is S3 and your CDN is CloudFront, you're paying for egress from CloudFront to the internet, but the S3 to CloudFront transfer is free. Both models can work. It's about understanding where the meter is in your specific architecture.
Corn
The meter is always somewhere. The trick is knowing where it is and what the rate is.
Herman
That's really the whole episode, isn't it? Object storage billing isn't complicated in theory — it's four or five categories with relatively simple pricing. What makes it complicated is that the defaults are optimized for flexibility, not safety, and the most dangerous settings are the easiest to enable. The horror stories aren't about complex billing formulas. They're about people who didn't realize the meter was running.
Corn
The good news is that the market has responded. Between Cloudflare R2, Wasabi, Backblaze B2, and others, there are now providers that have simplified the model dramatically. You don't have to play the egress optimization game if you don't want to. You can pay a slightly higher storage price and stop thinking about data transfer entirely.
Herman
Backblaze B2 is actually a great example of a middle ground. They charge for storage, they charge for egress — but their egress is free up to three times your average monthly storage. So if you store a terabyte, you get three terabytes of free egress per month. For most use cases, that effectively means free egress. And their storage pricing is competitive, around six dollars per terabyte per month. It's a sensible model that protects them from abuse while being generous for normal usage.
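
The allowance model in a few lines. The overage rate is a placeholder, so check B2's current price sheet:

```python
# Egress free up to 3x average monthly storage; overage billed per GB.
def b2_egress_cost(stored_tb, egress_tb, overage_per_gb=0.01):   # rate assumed
    free_tb = 3 * stored_tb
    billable_tb = max(0.0, egress_tb - free_tb)
    return billable_tb * 1000 * overage_per_gb

print(b2_egress_cost(stored_tb=1, egress_tb=2.5))   # 0.0 — within allowance
print(b2_egress_cost(stored_tb=1, egress_tb=5.0))   # 2 TB over -> $20.0
```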
Corn
We've got a spectrum now. On one end, traditional cloud providers with granular pricing and lots of knobs. On the other end, simplified providers with free egress and free requests. And in the middle, providers with generous free allowances that cover typical usage. The market has gotten better for consumers.
Herman
The egress fee debate from a few years ago — when Cloudflare launched R2 and explicitly called out AWS egress fees as a tax on customers — that debate actually changed the industry. AWS didn't eliminate egress fees, but they did expand their free tier and make it easier to control costs.
Corn
Even if the change comes slowly. AWS still charges nine cents a gig for internet egress. But the fact that alternatives exist means you're not trapped.
Herman
You were never really trapped — you just had to architect around it. The difference now is that you can choose not to think about it at all, and that's a real improvement in developer experience.
Corn
To bring it back to Daniel's specific situation with the podcast: we're on R2, we're paying for storage only, our egress is free, our requests are free, our bill is predictable and tiny. The only thing we need to be vigilant about is not accidentally making our bucket publicly listable, which is a security concern more than a cost concern. And if we ever outgrow R2 — if we need the AWS ecosystem for something — we know exactly what the tradeoffs are.
Herman
That's the ideal outcome of understanding the billing model. You're not afraid of it because you know what the levers are and what they do. The fear Daniel described — that nagging worry that you're missing some fine-print gotcha — that goes away once you've actually mapped out the categories and figured out which ones apply to you.
Corn
And now: Hilbert's daily fun fact.

Hilbert
The shortest war in recorded history was the Anglo-Zanzibar War of eighteen ninety-six, which lasted between thirty-eight and forty-five minutes depending on which clock you trust.
Corn
...right.
Herman
Always keeping us on our toes.
Corn
One thing I keep thinking about after this conversation is how much of cloud cost anxiety is really about opacity rather than actual expense. The dollar amounts are often small. The stress comes from not knowing whether they'll stay small.
Herman
That's a design problem, not a pricing problem. If every cloud console had a live cost tracker as prominent as the storage browser — if you could see your projected monthly bill update in real time as you configured things — a lot of the anxiety would evaporate. The tools exist, they're just not surfaced well.
Corn
Maybe that's the takeaway: the vigilance Daniel's asking about is mostly about configuring the visibility tools before you need them. Alarms, budgets, cost anomaly detection. Set them up once, and then you can stop worrying about the fine print.
Herman
Pick the right provider for your access pattern. That's the biggest leverage point. Everything else is fine-tuning.
Corn
Thanks as always to Hilbert Flumingtop for producing. This has been My Weird Prompts. If you want more episodes, you can find us at myweirdprompts dot com or wherever you get your podcasts.
Herman
See you next time.

This episode was generated with AI assistance. Hosts Herman and Corn are AI personalities.