I have a habit that I suspect a lot of people share: when a new AI tool launches a paid tier, I buy it. Not impulsively — I tell myself I'm doing research, which is technically true. But the real reason is that $20 a month feels safe. It's the price of a nice dinner. A streaming service nobody in the house actually watches. A decision you can reverse in two clicks without a conversation with your bank.
So I bought the $20/month plan for Claude Code. And the $20/month plan for ChatGPT Plus. I told myself I'd see which one I actually used, maybe consolidate. That was the plan.
What happened instead was that I burned through both of them faster than I expected — and started staring at the next pricing tier with an expression I can only describe as existential.
The Token Problem Nobody Warns You About
Here's the thing about AI tokens that pricing pages don't communicate well: they don't disappear slowly. They vanish in bursts. You're deep into a coding session — actual work, the kind where everything is finally clicking — and Claude is in the middle of reviewing a component, cross-referencing three files, suggesting a refactor, and explaining why the old approach was subtly broken. That's not a light conversation. That's context. That's memory. That's tokens.
One solid afternoon of real development work can eat through a significant chunk of your monthly allocation. Not because you're using AI inefficiently, but because you're using it the way it was always meant to be used: as a real collaborator, not a fancy autocomplete. The $20 plan is generous if you're doing occasional queries. It evaporates if you're building something.
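The burst pattern has a mechanical explanation: chat models resend the accumulated conversation on every turn, so a long session's cost grows much faster than linearly with its length. A rough back-of-envelope sketch, with entirely made-up numbers (these are illustrative, not any vendor's actual limits):

```python
# Why one deep afternoon burns so many more tokens than ten quick questions.
# context_per_turn is an invented figure for code, diffs, and explanations.

def session_tokens(turns, context_per_turn=2_000):
    """Each turn resends the whole accumulated context, so total
    token cost grows roughly quadratically with session length."""
    total = 0
    context = 0
    for _ in range(turns):
        context += context_per_turn   # session context keeps growing
        total += context              # and is billed again on every turn
    return total

print(session_tokens(10))  # short Q&A session -> 110,000 tokens
print(session_tokens(50))  # one long refactoring afternoon -> 2,550,000 tokens
```

Five times the turns, roughly twenty-three times the tokens. That nonlinearity is why the wall arrives in bursts rather than gradually.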
I hit limits mid-session more than once. Not at the end of the day. Mid-thought. In the middle of a refactor. At exactly the moment when you most need the context to still be there. If you've experienced it, you know the specific frustration I'm describing. If you haven't, imagine your pair programming partner going silent right when you're debugging the hardest bug of the week.
ChatGPT Plus had a version of the same problem. Under heavy use, GPT-4o would get soft-throttled: responses slowed, and eventually you'd get nudged toward GPT-4o mini for "the rest of the window." I didn't pay $20 a month for GPT-4o mini. That wasn't the deal I thought I was making.
The Cliff Between Tiers
So you look at the next option. You're a reasonable person. You've established that you need more than the base tier. The market should have a middle ground — something in the $40–$60 range that covers real usage without asking you to justify it at your next budget review.
It doesn't. At least not in any meaningful way.
With Claude, the jump is to Claude Max — and if you want the real headroom, you're looking at the x5 multiplier plan. That's $100 a month. Not $30. Not $45. A hundred dollars. You've gone from a dinner-out decision to a conversation with yourself about whether an AI subscription belongs in the same budget category as software tools you use professionally every day.
There is no middle ground. There's a $20 plan that runs out when you're serious, and a $100 plan that's almost impossible to exhaust. The gap between them isn't a pricing tier. It's a philosophical question about what kind of user you think you are.
I eventually made the jump to Claude Max x5. And here's the part that surprised me: I couldn't spend it all. I genuinely tried. Daily sessions, extended context windows, long document analysis, multi-file refactors — and I'd still end the month with capacity to spare. The x5 plan is almost absurdly generous once you cross that threshold. It's not that it's too much. It's that the distance between "not enough" and "more than enough" is exactly 5x the price, with nothing in between.
Who Is the $20 Plan Actually For?
This is the question I keep coming back to. Because the $20 tier clearly serves someone — the growth numbers at Anthropic and OpenAI don't lie, and a lot of that growth came through that entry-level price point.
I think the honest answer is that it was designed as a taste test. A way to experience the real model — not the free tier's throttled version — without a full financial commitment. In that framing, it makes complete sense. You pay $20, you get a genuine feel for what the AI can do, you decide if you want to go deeper. It's a trial disguised as a subscription.
The problem is that the market has matured past that framing. People aren't paying $20 to evaluate anymore. They're paying $20 because they want to work. They're developers, writers, analysts, small business owners — people with real tasks who expect real throughput. And the $20 plan, as currently structured, doesn't deliver that consistently. Not for power users. Not even close.
You end up in one of two camps: you're either a casual user who barely scratches the surface and wonders why you're even paying, or you're a serious user who hits the ceiling constantly and resents the friction. The plan struggles to serve either group well.
The Pricing Model Is the Product
What this reveals isn't just a gap in the market. It's something more fundamental about how these companies think about value.
Token-based limits feel somewhat arbitrary to an end user. You don't buy a software license and then get told you can only use 60% of its features this month. You don't pay for a cloud storage plan and get a "you've been efficient this week" warning before you hit the wall. Tokens are a backend infrastructure cost that got turned into a user-facing constraint, and the translation from one to the other has always been a little awkward.
What I think is actually happening — and this is the "quiet death" part — is that the $20 tier is being slowly redefined. Not removed, but repositioned. It will remain as the entry point for the mass market: students, casual users, people who want occasional access to a capable model without commitment. But for anyone who has come to rely on AI as actual infrastructure for their work, $20 is no longer a real option. It stopped being one sometime in the last 18 months, and most people are only now catching up to that fact.
The question is whether the companies will acknowledge this honestly, or whether they'll keep marketing the $20 plan as if it covers real daily use when, for many of their customers, it simply doesn't.
What Should Actually Exist
I want to be constructive here, because "the pricing is bad" without a better idea isn't useful.
What the market actually needs is a usage-based middle tier. Not a fixed limit that runs out, not a flat rate that's either too low or too high — but something that scales with real usage, with a hard cap if you want one. Pay $20 in a slow month. Pay $55 in a heavy one. Get charged for what you actually consumed, the way you do with cloud infrastructure. The technology to do this exists. The billing infrastructure exists. What's missing is the will to build a pricing model that actually matches how people work.
Some of this is starting to appear in the market: Anthropic's API already offers pay-per-token access, and there are third-party wrappers trying to thread this needle. But for the consumer subscription market, the tiers are still binary in a way that doesn't reflect reality.
Until that changes, the $20/month AI subscription will continue its quiet, slow decline. Not with a bang — companies won't kill it. They'll just let it mean less and less while the real action moves upmarket, and everyone who wants to do serious work will eventually have to choose between throttled frustration and a bill that requires a conversation.
I made my choice. Most of you reading this probably already made yours too. The question is whether the industry will meet the market where it actually is — or keep pretending the taste test is a meal.