
At some point in the past two years, my monthly AI spend quietly became a real number. Not alarming — but real. Claude Pro, a ChatGPT Plus renewal I'd never bothered to cancel, a brief flirtation with Perplexity Pro when I was doing heavy research, and a token credit on the Anthropic API that I'd loaded and mostly ignored. Add it up and I was spending north of $60 a month on AI tools.

The embarrassing part? I wasn't using most of them. The ChatGPT renewal was pure inertia. Perplexity Pro had done its job during one specific project and then sat there billing me silently. The API credit was still 80% unspent.

I'd never let that happen with any other category of expense. I audit my software subscriptions. I cancel gym memberships I don't use. I switch energy providers when the deal runs out. But AI plans had slipped through a blind spot — partly because they're individually cheap enough to feel negligible, and partly because the whole space moved so fast I kept telling myself I'd "need them again soon."

The moment I actually looked at the numbers, the fix was obvious. And it points to a broader principle that I think most people using AI tools are missing.

Manage AI the Way You Manage Everything Else

The rule in any other part of life is simple: if you're not using something, stop paying for it. You don't keep a streaming subscription live during the three months you're too busy to watch anything. You don't pay for a cloud storage tier you're not filling. You cancel, downgrade, or pause — and you come back when you need it again.

For some reason, AI tools have been treated differently. Part of it is the novelty — there's a psychological reluctance to cut something that feels like it's on the frontier. Part of it is the fear of missing out on the next capability update. And part of it is a vague, never-quite-examined belief that there's some cost to cancelling.

There isn't. That's the thing nobody's saying loudly enough: the AI market has no lock-in, no cancellation fees, and no loyalty penalties. Claude Pro is month-to-month. ChatGPT Plus is month-to-month. Perplexity Pro is month-to-month. Every major AI subscription is structured this way, because the market is competitive enough that none of them can afford to add friction. You can cancel today, come back in three months, and your account picks up exactly where it left off.

There is no such thing as a commitment penalty in the AI subscription market right now. None. Cancel when you're not using it. Come back when you are.

This matters because it changes the calculus entirely. If cancelling cost something — a reactivation fee, a lost tier, a lost conversation history — then you'd have a real reason to stay subscribed even during low-use periods. But none of that exists. The cost of cancelling is literally zero. Which means the cost of staying subscribed when you're not using it is purely a waste.

The Right Time to Cut a Plan

I've developed a rough mental rule: if I haven't opened a tool in 10 days, it gets reviewed. If I can't point to a specific upcoming use case within the next 30 days, it gets cancelled or downgraded.

This sounds aggressive, but in practice it means very little churn. My actual usage patterns are predictable. I use Claude heavily during coding-heavy periods — which is most of the time. I reach for Perplexity when I'm doing research with a lot of web sources. I use ChatGPT for specific tasks where its voice mode or canvas features are relevant. But I don't need all three running simultaneously at $20 a pop. I need whichever one is relevant to what I'm doing right now.

The shift is from "keep everything available just in case" to "pay for what I'm actively using, re-subscribe when I need the next one." It's the same logic as a pay-as-you-go phone plan versus a full contract. You give up a little convenience at the margins (the minute it takes to re-subscribe) and you gain real money back.

For someone paying $60/month across three AI tools, rotating based on actual use could realistically drop that to $20-25 while maintaining access to all the same capabilities. The tools haven't changed. The usage pattern has.

Before You Commit to Any Plan: Test Multiple Models

The other mistake I see constantly is people locking into one AI ecosystem without genuinely comparing alternatives. Not because they made an informed choice, but because they started with one tool, got comfortable, and never seriously tried another.

This is expensive in two ways. First, you might be paying for capability you don't need. Second, you might be paying for a worse tool than what's available at the same price — or cheaper.

The AI model landscape has compressed dramatically. In 2023, there was a real quality gap between the top models and everything else. In 2026, that gap has narrowed to the point where the right tool often depends more on the specific task than on any universal quality ranking. Claude is genuinely better at certain types of reasoning and long-context work. ChatGPT has the voice mode and the ecosystem integrations. Gemini has the Google Workspace tie-in. DeepSeek punches extremely hard on technical and coding tasks.

None of this matters if you've never compared them on your actual use cases. The right approach is to run the same real task through two or three models before committing to a paid tier. Not a contrived benchmark — your actual work. The query you run every morning. The document you need summarised. The code you need reviewed. See which one produces output you'd actually use.

Don't pick an AI plan based on reputation. Pick it based on what happens when you run your actual work through it. The answer is often surprising.

The free tiers available right now are genuinely capable. Claude's free tier, ChatGPT's free tier, and Gemini's free tier are all usable tools for moderate workloads. You can do a serious comparison before spending a cent. The paid tiers matter for heavy use — larger context windows, higher rate limits, priority access — but the core model quality is testable for free.

The API Option: Pay Only for What You Actually Consume

Here's where it gets interesting for anyone who wants to go a step further. The subscription model — flat monthly fee regardless of usage — is designed for heavy users. If you're running Claude for hours every day, $20/month is exceptional value. If you're using it for a few targeted tasks each week, you're subsidising someone else's usage.

The API model flips this entirely. Instead of paying a flat fee, you pay per token — the basic unit of text that the model processes. You load credits, use them at the rate you actually consume, and stop paying when you stop using. There's no idle cost. There's no paying for a month of access when you only needed two sessions.
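To make the flat-fee-versus-per-token trade-off concrete, here's a back-of-the-envelope sketch. All the numbers in it — session counts, tokens per session, and the per-million-token price — are illustrative assumptions, not the current rates of any provider:

```python
# Back-of-the-envelope comparison: flat subscription vs pay-per-token API cost.
# All prices and usage figures below are hypothetical placeholders.

def api_monthly_cost(sessions_per_month, tokens_per_session, price_per_million_tokens):
    """Estimated monthly API spend for a given usage pattern."""
    total_tokens = sessions_per_month * tokens_per_session
    return total_tokens / 1_000_000 * price_per_million_tokens

SUBSCRIPTION = 20.00  # flat monthly fee, billed whether you show up or not

# Light user: 8 sessions a month, ~20k tokens each, at a hypothetical $1/M tokens
light = api_monthly_cost(8, 20_000, 1.00)   # $0.16/month

# Heavy user: 60 sessions a month, ~50k tokens each, same hypothetical rate
heavy = api_monthly_cost(60, 50_000, 1.00)  # $3.00/month

print(f"light user: ${light:.2f}/mo vs ${SUBSCRIPTION:.2f} subscription")
print(f"heavy user: ${heavy:.2f}/mo vs ${SUBSCRIPTION:.2f} subscription")
```

The point of the sketch isn't the specific prices — it's that for anything short of daily heavy use, per-token billing tends to come in far under a flat fee.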

DeepSeek is the clearest current example of why this matters. Its API is, at the time of writing, among the cheapest available for a capable frontier-class model. The pricing per token is a fraction of what OpenAI or Anthropic charge through their APIs — and the model quality on technical tasks, coding, and structured reasoning is genuinely competitive. For someone who wants the power of a top-tier model without the subscription overhead, DeepSeek's API is worth understanding seriously.

The practical setup involves loading a small amount of credit — $5, $10 — connecting it to an interface like Continue.dev in VS Code, and then consuming it at your actual rate. A $10 credit, used thoughtfully, can last weeks or months for casual users. Compare that to $20/month for a subscription that runs whether you use it or not.
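The "weeks or months" claim is easy to sanity-check yourself. A minimal sketch, assuming a steady weekly token burn and a made-up per-million-token price — plug in the real rate from your provider's pricing page:

```python
# Rough estimate of how long a prepaid API credit lasts at a steady burn rate.
# The token rate and price below are illustrative assumptions, not real figures.

def credit_lifetime_weeks(credit_dollars, tokens_per_week, price_per_million):
    """Weeks until the credit is exhausted at a constant weekly token rate."""
    weekly_cost = tokens_per_week / 1_000_000 * price_per_million
    return credit_dollars / weekly_cost

# $10 credit, ~500k tokens a week of fairly active use, hypothetical $2/M tokens
weeks = credit_lifetime_weeks(10, 500_000, 2.0)
print(f"a $10 credit lasts roughly {weeks:.0f} weeks")  # roughly 10 weeks
```

Halve the usage or the price and the lifetime doubles — which is how a casual user stretches a single top-up across months.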

I want to be specific about what the API approach trades away, because it's not the right choice for everyone. Subscriptions give you a polished product interface — the chat UI, the mobile app, the features like canvas or voice. APIs give you raw model access, which requires either building your own interface or using a third-party client. For a non-technical user, that friction is real. For someone comfortable in VS Code or willing to spend an afternoon setting up a client, it's a one-time cost that pays back quickly.

The Honest Monthly Audit

My actual routine now looks like this. On the first of every month, I spend about five minutes reviewing my AI subscriptions alongside everything else. The questions I ask are: Did I use this at least a few times per week? Is there a specific project coming up where I'll need it? Is there a cheaper alternative that covers my current use case?

In the months where my coding load is heavy, Claude Pro is non-negotiable — the context window and the quality of reasoning on complex problems justify the cost easily. In quieter months, I drop back to the free tier and use API credits for specific tasks. I've had Perplexity Pro active for two months out of the last six, during the periods when I was doing research-heavy work. The rest of the time, the free tier is enough.

This kind of active management saved me roughly $35-40 a month compared to my previous "keep everything running" approach. Over a year, that's real money — and I haven't lost a single capability I actually needed. Everything was available when I needed it, and nothing was billing me when I didn't.

The AI market is unusually buyer-friendly right now. No lock-in, intense competition, and model quality available across price points that didn't exist two years ago. The only way to lose is to not pay attention.

A Practical Starting Point

If you've never audited your AI subscriptions, do it now. List everything you're paying for. Next to each one, write down the last time you used it and what you actually used it for. If you can't remember, that's your answer.

Cancel the ones you haven't used in the past two weeks and have no specific plan to use in the next two weeks. They'll be there when you come back. Downgrade the ones where the free tier would honestly cover your actual usage.

For your main tool — the one you use daily — keep the subscription. The quality difference between free and paid matters at heavy usage, and the context window on a paid plan genuinely changes what's possible with complex work.

And if you're technical enough to consider the API route, spend an hour with DeepSeek's API documentation and a $5 credit. Not because it's objectively the best model — it isn't always — but because understanding pay-per-token pricing changes how you think about cost entirely. Once you see what $10 of actual consumption looks like, the "just keep the subscription running" logic starts to look a lot less reasonable.

The AI tools are going to keep getting better and cheaper. The companies building them are fighting hard for your subscription. You can afford to be deliberate about who gets it, and when.

Jaime Delgado

Product Analyst & AI early adopter

Jaime has been tracking the AI landscape since the GPT-3 era. He writes about AI capabilities, model comparisons, and practical applications for builders and founders. His daily driver is Claude inside Visual Studio Code — though he also reaches for Grok, Gemini, and ChatGPT when the question is quick and the context is light. He stays genuinely open to every AI that comes along: the landscape moves fast, and so does he. Based in Spain.