Two years ago, my company enrolled me in Percipio. It was well-intentioned — a curated library of courses, video lessons, skill paths, certifications. The kind of platform that looks impressive in a LinkedIn Learning comparison slide. I tried it for a few weeks and then quietly stopped opening it.
It wasn't bad. It just wasn't answering what I actually needed to know. Every time I had a real question — something specific, something urgent, something sitting at the intersection of two domains I was working across — I'd search the platform, find something adjacent but not quite right, and end up on Stack Overflow or YouTube anyway.
That pattern repeated itself with Coursera too. I paid for a month. I enrolled in a course. I watched four hours of content, none of which addressed the exact problem I was trying to solve, and let the subscription lapse.
I'm not alone in this. Chegg's stock has collapsed. Coursera has been cutting costs. The whole category of structured online learning is facing a pressure that didn't exist three years ago — and most of it is coming from AI subscriptions.
The Knowledge Tool That Doesn't Know What You Need
Here's the fundamental problem with traditional learning platforms: they're built around what someone else decided you should learn, not what you actually need right now.
Course creators build paths. Curriculum teams sequence modules. Editors curate libraries. All of that work produces something that is, by design, generic — a version of a topic that tries to serve as many people as possible. The tradeoff is that it serves almost no one exactly. You get a broad survey when you needed a specific answer. You get the foundational theory when you needed the three lines of code that make it work in your specific context. You get sixty hours of material when your question deserved sixty seconds.
Learning platforms sell structured content. What knowledge workers actually need is contextual answers. These are not the same product.
This isn't a criticism of the instructors or the editors — they're doing exactly what the format requires. The problem is the format itself. Pre-recorded, pre-sequenced, pre-decided learning was the best we had when the alternative was a $500 textbook or a $5,000 conference. When AI subscriptions showed up at $20 a month, the equation changed entirely.
What an AI Subscription Actually Gives You
I've been using Claude as my primary knowledge tool for over a year now. And the more I use it, the more I realise that what it replaced wasn't just a platform — it was an entire model of how to acquire knowledge.
When I have a question, I ask it in the exact form it exists in my head. Not the keyword version of it, not the sanitised version of it — the actual question, with all the context attached. "I'm building a SaaS on Next.js with a Supabase backend and I need to understand how row-level security works when I'm using server-side rendering, particularly in the case where the session might not be available on the first render." That's the question. And Claude answers that question, not a related question that a course author happened to write about.
That's the shift. You search for what you need, not for what someone decided you probably need. It sounds obvious when you say it out loud. But it's the most significant change in knowledge acquisition that's happened in my lifetime.
The result in practice is that I learn faster, retain more, and never sit through content that doesn't apply to me. When I'm stuck, I'm unstuck within minutes. When I don't understand something, I ask a follow-up in plain language and get a plain-language explanation. When I need an analogy that fits my background, I ask for one. The learning conforms to me, not the other way around.
Why Coursera, Percipio, and Chegg Are Losing
Let's be specific about what's happening to each of these platforms, because the story is slightly different in each case — but the underlying cause is the same.
Chegg was built on homework help. Students would submit a problem and get a worked solution, or access a textbook explanation, or find a solution set that had been uploaded by someone else. It was useful enough that millions of students paid for it. Then ChatGPT launched and students immediately discovered that an AI could explain the problem, walk through the solution step by step, and do it in conversational English for free. Chegg's subscriber count dropped so fast the company had to issue a public statement acknowledging that ChatGPT was causing the decline. They haven't recovered. The service isn't bad — the competition is just categorically better at the thing students actually used it for.
Coursera is a more nuanced case. It's not dead, and for certain use cases — certifications that employers specifically recognise, structured degree programmes, content from universities that carry weight on a CV — it still has value. But the casual learner, the professional trying to upskill quickly, the person who just needs to understand something well enough to use it? That person is gone. They're talking to Claude or ChatGPT instead, and they're learning faster.
Percipio is the corporate platform story. Companies pay for enterprise licences, employees are enrolled, completion rates are reported to HR. The learning is happening for the metrics dashboard more than for the learner. An AI subscription costs a fraction of a Percipio seat and delivers genuinely useful answers to the questions employees actually have. The value proposition of the platform-as-checkbox is collapsing in real time.
These platforms didn't fail because they got worse. They failed because AI subscriptions got good enough to make their fundamental model obsolete.
The model — aggregate content, organise it into paths, charge for access — made sense when the alternative was scattered and hard to access. When the alternative became a conversational AI that knows more than any course library and can apply that knowledge to your exact situation in seconds, the platform model stopped competing.
The One Real Problem with AI as a Knowledge Tool
I want to be honest here, because the article wouldn't be complete without it: AI is an extraordinary knowledge tool, but it is not an infallible one. And the failure mode is particularly dangerous in a knowledge context.
AI models hallucinate. They state things confidently that are simply wrong. Not because they're being deceptive, but because the way they generate text is fundamentally statistical — and sometimes the most statistically likely-sounding answer is an incorrect one. A course on Coursera that gets a fact wrong is embarrassing. An AI that tells you something false in a way that sounds completely plausible can send you down the wrong path for hours.
I've been burned by this. Not badly, but enough to build a habit around it.
The Multi-AI Verification Habit That Changes Everything
The technique is simple: for anything important, don't stop at one AI.
My primary tool is Claude — it's the one I use for deep work, for complex reasoning, for tasks that need sustained context. But when I get an answer that I'm going to act on, especially on a technical topic or something where being wrong has real consequences, I cross-check it with a second model. I'll take the same question to Grok, or Gemini, or ChatGPT. Not because I distrust Claude — I find it the most accurate of the four in most domains — but because two independent sources that reach the same answer give me significantly more confidence than one.
Think of it like a second opinion from a different doctor. The first doctor might be excellent, but if you're about to make an irreversible decision based on their diagnosis, you want confirmation from someone who hasn't seen the first opinion.
The practical workflow looks like this: Claude gives me an answer. I note the key claims. I ask Grok or Gemini the same question in slightly different words. If they converge, I'm confident. If they diverge, I dig into where they differ — and that divergence is itself informative. Sometimes it reveals that the question has two valid interpretations. Sometimes it surfaces a nuance one model missed. Sometimes one of them is just wrong, and knowing which requires a third check against primary sources.
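The convergence step of that workflow can be sketched in code. This is a minimal illustration, not a real integration: the two canned answer strings stand in for responses you'd get from actual model APIs, and treating each line of an answer as one "claim" is a deliberately crude assumption — in practice I extract the claims by hand.

```python
def extract_claims(answer: str) -> set[str]:
    """Treat each non-empty line of an answer as one claim,
    normalised for case and surrounding whitespace."""
    return {line.strip().lower() for line in answer.splitlines() if line.strip()}

def convergence(primary: str, secondary: str) -> tuple[set[str], set[str]]:
    """Compare two answers: return (agreed claims, disputed claims).

    Agreed claims appear in both answers; disputed claims appear in
    only one, and are the ones worth a third check against sources.
    """
    a, b = extract_claims(primary), extract_claims(secondary)
    return a & b, a ^ b

# Canned stand-ins for two model responses to the same question:
primary_answer = "RLS policies run per-query\nSessions need cookies on the server"
secondary_answer = "RLS policies run per-query\nService-role keys bypass RLS"

agreed, disputed = convergence(primary_answer, secondary_answer)
# `agreed` holds the claim both answers share; `disputed` holds the
# claims only one of them made — the signal that more digging is needed.
```

The design point is that divergence is an output, not an error: the disputed set is exactly the list of things to verify next.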
Using one AI for knowledge is like reading one newspaper. Using three is like having access to three journalists, a fact-checker, and the original source documents — all at the same time.
The free tiers of Gemini, ChatGPT, and Grok are sufficient for this verification use case. You don't need to pay for three premium subscriptions. Your primary tool deserves a proper subscription — the context window, the reasoning quality, and the features that come with a paid tier genuinely matter for serious work. But a quick cross-check to validate a specific claim? The free tier handles that just fine.
This gives you something no learning platform has ever offered: a built-in adversarial review of the information you're receiving. You're not just learning something — you're verifying it from multiple independent sources before you act on it. That's a higher epistemic standard than almost any other form of self-directed learning in history.
What This Means for How You Actually Structure Your Learning
I still think there's a place for structured learning. If you're starting from zero in a new domain and you genuinely don't know what you don't know, a good course gives you the map before you start navigating. The foundational mental models for a field, the vocabulary that lets you ask better questions, the sense of what matters and what doesn't — these are real things, and a well-designed course delivers them efficiently.
But that's not most of learning. Most of learning, for most working professionals, is answering specific questions that arise from specific work. And that's the part AI subscriptions now dominate so completely that the comparison isn't even close.
My recommendation is this: keep one structured resource for domains where you're a true beginner and need orientation. Cancel everything else. Put that money into a quality AI subscription — Claude Pro, ChatGPT Plus, or whichever model fits your workflow best. And then build the multi-AI verification habit from day one, because the power of these tools is multiplied significantly when you use them to check each other.
You'll learn more, faster, at lower cost, with higher confidence in what you're learning. That's not a marginal improvement. That's a different category of tool entirely.