We've spent the last three years obsessing over what AI can do on a screen — write code, generate images, summarize documents, hold a conversation. And it's been remarkable. But something else has been quietly happening in parallel, and it's about to become impossible to ignore: AI is getting a body.
The same capabilities that power ChatGPT, Claude, and Gemini are being embedded into physical machines that walk, carry things, respond to their environment, and interact with the real world. We're not talking about robotic arms on assembly lines — that technology is decades old. We're talking about robots that can navigate a house, fold laundry, open a door, or work alongside a human on a warehouse floor.
The transition from digital AI to physical AI is happening. The question is just how fast.
What's Already on Sale — Barely
Let's be honest about where we actually are. Humanoid robots are technically available today, but "available" is doing a lot of work in that sentence. The number of units in real-world deployment is tiny — we're talking hundreds, not hundreds of thousands. These are engineering milestones dressed in a commercial suit.
Boston Dynamics made Atlas famous with those backflip videos, and their electric version is now genuinely being tested in real industrial settings. Figure's robot — backed by heavyweight investors and a partnership with OpenAI — has been demoed doing multi-step tasks in BMW factories. Agility Robotics' Digit is operating in Amazon warehouses in limited trials. Unitree from China has pushed price points down aggressively with its H1 and G1 models, making humanoid robots accessible to research labs and early adopters for the first time.
These machines are impressive. They're also expensive, fragile, slow, and not yet useful enough to deploy at scale. Anyone claiming otherwise is selling something.
We're at the iPhone 2007 moment for physical AI — the proof of concept exists, the platform is real, but the app ecosystem and the mass market are still ahead of us.
But "not yet" is very different from "never." And 2026 is shaping up to be the year the gap between demo and reality starts to close.
Desktop Robots and Pet Robots: The Curiosity Phase
Before humanoids go mainstream, something interesting is happening at a smaller scale. Desktop robots — tabletop companions that respond to voice, gesture, and emotion — are appearing as consumer products. They're not useful in any rigorous sense. They're charming. And that matters.
Products like Emo (from Living.ai) and Loona are small, expressive robots that people are genuinely putting on their desks and forming attachments to. They're more emotionally sophisticated than a smart speaker, less intimidating than a full humanoid, and cheap enough that curious early adopters will actually buy one. They're the gateway drug — the thing that normalizes having AI in a physical form sharing your space.
Robot pets are doing something similar. Sony's Aibo never quite died, and a new generation of companion robots aimed at elderly care and children's education is quietly building a user base. These aren't toys pretending to be AI. They're genuinely AI-powered agents running on dedicated hardware, learning from interaction, and adapting over time.
None of this is the main event. But it's how markets warm up before the main event arrives. The psychology behind why humans form these attachments — and why physical presence changes the AI experience so fundamentally — is something we've explored in depth.
China Is Not Waiting
If you want to understand how fast physical AI is moving, you have to look at China — and you have to be honest about what you're seeing.
Unitree Robotics is shipping humanoid robots faster and cheaper than almost anyone in the West. Their G1 model launched at under $16,000, which is roughly a tenth of what equivalent machines cost from American competitors. UBTECH's Walker S is being trialed in manufacturing. Fourier Intelligence, Leju Robotics, AgiBot — the list of serious Chinese robotics companies is long and growing fast.
The Chinese government has made humanoid robotics a national strategic priority. Investment, infrastructure, manufacturing capacity, and policy support are aligned in a way that's hard to replicate elsewhere quickly. When you combine that with Chinese AI labs producing frontier-level models and a domestic manufacturing ecosystem that can scale production at speed, you get a very different timeline than the cautious Western one.
The pattern we're seeing from China on robotics looks like the pattern we saw on EVs five years ago. People underestimated that too.
2026: The Year It Gets Real
Two launches this year are likely to define the near-term trajectory of physical AI more than anything else: Tesla's Optimus and Xpeng's Iron.
Tesla has been showing Optimus doing increasingly capable tasks — folding clothes, sorting objects, navigating unfamiliar environments. Elon Musk has promised commercial production will begin in 2026, with units initially deployed in Tesla's own factories before broader availability. The ambition is mass production at automotive scale — eventually millions of units per year, not thousands. Whether that timeline holds is genuinely unknown, but the engineering investment is real, and Tesla's automotive manufacturing base gives it an unusually credible path to volume.
Xpeng, better known for its electric vehicles, launched Iron at a price point that surprised even industry observers — around $27,000 for a machine with specs that rival systems costing four or five times more. They're shipping to enterprise customers first, but the roadmap points toward consumer availability faster than anyone expected. For a Chinese EV company with deep experience in autonomous driving AI and large-scale manufacturing, the move into humanoid robotics is less of a leap than it looks.
These aren't the only launches this year. But they're the ones with the clearest path from demo to volume production, and they're the ones that will set the pricing and capability expectations for everything that follows.
Each Robot Gets Its Own AI
Here's where it gets interesting for people who follow the AI software side of this.
Physical AI is not just "put ChatGPT in a robot." The AI that runs a humanoid robot is a fundamentally different system from the one that answers your questions on a screen. It has to process sensor data in real time, make decisions in milliseconds, understand physical constraints, coordinate multiple limbs and joints simultaneously, and operate safely around humans who are unpredictable.
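To make that difference concrete, here's a minimal sketch of the kind of fixed-rate control loop an embodied system runs. Everything in it is illustrative — the 100 Hz rate, the sensor fields, and the fallback behavior are assumptions, not any vendor's actual stack — but it shows the core constraint: the loop must produce a safe command inside a hard time budget, every tick.

```python
import time

CONTROL_HZ = 100                 # assumed 100 Hz loop: a fresh command every 10 ms
BUDGET_S = 1.0 / CONTROL_HZ      # hard per-tick time budget

def read_sensors():
    """Stand-in for fused IMU / joint-encoder / camera input."""
    return {"tilt": 0.02, "joint_angles": [0.0] * 12}

def plan_step(sensors):
    """Stand-in for the policy: map sensed state to joint targets."""
    return [angle + 0.001 for angle in sensors["joint_angles"]]

def safe_stop():
    """Fallback command: hold position rather than act on stale data."""
    return [0.0] * 12

def control_tick():
    """One tick: sense, plan, then check we stayed inside the budget."""
    start = time.monotonic()
    command = plan_step(read_sensors())
    elapsed = time.monotonic() - start
    # A chatbot can take seconds to answer; a walking robot cannot.
    if elapsed > BUDGET_S:
        return safe_stop(), False
    return command, True

command, on_time = control_tick()
```

The deadline check is the point: when the planner can't keep up, the robot must degrade to a known-safe behavior instead of executing a late command against a world that has already moved.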
Every major robotics company is building or partnering to build dedicated AI systems for its hardware. Tesla is adapting neural networks derived from its FSD stack for general-purpose robotics tasks. Figure's partnership with OpenAI produced a robot that can hold a natural conversation about what it's seeing while simultaneously manipulating objects. Boston Dynamics runs much of Spot's and Atlas's intelligence on the robot itself, not in the cloud.
This is creating a new category of AI: embodied intelligence. And it's bifurcating from the language model world in interesting ways. The skills that matter — spatial reasoning, motor planning, real-time physical awareness — are different from the skills that matter for a chatbot. Some architectures transfer well; others don't. The companies figuring out which is which have a significant advantage.
The Open-Source Layer
One development that doesn't get enough attention: some of these robots are being built with open or semi-open software stacks, specifically so that third parties can program them.
Unitree's robots run on ROS2 and expose APIs that developers can build on. Several startups are already selling software packages — safety layers, task libraries, vertical-specific control systems — that run on top of commodity hardware. The idea is the same one that made smartphones so powerful: the hardware company builds the platform, and a thousand developers build the applications that make it genuinely useful.
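The ROS2 specifics vary by vendor, but the layering idea can be sketched in plain Python: the hardware maker exposes a stable API, and third parties ship task logic against it. Every name here — `RobotAPI`, the `pick_task` workflow — is hypothetical, chosen only to illustrate the platform-versus-application split.

```python
from abc import ABC, abstractmethod

class RobotAPI(ABC):
    """Hypothetical vendor-exposed hardware interface (the 'platform')."""
    @abstractmethod
    def move_arm(self, x: float, y: float, z: float) -> None: ...
    @abstractmethod
    def grip(self, closed: bool) -> None: ...

class LoggingRobot(RobotAPI):
    """Test double standing in for real hardware; records each call."""
    def __init__(self):
        self.log = []
    def move_arm(self, x, y, z):
        self.log.append(("move", x, y, z))
    def grip(self, closed):
        self.log.append(("grip", closed))

def pick_task(robot: RobotAPI, x: float, y: float) -> None:
    """Third-party 'app': a picking workflow built purely on the API."""
    robot.move_arm(x, y, 0.3)   # approach above the item
    robot.grip(False)           # open the gripper
    robot.move_arm(x, y, 0.05)  # descend to the item
    robot.grip(True)            # grasp
    robot.move_arm(x, y, 0.3)   # lift clear

bot = LoggingRobot()
pick_task(bot, 0.5, 0.2)
```

Because `pick_task` only touches the abstract interface, the same workflow could in principle run on any robot that implements it — which is exactly the leverage an open platform gives the developer ecosystem.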
This is how you get to mass adoption faster. Not by waiting for one company to solve every use case, but by letting the developer ecosystem solve them in parallel. A warehouse operator doesn't need a general-purpose humanoid AI — they need one that's very good at their specific picking workflow. An elderly care facility needs something different again. Open platforms let those specialized solutions exist without each company having to build the robot from scratch.
The analogy to Android is imperfect but instructive. The fragmentation is real. So is the speed.
Before 2030: What "Mainstream" Actually Means
When people say mass adoption of robots is coming before 2030, they usually don't mean a humanoid robot in every home by 2029. That's not the realistic scenario, and the hype merchants who imply otherwise are doing the industry a disservice.
What's more likely — and still remarkable — is that by 2030 humanoid robots will be a normal feature of large manufacturing facilities, logistics operations, and some sectors of healthcare. Hundreds of thousands of units, not millions. Pilots that have converted into standard operating procedure. Price points that have dropped from $50,000+ to something accessible to mid-sized businesses. A developer ecosystem building real applications on top of open platforms. A consumer market that's curious and beginning to experiment.
That's not science fiction. That's the EV curve applied to a different vehicle — and we've already seen how fast that can move when the conditions are right.
The AI on your screen right now is probably better than it will ever need to be for most of what you use it for. The AI in a robot is just getting started.