The race to build useful humanoid robots just took a decisive turn. On January 27, 2026, Figure AI founder Brett Adcock unveiled Helix 02, demonstrating something competitors have struggled to achieve: a humanoid robot executing a complete dishwasher cycle across a full-sized kitchen with zero human intervention.
The four-minute autonomous task involved 61 separate loco-manipulation actions—walking to the dishwasher, unloading dishes, navigating across the room, stacking items in cabinets, then reloading and starting the dishwasher again. According to Figure AI’s announcement, this represents “the longest horizon, most complex task completed autonomously by a humanoid robot to date.”
The technical foundation makes this possible: a 10-million-parameter neural network trained on over 1,000 hours of human motion data, replacing 109,504 lines of hand-engineered code. That’s not incremental progress. It’s a fundamental shift in how robots learn to move.

Why Previous Robots Failed at Continuous Tasks
The breakthrough solves a problem that has stalled the entire industry: getting robots to walk and manipulate objects as one continuous behaviour, not as separate programmed sequences stitched together. Previous demonstrations have shown humanoid robots jumping, dancing, or moving objects, but nearly all rely on pre-planned motions with limited real-time feedback. If something shifts or doesn’t go as expected, the behaviour collapses.
Traditional robotics worked around this by separating locomotion and manipulation into distinct controllers connected through state machines: walk, stop, stabilise, reach, grasp, walk again. These handoffs are slow, brittle, and unnatural.
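The brittleness of those handoffs is easy to see in miniature. The sketch below is a hypothetical five-phase pipeline, not Figure's code; it shows why a fixed sequence has no recovery path when any one phase fails:

```python
from enum import Enum, auto

class Phase(Enum):
    WALK = auto()
    STOP = auto()
    STABILISE = auto()
    REACH = auto()
    GRASP = auto()

# Fixed handoff order: each phase must finish before the next may start.
SEQUENCE = [Phase.WALK, Phase.STOP, Phase.STABILISE, Phase.REACH, Phase.GRASP]

def run_pipeline(phase_ok):
    """Advance through the pipeline; abort at the first phase that fails.

    phase_ok maps each Phase to True (succeeded) or False (e.g. the
    object shifted mid-reach). Returns the list of completed phases.
    """
    completed = []
    for phase in SEQUENCE:
        if not phase_ok(phase):
            return completed  # no recovery path: the behaviour collapses
        completed.append(phase)
    return completed

# A nominal run completes all five phases...
assert run_pipeline(lambda p: True) == SEQUENCE
# ...but one unexpected event mid-sequence strands the robot.
assert run_pipeline(lambda p: p is not Phase.REACH) == SEQUENCE[:3]
```

A unified network sidesteps this entirely: there are no discrete phases to hand off between, so there is no seam at which an unexpected event can break the behaviour.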
Helix 02 takes a fundamentally different approach. The system connects every onboard sensor—vision, touch, and proprioception—directly to every actuator through what Figure calls a “single unified visuomotor neural network.” When the robot’s hands are occupied, it closes a drawer with its hip and lifts the dishwasher door with its foot, using the entire body as a tool rather than relying solely on hand movements.
From Human Motion Data to Robot Control
Helix 02 builds on Figure AI’s original Helix model from last year, which controlled only the robot’s upper body. The new version extends that capability through a three-layer architecture operating at different speeds. System 2 handles semantic reasoning and language. System 1 translates perception into full-body joint targets at 200 Hz. System 0—the new foundation layer—executes movements at 1 kHz while managing balance and coordination.
System 0 represents the critical breakthrough. Rather than programming specific behaviours, it learned to reproduce human movement patterns from that massive corpus of motion data. Figure describes it as “a foundation model for human-like whole-body control: a learned prior over how people move while maintaining balance and stability.”
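Figure has not published its control code, but the layering can be illustrated with a toy scheduler: a fast inner loop (standing in for System 0 at 1 kHz) tracks joint targets that a slower outer loop (standing in for System 1 at 200 Hz) refreshes every fifth tick:

```python
S0_HZ, S1_HZ = 1000, 200          # rates from Figure's description
TICKS_PER_S1 = S0_HZ // S1_HZ     # System 1 fires on every 5th System 0 tick

def simulate(seconds=1):
    """Count how often each layer runs over a simulated time window."""
    s0_steps = s1_steps = 0
    joint_targets = None
    for tick in range(S0_HZ * seconds):
        if tick % TICKS_PER_S1 == 0:
            # System 1: perception -> full-body joint targets (200 Hz)
            joint_targets = f"targets@{tick}"
            s1_steps += 1
        # System 0: track the latest targets while managing balance (1 kHz)
        s0_steps += 1
    return s0_steps, s1_steps

assert simulate(1) == (1000, 200)
```

The point of the hierarchy is that the fast layer never waits on the slow one: balance corrections keep running at 1 kHz even while the target from the 200 Hz layer is stale.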
The hardware amplifies these capabilities in ways that matter for real-world tasks. Figure 03’s embedded tactile sensors detect forces as small as three grams—sensitive enough to feel a paperclip—while palm cameras provide visual feedback when objects are blocked from the head camera. These aren’t just specs. They enable manipulation that was previously impossible.
Dexterity That Changes What’s Possible
The company demonstrated four tasks that showcase this new level of capability. The robot unscrewed a bottle cap, requiring bimanual coordination with tactile-regulated grip force. It located and extracted a single pill from a medicine organiser when the pill was occluded from the head camera, using palm-level visual feedback. It pushed exactly 5 millilitres from a syringe through force-controlled actuation. And it picked small metal components from a cluttered box where objects overlapped and shifted during interaction.
Each of these tasks demands something different—precise force control, visual reasoning under occlusion, tactile confirmation of contact—but they all depend on the same unified system connecting sensors to actuators.
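Figure hasn’t described its force controller, but tactile-regulated grip of the kind the bottle-cap task requires can be sketched as a simple proportional loop over a gram-level force reading. Every name here (`read_force_g`, `set_effort`) is a hypothetical interface, not Figure’s API:

```python
CONTACT_THRESHOLD_G = 3.0   # per the article: forces as small as three grams

def regulate_grip(read_force_g, set_effort, target_g, steps=50, gain=0.2):
    """Proportional grip-force loop over a hypothetical hand interface.

    read_force_g() returns the tactile reading in grams; set_effort(delta)
    nudges motor effort. Closes the hand until the target force holds.
    """
    for _ in range(steps):
        force = read_force_g()
        if force < CONTACT_THRESHOLD_G:
            set_effort(0.5)          # no contact yet: keep closing
            continue
        error = target_g - force
        if abs(error) < 0.5:
            return force             # grip is stable at the target force
        set_effort(gain * error)     # tighten or relax proportionally
    return read_force_g()

# Toy plant: effort adds force one-for-one, starting from an open hand.
state = {"force": 0.0}
final = regulate_grip(lambda: state["force"],
                      lambda d: state.__setitem__("force", state["force"] + d),
                      target_g=20.0)
assert abs(final - 20.0) < 0.5
```

The same loop structure covers the syringe task: swap the tactile reading for plunger force and the stopping condition for displaced volume, and force-controlled actuation falls out of the same feedback pattern.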
The Competitive Stakes
The timing matters. As companies like Tesla, Boston Dynamics, and several well-funded Chinese startups pour billions into humanoid robotics, Figure AI’s demonstration shows what continuous, whole-body autonomy actually looks like in practice. The company is targeting both home and workforce applications, though commercialisation timelines remain unannounced.
What separates this from previous showcases isn’t just technical sophistication—it’s the shift from choreographed demos to genuine autonomous problem-solving. The robot doesn’t replay memorised sequences. It perceives, decides, and acts in real time across multiple minutes of continuous operation.
That’s the milestone the industry has been waiting for.

