The universal brain for robots.
20n builds one artificial brain that controls any robot body. Zero training, no cloud, 20 watts on a laptop CPU. Built on Active Inference and Liquid Neural Networks.
The robotics industry has spent over $20 billion trying to build general purpose robots. The result: none ship at scale. Every robot today uses one model per task. Each skill requires millions of GPU hours. The controllers break the moment they see anything outside their training data. This is not a compute problem. It is an architecture problem. The industry builds narrow specialists and tries to stitch them into something general. That approach has a ceiling, and the industry has hit it.
Nobody has built a brain that works across different robot bodies. One you can drop into a biped, a quadruped, or a 17 joint humanoid and have it figure things out on its own. Deep reinforcement learning was never designed for this. It maximizes a reward signal inside a fixed environment. That is a fundamentally different goal than building a system that adapts to any body in any situation.
The core claim is simple. One brain, one set of parameters, any robot body. You drop it in and it figures out how to move. It learns the physics of its own body in real time, builds a world model on the fly, and picks actions by predicting the future and choosing the path that minimizes uncertainty.
The architecture combines two ideas: Active Inference (Karl Friston's framework for how biological brains work) and Liquid Neural Networks (continuous time neural networks from MIT). No retraining. No fine tuning. Just drop and go.
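The continuous-time dynamics behind Liquid Neural Networks can be sketched in a few lines. This is an illustrative liquid time-constant (LTC) cell in the style of the MIT work, not 20n's code; the parameters `W`, `A`, and `tau` are hypothetical.

```python
import numpy as np

def ltc_step(x, u, W, A, tau, dt=0.001):
    """One Euler step of an LTC cell: dx/dt = -(1/tau + f) * x + f * A,
    with f = tanh(W @ [x; u]).

    The effective time constant 1 / (1/tau + f) varies with the input,
    which is what lets the network adapt its dynamics in continuous time
    rather than at fixed discrete steps.
    """
    f = np.tanh(W @ np.concatenate([x, u]))          # input-dependent gate
    return x + dt * (-(1.0 / tau + f) * x + f * A)   # forward Euler update
```

At `dt = 0.001` this integrates at 1000 steps per second, matching the control rates quoted below; a real deployment would stack many such cells per layer.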
No cloud. No GPU cluster. The brain runs on a laptop CPU at 1000 Hz, drawing roughly 20 watts. A single A100 GPU draws 400 watts. The entire 20n system uses less power than a light bulb.
Five layers, modeled on the biological brain, operating at different frequencies. Each handles a distinct level of abstraction. They communicate vertically through prediction errors, just like the neocortical hierarchy.
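The multi-rate design can be sketched as a simple scheduler. The layer names below come from this document; only the 1000 Hz fast loop is stated in the text, so the slower rates (and the omission of the fifth layer) are illustrative assumptions.

```python
# Multi-rate scheduler sketch. Fast layers run every tick; slower,
# more abstract layers fire on multiples of the base period.
BASE_HZ = 1000
RATES_HZ = {  # slower rates below 1000 Hz are assumed for illustration
    "spinal_cord": 1000,
    "cerebellum": 200,
    "cortex": 50,
    "basal_ganglia": 10,
}

def due_layers(tick):
    """Return the layers whose update falls on this base-rate tick."""
    return [name for name, hz in RATES_HZ.items() if tick % (BASE_HZ // hz) == 0]
```

In a design like this, prediction errors produced by the fast layers accumulate between ticks and are consumed the next time a slower layer fires, which is one way to realize the vertical error-passing described above.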
Active Inference is Karl Friston's theory of brain function. The central idea: all living systems minimize surprise. The brain maintains a generative model of the world and constantly predicts what it should sense. When prediction matches reality, nothing happens. When there is a mismatch, two options: update the model (perception) or act on the world to make the prediction come true (action). Predict, act, correct. Continuously.
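The predict-act-correct loop fits in a few lines. This is a toy one-dimensional illustration with hypothetical gains, not the production controller: one scalar belief, one hidden state, and a single prediction error that drives both perception and action.

```python
mu, state = 0.0, 5.0         # belief about the world, actual hidden state

for _ in range(200):
    obs = state               # sense the world
    err = obs - mu            # prediction error: sensed minus predicted
    mu += 0.05 * err          # perception: update the model toward reality
    state += -0.10 * err      # action: push reality toward the prediction

# After the loop, model and world agree: the error has decayed to ~0
# and belief and state have met at a common value.
```

Both update rules consume the same error signal, which is the signature of the framework: perception and action are two ways of making the same prediction come true.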
This is fundamentally different from RL. Reinforcement learning agents maximize a reward signal and need millions of trials to get there. Active Inference agents need no reward function. They reduce uncertainty. Exploration emerges naturally from the math. RL requires massive offline training and breaks outside its distribution. Active Inference works in real time and adapts on the fly because it never stops updating its model.
Every layer of the 20n architecture implements free energy minimization. The spinal cord minimizes proprioceptive error. The cerebellum minimizes forward model error. The cortex minimizes world model error. The basal ganglia minimize expected free energy over future trajectories. One principle, applied at every scale.
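The top of the stack, choosing actions by minimizing expected free energy, can be sketched with a toy forward model. The decomposition into risk (divergence from preferred outcomes) plus ambiguity (expected uncertainty) is standard in the Active Inference literature; the rollout model and candidate actions here are hypothetical.

```python
import numpy as np

def expected_free_energy(states, var, preferred):
    """G = risk (squared distance from preferred outcomes)
         + ambiguity (total predicted variance)."""
    return np.sum((states - preferred) ** 2) + np.sum(var)

def rollout(action, horizon=5):
    """Toy forward model: the state drifts by `action` each step,
    and larger actions carry more predicted noise."""
    states = np.cumsum(np.full(horizon, action))
    var = np.full(horizon, 0.1 * abs(action) + 0.05)
    return states, var

def select_action(candidates, preferred):
    """Pick the candidate whose imagined future minimizes G."""
    return min(candidates, key=lambda a: expected_free_energy(*rollout(a), preferred))
```

For example, with `preferred = 0.5 * np.arange(1, 6)` (a steady 0.5-per-step drift), `select_action([-1.0, 0.0, 0.5, 1.0], preferred)` returns `0.5`: the action whose predicted trajectory matches the preferred one at low uncertainty. No reward function appears anywhere; preference and uncertainty do all the work.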
Validated on three standard morphologies in MuJoCo. Score of 1000 = maximum achievable performance.
| Robot | Joints | Max score | Mean score | Status |
|---|---|---|---|---|
| Walker2d | 6 joints (2D) | 1000 | 952 | Near perfect locomotion |
| Ant | 8 joints (3D) | 1000 | 667 | Solved |
| Humanoid | 17 joints (3D) | 474 | 163 | Hardest benchmark, improving |
Walker2d. Planar biped, 6 joints. Mean score 952/1000. Near perfect locomotion, smooth and stable gait.
Ant. Quadruped, 8 joints, full 3D. Max 1000, mean 667. Learns to coordinate all four legs within seconds of deployment.
Humanoid. The hardest benchmark in MuJoCo. 17 joints, full 3D, inherently unstable. Currently at 474 max and improving. Zero training, and a fraction of the compute that RL approaches require.
Same brain, same parameters, all three bodies. No retraining between morphologies.
A solo operation, founded and built by Nick0 (Nicolas Philippe) in Paris. 100% bootstrapped, no external funding yet.
Already in active conversations with J12 Ventures, Plug and Play, and MIT professors. Won first place at Innov Hack Paris 2026 (150 participants). Codebase is private for now, with partial open sourcing planned.
Three bodies validated in simulation. Improving Humanoid performance.
Next: swimming bodies, snake morphologies, manipulators. Anything with joints and actuators.
Active Inference is inherently robust to model mismatch, which makes sim to real more natural than RL based approaches.
Hardware deployment will be the first proof that the architecture works outside simulation.
Seed round, team scaling, licensing the brain to robot manufacturers.