20n Research Laboratory

The universal brain for robots.

Founded January 2026 · present
Founder Nick0 (Nicolas Philippe)
Location Paris, France
Stage Research / Prototype

20n builds one artificial brain that controls any robot body. Zero training, no cloud, 20 watts on a laptop CPU. Built on Active Inference and Liquid Neural Networks.


The robotics industry has spent over $20 billion trying to build general-purpose robots. The result: none ship at scale. Every robot today uses one model per task. Each skill requires millions of GPU hours. The controllers break the moment they see anything outside their training data. This is not a compute problem. It is an architecture problem. The industry builds narrow specialists and tries to stitch them into something general. That approach has a ceiling, and the industry has hit it.

Nobody has built a brain that works across different robot bodies. One you can drop into a biped, a quadruped, or a 17-joint humanoid and have it figure things out on its own. Deep reinforcement learning was never designed for this. It maximizes a reward signal inside a fixed environment. That is a fundamentally different goal from building a system that adapts to any body in any situation.


The core claim is simple. One brain, one set of parameters, any robot body. You drop it in and it figures out how to move. It learns the physics of its own body in real time, builds a world model on the fly, and picks actions by predicting the future and choosing the path that minimizes uncertainty.

The architecture combines two ideas: Active Inference (Karl Friston's framework for how biological brains work) and Liquid Neural Networks (continuous-time neural networks from MIT). No retraining. No fine-tuning. Just drop and go.
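To make the "liquid" part concrete, here is a minimal sketch of a Liquid Time-Constant style cell, Euler-integrated in numpy. The layer sizes, weights, and single-gate form are illustrative assumptions, not 20n's implementation; the point is that each neuron's effective time constant changes with its input, so the same cell can run fast or slow depending on what it senses.

```python
import numpy as np

def ltc_step(h, x, W_in, W_rec, b, tau, A, dt=0.001):
    """One Euler step of a Liquid Time-Constant style cell (illustrative).

    The gate f depends on the input, so it modulates both the effective
    time constant (1/tau + f) and the drive toward the bias state A --
    input-dependent dynamics are what make the network "liquid".
    """
    f = np.tanh(W_in @ x + W_rec @ h + b)     # input-dependent gate
    dh = -(1.0 / tau + f) * h + f * A         # LTC ODE right-hand side
    return h + dt * dh

rng = np.random.default_rng(0)
n_in, n_h = 4, 8
h = np.zeros(n_h)
W_in = rng.normal(size=(n_h, n_in)) * 0.5
W_rec = rng.normal(size=(n_h, n_h)) * 0.5
b = np.zeros(n_h)
tau = np.full(n_h, 0.05)    # base time constants (seconds)
A = np.ones(n_h)            # per-neuron bias states

for _ in range(100):        # 100 ms of simulated sensor input at 1 kHz
    h = ltc_step(h, rng.normal(size=n_in), W_in, W_rec, b, tau, A)
```

With dt = 0.001 s, one step per loop iteration matches the 1,000 Hz loop rate the document describes.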

20W
Power draw
0
Training steps
1kHz
Inference speed
3
Bodies validated

No cloud. No GPU cluster. The brain runs on a laptop CPU at 1,000 Hz, drawing roughly 20 watts. An A100 draws 400 watts. The entire 20n system uses less power than a light bulb.


Five layers, modeled on the biological brain, operating at different frequencies. Each handles a distinct level of abstraction. They communicate vertically through prediction errors, just like the neocortical hierarchy.

Spinal Cord 1,000 Hz
Hardwired PD reflexes and posture control. The fastest loop in the system. It reacts before the brain "thinks." Body falling? Joint positions corrected within one millisecond. No learning, no inference, just physics. Keeps the robot upright while higher layers plan.
Cerebellum 100 Hz
Internal body model via forward simulation using CfC (Closed-form Continuous-depth) Liquid Neural Networks. Predicts where every joint will be 10 ms into the future. Prediction wrong? Corrects instantly. This is how the brain learns a new body's physics in seconds, not hours.
Cortex 10 Hz
World model. Learns online by minimizing variational free energy (a mathematical proxy for prediction error) using LTC (Liquid Time-Constant) networks. No training phase. Updates continuously as sensory data arrives. When something unexpected happens, prediction error propagates down and every layer adjusts.
Hippocampus Episodic
Episodic memory. Stores full sensorimotor experiences as they happen. New situation? The hippocampus retrieves the closest past experience by cosine similarity. Experience-based reasoning without a training loop. It remembers what worked.
Basal Ganglia Action selection
The decision maker. Evaluates candidate trajectories by computing expected free energy, balancing information gain (exploring unknowns) against goal achievement (reaching desired states). Picks the trajectory that is both safe and informative. Intelligent exploration without a hand-crafted reward function.
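The hippocampal retrieval step described above reduces to a nearest-neighbor lookup by cosine similarity. A minimal sketch, with a toy memory layout and 3-dimensional state embeddings that are purely illustrative:

```python
import numpy as np

def retrieve(memory, query):
    """Return the stored episode whose state embedding has the highest
    cosine similarity to the current situation."""
    keys = np.stack([m["state"] for m in memory])
    sims = keys @ query / (
        np.linalg.norm(keys, axis=1) * np.linalg.norm(query) + 1e-9
    )
    return memory[int(np.argmax(sims))]

# Each episode pairs a sensorimotor context with the action that worked.
memory = [
    {"state": np.array([1.0, 0.0, 0.0]), "action": "extend_left_leg"},
    {"state": np.array([0.0, 1.0, 0.0]), "action": "shift_weight_right"},
    {"state": np.array([0.0, 0.0, 1.0]), "action": "brace_arms"},
]

# A new situation close to the second stored context recalls its action.
best = retrieve(memory, np.array([0.1, 0.9, 0.05]))
```

No gradient step occurs anywhere: adding an experience is an append, and recall is a similarity search, which is what "experience-based reasoning without a training loop" amounts to.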
All five layers are unified by the Free Energy Principle. No backpropagation. No gradient descent. No loss function. The brain adapts through prediction and correction, the same mechanism biological nervous systems have used for hundreds of millions of years.

Active Inference is Karl Friston's theory of brain function. The central idea: all living systems minimize surprise. The brain maintains a generative model of the world and constantly predicts what it should sense. When prediction matches reality, nothing happens. When there is a mismatch, two options: update the model (perception) or act on the world to make the prediction come true (action). Predict, act, correct. Continuously.
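Under Gaussian assumptions the predict-act-correct loop fits in a few lines: free energy reduces to precision-weighted squared prediction error, perception is a gradient step on the internal belief, and action is a step that pushes the world toward the prediction. The 1-D state, learning rates, and update rules below are a toy sketch, not 20n's code.

```python
def free_energy(obs, belief, precision=1.0):
    # Gaussian case: free energy ~ precision-weighted squared prediction error.
    err = obs - belief
    return 0.5 * precision * err ** 2

belief = 0.0   # internal estimate of a 1-D hidden state (e.g. a joint angle)
world = 1.0    # true state the sensor reports
lr = 0.2

for _ in range(50):
    obs = world
    err = obs - belief
    belief += lr * err                      # perception: update model toward data
    world += -0.5 * lr * (world - belief)   # action: push world toward prediction

# Prediction and reality converge from both sides; free energy falls to ~0.
```

Both variables move toward each other, which is the two-option structure in the text: update the model, or act so the prediction comes true.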

This is fundamentally different from RL. Reinforcement learning agents maximize a reward signal and need millions of trials to get there. Active Inference agents need no reward function. They reduce uncertainty. Exploration emerges naturally from the math. RL requires massive offline training and breaks outside its distribution. Active Inference works in real time and adapts on the fly because it never stops updating its model.

Every layer of the 20n architecture implements free energy minimization. The spinal cord minimizes proprioceptive error. The cerebellum minimizes forward model error. The cortex minimizes world model error. The basal ganglia minimizes expected free energy over future trajectories. One principle, applied at every scale.
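The basal-ganglia step can be sketched as scoring candidate trajectories by expected free energy: a pragmatic term (distance from the goal) plus an epistemic term (predicted information gain). The scoring functions, candidate set, and toy uncertainty model below are illustrative assumptions.

```python
import numpy as np

def expected_free_energy(trajectory, goal, uncertainty, w_epistemic=0.5):
    """Lower is better: end near the goal AND visit informative states."""
    pragmatic = np.linalg.norm(trajectory[-1] - goal)   # goal-seeking term
    epistemic = -w_epistemic * uncertainty(trajectory)  # information-gain term
    return pragmatic + epistemic

# Toy uncertainty model: states far from the origin are less explored,
# so visiting them is predicted to reduce more model uncertainty.
def uncertainty(traj):
    return float(np.mean(np.linalg.norm(traj, axis=1)))

goal = np.array([1.0, 0.0])
candidates = [
    np.array([[0.2, 0.0], [0.5, 0.0], [0.9, 0.0]]),  # straight to goal
    np.array([[0.0, 0.5], [0.5, 0.8], [0.9, 0.1]]),  # detour via novel states
    np.array([[0.0, 0.0], [0.0, 0.0], [0.0, 0.0]]),  # stay put
]
best = min(candidates, key=lambda t: expected_free_energy(t, goal, uncertainty))
```

With these numbers the detour wins: it ends almost as close to the goal as the straight path while passing through more informative states. No reward function appears anywhere; exploration falls out of the epistemic term.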


Validated on three standard morphologies in MuJoCo. Score of 1000 = maximum achievable performance.

Robot Joints Max score Mean score Status
Walker2d 6 joints (2D) 1000 952 Near-perfect locomotion
Ant 8 joints (3D) 1000 667 Solved
Humanoid 17 joints (3D) 474 163 Hardest benchmark, improving

Walker2d. Planar biped, 6 joints. Mean score 952/1000. Near-perfect locomotion with a smooth, stable gait.

Ant. Quadruped, 8 joints, full 3D. Max 1000, mean 667. Learns to coordinate all four legs within seconds of deployment.

Humanoid. The hardest benchmark in MuJoCo. 17 joints, full 3D, inherently unstable. Currently at 474 max and improving. Zero training, a fraction of the compute.

Same brain, same parameters, all three bodies. No retraining between morphologies.


A solo operation, founded and built in Paris by Nick0 (Nicolas Philippe). 100% bootstrapped; no external funding yet.

Already in active conversations with J12 Ventures, Plug and Play, and MIT professors. Won first place at Innov Hack Paris 2026 (150 participants). The codebase is private for now, with partial open-sourcing planned.

Now

Core architecture validation

Three bodies validated in simulation. Improving Humanoid performance.

Next

More body types

Swimming bodies, snake morphologies, manipulators. Anything with joints and actuators.

Q3 2026

Sim-to-real transfer

Active Inference is inherently robust to model mismatch, which makes sim-to-real transfer more natural than with RL-based approaches.

Q4 2026

Real hardware demo

First proof the architecture works outside simulation.

2027

Fundraise and commercialize

Seed round, team scaling, licensing the brain to robot manufacturers.