An autonomous AI agent for macOS
Friday wasn't a chatbot or a voice assistant. It was an autonomous agent that could see the screen, move the mouse, type on the keyboard, and complete complex tasks on its own. No APIs, no cloud, no shortcuts. It used computer vision and chain-of-thought reasoning to perceive the display and decide what to do next, exactly like a human sitting at a computer. Except faster, and it never got tired.
See the screen, understand it, act on it
Friday continuously captured screenshots of the display, ran OCR and vision models on them, and built a mental model of everything visible: buttons, text fields, menus, windows. From that understanding, it decided what to do next.
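For a sense of what that step involves, here's a rough sketch in Python using pyautogui for capture and pytesseract for OCR. Friday's actual models and element schema aren't public, so the structure below is illustrative, not the real pipeline.

```python
# Illustrative screenshot -> OCR -> element-list step. Requires the third-party
# packages pyautogui, Pillow, and pytesseract; the ScreenElement shape is a
# hypothetical stand-in for whatever Friday's perception layer actually produced.
from dataclasses import dataclass

import pyautogui
import pytesseract


@dataclass
class ScreenElement:
    text: str
    x: int       # element center, screen coordinates
    y: int
    conf: float  # OCR confidence


def perceive() -> list[ScreenElement]:
    """Capture the display and return visible text elements with positions."""
    shot = pyautogui.screenshot()  # PIL image of the current screen
    # Note: on Retina displays the capture is larger than the logical screen,
    # so coordinates may need scaling before they are used for clicks.
    data = pytesseract.image_to_data(shot, output_type=pytesseract.Output.DICT)
    elements = []
    for i, word in enumerate(data["text"]):
        if not word.strip():
            continue
        x = data["left"][i] + data["width"][i] // 2
        y = data["top"][i] + data["height"][i] // 2
        elements.append(ScreenElement(word, x, y, float(data["conf"][i])))
    return elements
```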
It could click, type, scroll, switch apps, and chain multi-step workflows across different programs. For complex tasks, it used chain-of-thought reasoning to break the problem down and execute each step in sequence.
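Executing a chosen step is the easy half once you have coordinates. Here's a minimal sketch of a synthetic-input dispatcher built on pyautogui; the action format is an assumption for illustration, not Friday's actual schema.

```python
import subprocess

import pyautogui


def execute(action: dict) -> None:
    """Dispatch one planned step to synthetic mouse/keyboard input (illustrative)."""
    kind = action["type"]
    if kind == "click":
        pyautogui.click(action["x"], action["y"])
    elif kind == "type":
        pyautogui.write(action["text"], interval=0.02)  # small delay per keystroke
    elif kind == "scroll":
        pyautogui.scroll(action["amount"])
    elif kind == "hotkey":
        pyautogui.hotkey(*action["keys"])               # e.g. ("command", "tab")
    elif kind == "open_app":
        subprocess.run(["open", "-a", action["app"]], check=True)  # macOS launcher


# A multi-step workflow is just a sequence of such actions, each one chosen by
# the reasoning step after it has looked at the latest screenshot.
plan = [
    {"type": "open_app", "app": "Notes"},
    {"type": "type", "text": "Meeting summary:\n"},
]
for step in plan:
    execute(step)
```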
The most interesting behavior was tool delegation. Friday understood which app was best for which job. Need to write something? It opened ChatGPT. Build a presentation? Gamma. Design work? Canva. Research? Perplexity. It picked the right tool, just like you would.
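The effect of that routing can be approximated with a simple task-to-tool table. In Friday the choice presumably came from the reasoning model rather than a hard-coded map, so treat this as a sketch of the behavior, not the mechanism.

```python
import subprocess

# Hypothetical routing table mirroring the examples above.
TOOL_FOR_TASK = {
    "writing": "https://chatgpt.com",
    "presentation": "https://gamma.app",
    "design": "https://www.canva.com",
    "research": "https://www.perplexity.ai",
}


def delegate(task_category: str) -> None:
    """Open the tool best suited to the task in the default browser (macOS)."""
    url = TOOL_FOR_TASK.get(task_category)
    if url is None:
        raise ValueError(f"no tool registered for {task_category!r}")
    subprocess.run(["open", url], check=True)


delegate("presentation")  # opens Gamma before the slide-building steps run
```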
What Friday could actually do
- Build full presentations: open the app, write content, add slides, format the layout
- Draft and send emails: navigate to the mail client, compose, review, send
- Research across multiple sources, compile findings, output a structured summary
- Process PDFs: open, extract key info, reorganize content
- Navigate complex interfaces with nested menus, dropdowns, and multi-page forms
- Switch between apps seamlessly, carrying context from one tool to the next
- Handle errors and unexpected states by recognizing failures and adapting on the fly (sketched after this list)
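That last capability is the one that matters most in practice: GUI automation fails constantly. Here's a minimal check-and-retry wrapper, assuming the perceive() and execute() helpers sketched earlier and a crude text-based success test; Friday's real recovery logic isn't published.

```python
import time


def act_and_verify(action: dict, expect_text: str, retries: int = 3) -> bool:
    """Run an action, confirm the screen changed as expected, retry otherwise."""
    for _ in range(retries):
        execute(action)                        # synthetic input, as sketched above
        time.sleep(1.0)                        # give the UI a moment to update
        visible = {e.text for e in perceive()} # re-read the screen
        if expect_text in visible:
            return True                        # expected state reached
        # Unexpected state: here we simply retry the same action; a fuller agent
        # would re-plan from the new screenshot instead.
    return False                               # give up and surface the failure


if not act_and_verify({"type": "click", "x": 500, "y": 300}, expect_text="Untitled"):
    print("action failed after retries; replanning needed")
```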
Two versions, both fully on device
Friday went through two major iterations. No external servers or cloud processing in either.
V1
Proof of concept. Basic screen reading, simple action execution, linear reasoning. Minimal error handling, but it proved the core idea: an AI could perceive a GUI and interact with it through synthetic input.
V2
The real version. Deeper reasoning chains that planned several steps ahead, robust error recovery, smarter action planning for complex tasks. The perception pipeline got significantly more accurate too.
The real takeaway
Friday proved that an AI brain could perceive and act on a complex environment in real time. Not through APIs or pre-programmed scripts. It literally saw the interface and interacted with it the way a human would. That's a fundamentally different approach from most AI tools, which work behind the scenes through programmatic interfaces.
If a brain can control a screen by seeing and acting on it, what happens when the environment is not a screen but a physical robot body? That question became the foundation of 20n.
The perception · reasoning · action loop that powered Friday is the same loop that powers biological organisms. Friday ran it in a digital environment. The natural next step was to run it in the physical world. That's what led Nick0 to leave Friday behind and go all in on robotics.
The concept was proven. The real frontier was elsewhere.
Controlling a screen was the proof of concept. But a screen is a controlled, predictable, 2D environment. A physical robot body deals with gravity, friction, inertia, and a world that doesn't pause when you stop to think. That's where the real difficulty is, and that's where the real impact would come from. So Nick0 went all in on robotics. Friday's core architecture (the perception · reasoning · action loop, the continuous state monitoring) became the conceptual foundation of 20n Research Laboratory. Friday was not abandoned. It graduated.
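Stripped to its essence, that architecture is a single loop. A conceptual sketch, reusing the perceive() and execute() helpers from above, with decide() standing in for whatever reasoning model sits in the middle; this is an outline of the idea, not Friday's code.

```python
def run_agent(goal: str, decide, max_steps: int = 50) -> None:
    """Perception -> reasoning -> action, repeated until the goal is judged done."""
    for _ in range(max_steps):
        state = perceive()             # screenshot + OCR -> visible elements
        action = decide(goal, state)   # reasoning step: pick the next action
        if action is None:             # the model judges the goal complete
            return
        execute(action)                # synthetic mouse/keyboard input
    raise TimeoutError("goal not reached within the step budget")
```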
Built with
Intentionally minimal. Just what's needed to capture the screen, understand it, reason about it, and act on it.
Friday is no longer active. The website now serves as a transition page pointing to 20n, the project that grew directly out of Friday's core ideas.