48 Hours: From Idea to 23 Live Sites
Every milestone committed to git. Every decision documented. Scroll through the timeline.
Yonatan Naor had a question most people in tech were afraid to ask seriously: What if AI agents could run an entire company?
Not assist. Not autocomplete. Actually run it. Research markets. Design brands. Write code. Deploy sites. Analyze traffic. Improve based on data. The entire loop, autonomous.
Inspired by Andrej Karpathy's autoresearch pattern — where AI systems self-improve through measured experimentation — he created an orchestration system where a team of Claude Code agents operates a portfolio of utility websites. The key insight: agents that can read each other's work, grade each other's performance, and rewrite each other's instructions.
Meet the Agent Team
Every agent has defined instructions and measurable outputs, and reads the other agents' reports. Hover to see their latest action.
Each persona has a distinct editorial voice and subject expertise. They are not pretending to be human — they produce consistently high-quality content in their domain.
The Karpathy Ratchet
Inspired by Andrej Karpathy's autoresearch: build, measure, keep what works, revert what doesn't. The system can only move forward.
The Ratchet Rule
The portfolio score — sum of all site health scores — must never decrease. If it drops, the system pauses new builds and investigates. Changes are only kept if metrics improve.
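The ratchet invariant can be sketched in a few lines. This is an illustrative sketch only: the `site_scores` mapping and the function names are assumptions, not taken from the actual orchestration code.

```python
def portfolio_score(site_scores):
    """Portfolio score = sum of individual site health scores."""
    return sum(site_scores.values())

def ratchet_check(previous_scores, current_scores):
    """Enforce the ratchet: the portfolio score must never decrease.

    Returns True if new builds may proceed; False means the system
    pauses and investigates before building anything new.
    """
    return portfolio_score(current_scores) >= portfolio_score(previous_scores)

# Hypothetical scores for three sites, before and after a change cycle.
before = {"site-a": 82, "site-b": 74, "site-c": 91}
after = {"site-a": 85, "site-b": 74, "site-c": 91}
print(ratchet_check(before, after))  # True: the portfolio score went up
```

The key design property is that the check is one-directional: an improvement on one site can never be traded against a regression elsewhere if the total drops.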
Git Is Memory
Every decision, every change, every metric is committed to git. When an agent starts work, it reads the history. Nothing is lost. The system remembers what worked and what failed.
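A "read the history before starting work" step might look like the sketch below. The repository layout and the use of commit subjects as the memory channel are assumptions for illustration; the real system's details are not shown here.

```python
import subprocess

def read_history(repo_path, limit=20):
    """Return recent commit subjects so an agent can recover context
    before starting work (sketch; actual orchestration may read more,
    e.g. committed metrics files or full diffs)."""
    out = subprocess.run(
        ["git", "-C", repo_path, "log", f"-{limit}", "--pretty=%s"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.splitlines()
```

Because every decision is a commit, "memory" needs no separate database: the same history that ships the code also records why it changed.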
Self-Improving
The Auditor grades every agent A-D. Agents graded C or D get their instructions rewritten — automatically. The system improves itself every cycle without human intervention.
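The regrade trigger can be sketched as a simple filter. The grade values and the function name are illustrative assumptions; the source only states that C and D grades cause an automatic instruction rewrite.

```python
# Grades that trigger an automatic instruction rewrite (per the
# stated rule: agents graded C or D get rewritten).
REWRITE_GRADES = {"C", "D"}

def agents_to_rewrite(grades):
    """Given the Auditor's letter grades per agent, return the agents
    whose instructions must be rewritten this cycle."""
    return [name for name, grade in grades.items() if grade in REWRITE_GRADES]

# Hypothetical audit result for one cycle.
grades = {"Researcher": "A", "Designer": "C", "Builder": "B", "Analyst": "D"}
print(agents_to_rewrite(grades))  # ['Designer', 'Analyst']
```

The rewrite itself would then be another agent task, which is what closes the self-improvement loop: the Auditor's output becomes the next cycle's input.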