April 2, 2026

The Org Is the Bottleneck

There are three stages to what a developer becomes with AI. First, you use the systems directly. Prompts, completions, the hands-on stuff. Then you orchestrate agents, plural, running them in parallel, coordinating their outputs. Then you supervise. You're not writing code or even directing it. You're watching autonomous systems do work and deciding whether the work is correct. That's the trajectory. Clean, obvious, talked about endlessly.

Except here's what actually matters: none of that lands if the org doesn't know how to distribute work for it. Had a few conversations internally this week and one thing kept surfacing. A quiet frustration. Why aren't we moving on this? Not a capability problem. Not a tooling problem. The pipeline itself doesn't have a shape that lets agents slot in. You can hand someone Claude Code and a multi-agent framework and if the way tasks flow through the company hasn't changed, nothing changes. The technology just sits there, underused, waiting for someone to reorganize around it.

This is the part that doesn't make it to Twitter. The bubble talks about what's possible. Frontier models, autonomous coding, agents that ship PRs while you sleep. All true. But the world outside the bubble moves on completely different rails. For most people, AI is still a chatbot. A thing you ask a question and get an answer from. Maybe a better Google. The gap between "built an autonomous agent swarm" and "asked ChatGPT to rewrite my email" is astronomical, and the second group is almost everyone. People aren't rejecting the technology. They just don't have a frame for what it does beyond conversation. Some folks try OpenCode-style setups, run a slice of their workflow through it, then quietly stop. Not because it failed. Because fitting it into how they already work requires changing how they already work, and that's the hard part nobody ships a demo for.

So the real constraint isn't compute or models or context windows. It's human behavior. How people operate. How orgs assign work. How teams decide what's worth automating versus what stays manual. Psychology over infrastructure. You could have a trillion dollars of AI capability available, and if the org chart and the task pipeline look the same as they did in 2023, you're capturing almost none of it. The technology is ready. The humans are catching up. And that catching up isn't a technical problem. It's a behavioral one, which means it's slower and messier and doesn't fit in a release cycle.