February 24, 2026

Clicks Feel Wrong Now

Been spending a lot of time with AI tools lately. Not just reading about them, actually living inside them. And something started crystallizing that I hadn't fully articulated before: the UI layer, as we know it, is kind of on borrowed time.

Think about what clicking actually is. You're navigating someone else's mental model. A product designer decided there's a hierarchy, built a nav structure around it, and you spend the first hour with any new product just figuring out where the thing you want is buried. Add some imports. Submit a form. Select all. Confirm. Each click is a toll booth. And when you spend enough time with agents that just... do the thing you described in a sentence, the toll booths stop making sense. Not gradually. Suddenly. The contrast is jarring once you feel it.

I don't think the UI disappears. But I think it needs to look radically different. The whole premise of UI is that you need to form a mental model, traverse hierarchies, find affordances. That premise was built for a world where the software couldn't understand you. Now it can. So who's the navigation for? The hierarchy was a workaround, and we've been iterating on the workaround for forty years like it's the destination.

The hardware piece is what I find genuinely interesting. If the interface is intent-driven instead of click-driven, then a screen full of tappable elements isn't obviously the right form factor anymore. Rabbit R1, the AI Pin, whatever comes next. Most of them got dismissed as half-baked, and honestly some of them were. But the instinct behind them is correct. If the interface is conversation and context rather than navigation and tapping, then maybe the rectangle of glass in your pocket is one answer to a question that now has several. The hardware follows the interaction model. The interaction model is shifting. So.

Here's the honest thing though: I didn't fully feel any of this until I lived it. Saw the Rabbit R1 coverage when it dropped, thought "yeah, interesting, moving on." Because it was easy to process as another gadget in a world already full of gadgets. But when you use agentic tools daily and then try to do the same task by clicking through a product, the friction becomes visceral. Not conceptual. Actual friction. That's when you stop saying "this makes sense intellectually" and start saying "oh, this is actually going to happen."

The part I find uncomfortable is how fast it's moving. There's something disorienting about being deep in the current moment of AI. The pace is so relentless that doing the mental exercise of "what does this look like in three years" feels almost impossible. Not because the question is hard. Because by the time you finish the thought, three new things have happened and the assumptions you started with are already stale. You're in the sphere, not watching it from outside. Being inside is useful for doing things. Less useful for thinking about where things are going. Both matter, and right now one is drowning out the other.