2025
Imagine if Tony Stark had to carefully prompt Jarvis and explicitly manage his context.
When I started using GitHub Copilot in 2022, it seemed obvious that the future of coding would be Iron Man/Jarvis-style "deep autocomplete": keeping the developer in flow, unblocking them at every friction point, making them FASTER while keeping them in control.
That's not at all where we are now. What we have instead are conversational agents. Prompt (carefully) → wait → review (boring, let's skip) → repeat. They're truly powerful. Smart people build elaborate systems of context management around this interface and produce impressive software.
But the problem is that prompting in natural language is a low-bandwidth interface for expressing developer intent. A child can hum a tune to express a musical idea; a composer can write it in precise notation. So why exactly do we force expert software composers to hum like little children?
For code composers, prompting is poor ergonomics. It produces something I call prompting fatigue. I never wanted to prompt. I wanted to code with super-smart, deep AI autocomplete, occasionally asking the agent questions to get tailored answers.
I keep thinking about Tony Stark. He doesn't ask Jarvis to fly. He moves and the suit amplifies. The intelligence is in service of his intent, not a separate stateless actor with its own agency.
What if the IDE watched my edits and inferred what I'm trying to do? What if my behavior was the prompt? A system that translates developer behavior into the prompts that would have produced the right completions. Without the developer having to write those prompts.
Concretely: Ghost diffs that appear when I rename a function, showing me the ripple effect across files. Tab to accept, Esc to dismiss. No chat. The agent learns what I want from what I accept. Over time, it just knows.
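The rename-to-ghost-diff loop could be sketched very roughly like this. This is a toy heuristic in Python, not any real editor API; `detect_rename` and `ghost_diffs` are hypothetical names, and a real system would use the language server's symbol index rather than regexes:

```python
import re

def detect_rename(before: str, after: str):
    """Infer a function rename by diffing `def` names between two
    snapshots of the same buffer. Toy heuristic: exactly one name
    disappeared and exactly one appeared."""
    old = set(re.findall(r"def (\w+)\(", before))
    new = set(re.findall(r"def (\w+)\(", after))
    removed, added = old - new, new - old
    if len(removed) == 1 and len(added) == 1:
        return removed.pop(), added.pop()
    return None  # edit wasn't a clean rename; propose nothing

def ghost_diffs(rename, files):
    """Propose (path, line_no, old_line, new_line) edits everywhere
    the old name is called — the 'ripple effect' the editor would
    render as ghost diffs for Tab/Esc review."""
    old_name, new_name = rename
    proposals = []
    for path, text in files.items():
        for i, line in enumerate(text.splitlines(), 1):
            if re.search(rf"\b{old_name}\(", line):
                proposals.append((path, i, line, line.replace(old_name, new_name)))
    return proposals

# The developer renames fetch_user → load_user; no prompt is written.
rename = detect_rename("def fetch_user(uid): ...", "def load_user(uid): ...")
proposals = ghost_diffs(rename, {"views.py": "user = fetch_user(7)"})
```

The point of the sketch is the interface, not the matching logic: the edit itself is the prompt, and the accept/dismiss signal on each proposal is the feedback the agent would learn from.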
This is what I wanted Copilot to become. Maybe it's time to build it.