No serious developer still expects AI to magically do their work for them. We've settled into a more pragmatic, if still slightly uncomfortable, consensus: AI makes a great intern, not a replacement for a senior developer. And yet, if that is true, the corollary is also true: If AI is the intern, that makes you the manager.
Unfortunately, most developers aren't great managers.
We see this every day in how developers interact with tools like GitHub Copilot, Cursor, or ChatGPT. We toss around vague, half-baked instructions like "make the button blue" or "fix the database connection" and then act shocked when the AI hallucinates a library that hasn't existed since 2019 or refactors a critical authentication flow into an open security vulnerability. We blame the model. We say it's not smart enough yet.
But the problem usually isn't the model's intelligence. The problem is our lack of clarity. To get value out of these tools, we don't need better prompt engineering tricks. We need better specs. We need to treat AI interaction less like a magic spell and more like a formal delegation process.
We need to be better managers, in other words.
The missing skill: Specification
Google engineering manager Addy Osmani recently published a masterclass on this exact topic, titled simply "How to write a good spec for AI agents." It is one of the most practical blueprints I've seen for doing the job of AI manager well, and it's a great extension of some core principles I laid out recently.
Osmani isn't trying to sell you on the sci-fi future of autonomous coding. He's trying to keep your agent from wandering, forgetting, or drowning in context. His core point is simple but profound: Throwing a huge, monolithic spec at an agent often fails because context windows and the model's attention budget get in the way.
The answer is what he calls "good specs." These are written to be useful to the agent, durable across sessions, and structured so the model can follow what matters most.
This is the missing skill in most "AI will 10x developers" discourse. The leverage doesn't come from the model. The leverage comes from the human who can translate intent into constraints and then translate output into working software. Generative AI raises the premium on being a senior engineer. It doesn't lower it.
From prompts to product management
If you have ever mentored a junior developer, you already know how this works. You don't simply say "Build authentication." You lay out the specifics: "Use OAuth, support Google and GitHub, keep session state server-side, don't touch payments, write integration tests, and document the endpoints." You provide examples. You call out landmines. You insist on a small pull request so you can inspect their work.
Osmani is translating that same management discipline into an agent workflow. He suggests starting with a high-level vision, letting the model expand it into a fuller spec, and then editing that spec until it becomes the shared source of truth.
This "spec-first" approach is quickly becoming mainstream, moving from blog posts to tools. GitHub's AI team has been advocating spec-driven development and released Spec Kit to gate agent work behind a spec, a plan, and tasks. JetBrains makes the same argument, suggesting that you need review checkpoints before the agent starts making code changes.
Even Thoughtworks' Birgitta Böckeler has weighed in, asking an uncomfortable question that many teams are quietly dodging. She notes that spec-driven demos tend to assume the developer will do a bunch of requirements analysis work, even when the problem is unclear or large enough that product and stakeholder processes typically dominate.
Translation: If your organization already struggles to communicate requirements to humans, agents will not save you. They will amplify the confusion, just at a higher token rate.
A spec template that actually works
An AI spec is not a request for comments (RFC). It's a tool that makes drift expensive and correctness cheap. Osmani's suggestion is to start with a concise product brief, let the agent draft a more detailed spec, and then correct it into a living reference you can reuse across sessions. That's great, but the real value comes from the specific components you include. Based on Osmani's work and my own observations of successful teams, a practical AI spec needs a few non-negotiable elements.
First, you need objectives and non-goals. It's not enough to write a paragraph about the goal. You need to list what is explicitly out of scope. Non-goals prevent unintended rewrites and "helpful" scope creep where the AI decides to refactor your entire CSS framework while fixing a typo.
Second, you need context the model won't infer. This includes architecture constraints, domain rules, security requirements, and integration points. If it matters to the business logic, you have to say it. The AI cannot guess your compliance boundaries.
Third, and perhaps most importantly, you need boundaries. You need explicit "don't touch" lists. These are the guardrails that keep the intern from deleting the production database config, committing secrets, or modifying legacy vendor directories that hold the system together.
Finally, you need acceptance criteria. What does "done" mean? This should be expressed in checks: tests, invariants, and a few edge cases that tend to get missed. If you're thinking this sounds like good engineering (or even good management), you're right. It is. We're rediscovering the discipline we had been letting slide, dressed up in new tools.
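To make those four components concrete, here is a minimal sketch of such a spec as a Python data structure that renders to a reusable document. The field names, project details, and render format are illustrative assumptions, not something Osmani prescribes:

```python
from dataclasses import dataclass

@dataclass
class AgentSpec:
    """A minimal agent spec: the four non-negotiable sections."""
    objective: str
    non_goals: list[str]    # explicitly out of scope
    context: list[str]      # facts the model won't infer
    boundaries: list[str]   # explicit "don't touch" rules
    acceptance: list[str]   # checks that define "done"

    def to_markdown(self) -> str:
        def section(title: str, items: list[str]) -> str:
            return f"## {title}\n" + "\n".join(f"- {i}" for i in items)
        return "\n\n".join([
            f"# Objective\n{self.objective}",
            section("Non-goals", self.non_goals),
            section("Context", self.context),
            section("Boundaries", self.boundaries),
            section("Acceptance criteria", self.acceptance),
        ])

spec = AgentSpec(
    objective="Add OAuth login with Google and GitHub providers.",
    non_goals=["Do not redesign the login page CSS."],
    context=["Sessions are stored server-side in Redis."],
    boundaries=["Never modify vendor/ or production config."],
    acceptance=["Integration tests pass for both providers."],
)
print(spec.to_markdown())
```

The point is not the format; it is that each section exists and is reviewable, so the spec can be corrected once and reused across sessions.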
Context is a product, not a prompt
One reason developers get frustrated with agents is that we treat prompting like a one-shot activity, and it's not. It's closer to setting up a work environment. Osmani points out that large prompts often fail not only because of raw context limits but because models perform worse when you pile on too many instructions at once. Anthropic describes this same discipline as "context engineering." You have to structure background, instructions, constraints, tools, and required output so the model can reliably follow what matters most.
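A toy sketch of the packing problem that context engineering solves: rank context blocks by importance and include only what fits the budget, rather than piling everything in. The priority scheme and the rough four-characters-per-token estimate are illustrative assumptions, not anything Osmani or Anthropic prescribe:

```python
def pack_context(blocks: list[tuple[int, str]], token_budget: int) -> str:
    """Greedily include context blocks in priority order (lower number =
    more important), dropping whatever no longer fits the budget.
    Tokens are crudely estimated as len(text) // 4."""
    chosen = []
    remaining = token_budget
    for _, text in sorted(blocks, key=lambda b: b[0]):
        cost = max(1, len(text) // 4)
        if cost <= remaining:
            chosen.append(text)
            remaining -= cost
    return "\n\n".join(chosen)

prompt = pack_context(
    [
        (2, "Style guide: prefer small, focused pull requests."),
        (0, "Task: fix the OAuth callback redirect bug."),
        (1, "Constraint: do not touch the payments module."),
        (3, "Full architecture overview ... " * 500),  # too big to fit
    ],
    token_budget=100,
)
print(prompt)
```

The sprawling architecture overview gets dropped while the task, constraint, and style guide survive. Real context engineering is subtler than a greedy cutoff, but the trade-off is the same: every instruction you include competes for the model's attention.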
This shifts the developer's job description to something like "context architect." A developer's value isn't in knowing the syntax for a particular API call (the AI knows that better than we do), but rather in knowing which API call is relevant to the business problem and ensuring the AI knows it, too.
It's worth noting that Ethan Mollick's post "On-boarding your AI intern" puts this in plain language. He says you have to learn where the intern is useful, where it's annoying, and where you shouldn't delegate because the error rate is too costly. That's a fancy way of saying you need judgment. Which is another way of saying you need expertise.
The code ownership trap
There's a danger here, of course. If we offload implementation to the AI and focus only on the spec, we risk losing touch with the reality of the software. Charity Majors, CTO of Honeycomb, has been sounding the alarm on this specific risk. She distinguishes between "code authorship" and "code ownership." AI makes authorship cheap, near zero. But ownership (the ability to debug, maintain, and understand that code in production) is becoming expensive.
Majors argues that "when you overly rely on AI tools, when you supervise rather than doing, your own expertise decays rather quickly." This creates a paradox for the "developer as manager" model. To write a good spec, as Osmani advises, you need deep technical understanding. If you spend all your time writing specs and letting the AI write the code, you may slowly lose that deep technical understanding. The solution is likely a hybrid approach.
Developer Sankalp Shubham calls this "driving in lower gears." Shubham uses the analogy of a manual transmission car. For simple, boilerplate tasks, you can shift into a high gear and let the AI drive fast (high automation, low control). But for complex, novel problems, you need to downshift. You might write the pseudocode yourself. You might write the difficult algorithm by hand and ask the AI only to write the test cases.
You remain the driver. The AI is the engine, not the chauffeur.
The future is spec-driven
The irony in all this is that many developers chose their career specifically to avoid becoming managers. They like code because it's deterministic. Computers do what they're told (mostly). Humans (and by extension, interns) are messy, ambiguous, and require guidance.
Now, developers' primary tool has become messy and ambiguous.
To succeed in this new environment, developers need to develop soft skills that are actually quite hard. You need to learn how to articulate a vision clearly. You need to learn how to break complex problems into isolated, modular tasks that an AI can handle without losing context. The developers who thrive in this era won't necessarily be the ones who can type the fastest or memorize the most standard libraries. They will be the ones who can translate business requirements into technical constraints so clearly that even a stochastic parrot cannot mess it up.
