You open the ticket and it says:
“Make onboarding smoother.”
No screenshots.
No acceptance criteria.
No edge cases.
Just a feeling, written down as a sentence.
You could ask questions. You should ask questions. But it’s late, and the change “seems obvious,” and you have that familiar itch to move forward.
So you do what the modern engineer does.
You paste a few files into a prompt, describe what you want, and ask the model to implement it.
It produces a pull request in minutes.
It looks polished.
It compiles.
And then you read it twice and realize something uncomfortable:
You can’t tell if it’s right.
Not because the code is unreadable.
Because the request was.
The Problem
We often blame AI for “improvising.”
But improvisation is not a model quirk.
It’s what any collaborator does when the brief is thin.
Give a smart engineer a vague requirement and they will fill in gaps with taste, experience, and assumptions.
Sometimes those assumptions match you.
Sometimes they don’t.
AI simply makes this dynamic faster.
It turns ambiguity into code at high speed.
And high-speed ambiguity has a unique flavor of chaos: it feels productive until you try to verify it.
Vibes Don’t Compile
“Smoother.”
“Cleaner.”
“More modern.”
“Less friction.”
These are legitimate product instincts.
They are also non-executable instructions.
In a human team, the gap is filled with meetings.
In AI-assisted work, the gap is filled with guesses.
And guesses are expensive, not because they are morally wrong, but because they are unstable.
When the system changes later, the guess becomes a silent landmine:
Why is this redirect happening here?
Why does this form validate this way?
Why is the button disabled for this edge case but not that one?
Nobody remembers, because nobody ever defined it.
The Hidden Cost Is Emotional
There’s a reason vague work is draining.
It forces you to decide constantly.
Not big decisions. Small ones.
A hundred micro-judgments about what “should” happen.
That is what makes you feel tired even when the AI “did the work.”
You didn’t do the typing.
You did the guessing.
The Solution (The Calm Way)
Definition is not bureaucracy.
Definition is kindness.
It is kindness to the model, because it stops the model from role-playing your intent.
It is kindness to your future self, because it gives you a stable reference when the code is changed again.
It is kindness to reviewers, because it lets them check behavior instead of debating taste.
And, quietly, it is kindness to your nervous system.
Because when “done” is explicit, you stop carrying the project in your head.
Define What You Want the Model to Stop Guessing
You don’t need a 20-page spec.
You need a small artifact that makes the next step executable.
If your task touches UX, the simplest definition is usually:
- what the user sees
- what the system does
- what happens when things go wrong
Here’s what “make onboarding smoother” could become in five minutes:
- Goal: user can finish onboarding without confusion
- Non-goal: no new feature steps, no redesign
- Acceptance criteria:
  - the primary CTA is visible without scrolling on mobile
  - the form shows one clear error at a time
  - the “Back” button never loses entered data
  - submitting twice does not create duplicate records
- Edge cases:
  - slow network
  - user refreshes mid-flow
  - user returns after closing the tab
None of this is “perfect.”
But it is readable.
And readability is what you were missing.
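There’s a bonus hiding in criteria like these: because they are binary, a machine can check them instead of a mood. Here’s a minimal sketch of two of them as end-to-end tests, assuming a Playwright setup with a configured baseURL; the /onboarding route, the “Continue” button, and the Email field are placeholders, not a real app:

```ts
// A sketch, not a real suite: route, button, and field names are assumptions.
import { test, expect } from '@playwright/test';

test('primary CTA is visible without scrolling on mobile', async ({ page }) => {
  await page.setViewportSize({ width: 390, height: 844 }); // small phone
  await page.goto('/onboarding');
  // Binary: the CTA sits inside the initial viewport, or this fails.
  await expect(page.getByRole('button', { name: 'Continue' })).toBeInViewport();
});

test('"Back" never loses entered data', async ({ page }) => {
  await page.goto('/onboarding');
  await page.getByLabel('Email').fill('test@example.com');
  await page.getByRole('button', { name: 'Continue' }).click();
  await page.goBack();
  // Binary: the field still holds what the user typed.
  await expect(page.getByLabel('Email')).toHaveValue('test@example.com');
});
```

If a criterion resists being written this way, it isn’t a criterion yet. It’s still a vibe.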
Definition Is a Control System
There’s a systems-thinking point hiding here.
When inputs are ambiguous, a system’s output becomes sensitive to small variations:
- which engineer picks up the task
- which model you use
- which examples happen to be in context
- which mood you’re in during review
That sensitivity is how you get drift.
Definition reduces sensitivity.
It doesn’t remove creativity from the process.
It bounds it.
And bounded creativity is how you ship reliably.
The Stoic Lens
If you want the philosophical version:
Definition is the part you control.
You cannot control whether a model has a “bad day.”
You cannot control whether you are tired at 11:43 p.m.
You can control whether the work is clearly defined before you build it.
That one choice changes the rest of the experience.
You stop trying to win a wrestling match against uncertainty.
You reduce uncertainty at the source.
A Simple Operator Pattern
If you want something you can repeat without thinking, do this:
- Write the Goal in one sentence.
- Write 3-7 Acceptance Criteria that are binary.
- Write 3-7 Edge Cases you don’t want to rediscover.
- Write one Non-goal to protect scope.
- Hand the model the artifact and ask it to implement only that.
Then review against the artifact.
Not against the vibe you started with.
This is where calm comes from: you convert a fuzzy desire into a small contract.
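If it helps, keep a blank copy of the artifact somewhere you can paste from. The format below is a suggestion, not a standard; any shape that forces the four parts into writing will do:

```
Goal: <one sentence>
Non-goal: <one thing this task will not change>

Acceptance criteria (binary, 3-7):
1. ...
2. ...

Edge cases (3-7):
1. ...
2. ...
```

Five minutes with this template is usually cheaper than one round of “that’s not what I meant.”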
The Quiet Trick: Make Review Mechanical
When definition is thin, review becomes emotional.
You stare at the output and ask: “Does this feel right?”
That’s exhausting.
When definition is clear, review becomes mechanical:
Does it satisfy each acceptance criterion?
Does it handle each edge case?
Does it violate any constraint?
Now you can be tired and still review well.
You no longer need perfect intuition in the moment.
You need attention.
That is a much cheaper resource.
When the work is in flight and you need to avoid mid-run steering, Don’t Touch the Wet Paint pairs with this approach by keeping execution coherent.
When the work is done and you need a calm review ritual, Review Is a Contract is the checklist for verifying what you defined actually happened.
Conclusion
If your AI work keeps drifting, ask yourself a blunt question:
What did you ask it to guess?
Because it will guess.
And it will do it convincingly.
The Operator move is to stop outsourcing definition to the model.
Write “done” down while your mind is clear.
Turn vibes into criteria.
Turn criteria into a contract.
Then let the machine do what it does best: execute cleanly inside clear boundaries.