It’s 11:43 p.m.
You tell yourself you’re going to make one small change and go to bed.
You paste a snippet. You ask the model to “just refactor this a little.” It responds instantly, confident and clean. You apply it. It works.
So you do it again.
And again.
Forty minutes later you’re not refactoring. You’re negotiating with a thread.
You scroll up to remember what you decided.
You find yourself writing sentences like: “No, not like that, like I said earlier.”
You’re not tired from typing.
You’re tired from steering.
There’s a very specific feeling that shows up at this point. A kind of quiet suspicion that the surface you’re working on has become unreliable. Not wrong, just smudged.
And the most frustrating part is that it’s hard to name what’s happening.
Is the model “getting worse”?
Did you “prompt badly”?
Did you forget some key instruction?
Or is something else more mundane and more human at play: the fact that you’re trying to do precise work on a surface that is no longer legible?
The Problem
People talk about context windows like they are memory.
They aren’t.
They are closer to a desk.
A desk can hold a lot of objects.
But the more you pile on, the harder it becomes to work.
At first you have space to lay things out. Later you’re balancing stacks. Then you’re moving stacks just to find the one tool you need. Eventually the desk stops being a workspace and becomes a storage unit.
Chat threads fail in the same shape.
Not because the model “forgets” in the way a person forgets, but because the information stops being clean enough to act on without guessing.
The Post-it Note, Not the Hard Drive
If you want a simple mental model, don’t imagine the model as having an infinite notebook.
Imagine it has a single Post-it Note.
Not because the Post-it is tiny, but because it behaves like a fixed surface: finite space, finite legibility, and no real concept of “version history.”
On a Post-it, additions don’t replace earlier ink.
They sit beside it.
They overlap it.
They compete with it.
And when you keep editing in place, you don’t just add information, you add noise.
How the Post-it Fills Up
The first phase is clean.
You write:
- ship the registration flow
- validate inputs
- show helpful errors
The work is coherent because the surface is coherent.
Then you add nuance:
- “password rules”
- “rate limit sign-ups”
- “no new dependencies”
Still fine.
Then the real project arrives: the part where requirements evolve, constraints collide, and the “small change” reveals a hidden system underneath it.
Now you’re writing in margins:
- “keep the existing auth flow”
- “don’t touch the middleware”
- “also fix the redirect edge case from earlier”
- “actually, revert the last refactor, but keep the tests”
Nothing is wrong with any single line.
The problem is that they no longer form one readable instruction set. They form a collage.
And collages are interpretive.
The Stranger Test
Here is the test that reveals what’s really happening.
Imagine you hand that Post-it to a new engineer, someone smart, but unfamiliar with the history. You tell them: “Build what’s on this note.”
What do they do?
They squint.
They ask questions you thought were already answered.
They pick an interpretation and start moving.
If they’re cautious, they’ll spend half the day trying to infer which edits supersede which.
If they’re confident, they’ll ship the wrong thing quickly.
The AI behaves the same way because the AI is, in this moment, a stranger reading your Post-it.
It isn’t “dumb.”
It’s doing what any capable collaborator does when the brief is smudged: it fills in gaps, resolves contradictions, and commits to a guess.
The Hidden Tax: You Start Managing Context Instead of Work
There’s a second problem here that’s easy to miss.
Once the surface becomes unclear, you begin spending your attention on the surface itself.
You re-read your own messages.
You rewrite constraints.
You correct the model’s interpretation instead of progressing the system.
You start doing context maintenance.
And context maintenance feels like progress because it produces words and patches and diffs, but it’s not the kind of progress you wanted when you sat down.
This is why chaotic AI work is exhausting even when you “didn’t type much.”
You were carrying the project in your head and the conversation on top of it.
You weren’t building.
You were juggling.
Why “Prompting Harder” Doesn’t Fix It
Most people respond to drift by adding more.
More detail. More reminders. More “rules.” A longer prompt. A stricter tone. One more clarification.
That approach can work when the problem is missing information.
But when the problem is a saturated surface, adding more is like writing smaller in the margins.
It buys you a few minutes.
Then you’re right back where you started, except the ink is thicker now.
This is the trap: you try to fix the ambiguity created by accumulation with even more accumulation.
And because the model remains helpful, it will happily participate in the cycle.
It will respond.
It will generate.
It will try to reconcile contradictions you didn’t explicitly resolve.
And your thread will become even more of an archaeological site: a layer cake of decisions, reversals, temporary hacks, and half-finished refactors.
At some point you can feel it in your own behavior.
You stop thinking in terms of “what should the system do?” and start thinking in terms of “how do I get the model to stop doing that?”
That is the moment you are no longer operating.
You are reacting.
The Solution (The Calm Way)
In the physical world, the solution to a full Post-it is not heroism.
It’s a second Post-it.
A new surface.
Fresh ink.
Clean boundaries.
The Operator move is to treat context like a system resource: something you design, allocate, and refresh, not something you casually accumulate until it collapses.
Add Post-its, Not Tokens
The calm fix is not to “fit more into the context window.”
The calm fix is to make the important parts legible.
You do that by externalizing project state into artifacts: small documents that hold decisions in a stable form.
Not in the chat history. In files.
This shift sounds boring. It is, in the best way.
Boring means predictable.
Predictable means calm.
Artifacts Are Not “Documentation”
This is where many engineers flinch.
We’ve been trained to associate writing with bureaucracy.
Specs that nobody reads.
Docs that rot.
Confluence graveyards.
Artifacts are not that.
Artifacts are working surfaces.
They are the minimum state you need to stop holding everything in your head.
The point is not to describe the project to a future archaeologist.
The point is to give the next execution step a clean surface right now.
And because they are small, they are cheap to keep correct.
The Four Post-its I’d Keep If I Could Only Keep Four
If you strip this down to essentials, most AI-assisted engineering work improves dramatically with four artifacts:
- Goal: one paragraph. What does “done” look like?
- Constraints: what must not change? What must be preserved? What is off-limits?
- Decisions: the handful of “we chose X over Y” items that stop the model from improvising.
- Current Task: what are we doing today, and what does success mean for this one step?
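If it helps to make that concrete, here is a minimal sketch that seeds those four surfaces as small files. The folder, the filenames, and the example content (borrowed from the registration-flow story above) are illustrative, not a prescribed layout.

```python
# A minimal sketch: seed the four artifacts as small files.
# docs/ai/ and the filenames are illustrative choices, not a convention.
from pathlib import Path

ARTIFACTS = {
    "GOAL.md": (
        "Ship the registration flow: a user can sign up, sees helpful "
        "validation errors, and lands on the dashboard."
    ),
    "CONSTRAINTS.md": (
        "- Keep the existing auth flow\n"
        "- Don't touch the middleware\n"
        "- No new dependencies"
    ),
    "DECISIONS.md": (
        "- Validate on the server, not only in the client\n"
        "- Rate limit sign-ups at the route level\n"
        "- Keep the tests from the last refactor"
    ),
    "CURRENT_TASK.md": (
        "Add input validation to the registration endpoint.\n"
        "Done = invalid payloads return field-level errors and the "
        "existing happy-path tests still pass."
    ),
}

def write_artifacts(root: str = "docs/ai") -> None:
    """Create or overwrite the four working surfaces."""
    base = Path(root)
    base.mkdir(parents=True, exist_ok=True)
    for name, body in ARTIFACTS.items():
        (base / name).write_text(body + "\n", encoding="utf-8")

if __name__ == "__main__":
    write_artifacts()
```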
Notice what’s missing: a full transcript of your thinking.
That’s intentional.
The artifact is not a diary.
It’s a contract.
If you can’t point to the artifact where a decision lives, that decision doesn’t exist yet.
It’s a vibe.
And vibes are where software goes to die slowly.
The Operator Loop: Context, Definition, Non-Interference, Review
Once you have clean surfaces, the workflow becomes surprisingly simple.
You operate in a loop:
- Context: Provide the model a clean, relevant set of artifacts.
- Definition: Make “done” explicit before building.
- Non-interference: Let the model complete a coherent run instead of steering mid-flight.
- Review: Compare output against artifacts, not against your momentary emotional reaction.
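If you prefer to see the loop as structure rather than principles, here is a minimal sketch of one pass through it. Everything in it is an assumption for illustration: run_model stands in for whatever single, uninterrupted call you make to your tool, and the filenames match the sketch above.

```python
# A minimal sketch of one pass through the loop.
# `run_model` is hypothetical: whatever single, uninterrupted call you make.
from pathlib import Path
from typing import Callable

def operator_step(root: str, run_model: Callable[[str], str]) -> str:
    # Context: hand over the clean artifacts, nothing else.
    names = ["GOAL.md", "CONSTRAINTS.md", "DECISIONS.md", "CURRENT_TASK.md"]
    context = "\n\n".join((Path(root) / n).read_text() for n in names)

    # Definition: refuse to run if "done" hasn't been written down.
    if "Done =" not in context:
        raise ValueError("Define 'done' in CURRENT_TASK.md before building.")

    # Non-interference: one coherent run, no mid-flight steering.
    output = run_model(context)

    # Review happens after the run, against the artifacts, not against your mood.
    return output
```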
These principles are not moral advice.
They are practical defenses against chaos.
They reduce the number of micro-decisions you make per hour, which is the real source of burnout.
They also reduce the model’s need to guess, which is the real source of regressions.
Non-Interference: Stop Touching the Wet Paint
One of the strangest failure modes in AI-assisted work is self-inflicted.
You give a task.
The model starts moving.
Halfway through, you interrupt it with a new requirement.
Then you interrupt again to correct a detail.
Then you interrupt again to ask for a different approach.
You would never do this to a human engineer without expecting confusion.
But because the interface makes interruption cheap, you do it anyway.
This is like touching wet paint to “improve it.”
You can do it.
But you cannot be surprised when it smears.
Don’t Touch the Wet Paint is the longer breakdown of why mid-run steering creates incoherent outcomes and how to resist the urge to intervene.
Non-interference is the discipline of letting a coherent run finish.
It’s what turns the model from a jittery collaborator into an executor.
Review: Stop Checking Output Against Your Mood
When you work without artifacts, review becomes emotional.
You stare at code and ask yourself if it “feels right.”
That’s a miserable way to work, even without AI.
With AI, it’s worse, because the volume of change is higher and the speed is faster.
Artifacts give you a calmer review method:
Does this match the explicit constraints?
Does it satisfy the acceptance criteria you defined?
Did it violate a decision you documented?
Now review is not a vibe check.
It’s a contract check.
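Part of that check can even be mechanical. Here is a minimal sketch, assuming a git repo and illustrative off-limits paths; in practice the list comes straight from your Constraints artifact.

```python
# A minimal sketch: compare what the run touched against documented constraints.
# The off-limits paths are illustrative; in practice they come from CONSTRAINTS.md.
import subprocess

OFF_LIMITS = ("src/middleware/", "src/auth/")

def changed_files() -> list[str]:
    out = subprocess.run(
        ["git", "diff", "--name-only", "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line.strip()]

def constraint_violations() -> list[str]:
    """Files the run changed that the Constraints artifact says must not change."""
    return [f for f in changed_files() if f.startswith(OFF_LIMITS)]

if __name__ == "__main__":
    bad = constraint_violations()
    print("Constraint violations:" if bad else "Diff respects the documented constraints.")
    for f in bad:
        print(" ", f)
```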
Review Is a Contract goes deeper on how to set that reference so large AI diffs can be evaluated calmly.
You can be tired and still review well.
That is the whole point.
The Context Reset (Without Drama)
Ending a chat is not failure.
Ending a chat is hygiene.
When a thread gets long enough that you can feel the “ink” bleeding, you do a simple move:
You extract what matters.
You discard the rest.
Then you begin again on a clean surface.
There’s a reason this feels powerful.
A fresh session is the closest thing you get to a clean desk.
It removes the ambient pressure of the history.
It stops the model from being pulled by outdated decisions.
And it stops you from being pulled by outdated arguments.
If you want a practical trigger:
If you are re-explaining the same constraint for the third time, stop.
Your Post-it is full.
Write the constraint into an artifact.
Start a new session with the artifact attached.
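What “attached” looks like depends on your tool. The simplest version is pasting the artifacts at the top of the fresh chat. Here is a minimal sketch that assembles them into one block, assuming the same illustrative docs/ai layout as before.

```python
# A minimal sketch: assemble the artifacts into one clean block for a fresh session.
# Paths and ordering are illustrative, matching the earlier sketch.
from pathlib import Path

ORDER = ["GOAL.md", "CONSTRAINTS.md", "DECISIONS.md", "CURRENT_TASK.md"]

def fresh_context(root: str = "docs/ai") -> str:
    sections = []
    for name in ORDER:
        path = Path(root) / name
        if path.exists():
            title = path.stem.replace("_", " ").title()
            sections.append(f"{title}\n{path.read_text().strip()}")
    return "\n\n".join(sections)

if __name__ == "__main__":
    # Paste the output at the top of the new session, then state the task.
    print(fresh_context())
```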
The 60-Second Extraction
When you end a chat, a part of you worries you’re losing nuance.
You’re not.
You’re discarding the noise that pretends to be nuance.
Do this instead:
Open a fresh document and write what is true now, not what was true twenty messages ago.
Keep it small enough that you can read it tomorrow without caffeine and still know what to do next.
If you want a simple structure, use five lines:
- Goal: one sentence
- Constraints: three bullets
- Decisions: three bullets
- Open questions: whatever is unresolved
- Next task: one paragraph with “done” defined
The discipline is not in writing more.
The discipline is in leaving things out.
If a detail doesn’t change the next implementation step, it doesn’t belong on the Post-it.
This is how you stop building a conversation and start building a system.
The Philosophical Layer: Respect Finitude
There’s a deeper point here that isn’t really about AI.
It’s about being human.
Your attention is finite.
Your working memory is finite.
Your patience is finite.
Good process respects finitude.
Bad process pretends finitude doesn’t exist and then shames you for not being a machine.
The Operator mindset is a quiet refusal to play that game.
You don’t try to out-muscle complexity.
You design around it.
You choose clarity over heroics.
You choose fewer, cleaner decisions over a constant stream of improvisation.
You choose systems that hold state so your mind doesn’t have to.
The goal is not to “talk to AI better.”
The goal is to make the work legible enough that neither you nor the AI has to guess.
Conclusion
The Post-it Note analogy is not meant to insult the model.
It’s meant to clarify the real mechanism of failure: a saturated surface that forces interpretation.
When your AI sessions start degrading, don’t reach for a cleverer prompt.
Ask a simpler question:
Am I writing smaller, or am I writing clearer?
If the surface is full, stop.
Grab another Post-it.
Externalize the decisions.
Reset the context.
Then continue, calmly, with clean ink.