Seven days ago, I deleted a production database.
Not metaphorically.
Not “we lost a staging environment and called it prod.”
Production.
It wasn’t catastrophic. The database wasn’t massive. There was no meaningful business impact. We restored from point-in-time backup and moved on.
But emotionally it was sharp.
Not because it was dramatic.
Because it was clean evidence of something I’d rather not admit:
I didn’t get hurt by AI.
I got hurt by my state of mind.
The Problem
There’s an easy villain narrative that shows up in every AI conversation:
“AI is dangerous.”
“AI is reckless.”
“This is why we can’t trust these tools.”
It’s comforting.
It’s also usually a way to hide from agency.
Here’s the truer framing:
AI is an amplifier.
It amplifies clarity into leverage.
And it amplifies fatigue into risk.
What Actually Happened (The Boring Technical Version)
The stack detail matters because it makes the story concrete.
I was working with Prisma.
AI was helping implement a feature that touched user tables.
The model did what any competent engineer would do:
- modified the Prisma schema
- added a foreign key constraint
- attempted a migration
The migration failed.
Why? Because existing data violated the new constraint.
At that point there were two sensible options:
- Write a safe migration strategy that reconciles the existing data.
- Abort, rethink, and return with a better plan.
Prisma also has a third option that exists for a reason but comes with a very loud warning:
Force push.
The model offered it with an explicit warning.
Not a hidden warning.
Not a subtle warning.
A clear warning that data could be lost.
Then it waited.
I approved.
The schema was force-pushed.
The users table was wiped.
Authentication broke.
The UI said “You are not logged in,” which was technically correct in the most unhelpful way:
There were no users to log in as.
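For readers who have not hit this particular Prisma failure mode, here is a hedged reconstruction of the shape of that session. The model, table, and migration names are invented and the exact flags depend on your Prisma version, but the sequence is the same:

```bash
# Hypothetical reconstruction; the schema change and all names are invented.

# 1. The schema change: a new required relation added to a model whose table
#    already holds rows (in schema.prisma, a userId field plus @relation(...)).

# 2. Attempt the migration. It fails: existing rows violate the new
#    foreign key constraint.
npx prisma migrate dev --name add-user-relation

# 3. The option I approved. On a throwaway dev database this is routine;
#    pointed at production, it drops the data and recreates the schema.
npx prisma db push --force-reset
```

The second command is the "force push." It exists because it is genuinely useful on disposable databases; the warning exists because it is destructive everywhere else.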
The Real Root Cause (The Human Version)
If you want to understand why this happened, don’t study Prisma.
Study me, that day.
I was sleep deprived.
There had been a difficult overnight family situation.
In the morning, I felt behind before I’d even started.
And instead of taking the Operator approach (slow down, define clearly, review carefully), I tried to force momentum.
I was operating at maybe 50% attention.
That is the real failure.
Not “a bad model.”
Not “bad tooling.”
Me, consenting without presence.
The Difference Between Responsibility and Blame
When you say “this was my fault,” it can sound like shame.
That’s not what I mean.
Blame is a moral story.
Responsibility is a practical one.
Responsibility returns power to the human:
If my state was the cause, my state is also where the fix lives.
And that’s good news.
Because I can’t patch the model’s internals.
But I can change the way I work.
The Solution (The Calm Way)
If AI amplifies fatigue into risk, then the Operator move is simple:
Stop treating fatigue like a minor inconvenience.
Fatigue is not a mood.
Fatigue is a safety condition.
AI Rewards Clarity and Punishes Fatigue
This is the sentence I wish someone had tattooed onto my monitor:
AI rewards clarity and punishes fatigue.
When you’re clear, the model becomes an executor.
When you’re tired, the model becomes a risk multiplier, because you stop doing the one job you can’t delegate: judgment.
The Operator Version of “Move Fast”
The popular image of AI-assisted development is constant motion:
- many agents
- many threads
- many prompts
- constant babysitting
That’s not leverage.
That’s multitasking with extra steps.
The best engineers of the next few years will look slower in the moment and faster over the week:
- short, focused sessions
- clear artifacts
- single coherent runs
- review at the end
The Kindness of Definition is the playbook for creating those artifacts before you let the model touch anything destructive.
Don’t Touch the Wet Paint is the reminder to let a run finish instead of poking at it while you are half-present.
Review Is a Contract keeps the final check mechanical so fatigue does not sneak destructive changes through.
Ten to fifteen minutes of full presence beats two hours of half-attention.
Not in theory.
In practice.
The Safety Checklist I Should Have Used
This isn’t a grand framework. It’s a small ritual.
Before approving anything destructive:
- Am I awake enough to understand what this command does?
- Do I have a backup I trust?
- Is this prod or anything prod-adjacent?
- Is there a safer alternative that takes longer but preserves data?
- If I click “yes,” can I explain the consequence to a teammate without handwaving?
If any of those answers is the wrong one, stop.
Not “slow down.”
Stop.
Write it down. Come back later. Ask for review. Do the safe migration. Whatever the correct move is, do it while you're present.
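For completeness, here is a sketch of what "do the safe migration" means in Prisma terms. It assumes Postgres, and the table and column names are invented; the point is that the data reconciliation gets written down and reviewed before the constraint ever lands:

```bash
# Sketch of the safe path; assumes Postgres, with invented table/column names.

# 1. Generate the migration without applying it, so the SQL can be edited by hand.
npx prisma migrate dev --name add-user-fk --create-only

# 2. In the generated migration.sql, reconcile existing data *before* the
#    constraint is added, for example:
#
#      DELETE FROM "Session" WHERE "userId" NOT IN (SELECT "id" FROM "User");
#      ALTER TABLE "Session"
#        ADD CONSTRAINT "Session_userId_fkey"
#        FOREIGN KEY ("userId") REFERENCES "User"("id");
#
# 3. Apply it once the data story is explicit and reviewed.
npx prisma migrate dev
```

It is slower than a force push. That is the feature, not the flaw.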
A Simple Rule: Treat Warnings as Hand-Off Points
Warnings are not speed bumps.
They are phase boundaries.
When the tool says “you may lose data,” that is not the moment to keep momentum.
That is the moment to switch modes:
From implementation to review.
From execution to judgment.
AI can do the execution.
Only you can do the judgment.
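If you want that hand-off to be physical rather than aspirational, you can put a small guard between tired fingers and production. A sketch in shell; the function name and the hostname pattern are mine, not Prisma's, and your production URL will look different:

```bash
# Sketch of a guard, not a policy. Wraps Prisma so "is this prod?"
# gets asked by the machine instead of by a tired human.
# The hostname pattern is invented; adjust it to your own setup.
prisma_guarded() {
  if [[ "${DATABASE_URL:-}" == *"prod-db.internal"* ]]; then
    echo "DATABASE_URL looks like production. Stopping here." >&2
    return 1
  fi
  npx prisma "$@"
}

# Usage: prisma_guarded db push --force-reset
```

The guard does not do the judgment for you. It just refuses to keep executing until you show up to do it.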
Conclusion
AI didn’t delete my database by itself.
It deleted it because I approved a clearly flagged, destructive change while half awake.
That’s a humbling sentence.
It’s also an empowering one.
Because it means the fix is not “fear AI.”
The fix is to operate.
Work in shorter sessions.
Define more clearly.
Review more deliberately.
And when your body and mind are telling you that you’re not present, treat that signal like what it is:
A safety warning.
The model will keep moving at full speed either way.
The question is whether you are awake enough to steer it.