🧭 THIS WEEK AT BuildProven
Howdy! Well, my OpenClaw setup, StarkNet, has been down most of this week. Mother! The issue: Anthropic decided to stop allowing subscription use of Claude Code with agentic harnesses like OpenClaw. So I've been pivoting to OpenRouter, which is not simple, and I burned $30 today on various screw-ups. Bummer!
Let’s get building.
🧰 Worth Your Click
Here are a few things I found recently:
We've passed the inflection point — and dark factories are coming — Lenny's Newsletter / Simon Willison
The best AI-and-coding interview I've read this year. Simon Willison (co-creator of Django, coined "prompt injection") explains why November 2025 was the moment AI coding agents actually crossed from "mostly works" to "works."
The future of software engineering with AI: six predictions — The Pragmatic Engineer
Juniors get guided by AI. Seniors direct it. Mid-career engineers — the ones who mostly execute defined tasks without strategic input — are the ones squeezed. If you are a domain expert who has been watching from the sidelines, this is your signal.
How to write specs that make AI build the right thing — Addy Osmani
The single most useful 15-minute read for operators who want AI to produce good work. The spec IS the expertise. The code is just the execution.
🗺️ FEATURED INSIGHT
The Dark Factory Is Coming.
A few weeks ago, Lenny Rachitsky interviewed Simon Willison — the developer who coined "prompt injection" and "AI slop," and co-created Django (the framework that powers Instagram and Pinterest). The interview is worth your hour.
He called it the "dark factory."
The concept: fully automated code pipelines where nobody writes or reviews code, AI does the entire build and its own QA. No human in the loop. The term comes from manufacturing: a lights-out factory that runs in the dark because there are no humans who need to see.
And he thinks it's coming for software development.
The reaction I see online is panic. Mid-career developers anxious about their roles. Engineers asking whether they should retrain. The tech industry looking at itself in the mirror wondering what happens when the thing it built can do the thing it does.
Here's what I notice: nobody is talking about what this means for people like us.
What "Us" Means
I'm 52. I spent 25+ years in program management and electronics — managing large-scale programs, building complex systems, navigating the gap between what engineers build and what the business actually needs.
I am not a developer anymore. I write almost no code.
In every single conversation about AI replacing software jobs, people like me are invisible. The assumption is that the AI revolution is happening to developers, and everyone else is watching.
That assumption is backwards.
Here's the thing about dark factories: they need a blueprint. Someone has to tell the factory what to build, in what order, to what standard, with what edge cases handled, and why any of those decisions matter.
That person is not a developer. That person is you.
What I Built This Week (And the Pattern Behind It)
I shipped two things in the last two weeks that illustrate what the dark factory looks like from the operator side.
QA Architect — a CLI that installs 25 years of "ships cleanly" into any project
I built an npm package that injects a complete CI/quality pipeline into any codebase in one command. ESLint, Prettier, Husky, lint-staged, GitHub Actions — all configured correctly, opinionated by default.
npx create-qa-architect@latest
Every default in that package reflects a decision I've watched teams argue about, skip under deadline pressure, or implement wrong and pay for later. The pre-commit hook that catches lint errors before they reach CI. The pre-push hook that runs tests on changed files only — fast feedback without burning GitHub Actions minutes. The adaptive workflow tiers so a solo dev on a side project doesn't accidentally rack up $300/month in CI costs.
A developer could build this in a day. Nobody else would know which defaults actually matter, or why the ones teams skip are the ones they regret.
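The "adaptive workflow tiers" idea can be sketched as a small decision function. To be clear, this is a hypothetical illustration of the concept, not the actual package internals: the function name `pickWorkflowTier`, the input signals, and the tier names are all my stand-ins here.

```javascript
// Hypothetical sketch of adaptive CI workflow tiers.
// Signals and tier names are illustrative, not the real
// create-qa-architect internals.
function pickWorkflowTier({ contributors, hasTests }) {
  // Solo side project with no test suite: cheapest tier,
  // lint-only on push, so a hobby repo never racks up
  // $300/month in CI costs by accident.
  if (contributors <= 1 && !hasTests) return "minimal";

  // Small team or a project with a test suite: run tests
  // on changed files only, for fast pre-push feedback
  // without burning GitHub Actions minutes.
  if (contributors <= 5) return "standard";

  // Larger team repos: full matrix, coverage, caching.
  return "full";
}

console.log(pickWorkflowTier({ contributors: 1, hasTests: false }));
// "minimal"
```

The point of encoding this as a default rather than documentation is that the solo dev never has to know the tradeoff exists; the tool just picks the tier that won't hurt them.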
I found a monetization leak and closed it in one session
QA Architect has a Pro tier ($49/month) that gates features like GitHub Actions cost analysis behind a license check. I went through the codebase and found that --analyze-ci was accessible to free users — the gate was written but not activated.
I described the problem to Claude. Thirty minutes later, the gate was live, the tests were updated, and the PR was merged.
The domain knowledge wasn't "how to implement a feature gate." It was knowing my own pricing model well enough to recognize when the code didn't match it.
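The fix itself follows a standard CLI feature-gating pattern. A minimal sketch, with a `loadLicense` helper and environment-variable check I'm inventing for illustration; the real code in lib/commands/analyze-ci.js will differ, but the shape is the same:

```javascript
// Hypothetical sketch of the Pro-tier gate around --analyze-ci.
// `loadLicense` and the env-var check are illustrative stand-ins
// for a real license validation step.
function loadLicense(env) {
  return env.QA_ARCHITECT_LICENSE === "pro" ? { tier: "pro" } : { tier: "free" };
}

function runAnalyzeCi(env) {
  const license = loadLicense(env);
  if (license.tier !== "pro") {
    // Free users hit the upgrade prompt instead of the feature,
    // with copy that matches the pricing page.
    return "Upgrade to Pro at buildproven.ai/qa-architect";
  }
  // Pro users proceed to the actual cost analysis.
  return "analyzing GitHub Actions costs...";
}

console.log(runAnalyzeCi({ QA_ARCHITECT_LICENSE: "pro" }));
```

Notice how little of this is hard to write. The leak wasn't a coding failure; it was that nobody with the pricing model in their head had read the gate.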
The Technique Callout: Spec Like You're Briefing a New Hire
The thing that made all of those builds work was not the AI. It was the spec.
Here's the pattern I've settled on:
Brief the AI like it's a smart new hire who knows nothing about your domain.
Not "fix the billing check."
This:
"I have a SaaS product with a Pro tier at $49/month. The feature --analyze-ci should be gated behind a license check — free users should see an upgrade prompt, Pro users should proceed. I've found the gate is written in lib/commands/analyze-ci.js but the check is commented out. I need you to: 1) uncomment and activate the gate, 2) verify the error message matches the copy on my pricing page ('Upgrade to Pro at buildproven.ai/qa-architect'), 3) update the test file so the new behavior is covered. Don't touch anything outside those two files."
I wrote that in about two minutes. The AI had the code written and tested in under thirty minutes.
The domain knowledge is the spec. The spec is the value. The code is the plumbing.
When I see people struggle with AI tools, it's almost never a tool problem. It's a spec problem. They give AI a vague instruction and then are surprised when they get a generic output. That's the unlock: you have spent your career developing the ability to ask precise questions. You know what "good" looks like in your field. You know what breaks, what matters, what the edge cases are.
Write that down. Put it in a prompt. You've just spec'd a product.
The Actual Threat Model
The dark factory is not coming for domain experts.
What AI can't do: generate the requirements. Know what the business actually needs. Understand why the spec is written the way it is. Catch the edge case that isn't in the requirements but that any senior person in the industry would know matters.
That's not a prediction. That's already how it works.
I'm running 9 AI agents. They do what I spec. They don't know why. I know why. The product works because of the why.
If you have 20+ years in your field, you have decades of why that nobody has written down. That's not just a competitive advantage. It's the only part of the system that doesn't get automated.
Build with it, or someone else will.
Weekly build logs from a 25-year program manager who codes with AI.
— Brett
👉 Hit “Reply” and share your experience — I read every one!
Picture by xxx on Unsplash.
