🧭 THIS WEEK AT BuildProven
Howdy, 1 yr + 1 wk and still going! Leaning fully into AI building now!
The building has picked up a bit lately, but the marketing and promoting need to pick up too. Not my strong suit. That said, the standard blurb:
I'd appreciate it if you could share this newsletter to help it grow :). There's also a poll at the end if you'd like to give feedback, and even better, just hit 'reply' and write to me!
What I've been building lately -
A budget enforcer for OpenClaw, since I now have to use expensive paid APIs for LLM calls
Strict agent tracking/logging for OpenClaw, since agents sometimes do whatever the frick they want
Claude Kit Pro - nearly ready for sale
Build Skill - teaches how to build with AI - just started
Most built with Claude Code, with some Codex reviews etc.
🧰 Worth Your Click
Here are a few things I found recently:
DeepSeek v4 released - I need to look at switching most of StarkNet [OpenClaw!] to this!
New OpenClaw release with some cool new capabilities - Sonos! I built that myself, but it was a bit hacky, so it may be better here.
OpenAI Cookbook - wow, you could spend months in here learning to build! Covers more advanced topics.
My builds - I updated and improved the free Claude kit and QA Architect this week. Both are nearly ready for a paid launch; I just need to wire up the licensing.
OpenAI on harness engineering - a fairly detailed, complex article on wrapping AI agents in a loop to get stuff done.
🗺️ FEATURED INSIGHT
Track A — Teens
Kids learning to build real things by talking to AI.
The pilot build is "Build a quiz about your favourite thing." Single HTML file, deployed via a one-shot tool I have, live URL in about 20 minutes. Six steps, each with the exact prompt to paste into Claude. Steps include what goes wrong and how to fix it. Debugging is the lesson, not an interruption to it.
Here's the actual Step 1 prompt a kid pastes into Claude:
Make a single web page for a 10-question quiz about Minecraft. Each question has 4 multiple-choice answers. After all 10, show my score and which ones I got wrong. Use bright colours, big buttons, and make it work on phone and laptop. Keep all the code in one file so I can save it and open it.

Eight lines. One file out. Then Step 2 is the fix-it prompt for whatever broke (and something always breaks):
The "Submit" button doesn't show the score on my phone. Show me what to change and explain in one sentence why it broke.

That's the loop. Idea, prompt, run, fix, ship. The kid swaps "Minecraft" for whatever they actually love and the quiz is theirs in 20 minutes.
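Under the hood, the single-file quiz that a prompt like Step 1 asks for usually boils down to a small data array plus a scoring function. Here's a minimal sketch of that core, in TypeScript for readability; the question, the type names, and the function name are all hypothetical placeholders, not output from an actual Claude run:

```typescript
// One quiz question: the text, its four choices, and which choice is right.
type Question = {
  text: string;
  choices: string[];     // the 4 multiple-choice answers
  correctIndex: number;  // index into choices of the right answer
};

// Hypothetical example data; the real file would hold all 10 questions.
const questions: Question[] = [
  {
    text: "What do Creepers do when they get close?",
    choices: ["Dance", "Explode", "Sing", "Trade"],
    correctIndex: 1,
  },
  // ...nine more questions
];

// Score the quiz and record which questions were missed, mirroring the
// "show my score and which ones I got wrong" requirement in the prompt.
function scoreQuiz(answers: number[]): { score: number; wrong: number[] } {
  let score = 0;
  const wrong: number[] = [];
  answers.forEach((answer, i) => {
    if (answer === questions[i].correctIndex) {
      score++;
    } else {
      wrong.push(i);
    }
  });
  return { score, wrong };
}
```

Everything else in the file (buttons, colours, the phone layout) is wiring around this loop, which is why the Step 2 fix-it prompt so often lands here: if the score doesn't show, it's usually the hand-off between the submit button and a function like this one.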
Why this format works:
The kid provides the domain knowledge (their favourite thing)
Claude does the coding
They see the loop: idea, prompt, code, running thing, share
The shareable URL is the motivation to finish
Test plan: hand the URL to a kid. Watch silently. Note what confuses them. Iterate.
Track B — Professionals
Experienced domain experts using AI to amplify what they already know. Not developers. Director-level. They know their field deeply. They probably use ChatGPT occasionally. They've never opened Claude Code.
The four builds:
Personal AI assistant that knows your domain. Concept: prompting plus context injection.
Automate one thing you do weekly. Concept: workflow automation.
A simple tool for your team. Concept: AI-powered tools without code.
Ship it publicly. Concept: deployment plus iteration.
The pilot is Session 1. A Director who's never used Claude Code finishes with a Claude Project loaded with their domain knowledge, custom instructions, and three saved prompts they'll actually use on Monday morning.
The Session 1 setup prompt a Director pastes into Claude (after uploading 5-10 of their own files: meeting notes, a strategy doc, an org chart, last quarter's review):
You are my domain assistant. The files I've attached cover [their function, e.g. "supply chain operations across our 4 EU plants"]. From now on:
1. Always ground answers in the attached files first; say "not in your context" before reaching outside them.
2. Use my company's vocabulary, not generic business language. Match the terms in the docs.
3. When I ask for a decision, give me the recommendation first, then the trade-offs, then what you'd need to be sure.
4. If a question needs data I haven't given you, ask for the specific file by name.
Confirm you understand by summarising what you now know about my function in 5 bullets, and list 3 questions I should be asking you that I probably haven't thought of.

That last line is the hook. Most Directors have never had a tool ask them a sharper question. Once that lands, they get the loop without anyone having to name it.
Weekly build logs from a 25-year program manager who codes with AI.
— Brett
👉 Hit “Reply” and share your experience — I read every one!
