A strict, human-in-the-loop methodology for building software with LLMs. Not a tool. Not a library. A discipline — for solo developers and small teams who can't afford expensive rewrites.
When context is tight and relevant, the AI reasons better. Noise degrades output quality and increases hallucinations. Saving tokens is not an optimisation. It is the foundation of reliable AI reasoning.
This constraint — treating tokens as a precious resource — produces better software than unlimited context. The discipline is the feature.
The AI is an execution engine, not a driver. You architect the blueprint and define what "good" looks like. The AI executes exactly what you scoped — no more, no less.
Autonomy is earned per stage, not granted by default. The Architecture Critic gets high autonomy. The Implementer gets low. This is intentional.
One monolithic prompt doesn't scale. Nudge uses targeted shards — you feed the AI exactly what it needs for the current task, nothing from three sessions ago.
A shard is the quantum of work. A session is the quantum of context. Keep them aligned and the AI stays sharp throughout the project.
Format follows consumer. AI-consumed outputs are telegraphic. Human-consumed outputs are scannable. The best outputs are both — that's what nudgeDSL achieves, and what the handover format already approximates.
Copy a prompt, paste it into any AI, follow the rules. No tooling required. Run stages in order — each one's output is the next one's input.
Once you've run a few sessions with the prompts above, these patterns will save you hours. Each one exists because someone paid for the lesson in wasted tokens.
Do not restart after every phase by default. But carrying dead context is worse than restarting. The decision is structural, not intuitive.
Does the next slice share more than ~60% of its file surface with the current slice? If YES → continue the session. If NO → generate a handover and restart with a clean context.
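The ~60% rule can be made mechanical with a simple set-overlap check. A minimal sketch — the function name, threshold constant, and file lists are illustrative, not part of Nudge:

```python
def file_overlap(current: set[str], next_slice: set[str]) -> float:
    """Fraction of the next slice's file surface already loaded in this session."""
    if not next_slice:
        return 1.0  # nothing new to load; continuing is free
    return len(current & next_slice) / len(next_slice)

# Hypothetical slices, borrowing the auth example's file names
current = {"auth.go", "auth_test.go", "auth_types.go"}
upcoming = {"auth.go", "auth_types.go", "tokens.go"}

# 2 of 3 upcoming files are already in context (~67%), so the session continues
decision = "continue session" if file_overlap(current, upcoming) >= 0.6 else "handover + clean restart"
```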
Before stopping, the AI generates a handover using strict Context Minification. No conversational filler. This is everything the next session needs.
```yaml
slice: auth-engine-v1
status: complete
files_modified: [auth.go, auth_test.go]
files_created: [auth_types.go]
decisions_locked:
  - Using JWT, not sessions
  - Refresh tokens stored in Redis
next_slice_needs:
  - contracts.yaml
  - auth_types.go skeleton
```
A shard is stale when the last commit to its subsystem is newer than the shard file. Stale shards produce wrong implementations.
Your core rules file (AI_RULES.md or CLAUDE.md) is injected every turn. Every token in it competes with your actual task context.
Put phase-specific rules in the phase shard — not the global rules file. The rules file is the operating manual. The shard is the spec.
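For instance, a phase shard might carry its own rules while the global file stays minimal. A hypothetical layout — the file name and keys are illustrative, not a Nudge standard:

```yaml
# shards/implement-auth.yaml — loaded only while this phase runs
phase: implement
slice: auth-engine-v1
rules:
  - table-driven tests only
  - no new dependencies without approval
inputs:
  - contracts.yaml
  - auth_types.go
```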
AIs waste massive context running `ls` or reading 1000-line files to find one function. Give the AI a map. These scripts are project-agnostic — copy them once.
The AI should never use raw `cat`, `ls`, or `grep`. Every file navigation goes through one of these wrappers. Your most expensive tokens should never be spent reading code or searching directories.
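As an illustration, here is a minimal sketch of one such wrapper: an outline tool that returns top-level signatures instead of whole files. It only handles Python and the script name is made up; a real project would cover its own languages:

```python
#!/usr/bin/env python3
"""outline.py: print a structural map of a Python file instead of its contents."""
import ast
import sys

def outline(path: str) -> list[str]:
    """Return one line per top-level function/class, with its line number."""
    with open(path) as f:
        tree = ast.parse(f.read())
    lines = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            lines.append(f"L{node.lineno}: def {node.name}({args})")
        elif isinstance(node, ast.ClassDef):
            lines.append(f"L{node.lineno}: class {node.name}")
    return lines

if __name__ == "__main__" and len(sys.argv) > 1:
    print("\n".join(outline(sys.argv[1])))
```

The AI asks for the map, sees a dozen lines of signatures, then requests only the function it actually needs — tens of tokens instead of a thousand.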
The Nudge Framework already pushes you toward telegraphic, structured outputs. nudgeDSL completes that journey — your orchestration documents become machine-executable without losing human readability.