Building With AI Without the Chaos: The PIV Loop and Principled Agentic Engineering
AI coding feels magical for a few prompts, then turns into chaos. The fix isn't a better tool — it's a tighter workflow. Here's the per-ticket loop and the larger flow around it that I use to ship real software with agents instead of just demos.

Most developers who try "AI coding" hit the same wall. The first few prompts feel like magic. Then it turns into chaos. The model hallucinates requirements, loses earlier constraints, and you end up babysitting an overconfident autocomplete that's gradually drifting away from what you actually asked for.
That's not a tooling problem. The model is fine. It's a workflow problem.
Two ideas fix it, and they fit together cleanly:
- The PIV loop — a tight, per-ticket cycle of Plan/Prime → Implement → Validate.
- Principled Agentic Engineering — the larger flow that wraps around PIV so the work stays anchored to real product goals instead of drifting into "what would be cool to build next."
This is the rhythm I use to ship real software with agents — not just demos.
The core problem: vibe coding
Most people interact with agents in giant prompt mode:
- Dump every thought into one huge prompt.
- Ask the model to "build X."
- Hope it gets it right on the first or second try.
The results are predictable. Requirements drift as the conversation grows. The model forgets earlier constraints. You get plausible-looking code that doesn't actually match what you need — and because it compiles and the tests it wrote for itself pass, it feels like progress until integration day, when it doesn't.
Vibe coding scales to a weekend project. It doesn't scale to a production system, and it definitely doesn't scale to a team. The PIV loop and Principled Agentic Engineering exist to give both you and the agent a predictable rhythm so the model's limitations stop mattering as much.
The PIV loop: Plan/Prime → Implement → Validate
At the heart of all of this is a small loop that runs per ticket.
Plan / Prime
Scope the work. Instead of "build the payments system," zoom in to one unit:
- What exactly is being built or changed?
- What files and components are involved?
- What are the acceptance criteria?
- What constraints — naming, architecture, testing — apply?
The agent gets only the relevant context and a clear definition of done for this ticket. No more, no less. This is where Context Engineering earns its keep at the small scale: prime the model with the file paths and conventions it actually needs, and the implementation step gets dramatically more reliable.
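As a sketch of what "only the relevant context" can look like in practice — the ticket, file paths, and criteria below are hypothetical, and the exact shape will depend on your agent tooling:

```python
from dataclasses import dataclass, field

@dataclass
class PrimingContext:
    """One ticket's worth of context for the agent -- no more, no less."""
    objective: str
    files: list[str]                 # only the paths this ticket touches
    acceptance_criteria: list[str]   # the definition of done
    constraints: list[str] = field(default_factory=list)  # naming, architecture, testing

    def to_prompt(self) -> str:
        """Render the context as a compact priming block."""
        lines = [f"Objective: {self.objective}", "Relevant files:"]
        lines += [f"  - {p}" for p in self.files]
        lines.append("Definition of done:")
        lines += [f"  - {c}" for c in self.acceptance_criteria]
        if self.constraints:
            lines.append("Constraints:")
            lines += [f"  - {c}" for c in self.constraints]
        return "\n".join(lines)

ctx = PrimingContext(
    objective="Add rate limiting to the /login endpoint",
    files=["api/auth.py", "tests/test_auth.py"],
    acceptance_criteria=["429 after 5 failed attempts per minute",
                         "existing tests still pass"],
)
print(ctx.to_prompt())
```

The point of the structure is the omission: anything not listed here is deliberately outside the agent's view for this ticket.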
Implement
The agent codes — writes, refactors, wires up tests — within the priming context. The goal is depth over breadth: one piece of work, done well, rather than hopping around the codebase following whatever the model suggests next.
Watch for an early exit signal here. If the agent starts touching files outside the priming scope, that's the alarm — either the priming was too narrow, or the agent has wandered. Stop, fix, re-prime. Don't let it ride.
Validate
Reality check. Does the implementation match the plan?
- Run tests and linters.
- Start the app and exercise the behavior — ideally in the actual UI for anything visible.
- Review the diff like you would for a human teammate.
If validation fails because of a bug, loop back to Implement. If it fails because the spec was wrong or requirements changed, loop back to Plan/Prime and fix the framing. Don't let the agent "fix" a problem the ticket never accounted for — that's how scope creep enters production.
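The validation step can be thought of as a simple gate over named checks. A minimal sketch, with toy lambdas standing in for what would really be a test runner, a linter, and a diff-vs-ticket-scope review:

```python
from typing import Callable

def validate(checks: dict[str, Callable[[], bool]]) -> list[str]:
    """Run every check and return the names of the ones that failed."""
    return [name for name, check in checks.items() if not check()]

# Hypothetical checks -- in a real loop these would shell out to pytest,
# your linter, and a review of the diff against the ticket's scope.
failed = validate({
    "tests pass": lambda: True,
    "linter clean": lambda: True,
    "diff stays inside ticket scope": lambda: False,
})
print(failed)  # -> ['diff stays inside ticket scope']
```

Which check failed tells you which way to loop back: a failing test is an Implement problem; a scope violation usually means the Plan/Prime framing was wrong.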
The loop is deliberately boring. That's the point. AI work becomes a sequence of small, well-defined cycles instead of a 90-message chat that nobody can audit.
Principled Agentic Engineering: the layer above PIV
PIV is the inner loop. It lives inside a larger flow with three layers:
- Strategic planning
- The PIV loop (execution)
- System evolution
Each layer fixes a different failure mode of AI-assisted development.
1. Strategic planning — from messy idea to PRD
Before any PIV loop runs, start at the idea level.
Instead of jumping straight to "write the code":
- Have a brainstorming conversation with your assistant about what you're building — context, goals, constraints, edge cases, users.
- Let it ask clarifying questions. A good agent reduces its own assumptions, just like a good human engineer.
- Turn that conversation into a Product Requirements Document via a structured prompt or custom command.
A useful AI-generated PRD includes an executive summary, the users and their concerns, what's in and out of scope, and the phases or milestones. Then you review and edit it. The PRD is the bridge between the vague idea in your head and the concrete work an agent can help with.
If you can't get to a clean PRD with the agent's help, that's a signal you don't understand the problem yet — and no amount of prompting downstream will fix that.
2. From PRD to tickets — units of work
Strategy turns into execution by feeding the PRD back into the assistant and having it generate tickets:
- Each ticket has a clear objective and acceptance criteria.
- If you use Jira, Linear, or GitHub Issues, the agent can draft tickets in your team's exact format — or create them directly via integration.
The backlog stops being wishful thinking. It becomes a set of realistic, independently completable units, each one ready for its own PIV cycle.
This is the part of Principled Agentic Engineering that feels meaningfully different from "AI helps me code." The agent is involved in structuring the work, not just typing out functions.
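One way to keep generated tickets honest is a readiness check before any ticket enters a PIV cycle. A small sketch — the fields and the bar for "ready" here are illustrative, not prescriptive:

```python
from dataclasses import dataclass, field

@dataclass
class Ticket:
    title: str
    objective: str
    acceptance_criteria: list[str] = field(default_factory=list)

    def is_ready(self) -> bool:
        """Ready for its own PIV cycle: a concrete objective and at least
        one testable acceptance criterion."""
        return bool(self.objective.strip()) and bool(self.acceptance_criteria)

good = Ticket("Rate-limit login", "Throttle /login",
              ["429 after 5 failed attempts per minute"])
vague = Ticket("Payments", "make payments good", [])
print(good.is_ready(), vague.is_ready())  # True False
```

Tickets that fail the check go back to the PRD conversation, not into the backlog.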
3. The PIV loop per ticket
Pick one ticket. Run Plan/Prime → Implement → Validate. Close it or iterate until it meets the acceptance criteria. Repeat.
Because each ticket carries its own loop, you get:
- Parallelism across developers — and across agents, when you're running more than one.
- Audit trails — every change ties back to a ticket, which ties back to a PRD section.
- Less context churn — each loop is small and focused, which keeps the model's effective accuracy high instead of letting it degrade as the conversation grows.
This is the same answer to the same problem I keep landing on in Why Most AI Projects Fail — the model is the easy part; the information architecture around it is the work. PIV is that information architecture at the per-ticket scale.
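Mechanically, the per-ticket loop can be sketched as a small driver. The three callables are stand-ins for the real steps (an agent call, a test run, a diff review), and the toy ticket is hypothetical:

```python
def piv_loop(ticket, plan, implement, validate, max_iters=5):
    """Run one ticket through Plan/Prime -> Implement -> Validate until it passes."""
    for attempt in range(1, max_iters + 1):
        context = plan(ticket)           # Plan/Prime: scoped context, nothing more
        change = implement(context)      # Implement: one unit of work
        if validate(ticket, change):     # Validate: acceptance criteria met?
            return change, attempt
    raise RuntimeError("ticket did not converge -- split it or fix the spec")

# Toy stand-ins to show the shape of the loop:
done, attempts = piv_loop(
    ticket="add /health endpoint",
    plan=lambda t: {"objective": t, "files": ["api/app.py"]},
    implement=lambda ctx: f"diff implementing {ctx['objective']}",
    validate=lambda t, change: t in change,
)
print(attempts)  # 1
```

The cap on iterations is deliberate: a ticket that won't converge in a few cycles is usually a spec problem, and the fix belongs upstream in Plan/Prime, not in more retries.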
4. System evolution — compounding gains
The last layer is where this approach actually starts to win.
Every time something goes wrong — missing tests, inconsistent patterns, off-brand naming, an agent making the same mistake twice — you have a choice:
- Fix just this ticket and move on.
- Or upgrade the system so the same issue is less likely next time.
The compounding move is the second one. Concretely, that means continuously improving:
- Global rules the agent always sees — coding standards, architectural patterns, naming conventions, the explicit "don'ts."
- Skills and commands — /create-prd, /generate-tickets-from-prd, /refactor-with-pattern-X.
- Tool configs and integrations — how the agent talks to your repos, trackers, CI.
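The skills-and-commands idea can be sketched as a registry that expands each slash-command into the same structured prompt every time. The command names come from the list above; the prompt text is a hypothetical example, not a canonical template:

```python
# Hypothetical command registry: each slash-command expands to the same
# structured prompt every time, so the workflow stays repeatable across tickets.
COMMANDS = {
    "/create-prd": (
        "Turn the conversation so far into a PRD with: executive summary, "
        "users and their concerns, in/out of scope, and phases."
    ),
    "/generate-tickets-from-prd": (
        "Split the PRD into independently completable tickets, each with an "
        "objective and acceptance criteria."
    ),
}

def expand(command: str, context: str) -> str:
    """Expand a slash-command into the full prompt sent to the agent."""
    if command not in COMMANDS:
        raise KeyError(f"unknown command: {command}")
    return COMMANDS[command] + "\n\n" + context

prompt = expand("/create-prd", "Notes from today's brainstorm...")
```

Whether this lives in code, in your agent's custom-command config, or in a prompt library matters less than the property it buys you: the same command produces the same structure on every ticket.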
Over time, the agent stops being a generic LLM and starts being a junior engineer that's been properly onboarded to your team's way of building software. The first month feels like a tax. The third month feels like leverage. The sixth feels like an unfair advantage.
Why this beats one giant prompt
Stack the layers and the picture is clean:
- Strategic planning anchors the work in real goals, not whatever the hype cycle is selling this quarter.
- Ticketization turns ambition into independent units of work.
- PIV loop keeps each unit small, testable, and grounded in explicit acceptance criteria.
- System evolution makes the next loop run a little better than the last.
You're not fighting the model's limitations. You're designing the workflow so the limitations matter less.
How to start tomorrow
You don't have to rebuild your whole development process. A simple on-ramp:
- Pick one feature or bug you're already working on.
- Spend 10–15 minutes letting your assistant help you turn it into a mini-PRD.
- Have it generate 3–5 tickets from that PRD.
- Pick one ticket and consciously run a PIV loop — Plan/Prime, Implement, Validate.
- Write down anything that went sideways as a system upgrade — a new rule, a new command, a new template.
Repeat for a week. The difference between "playing with AI" and "building with AI" is real, and you'll feel it.
The unglamorous truth (again)
The teams getting real leverage out of agents aren't the ones with the cleverest prompts. They're the ones with a workflow the agent can actually fit inside: scoped tickets, primed context, validated outputs, and a system that gets a little smarter every cycle.
Same conclusion I keep landing on. The model is the easy part.
The workflow is the work.