Evolving Our Development Practice: Principles and Plans for AI-Driven Engineering
March 18, 2026
By Matt Blair
As AI tools rapidly evolve from autocomplete assistants to capable collaborators, we've taken time to reflect on what role AI should play in our engineering practice. The past year has surfaced powerful new workflows and design patterns from leaders across the industry. Drawing inspiration from Boris Cherny's posts on Claude Code [1], Bryan Liles' Agent Framework Vision [2], Will Larson's internal agent development series [3], and Random Labs' work in agentic development [4], we've defined our own standards and direction for integrating AI into our development process.
Our Core Principles for AI-Assisted Development
We believe that effective use of AI requires more than just access to powerful models. It requires a disciplined approach to collaboration, quality, and trust. These principles define how we will work with AI:
- AI as a teammate, not just a tool.
AI should augment our capabilities, not replace them. We treat AI agents as collaborators, each with a defined role, coordinated by the human developer.
- Orchestration over automation.
We aim to parallelize work using specialized subagents with distinct responsibilities, rather than expecting one agent to do everything. This builds on the orchestrated subagent architecture described by Liles, Larson, and Random Labs [2][3][4].
- Accuracy beats speed.
A slower, more capable model is often better than a fast, error-prone one. We value reduced rework and higher output quality. As one of our engineers says, “Slow is Smooth, Smooth is Fast”.
- Persistent learning from mistakes.
Each AI misstep should improve future outcomes. We'll maintain shared project memory for AI agents, capturing preferences, known bugs, and preferred patterns, using a symlinked AGENTS.md/.rules convention scoped to the folder level.
- Verification is non-negotiable.
No AI-generated code should reach production without validation. Human review, verification agents, and automated tests are core to our process, following established best practices [1][3][4].
- Human-in-the-loop by default.
Risky, high-impact changes must be explicitly approved by a developer. Default behavior should prioritize safety and explainability. This mirrors Liles' framework guidance on checkpoints and review stages [2], as well as the control systems built by Imprint [3].
- Documentation lives near the code it documents.
Documentation should live close to the code it describes, so that both our developers and our agents can quickly read and understand the system. This also gives us the opportunity to update documentation in the same change as the code. We can share these documents by syncing our repositories' docs with third-party tooling (such as Notion).
- Transparency and traceability.
Logs, diffs, and rationales for agent decisions should be visible and auditable. Trust comes from clarity. Larson's emphasis on debug logs and Liles' arguments for explainability support this principle [2][3].
- Continuous improvement through experimentation.
We'll treat AI workflows as evolving products: testing, measuring, and iterating over time. This is consistent with how both Cherny and Larson describe evolving their agent practices through iteration [1][3].
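To make the orchestration principle concrete, here is a minimal sketch of a human-directed orchestrator fanning work out to role-specific subagents in parallel and gathering their results for review. The roles, task strings, and `run_agent` stub are illustrative assumptions, not our actual tooling.

```python
import asyncio

async def run_agent(role: str, task: str) -> str:
    """Stand-in for invoking a role-specific subagent (e.g. an LLM call).
    Hypothetical: real implementations would perform network I/O here."""
    await asyncio.sleep(0)  # placeholder for real async work
    return f"[{role}] completed: {task}"

async def orchestrate(feature: str) -> list[str]:
    """Fan a feature out to specialized subagents with distinct
    responsibilities, run them concurrently, and collect the results
    for the human developer to review."""
    subtasks = {
        "implementer": f"write code for {feature}",
        "test-writer": f"write tests for {feature}",
        "doc-writer": f"update docs for {feature}",
    }
    # gather() preserves input order, so results line up with subtasks.
    results = await asyncio.gather(
        *(run_agent(role, task) for role, task in subtasks.items())
    )
    return list(results)

if __name__ == "__main__":
    for line in asyncio.run(orchestrate("rate limiting")):
        print(line)
```

The key design choice is that the orchestrator, not any single agent, owns task decomposition; each subagent sees only its own narrow responsibility.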
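The shared-memory convention can be sketched as follows: one canonical AGENTS.md per folder, with tool-specific rule files symlinked to it so every assistant reads the same folder-scoped notes. The paths and example notes below are hypothetical, not a prescribed layout.

```shell
# Illustrative sketch of folder-scoped agent memory (paths are assumptions).
mkdir -p services/billing

# One canonical memory file per folder, capturing preferences and past missteps.
cat > services/billing/AGENTS.md <<'EOF'
# Agent memory: services/billing
- Use the shared retry helper; do not hand-roll retries.
- Invoices are immutable after posting.
EOF

# Symlink the tool-specific rules file to the canonical AGENTS.md so
# every agent sees the same folder-level memory.
ln -sf AGENTS.md services/billing/.rules
```

Because the rules file is a symlink rather than a copy, corrections recorded after one agent's mistake are immediately visible to every other tool reading that folder.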
References

1. Boris Cherny, "The creator of Claude Code just revealed his workflow, and developers are losing their minds", VentureBeat (2026). https://venturebeat.com/technology/the-creator-of-claude-code-just-revealed-his-workflow-and-developers-are/
2. Bryan Liles, "Agent Framework Vision", blog.bryanl.dev (2025). https://blog.bryanl.dev/posts/agent-framework-vision/
4. Random Labs Team, "Slate: moving beyond ReAct and RLM", randomlabs.ai (2026). https://randomlabs.ai/blog/slate