
How to Build an ARC

Use this guide to create an ARC using cat-arc-builder (full ARC) or cat-quick-arc (small changes).

Prerequisites:

  • An approved Intent Brief (or Quick Intent) must exist in _cat/artifacts/intent/
  • Features should be defined (recommended but not required)
cat-arc-builder

Julian, the Solution Architect, facilitates ARC creation through a structured conversation covering your technology stack, constraints, AI governance, and testing strategy.

Julian asks about your runtime, frameworks, and architectural patterns:

“What’s the technology stack for this work? Give me the runtime, primary frameworks, and any established patterns your team follows.”

Constraints are the core of the ARC. Julian helps you define:

  • MUST constraints — non-negotiable rules (e.g., “MUST use parameterized queries for all database access”)
  • MUST NOT constraints — explicit prohibitions (e.g., “MUST NOT store PII in application logs”)
  • SHOULD constraints — strong recommendations
  • MAY constraints — permitted approaches
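As a sketch of how these four constraint types might be laid out in arc.md (the section names, IDs, and example rules below are illustrative, not a mandated template):

```markdown
## Constraints

### MUST
- C-01: MUST use parameterized queries for all database access.
- C-02: MUST authenticate every API endpoint via the existing OAuth2 middleware.

### MUST NOT
- C-10: MUST NOT store PII in application logs.

### SHOULD
- C-20: SHOULD keep p95 API response time under 500 ms.

### MAY
- C-30: MAY cache read-only lookups in Redis.
```

Giving each constraint a short ID (an assumption here, not a requirement of the format) makes it easier to reference individual constraints during later adherence checks.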

Every ARC includes AI-specific governance:

  • Which code can be AI-generated vs human-authored
  • Review requirements for AI-generated code
  • Testing requirements for AI-produced artifacts
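A hypothetical AI governance section covering those three points (the specific policies below are invented examples):

```markdown
## AI Governance

- AI-generated code permitted for: CRUD handlers, test scaffolding, documentation.
- Human-authored only: authentication, cryptography, payment logic.
- Review: all AI-generated code requires review by a human engineer before merge.
- Testing: AI-produced artifacts must ship with unit tests.
```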

Julian then defines a testing strategy that traces back to the constraints:

  • Unit test coverage expectations
  • Integration test requirements
  • Security testing requirements (if applicable)
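One way this might appear in the ARC — the coverage numbers and trace notes are illustrative, the point being that each testing requirement names the constraint it verifies:

```markdown
## Testing Requirements

- Unit tests: ≥ 80% line coverage on new code.
- Integration tests: every modified API endpoint exercised end-to-end
  (traces to the authentication MUST constraint).
- Security tests: SQL-injection suite run against all query paths
  (traces to the parameterized-queries MUST constraint).
```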

Output: _cat/artifacts/architecture/arc.md

For small brownfield changes:

cat-quick-arc

Quick ARC focuses on what’s changing rather than documenting the entire system. Julian asks:

  1. What existing system are you modifying?
  2. What specific constraints apply to this change?
  3. Any new AI governance considerations?
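Answers to those three questions map directly into the Quick ARC. A hypothetical quick-arc.md for an invented change (the service name, stack, and constraints are all illustrative):

```markdown
# Quick ARC: Add rate limiting to /login

## System Being Modified
auth-service (Node.js 20, Express) — login flow only.

## Constraints for This Change
- MUST reuse the existing Redis instance for rate-limit counters.
- MUST NOT change the login response schema.

## AI Governance
- AI may draft the middleware; a human reviews the lockout logic.
```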

Output: _cat/artifacts/architecture/quick-arc.md

cat-arc-checkpoint

Julian validates the ARC for completeness and for alignment with the Intent Brief.

What a passing checkpoint looks like:

  • Every MUST/MUST NOT constraint is present and specific enough to be binary (met/not met)
  • Each constraint traces to a business need stated in the Intent Brief
  • AI governance section is populated
  • Testing requirements are defined

What triggers a failing checkpoint:

  • Vague constraints that require subjective judgment (“MUST be performant”)
  • Constraints with no corresponding intent rationale
  • Missing AI governance or testing sections
  • Constraint count is zero or suspiciously low for the scope
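A typical checkpoint fix is rewriting a subjective constraint into a binary one. For example (the endpoint and numbers below are illustrative):

```markdown
<!-- Fails the checkpoint: requires subjective judgment -->
- MUST be performant.

<!-- Passes: binary, measurable -->
- MUST keep p95 latency of GET /search under 300 ms at 100 req/s.
```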
cat-ai-validation

Produces evidence that the AI correctly understands both the Intent Brief and ARC before construction begins. This is a critical governance gate.

cat-readiness-check

A cross-agent traceability audit: Intent Brief → ARC → Features → Bolts. All threads must connect.

cat-arc-edit

Julian reads the existing ARC and helps you edit it in-place, preserving structure and adding change tracking.

Best practices:

  1. Be specific with constraints. “MUST validate input” is too vague. “MUST validate all API request bodies against JSON Schema definitions in /schemas/” is enforceable.

  2. Every constraint should trace to a business need. If you can’t explain why a constraint exists, it probably shouldn’t be in the ARC.

  3. Constraints are binary. During adherence checking, each constraint is either met or not met. Avoid constraints that require subjective judgment.

  4. Include AI governance even if you’re not using AI. The ARC records the decision, and future readers will know it was intentional.


See Also: What Is an ARC? · Architecture Phase · ARC Adherence Check