
Requirements Stack for Skills, Commands, and Agents

Purpose: Checklist and mapping guide for context engineering — know what inputs you need and where they live.

Key insight from PM Lead (Lydia): "Three golden examples, and those golden examples have to be inputs AND outputs AND any feedback in the middle."


Quick Start: Service-Based Organization

Division leaders think in terms of services/deliverables, not input types. Each service to be agentified should have its own self-contained folder:

references/by-division/{division}/services/{service-name}/
├── README.md                      # ⭐ Status dashboard + checklist
├── requirements.md                # User stories, acceptance criteria
├── methodology.md                 # Step-by-step process (div-level)
├── data-sources.md                # Tools, schemas, sample queries
├── constraints.md                 # Edge cases, fallbacks
├── golden-examples/               # 2-3 complete examples
│   └── example-1/
│       ├── context.md             # Client brief, challenge
│       ├── inputs/                # Data files, source materials
│       ├── outputs/               # Final deliverable (anonymized)
│       └── feedback.md            # Team lead feedback, iterations
├── qa-testing/                    # Test cases
└── source-materials/              # Original files (NinjaCat, videos...)

👉 See Service Requirements Template for full folder structure and file templates.
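The folder skeleton above can be stamped out with a few lines of Python. This is a minimal sketch assuming the directory and file names shown in the tree; `scaffold` and the `SKELETON`/`FILES` lists are illustrative helpers, not part of the official template.

```python
from pathlib import Path

# Directory skeleton and placeholder files, mirroring the tree above.
SKELETON = [
    "golden-examples/example-1/inputs",
    "golden-examples/example-1/outputs",
    "qa-testing",
    "source-materials",
]
FILES = [
    "README.md", "requirements.md", "methodology.md",
    "data-sources.md", "constraints.md",
    "golden-examples/example-1/context.md",
    "golden-examples/example-1/feedback.md",
]

def scaffold(service_root: str) -> None:
    """Create the empty service-folder skeleton (idempotent)."""
    root = Path(service_root)
    for d in SKELETON:
        (root / d).mkdir(parents=True, exist_ok=True)
    for f in FILES:
        path = root / f
        path.parent.mkdir(parents=True, exist_ok=True)
        path.touch()
```

Running `scaffold("services/content-audit")` leaves an empty but correctly shaped folder ready for the README checklist.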


The 7 Required Input Categories

These are the 7 things an agent needs to build a working skill/command. Each maps to a location in the service folder structure.

At a Glance

| # | Category | Service Folder Location | Key Question |
|---|----------|-------------------------|--------------|
| 1 | User Stories | requirements.md | WHO uses this, WHAT do they produce, WHY does it matter? |
| 2 | Golden Examples | golden-examples/ | What does "excellent" look like (inputs + outputs + feedback)? |
| 3 | Methodology (SOPs) | methodology.md | What's the step-by-step process? |
| 4 | Data Sources | data-sources.md | What tools/data are accessed? |
| 5 | Validation Rules | requirements.md (acceptance criteria) | What does "done" look like? |
| 6 | QA Test Cases | qa-testing/ | How do we verify it works? |
| 7 | Constraints | constraints.md | What are the edge cases and limitations? |

1. User Stories

What: WHO uses this / WHAT they produce / WHY it matters / HOW quality is measured / WHEN it activates

Location: services/{service-name}/requirements.md

Quality Criteria:

  • Practitioner persona specified (SEO Strategist, Account Lead, Analyst...)
  • Deliverable type named (content audit, QBR outline, keyword analysis)
  • Success metrics defined (methodology compliance, client approval rate)
  • Activation triggers listed (keywords, phrases, intent patterns)

Example:

**WHO**: SEO Strategists and Content Managers  
**WHAT**: 9-step prioritized content audit report  
**WHY**: Diagnose content gaps and prioritize fixes for client  
**HOW**: Follows 9-step methodology, includes data sources for all findings  
**WHEN**: User says "content audit", "audit page", "analyze content"


2. Golden Examples

What: 2-3 high-quality deliverables showing what "excellent" looks like — with inputs, outputs, AND feedback

Location: services/{service-name}/golden-examples/

Quality Criteria (from Lydia):

  • Context: Client brief, industry, challenge (anonymized)
  • Inputs: What data/tools were used (data sources, parameters)
  • Outputs: Complete final deliverable (not just excerpts)
  • Feedback: Team lead feedback, what made it excellent, iterations

Example Structure:

golden-examples/
├── example-1-trex-ecommerce/
│   ├── context.md         # E-commerce, outdoor gear, traffic decline
│   ├── inputs/
│   │   ├── gsc-export.csv
│   │   └── ranking-data.xlsx
│   ├── outputs/
│   │   └── trex-content-audit-final.docx
│   └── feedback.md        # "All 47 findings had data sources cited..."

Why feedback matters: "Part of my work was dropping shit into Claude and being like, how did a user get from this project brief to this deliverable? The whole middle part is a gray area." — Lydia


3. Methodology (SOPs)

What: Step-by-step process, decision rules — defined at division level, not SME level

Location: services/{service-name}/methodology.md

Quality Criteria:

  • Actionable steps (not theoretical concepts)
  • Decision points documented ("If X, then Y")
  • Validation gates included (quality checks at each stage)
  • Edge cases handled ("What if data unavailable?")
  • Division consensus (not individual SME preferences)

Key Insight: "Methodology should be defined at the div level, not at the SME level." — Lydia


4. Data Sources

What: Schema docs, tool references, API capabilities, field definitions

Location: services/{service-name}/data-sources.md

Quality Criteria:

  • Tools listed (what does the practitioner access?)
  • Field definitions (name, type, description, example values)
  • Sample queries provided (SQL, API calls)
  • Access requirements documented (permissions, authentication)
  • Data freshness noted (daily, real-time, 2-3 day lag)

Key Insight: "What tools do you go to? I go to Google Ads, I go to here... okay, well then we know these are the three tools." — Lydia


5. Validation Rules

What: Acceptance criteria, quality gates, "done" definitions

Location: services/{service-name}/requirements.md (Acceptance Criteria section)

Quality Criteria:

  • Measurable criteria (not subjective "looks good")
  • Automatable when possible (LSP diagnostics, build pass/fail)
  • Clear pass/fail thresholds
  • Documents who approves (auto-ship vs peer review vs expert approval)

Example:

## Acceptance Criteria

For this service to be considered complete, the agent MUST:
1. [ ] Follow all 9 methodology steps
2. [ ] Include data source citation for every finding
3. [ ] Prioritize recommendations by impact × effort
4. [ ] Generate output in standard template format


6. QA Test Cases

What: Sample inputs + expected outputs for testing skills/commands

Location: services/{service-name}/qa-testing/

Quality Criteria:

  • Real client scenarios (anonymized)
  • Complete inputs provided (data snapshots, parameters)
  • Expected outputs documented (not just "should work")
  • Edge cases included (missing data, errors, boundary conditions)

7. Constraints & Edge Cases

What: Known limitations, error conditions, fallback behaviors

Location: services/{service-name}/constraints.md

Quality Criteria:

  • Documents "what if X fails" scenarios
  • Provides user-facing guidance (not just internal notes)
  • Specifies graceful degradation path
  • Notes when manual intervention required

Completeness Checklist

Use this to validate any service folder. Minimum viable: User Stories + Methodology + 2 Golden Examples + Data Sources.

## Completeness Check: {Service Name}

| Category | Status | Notes |
|----------|--------|-------|
| User Stories | ⬜/✅ | WHO/WHAT/WHY/HOW/WHEN documented |
| Golden Example 1 | ⬜/✅ | Context + Inputs + Outputs + Feedback |
| Golden Example 2 | ⬜/✅ | Context + Inputs + Outputs + Feedback |
| Golden Example 3 | ⬜/✅ | (Optional but recommended) |
| Methodology | ⬜/✅ | Step-by-step, decision rules, div-level |
| Data Sources | ⬜/✅ | Tools, schemas, sample queries |
| Validation Rules | ⬜/✅ | Acceptance criteria measurable |
| QA Test Cases | ⬜/✅ | Input + expected output |
| Constraints | ⬜/✅ | Edge cases, fallback behaviors |

**Build Readiness**: {🔴 Missing Core | 🟡 Partial | 🟢 Ready}

Migration from Current Structure

Current references/by-division/{division}/build-specs/[DIV] Service Name/ folders contain raw materials. To standardize:

| Current Location | New Location | Action |
|------------------|--------------|--------|
| [DIV] Service Name/*.mp4, *.pptx | services/{name}/source-materials/videos/ | Move (preserve) |
| NinjaCat Agent Files/INSTRUCTIONS.md | services/{name}/methodology.md | Extract key content |
| NinjaCat Agent Files/*.md | services/{name}/source-materials/ninjacat/ | Move (preserve) |
| QA Testing/*.md | services/{name}/golden-examples/ or qa-testing/ | Split by purpose |
| Custom Actions Code/*.py | services/{name}/source-materials/code/ | Move (preserve) |
| Guru Cards, Training Outlines | services/{name}/source-materials/training/ | Move (preserve) |

Migration Workflow:

  1. Create services/{service-name}/ folder with README.md from template
  2. Move source materials to source-materials/
  3. Extract methodology from NinjaCat instructions
  4. Organize golden examples with context + inputs + outputs + feedback
  5. Fill README.md checklist — identify gaps
  6. Gather missing items from SMEs
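Step 2 of the workflow (moving source materials) can be sketched as a small helper. This is a hedged example: the suffix-to-subfolder mapping below is an assumption loosely based on the migration table (the table routes NinjaCat `.md` files to `ninjacat/`; here unmapped suffixes fall back to a hypothetical `misc/` bucket).

```python
import shutil
from pathlib import Path

# Illustrative mapping from file suffix to source-materials/ subfolder.
# Not an official spec — adjust to match the migration table.
DEST_BY_SUFFIX = {
    ".mp4": "videos",
    ".pptx": "videos",
    ".py": "code",
}

def move_source_materials(old_dir: str, service_dir: str) -> list[str]:
    """Move files from an old build-specs folder into source-materials/.

    Returns the list of destination paths, preserving original file names.
    """
    moved = []
    for item in Path(old_dir).iterdir():
        if not item.is_file():
            continue
        sub = DEST_BY_SUFFIX.get(item.suffix.lower(), "misc")
        dest = Path(service_dir) / "source-materials" / sub
        dest.mkdir(parents=True, exist_ok=True)
        target = dest / item.name
        shutil.move(str(item), str(target))
        moved.append(str(target))
    return moved
```

Because the files are moved (not copied), run this only after the new service folder is the agreed home for the originals.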

Agent Validation

An agent should validate completeness and tell you what's missing:

## /validate-requirements {service-path}

**Output**:
- ✅ User Stories: Complete (requirements.md exists, has WHO/WHAT/WHY)
- ⬜ Golden Example 1: Missing feedback.md
- ✅ Methodology: Complete (methodology.md has decision rules)
- ⬜ Data Sources: Incomplete (no sample queries)
- ...

**Missing Items**:
1. Golden Example 1 needs `feedback.md` with team lead feedback
2. Data Sources needs sample SQL queries for BigQuery tables

**Build Readiness**: 🟡 Partial (65% complete)
**Recommendation**: Gather team lead feedback for golden examples before building
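A first-pass version of this check needs nothing more than file-existence tests against the service-folder layout. The sketch below assumes the template's file names; the 🟢/🟡/🔴 thresholds are illustrative, not official, and a real validator would also inspect file contents (e.g. that requirements.md actually has WHO/WHAT/WHY).

```python
from pathlib import Path

# Required artifacts per category, matching the service-folder template.
REQUIRED = {
    "User Stories": "requirements.md",
    "Methodology": "methodology.md",
    "Data Sources": "data-sources.md",
    "Constraints": "constraints.md",
    "QA Test Cases": "qa-testing",
    "Golden Examples": "golden-examples",
}

def validate(service_path: str) -> dict:
    """Return per-category existence flags plus a readiness label."""
    root = Path(service_path)
    status = {name: (root / rel).exists() for name, rel in REQUIRED.items()}
    done = sum(status.values()) / len(status)
    status["readiness"] = (
        "🟢 Ready" if done == 1.0
        else "🟡 Partial" if done >= 0.5
        else "🔴 Missing Core"
    )
    return status
```

The returned dict maps directly onto the Completeness Check table: each `False` is a ⬜ row and a line in the Missing Items list.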

Quick Reference: Category → File Mapping

| Category | File | What Goes There |
|----------|------|-----------------|
| User Stories | requirements.md | WHO/WHAT/WHY/HOW/WHEN, acceptance criteria |
| Golden Examples | golden-examples/{name}/ | context.md, inputs/, outputs/, feedback.md |
| Methodology | methodology.md | Steps, decision rules, validation gates |
| Data Sources | data-sources.md | Tools, schemas, sample queries |
| Validation Rules | requirements.md | Acceptance criteria section |
| QA Test Cases | qa-testing/ | test-case-1.md, test-case-2.md... |
| Constraints | constraints.md | Edge cases, fallbacks, error messages |
| Source Materials | source-materials/ | Original files (preserved, not modified) |


Last updated: 2026-01-30