
Requirements-Driven Design

Every successful agent starts with clear requirements. This guide documents the pattern Seer uses to translate business needs into well-structured agents.


The Requirements Template

Before writing any agent code, document these four elements:

| Element | Question | Example |
| --- | --- | --- |
| Data Requirements | What data sources does this agent need? | SeerSignals SERP Snapshots, BigQuery organic rankings |
| Questions That Must Be Answered | What specific questions should the agent answer? | "How many keywords do we rank for in top 10?" |
| Actions That Must Happen | What does the agent actually do? | Generate prioritized list, export to Wrike |
| Output Format | What does the deliverable look like? | Markdown outline with tables, exported as Google Doc |

This structure comes from the Executive Tracker Requirements and has proven effective across all divisions.
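
For teams that track requirements in code as well as in docs, the four elements can also be captured in a small structured record and validated before development starts. A minimal sketch, assuming a Python toolchain; the `AgentRequirements` class and its fields are illustrative, not part of any existing Seer library:

```python
from dataclasses import dataclass, field


@dataclass
class AgentRequirements:
    """Illustrative container for the four required elements."""
    data_requirements: list[str] = field(default_factory=list)
    questions: dict[str, list[str]] = field(default_factory=dict)  # theme -> questions
    actions: list[str] = field(default_factory=list)
    output_format: str = ""


# A trimmed version of the Search Landscape Analysis example in the next section
search_landscape = AgentRequirements(
    data_requirements=[
        "SeerSignals SERP Snapshots table",
        "Client/project/date selection context",
    ],
    questions={
        "Data Foundation": ["How many search terms are included in this analysis?"],
        "Client Performance": ["How many target keywords rank in the top 10?"],
    },
    actions=["Answer questions in chat", "Export outline for review"],
    output_format="Markdown outline with tables, exported as Google Doc",
)
```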


Real Example: Search Landscape Analysis

Data Requirements

- SERP Snapshots table from SeerSignals
- Client/project/date selection context

Questions That Must Be Answered

Organized by theme:

Data Foundation

- How many search terms are included in this analysis?
- What's the total search volume across all tracked terms?
- What are the tracking settings (engines, devices, locations, languages)?
- How many competing domains are ranking on page 1? How many URLs?

Client Performance

- How many of our target keywords do we rank for in the top 10?
- What's our current organic page 1 coverage percentage?
- How many branded vs. non-branded terms are we tracking?
- What's our page 1 coverage for branded vs. non-branded terms?

Competitive Landscape

- Who are our top organic competitors by page 1 coverage?
- What types of sites are ranking (categorization)?
- What can we infer about user intent based on those categories?

SERP Landscape

- What SERP features appear most frequently?
- What can we infer about desired content based on those features?
- What is our ownership of prevalent SERP features?

Strategic Focus Areas

- Which themes (ngrams) offer the best effort vs. impact opportunities?
- Which tags offer the best effort vs. impact opportunities?

Actions That Must Happen

Questions are answered in the chat and then exported as an outline for review and deeper analysis.

Output Format

Structured outline with:

- Section headers matching the question themes
- Data tables for quantitative answers
- Bullet points for qualitative insights
- Priority recommendations highlighted


Pattern: From Requirements to Agent Structure

Step 1: Map Questions to MCP Queries

Each "question that must be answered" maps to one or more data queries:

| Question | Data Source | Query Type |
| --- | --- | --- |
| "How many keywords in top 10?" | BigQuery | Aggregate count with position filter |
| "Who are top competitors?" | SeerSignals SERP | Group by domain, count page 1 appearances |
| "What SERP features appear?" | DataForSEO | Feature frequency analysis |

Step 2: Group into Workflow Stages

Questions naturally cluster into workflow stages:

```
Stage 1: Data Foundation
  → Set context (client, project, date range)
  → Validate data availability
  → Surface key metrics

Stage 2: Analysis
  → Client performance
  → Competitive landscape
  → SERP features

Stage 3: Synthesis
  → Strategic opportunities
  → Prioritized recommendations
  → Next steps
```
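
The same grouping can be written down as plain data so the stage order and review gates stay explicit in the agent's supporting scripts. A sketch only; the structure below mirrors the diagram above and is not a required format:

```python
# Ordered workflow stages: each stage lists the question themes it covers and
# whether a human review gate closes the stage before the agent continues.
WORKFLOW_STAGES = [
    {
        "name": "Data Foundation",
        "themes": ["Data Foundation"],
        "review_gate": True,   # confirm scope and settings before analysis
    },
    {
        "name": "Analysis",
        "themes": ["Client Performance", "Competitive Landscape", "SERP Landscape"],
        "review_gate": False,
    },
    {
        "name": "Synthesis",
        "themes": ["Strategic Focus Areas"],
        "review_gate": True,   # human reviews recommendations before export
    },
]
```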

Step 3: Define Output Sections

Each workflow stage produces output sections:

```markdown
## Executive Summary
- Key findings (3-5 bullets)
- Immediate opportunities
- Strategic recommendations

## Data Foundation
- Analysis scope and settings
- Volume and coverage metrics

## Client Performance
- Ranking distribution
- Brand vs non-brand coverage

## Competitive Analysis
- Top competitors table
- Site type breakdown

## SERP Landscape
- Feature prevalence
- Ownership opportunities

## Recommendations
- Prioritized by effort/impact
- Actionable next steps
```
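
If the deliverable is assembled programmatically, the section list above can drive the markdown generation directly. A rough sketch, assuming the per-section answers have already been collected; the function name and input shape are illustrative:

```python
def render_outline(sections: dict[str, list[str]]) -> str:
    """Render {section title: bullet lines} into a markdown outline."""
    parts: list[str] = []
    for title, bullets in sections.items():
        parts.append(f"## {title}")
        parts.extend(f"- {line}" for line in bullets)
        parts.append("")  # blank line between sections
    return "\n".join(parts)


outline = render_outline({
    "Executive Summary": ["3-5 key findings", "Immediate opportunities"],
    "Recommendations": ["Prioritized by effort/impact", "Actionable next steps"],
})
```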

Requirements Checklist for New Agents

Before starting development, verify you have:

Data Requirements

  • All required MCP servers identified
  • Specific tables/datasets documented
  • Authentication/access confirmed
  • Fallback behavior defined (what if data unavailable?)

Questions That Must Be Answered

  • Questions grouped by theme/section
  • Each question is specific and testable
  • Questions align with deliverable purpose
  • No ambiguous or open-ended questions

Actions That Must Happen

  • Clear workflow stages defined
  • Human touchpoints identified (review gates)
  • Export/output mechanism specified
  • Error handling documented

Output Format

  • Section structure defined
  • Table formats specified
  • Markdown conventions documented
  • Export workflow clear (markdown → Google Doc, etc.)
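
Parts of this checklist can be automated when requirements are kept in a structured record like the `AgentRequirements` sketch earlier. A hypothetical validation pass; the checks mirror the list above and are deliberately incomplete:

```python
def checklist_gaps(req: AgentRequirements) -> list[str]:
    """Return human-readable gaps against the checklist (illustrative checks only)."""
    gaps: list[str] = []
    if not req.data_requirements:
        gaps.append("No data sources documented")
    if not req.questions:
        gaps.append("No questions grouped by theme")
    elif any("?" not in q for qs in req.questions.values() for q in qs):
        gaps.append("Some entries are not phrased as specific questions")
    if not req.actions:
        gaps.append("No workflow actions defined")
    if not req.output_format:
        gaps.append("Output format not specified")
    return gaps
```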

Division-Specific Requirements Patterns

SEO Division

Common Data Requirements:

- SeerSignals SERP Snapshots
- BigQuery organic rankings
- DataForSEO keyword data
- Google Search Console metrics

Common Question Themes:

- Ranking performance
- Competitive landscape
- Content opportunities
- Technical issues

Common Output Formats:

- Prioritized page lists
- Keyword opportunity tables
- Content outlines
- Audit checklists

PDM Division

Common Data Requirements:

- Google Ads API
- Meta Ads API
- AdClarity competitive data
- SeerSignals Paid Media Daily

Common Question Themes:

- Investment comparison
- Creative performance
- Keyword efficiency
- Budget pacing

Common Output Formats:

- Performance dashboards
- Creative analysis summaries
- Keyword recommendations
- Budget allocation tables

Client Services Division

Common Data Requirements:

- SeerSignals (both organic and paid)
- Previous deliverables (PDFs)
- Guru knowledge base
- Client call transcripts

Common Question Themes:

- Period-over-period performance
- Industry benchmarking
- Strategic alignment
- Action item tracking

Common Output Formats:

- QBR outlines
- Recommendation tables
- Performance summaries
- Strategic roadmaps


Anti-Patterns to Avoid

❌ Vague Questions

Bad: "How is the client doing?"

Good: "What is the month-over-month % change in organic page 1 coverage?"

❌ Missing Data Source

Bad: "Show competitor spend"

Good: "Using AdClarity Share of Voice data, what is our spend relative to top 3 competitors?"

❌ Undefined Output

Bad: "Provide recommendations"

Good: "Generate a table with columns: Recommendation, Effort (Low/Med/High), Impact (Low/Med/High), Priority (1-10), Timeline"

❌ No Fallback Behavior

Bad: Assumes all data sources are always available

Good: "If SeerSignals data unavailable, prompt user for manual keyword list input"


Connecting to Agent Definition

Once requirements are documented, they map directly to agent sections:

| Requirement Element | Agent Section |
| --- | --- |
| Data Requirements | `## Input Requirements` + MCP dependencies in frontmatter |
| Questions That Must Be Answered | `## Core Capabilities` + `## Process & Methodology` |
| Actions That Must Happen | `## Process & Methodology` workflow |
| Output Format | `## Output Format` + example templates |

See Plugin Authoring for the full agent definition structure.



Last updated: January 2026