Understanding Skills

This guide explains what skills are, why they matter, and how they differ from other features in the agent system.


What is a Skill?

A skill is a behavioral module that the agent loads automatically based on context. Think of skills as mental models or expertise areas that shape how the agent thinks and responds.

Key characteristics:

  • Automatic: The agent decides when to apply them (you don't invoke skills manually)
  • Contextual: Loaded when relevant keywords, patterns, or file types are detected
  • Modular: Each skill is self-contained with clear boundaries
  • Token-efficient: Only loaded when needed, unlike CLAUDE.md which is always present

Skills vs Other Features

Understanding when to use skills versus other features is crucial for builders:

Skills vs Commands

| Aspect | Skill | Command |
|---|---|---|
| Triggered by | Agent (automatic) | User (manual) |
| Purpose | Shape behavior | Execute workflow |
| Example | "Use Seer writing style" | `/content-audit` |
| When loaded | Context-driven | On invocation |

Analogy: Skills are like an expert's instincts. Commands are like specific tasks you ask them to do.

Skills vs CLAUDE.md

| Aspect | Skill | CLAUDE.md |
|---|---|---|
| Loaded | On-demand | Always |
| Size | Modular (< 500 lines) | Can be large |
| Scope | Specific expertise | Project-wide context |
| Token cost | Low (conditional) | High (always present) |

Analogy: CLAUDE.md is what you always know about a project. Skills are expertise you tap into when relevant.

Skills vs Hooks

| Aspect | Skill | Hook |
|---|---|---|
| Triggered by | Task context | Lifecycle events |
| Purpose | Behavioral patterns | Automatic actions |
| Example | Data analysis methods | Run linter after edit |
| Control | Agent discretion | Always fires on event |

Analogy: Hooks are reflexes (automatic responses to events). Skills are learned expertise (applied thoughtfully).


How Skills Get Activated

The skill activation flow uses a three-stage process:

```mermaid
flowchart TB
    subgraph STAGE1["1️⃣ USER PROMPT ARRIVES"]
        P["'Write a content audit report for this page'"]
    end

    subgraph STAGE2["2️⃣ UserPromptSubmit HOOK FIRES"]
        H["Hook scans all installed plugins"]
        S1["skill-rules-fragment.json"]
        S2["Keywords: audit, report, content"]
        S3["Patterns: (write|create).*(report)"]
        H --> S1 & S2 & S3
    end

    subgraph STAGE3["3️⃣ ACTIVATION SUGGESTION INJECTED"]
        A["'Consider using: writing-standards, seo-methods'"]
        D{"Agent decides<br/>relevance"}
        L["Load skills"]
        SK["Skip"]
        A --> D
        D -->|"Relevant"| L
        D -->|"Not needed"| SK
    end

    STAGE1 --> STAGE2 --> STAGE3

    style STAGE1 fill:#5050BC,color:#fff
    style STAGE2 fill:#5050BC,color:#fff
    style STAGE3 fill:#5050BC,color:#fff
    style L fill:#54DEDB,color:#343456
    style SK fill:#343456,color:#fff
```

| Stage | What Happens |
|---|---|
| 1. Prompt Arrives | User types a request with potential skill-relevant keywords |
| 2. Hook Fires | UserPromptSubmit scans plugins for matching activation rules |
| 3. Suggestion Injected | Matching skills are suggested; the agent decides whether to load them |
### Activation Rules

Skills declare their activation rules in `skill-rules-fragment.json`:

```json
{
  "writing-standards": {
    "type": "domain",
    "enforcement": "suggest",
    "priority": "high",
    "promptTriggers": {
      "keywords": ["deliverable", "client", "report", "audit"],
      "intentPatterns": ["(write|create|draft).*(report|deliverable)"]
    },
    "fileTriggers": {
      "pathPatterns": ["**/deliverables/**", "**/reports/**"]
    }
  }
}
```

| Field | Purpose |
|---|---|
| `type` | Category of skill (domain, reasoning, etc.) |
| `enforcement` | How strongly to apply (`suggest`, `require`) |
| `priority` | Loading order when multiple skills match |
| `promptTriggers` | Keywords and regex patterns in user prompts |
| `fileTriggers` | File paths that trigger activation |
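The trigger evaluation can be sketched in a few lines of Python. This is a hypothetical matcher illustrating the keyword and intent-pattern checks, not the actual hook implementation:

```python
import json
import re

# A single skill-rules fragment, taken from the example above.
FRAGMENT = json.loads("""
{
  "writing-standards": {
    "promptTriggers": {
      "keywords": ["deliverable", "client", "report", "audit"],
      "intentPatterns": ["(write|create|draft).*(report|deliverable)"]
    }
  }
}
""")

def matching_skills(prompt: str, fragments: dict) -> list:
    """Return skill names whose keywords or intent patterns match the prompt."""
    prompt_lower = prompt.lower()
    matches = []
    for name, rules in fragments.items():
        triggers = rules.get("promptTriggers", {})
        keyword_hit = any(k in prompt_lower for k in triggers.get("keywords", []))
        pattern_hit = any(re.search(p, prompt_lower)
                          for p in triggers.get("intentPatterns", []))
        if keyword_hit or pattern_hit:
            matches.append(name)
    return matches

print(matching_skills("Write a content audit report for this page", FRAGMENT))
# → ['writing-standards']
```

In this sketch keyword and pattern hits are OR'd together; the real hook may weight or combine them differently.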

Skill Categories

Shared Skills (core-dependencies)

Located in `plugins/core-dependencies/skills/`, these apply across all division plugins:

| Skill | Purpose |
|---|---|
| `writing-standards` | Seer voice and tone for client deliverables |
| `quality-standards` | Data quality checks, QA gates |
| `prompt-engineering` | Prompt optimization patterns |

Domain Skills (Division Plugins)

Located in `plugins/divisions/{division}/skills/`, these contain specialized expertise:

| Division | Skill | Purpose |
|---|---|---|
| SEO | `seo-methods` | SEO analysis patterns, scoring |
| Innovation | `skill-seekers` | Creating new skills |
| Innovation | `mcp-expertise` | MCP server building |
| Operations | `wrike-time-tracking` | Time entry patterns |

Anatomy of a Skill

Every skill follows a standard structure:

```
skills/{skill-name}/
├── SKILL.md                     # Main skill file (< 500 lines)
├── skill-rules-fragment.json    # Activation rules
└── resources/                   # Deep-dive content
    ├── data-analysis.md         # Specific methodology
    ├── strategy.md              # Specific methodology
    └── examples/                # Usage examples
```

SKILL.md Format

```markdown
---
name: seo-methods
description: "SEO-specific methodologies and scoring"
type: skill
auto-load: true
version: 1.0.0
---

# SEO Methods

When working on SEO tasks:

## Data Analysis Approach
- Prioritize data-driven insights over assumptions
- Cross-reference multiple data sources
- Quantify impact with specific metrics

## Strategic Thinking
- Consider competitive landscape
- Map to business objectives
- Provide actionable recommendations

## For detailed methodologies:
- See resources/data-analysis.md
- See resources/strategy.md
```

Progressive Disclosure

Skills use progressive disclosure to stay token-efficient:

  1. SKILL.md (always loaded): Quick reference, < 500 lines
  2. resources/ (loaded on demand): Deep methodologies, examples

The agent loads detailed resources only when the task requires them.
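A minimal sketch of this two-tier loading, assuming the directory layout shown above (the `load_skill` helper is illustrative, not part of the toolkit):

```python
from pathlib import Path

def load_skill(skill_dir: Path, needed_resources=()) -> str:
    """Concatenate SKILL.md with only the requested resource files.

    SKILL.md is always loaded; resources/ files load on demand,
    which is what keeps the default token cost low.
    """
    parts = [(skill_dir / "SKILL.md").read_text()]
    for name in needed_resources:
        resource = skill_dir / "resources" / name
        if resource.exists():  # skip resources the skill does not ship
            parts.append(resource.read_text())
    return "\n\n".join(parts)
```

A task that only needs the quick reference pays for `SKILL.md` alone; a deep-dive task adds exactly the resource files it names.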

CRITICAL: Only skills support progressive disclosure. Commands are standalone markdown files and cannot load associated resources. If a command needs substantial reference material (methodologies, rules, more than ~50 lines), that material must live in a skill, and the command references the skill via the frontmatter field `skills: [skill-name]`.

Anti-pattern: Creating `commands/my-command/references/` folders — commands cannot load these.

Correct pattern: Create a dedicated skill, put the resources there, and reference it from the command.


Why Skills Matter

Token Efficiency

Without skills, you'd put everything in CLAUDE.md:

```
CLAUDE.md: 5000 lines (always loaded)
Token cost: HIGH (every request)
```

With skills:

```
CLAUDE.md: 500 lines (core project info)
Skills: 400 lines each (loaded when needed)
Token cost: LOW (conditional loading)
```
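Using the line counts above and an assumed average of roughly 10 tokens per line of markdown, the difference is easy to quantify:

```python
# Back-of-envelope sketch of the comparison above.
# The 10 tokens/line figure is an assumption, not a measurement.
TOKENS_PER_LINE = 10

# Everything in CLAUDE.md, always loaded:
monolith = 5000 * TOKENS_PER_LINE                    # paid on every request

# Lean CLAUDE.md, plus one 400-line skill loaded only when it fires:
lean_base = 500 * TOKENS_PER_LINE                    # paid on every request
with_one_skill = lean_base + 400 * TOKENS_PER_LINE   # paid only when relevant

print(monolith, lean_base, with_one_skill)
# → 50000 5000 9000
```

Even in the worst case (a skill fires on every request), the conditional layout stays well under a fifth of the monolithic cost.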

Separation of Concerns

Skills enforce clean boundaries:

  • writing-standards = HOW to write
  • seo-methods = WHAT to analyze
  • quality-standards = HOW to verify

Each can evolve independently.

Team Consistency

When everyone uses the same skills, outputs are consistent:

  • Same voice and tone across deliverables
  • Same analysis patterns across audits
  • Same quality standards across reviews

Building Your Own Skills

When to Create a Skill

Create a skill when you have:

  • Repeatable behavioral patterns ("always do X when Y")
  • Domain expertise that applies to multiple commands
  • Standards that should be consistent across outputs

When NOT to Create a Skill

Don't create a skill for:

  • One-off tasks (use a command instead)
  • Project-specific context (use CLAUDE.md)
  • Automatic reactions to events (use hooks)

Skill Checklist

  • Stays under 500 lines
  • Has clear activation rules
  • Doesn't duplicate shared standards
  • Uses resources/ for deep content
  • Tested with `/core:doctor activation-test`
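The first two checks are easy to automate. A hypothetical validator sketch (the `check_skill` helper is not a real toolkit command):

```python
import json
from pathlib import Path

def check_skill(skill_dir: Path) -> list:
    """Return checklist violations for one skill directory."""
    problems = []
    skill_md = skill_dir / "SKILL.md"
    if not skill_md.exists():
        problems.append("missing SKILL.md")
    elif len(skill_md.read_text().splitlines()) > 500:
        problems.append("SKILL.md exceeds 500 lines")
    fragment = skill_dir / "skill-rules-fragment.json"
    if not fragment.exists():
        problems.append("missing skill-rules-fragment.json")
    else:
        try:
            json.loads(fragment.read_text())
        except json.JSONDecodeError:
            problems.append("skill-rules-fragment.json is not valid JSON")
    return problems
```

The remaining items (duplication of shared standards, activation behavior) still need human review or `/core:doctor`.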

Skill Development Prerequisites

Before building any skill, gather these foundational materials.

Complete requirements checklist: See Requirements Stack for all 7 required input categories, quality criteria, and canonical storage locations.

User Stories

Every skill should solve real practitioner problems. Document:

| Question | Example Answer |
|---|---|
| WHO uses this skill? | SEO Strategists |
| WHAT do they produce? | Content audit reports |
| WHY does it matter? | Diagnose gaps, prioritize fixes |
| HOW is quality measured? | 9-step methodology compliance |
| WHEN should it activate? | "content audit", "optimize page" |

Where to find: `references/by-division/{division}/review-booklet.md`

Golden Examples

Collect 2-3 high-quality examples of the desired output:

  • What does "excellent" look like?
  • What patterns are consistent across examples?
  • What mistakes should be avoided?

Where to find: `references/by-division/{division}/examples/`

Content Engineering Process

Skills are context-engineered learning packages. Follow this process:

```
1. REQUIREMENTS CAPTURE
   ├─ User stories (division review booklets)
   ├─ Practitioner interviews
   └─ Example deliverables (golden examples)

2. DOMAIN KNOWLEDGE EXTRACTION
   ├─ Identify reusable patterns
   ├─ Document methodologies
   └─ Create quick reference tables/rules

3. STRUCTURE & ORGANIZATION
   ├─ SKILL.md (< 500 lines, quick reference)
   ├─ resources/ (detailed methodologies)
   └─ skill-rules-fragment.json (activation)
```

Context engineering = organizing the agent's environment to steer behavior without overwhelming the context window. Skills with resources/ folders ARE context engineering.


Common Pitfalls

"My skill isn't activating"

Check: Is skill-rules-fragment.json present and valid?

Fix: Verify keywords and patterns match your test prompts.

"Skill loads but doesn't affect output"

Check: Is the skill content actionable?

Fix: Use imperative language ("Do X", "Apply Y") not passive descriptions.

"Skill conflicts with another skill"

Check: Are you duplicating shared standards?

Fix: Move shared content to core-dependencies. Keep division skills domain-specific.

"Skill is too big and slow"

Check: Is SKILL.md over 500 lines?

Fix: Move detailed content to resources/. Keep SKILL.md as quick reference only.


Anthropic Best Practices Alignment

This section documents how Seer skills align with Anthropic's official skill authoring best practices.

Core Principles We Follow

| Principle | Anthropic Guidance | Seer Implementation |
|---|---|---|
| Concise is key | Only add context Claude doesn't already have | Keep SKILL.md < 500 lines; use `resources/` for details |
| Degrees of freedom | Match specificity to task fragility | High freedom for analysis, low freedom for scoring |
| Progressive disclosure | SKILL.md as TOC, load details on demand | All skills use the `resources/` subdirectory pattern |
| One level deep | References only from SKILL.md, not nested | SKILL.md links to `resources/*.md` directly |

Required Skill Structure

Based on Anthropic best practices:

```
skills/{skill-name}/
├── SKILL.md                     # < 500 lines, quick reference
├── skill-rules-fragment.json    # Activation triggers
└── resources/                   # Deep-dive content (loaded on demand)
    ├── {topic-a}.md
    └── {topic-b}.md
```

Writing Effective Descriptions

Always write in third person. The description is injected into the system prompt.

```yaml
# Good:
description: Applies SEO analysis patterns and scoring methodologies. Use when auditing pages, analyzing rankings, or creating content strategies.

# Bad:
description: I help you with SEO analysis
description: You can use this for SEO work
```
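The third-person rule lends itself to a simple lint. A sketch, assuming a naive first/second-person word list (real descriptions may need a smarter check):

```python
import re

# Hypothetical lint: flag descriptions that slip into first or second person.
FIRST_SECOND_PERSON = re.compile(r"\b(I|me|my|you|your)\b", re.IGNORECASE)

def check_description(description: str) -> bool:
    """Return True if the description avoids first/second person."""
    return FIRST_SECOND_PERSON.search(description) is None

print(check_description("Applies SEO analysis patterns and scoring methodologies."))
# → True
print(check_description("I help you with SEO analysis"))
# → False
```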

Progressive Disclosure Pattern

SKILL.md serves as an overview that points Claude to detailed materials:

```markdown
# SEO Methods

## Quick Reference
[Essential patterns here - < 500 lines]

## Detailed Methodologies
- **Data Analysis**: See [resources/data-analysis.md](resources/data-analysis.md)
- **Strategy**: See [resources/strategy.md](resources/strategy.md)
```

Claude loads resources/*.md only when the task requires deep detail.

Table of Contents Rule

For resource files over 100 lines, include a TOC at the top:

```markdown
# Data Analysis Methods

## Contents
- Metrics selection criteria
- Data source prioritization
- Cross-referencing patterns
- Quantification standards
- Confidence thresholds

## Metrics selection criteria
...
```

This ensures Claude can see the full scope even with partial reads.
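The rule can be checked mechanically. A sketch with assumed helper names (the 100-line threshold comes from the rule above):

```python
def needs_toc(markdown: str, threshold: int = 100) -> bool:
    """Resource files over the line threshold should carry a TOC."""
    return len(markdown.splitlines()) > threshold

def has_toc(markdown: str) -> bool:
    """Detect the '## Contents' section used in this guide's examples."""
    return any(line.strip() == "## Contents" for line in markdown.splitlines())

long_doc = "# Data Analysis Methods\n" + "...\n" * 120
print(needs_toc(long_doc) and not has_toc(long_doc))
# → True  (this file should be flagged)
```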

MCP Tool References

When referencing MCP tools, use the format `Server:tool_name`:

```markdown
## BigQuery Integration

Use `bigquery:run_query` for data retrieval:
- Filter to client-specific datasets
- Include date range parameters
- Apply row-level security
```

Skill vs Command Decision Tree

Use this flowchart when deciding whether something should be a skill or command:

```
Is it triggered by the user explicitly?
├── YES → Consider a COMMAND
│   └── Does it produce a specific deliverable?
│       ├── YES → Definitely a COMMAND (/content-audit)
│       └── NO → May be a SKILL used by commands
└── NO → Consider a SKILL
    └── Is it reused across multiple contexts/divisions?
        ├── YES → SHARED SKILL in core-dependencies
        │   └── Examples: writing-standards, branding, presenting
        └── NO → DOMAIN SKILL in division plugin
            └── Examples: seo-methods, wrike-time-tracking
```

Worked Example: Pitch Decks

Original thinking: a `/pitch-deck` command in Business Development

Problem discovered: QBRs (Client Services), campaign wrap-ups (Paid Media), and audit presentations (SEO) all need deck structures.

Solution: presenting skill in core-dependencies with deck-type resources

```
core-dependencies/skills/presenting/
├── SKILL.md                  # Deck patterns, slide structure
└── resources/
    ├── pitch-deck.md         # BD: Sales pitch structure
    ├── qbr-deck.md           # CS: QBR structure
    ├── campaign-wrap-up.md   # PDM: Campaign results
    └── audit-presentation.md # SEO/CRE: Audit findings
```

Benefit: User says "help me build my QBR section" and the skill auto-activates based on context. No explicit command needed.

See Also