📈 Metrics & Self-Improvement
BaC Principle
“A process that doesn't measure itself can't improve itself. Every KPI is a feedback signal. Every signal drives a proposal. Every proposal closes the loop.” – Business as Code Manifesto
Stage: S9 – Improvement Loop
Type: AUTO (data collection + proposals) + HUMAN (proposal approval)
Cadence: Weekly data collection; weekly metrics review; monthly strategy review
See Map of Content for full vault navigation.
🔁 Improvement Loop Architecture
```mermaid
flowchart TD
    DATA["📥 Data Collection<br/>GSC + Ahrefs + AI citation probes"]
    AGG["📊 Metric Aggregation<br/>SEO track + AEO track"]
    ANOM["🚨 Anomaly Detection<br/>Automated rules"]
    PROP["📝 Improvement Proposals<br/>Generated by MetricsAgent"]
    REVIEW["👤 Human Review<br/>~30 min/week"]
    DEPLOY["🚀 Deploy Changes<br/>Update manifest / agents / target spec"]
    DATA --> AGG --> ANOM --> PROP --> REVIEW --> DEPLOY --> DATA
```
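In code, one automated pass of this loop is just shared state piped through the stages in order; human review and deploy sit outside the automated pass. A minimal Python sketch, where every stage function is a hypothetical stand-in rather than one of the real agents:

```python
# Minimal sketch of one automated improvement-loop pass.
# All stage functions are hypothetical placeholders; real implementations
# would call the GSC / Ahrefs / LLM-probe APIs.
from typing import Callable

def run_cycle(stages: list[Callable[[dict], dict]], state: dict) -> dict:
    """Run one pass of the loop: each stage transforms the shared state."""
    for stage in stages:
        state = stage(state)
    return state

# Placeholder stages mirroring the flowchart nodes.
def collect(state):   return {**state, "raw": {"sessions": 1200}}
def aggregate(state): return {**state, "kpis": {"sessions": state["raw"]["sessions"]}}
def detect(state):    return {**state, "alerts": []}        # no anomalies this week
def propose(state):   return {**state, "proposals": ["IMP-..."] if state["alerts"] else []}

state = run_cycle([collect, aggregate, detect, propose], {})
# With no alerts fired, the proposal list stays empty and nothing reaches review.
```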
📊 SEO Track KPIs
Primary Dashboard
| Metric | Source | Target | Alert Threshold |
|---|---|---|---|
| Organic sessions/week | GSC | Growing 10%+ MoM | Drop > 15% WoW |
| Avg SERP position (all keywords) | GSC | Improving | Regression > 3 positions WoW |
| Avg SERP position (target clusters) | GSC | ≤ 15 | > 25 → deprioritize cluster |
| Organic CTR | GSC | ≥ 3% | < 2% → title/meta review |
| Indexed pages | GSC | Growing | Stagnation > 2 weeks |
| Referring domains | Ahrefs | Growing | Flat for 30 days |
| New backlinks/month | Ahrefs | ≥ 5 | 0 → authority building alert |
| Core Web Vitals pass rate | PageSpeed | ≥ 75% | < 60% → dev escalation |
| Pages with content score ≥ 75 | SurferSEO | ≥ 80% of indexed | < 65% → content optimization sprint |
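The WoW alert thresholds above reduce to simple ratio checks. A minimal sketch of the "Drop > 15% WoW" rule for organic sessions (function names are illustrative, not part of any agent spec):

```python
def wow_drop(current: float, previous: float) -> float:
    """Week-over-week drop as a fraction (positive = decline)."""
    if previous == 0:
        return 0.0
    return (previous - current) / previous

def sessions_alert(current: float, previous: float, threshold: float = 0.15) -> bool:
    """True when organic sessions dropped more than the alert threshold WoW."""
    return wow_drop(current, previous) > threshold

# 1,000 -> 820 sessions is an 18% WoW drop, past the 15% threshold.
assert sessions_alert(820, 1000) is True
assert sessions_alert(900, 1000) is False  # 10% drop: below threshold
```

The same helper covers the MoM variants by feeding it monthly totals instead of weekly ones.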
Funnel Metrics
| Stage | Metric | Week N | Week N-1 | Delta |
|---|---|---|---|---|
| Discovery | Impressions | – | – | – |
| Click | Organic clicks | – | – | – |
| Engagement | Avg time on page | – | – | – |
| Conversion | Goal completions | – | – | – |
🤖 AEO Track KPIs
AI Citation Dashboard
| Metric | Source | Target | Alert Threshold |
|---|---|---|---|
| AI citation rate (probed queries) | LLM probe | ≥ 20% | < 10% → AEO sprint |
| Google AI Overview appearances | GSC (AI) | ≥ 5 queries | Declining → schema review |
| Perplexity citation count/month | Manual probe | Growing | Drop > 20% MoM |
| ChatGPT citation rate | OpenAI API probe | ≥ 15% | < 5% → content gap issue |
| FAQ schema rich result rate | GSC rich results | ≥ 60% of eligible pages | < 40% → schema fix |
| Avg AEO readiness score (site-wide) | AEO agent output | ≥ 75/100 | < 60 → AEO agent re-run |
| Citation gap queries addressed | Content inventory | 100% of top 10 gaps | Any unaddressed → priority |
Weekly AI Citation Probe Protocol
```yaml
citation_probe:
  frequency: weekly
  queries_per_run: 30
  platforms:
    - chatgpt: 10 queries (OpenAI API, model: gpt-4o)
    - perplexity: 10 queries (Perplexity API)
    - claude: 5 queries (Claude API, model: claude-opus-4-6)
    - gemini: 5 queries (Gemini API)
  query_categories:
    - definitional: "What is [target concept]?"
    - instructional: "How to [target action] with AI?"
    - comparative: "What is the best tool for [task]?"
    - brand: "What is Business as Code / businessascode.co?"
  output:
    - cited: true/false
    - citation_url: if cited
    - competitor_cited_instead: domain
    - answer_gap: what the model answered that we don't cover
```
🚨 Anomaly Detection Rules

```yaml
anomaly_rules:
  - id: ANO-01
    name: traffic_drop
    condition: "organic_sessions this week < 85% of 4-week rolling average"
    action: Trigger immediate audit → check GSC for manual actions, algorithm updates, indexing drops
    notify: CMO
  - id: ANO-02
    name: ranking_regression
    condition: "avg SERP position for target clusters degrades > 5 positions WoW"
    action: Pull affected keywords; compare to competitor SERP changes; check for content updates needed
    notify: CMO
  - id: ANO-03
    name: indexing_stall
    condition: "indexed pages count unchanged for 14 days"
    action: Check sitemap submission status; check robots.txt; re-submit sitemap via GSC
    notify: CMO
  - id: ANO-04
    name: aeo_citation_drop
    condition: "AI citation rate drops > 25% MoM"
    action: Re-run AEO structuring agent on top 10 pages; check for schema validation errors
    notify: CMO
  - id: ANO-05
    name: gate_rejection_spike
    condition: "Gate #2 rejection rate > 30% of sampled pages for 2 consecutive weeks"
    action: Review content optimization engine prompt; likely quality regression
    proposal: Retune content brief template or Claude API system prompt
  - id: ANO-06
    name: backlink_stall
    condition: "zero new referring domains for 30 days"
    action: Review authority building agent output; check if outreach queue is being executed
    notify: CMO
```
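Conditions like ANO-01's are mechanical to evaluate once the weekly series exists. A sketch of the 4-week rolling-average check, where the 0.85 factor comes directly from the rule's "< 85%" condition:

```python
def ano_01_traffic_drop(weekly_sessions: list[int]) -> bool:
    """ANO-01: fire when this week's organic sessions fall below 85% of the
    rolling average of the 4 preceding weeks."""
    if len(weekly_sessions) < 5:
        return False  # not enough history to form a 4-week baseline
    *history, current = weekly_sessions[-5:]
    rolling_avg = sum(history) / 4
    return current < 0.85 * rolling_avg

# 4-week average is 1000, so the trigger line is 850 sessions.
assert ano_01_traffic_drop([980, 1020, 1000, 1000, 840]) is True   # 840 < 850
assert ano_01_traffic_drop([980, 1020, 1000, 1000, 900]) is False  # 900 >= 850
```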
📝 Improvement Proposal Format

Every proposal generated by the MetricsAgent follows this schema:

```yaml
improvement_proposal:
  proposal_id: "IMP-2026-04-001"
  generated_at: "2026-04-09"
  generated_by: MetricsAgent
  triggered_by: ANO-04  # or metric name, or human feedback from gate
  problem: >
    AI citation rate dropped from 18% to 9% over the past 3 weeks.
    Probe analysis shows Perplexity is now citing zapier.com for
    "business automation AI" queries that we previously captured.
  proposed_change:
    target: AEOStructuringAgent
    change_type: prompt_update
    description: >
      Add explicit entity relationship mapping to AEO pass:
      "Business as Code" → related to → [AI agents, process automation, BPM].
      This signals to LLMs how our content fits their knowledge graph.
  test_plan:
    method: a_b_test
    variant_a: current AEO pass (control)
    variant_b: AEO pass with entity relationship section
    metric: ai_citation_rate
    minimum_sample: 4 weeks
    success_threshold: "Citation rate returns to ≥ 18%"
  human_decision_required: true
  urgency: high
  estimated_effort: low
```
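Before a proposal reaches human review, it can be schema-checked automatically. A minimal validator sketch; the field subset, the `IMP-` id convention, and the low/medium/high urgency scale are assumptions drawn from the example proposal, not a formal spec:

```python
# Hypothetical validator for the proposal schema; field names mirror the YAML.
from dataclasses import dataclass

KNOWN_ANOMALY_IDS = {"ANO-01", "ANO-02", "ANO-03", "ANO-04", "ANO-05", "ANO-06"}

@dataclass
class ImprovementProposal:
    proposal_id: str
    triggered_by: str
    problem: str
    change_type: str
    urgency: str
    human_decision_required: bool = True

def validate(p: ImprovementProposal) -> list[str]:
    """Return a list of schema problems (empty list = valid)."""
    errors = []
    if not p.proposal_id.startswith("IMP-"):
        errors.append("proposal_id must look like IMP-YYYY-MM-NNN")
    if p.triggered_by.startswith("ANO-") and p.triggered_by not in KNOWN_ANOMALY_IDS:
        errors.append(f"unknown anomaly id: {p.triggered_by}")
    if p.urgency not in {"low", "medium", "high"}:  # assumed scale
        errors.append("urgency must be low/medium/high")
    return errors

p = ImprovementProposal("IMP-2026-04-001", "ANO-04", "AI citation rate dropped...",
                        "prompt_update", "high")
assert validate(p) == []  # the example proposal above passes
```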
🧪 A/B Test Registry

```yaml
ab_tests:
  - test_id: AB-001
    element: content_title_format
    variant_a: "What is [Topic]? Complete Guide (Year)"
    variant_b: "[Topic]: Definition, Examples & How It Works"
    metric: organic_ctr
    started: 2026-04-02
    status: running
    minimum_pages: 10
    results: pending
  - test_id: AB-002
    element: faq_question_count
    variant_a: "5 FAQ questions per page"
    variant_b: "10 FAQ questions per page"
    metric: aeo_citation_rate
    started: 2026-04-02
    status: running
    minimum_pages: 8
    results: pending
```

A/B Promotion Rules:
- Minimum 4 weeks per test
- Statistical significance: ≥ 90% confidence
- Minimum sample: ≥ 8 pages per variant
- Winning variant becomes default in the relevant agent spec
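The 90% confidence bar can be enforced with a two-proportion z-test, for example on the organic CTR of AB-001. This sketch treats clicks over impressions as binomial, which is a simplification (it ignores page-level clustering across the ≥ 8 pages per variant):

```python
import math

def z_test_two_proportions(success_a: int, n_a: int,
                           success_b: int, n_b: int) -> tuple[float, float]:
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 0.0, 1.0
    z = (p_b - p_a) / se
    # Normal CDF via erf: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

def promote_variant_b(success_a: int, n_a: int, success_b: int, n_b: int,
                      confidence: float = 0.90) -> bool:
    """Promote B only if it beats A at the registry's confidence bar."""
    z, p = z_test_two_proportions(success_a, n_a, success_b, n_b)
    return z > 0 and p < (1 - confidence)

# 400/10,000 clicks (4.0% CTR) vs 300/10,000 (3.0%) clears the bar;
# 33/1,000 vs 30/1,000 does not.
assert promote_variant_b(300, 10_000, 400, 10_000) is True
assert promote_variant_b(30, 1_000, 33, 1_000) is False
```

The same test applies to AB-002 by counting probe queries with and without a citation per variant.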
📅 Review Cadences
```yaml
review_cadences:
  weekly:
    duration: 30 min
    owner: CMO
    agenda:
      - Review SEO + AEO dashboard (5 min)
      - Check anomaly alerts (5 min)
      - Review improvement proposals from MetricsAgent (10 min)
      - Approve / reject / defer proposals (10 min)
  monthly:
    duration: 60 min
    owner: CMO
    agenda:
      - Full funnel review (10 min)
      - A/B test results review (10 min)
      - Target Definition update – add/remove keyword clusters (15 min)
      - Content calendar planning for next month (15 min)
      - Agent prompt review – any systemic quality issues (10 min)
```

🗂️ Improvement Log
```yaml
improvement_log:
  - version: 1.0.0
    date: 2026-04-02
    change: Initial process definition
    impact: Baseline established
```

📡 Agent Reporting Matrix
| Agent | Reports | Frequency |
|---|---|---|
| SiteAuditAgent | Audit report + baseline metrics | Monthly |
| KeywordResearchAgent | Keyword universe + AEO gap analysis | Monthly |
| ContentOptimizationEngine | Content quality scores + batch completion | Weekly |
| AEOStructuringAgent | AEO readiness scores + schema validation | Weekly |
| TechnicalSEOAgent | Issues resolved + CWV status | Weekly |
| AuthorityBuildingAgent | Citation rate + backlink acquisition | Weekly |
| MetricsAgent | KPI dashboard + anomaly alerts + proposals | Weekly |
🔗 Related Files
- 01-Process-Manifest – S9 stage definition + change protocol
- 02-Target-Definition – Primary optimization targets; updated by improvement proposals
- 06-AEO-Structuring-Agent – AEO readiness scores; agent prompts updated via proposals
- 05-Content-Optimization-Engine – Content quality scores; briefs updated via proposals
- 09-Human-Review-Gates – Gate feedback feeds anomaly detection
- 08-Authority-Building-Agent – Citation rates tracked here