# Human Review Gates
**BaC Principle**

> "Humans should direct the process, not execute it. Every gate is a decision point, not a work session." – Business as Code Manifesto

This file defines exactly where humans enter the SEO & AEO process and what decisions they make. Agents handle execution; humans handle judgment calls: strategy alignment, brand consistency, factual accuracy.
See Map of Content for full vault navigation.
## Gate Overview
| Gate | Stage | Input | Time Required | Decisions |
|---|---|---|---|---|
| Gate #1 – Strategy Review | After S2 | Keyword clusters + content calendar | ~20 min/month | APPROVE / REPRIORITIZE / ADD_TOPIC / REJECT_CLUSTER / FLAG_MODEL |
| Gate #2 – Content Review | After S4+S5+S6 | 10% sample of optimized pages | ~30 min/week | APPROVE / EDIT_AND_RESAMPLE / PAUSE_BATCH / FLAG_PIECE |
**BaC Result**

Total human time: ~65 min/week, covering both gates plus the improvement-loop review in 10-Metrics-and-Self-Improvement.
## Gate #1 – Strategy Review
Trigger: 04-Keyword-and-Topic-Research-Agent delivers keyword_topic_map
Stage: S3
Frequency: Monthly (or when a new topic cluster batch is ready)
Time budget: ~20 minutes
### Input Package
The agent delivers a pre-compiled strategy brief containing:
- Ranked list of keyword clusters (by score)
- Recommended content calendar (next 4 weeks)
- AEO query map (what AI models currently answer for these topics)
- Content gap summary (what's missing vs. what exists)
- Any clusters flagged as "borderline" by the agent
### Decision Framework
| What you're looking for | Decision |
|---|---|
| Clusters align with business goals and current company priorities | APPROVE → proceed to content optimization |
| Good clusters, but wrong order – some should ship sooner | REPRIORITIZE → reorder and approve |
| New topic the agent missed that is relevant to our audience | ADD_TOPIC → add to cluster list |
| Cluster doesn't match what we sell or who we serve | REJECT_CLUSTER → remove and note reason |
| Scoring model seems off – too many irrelevant suggestions | FLAG_MODEL → trigger Target Definition review |
### Approval Heuristics (20-min review guide)
1. Scan top 10 clusters by score (2 min)
   → Do these feel right? Are there obvious misfits?
2. Check content calendar sequence (3 min)
   → Is the order logical? Does the first piece establish context for later ones?
3. Review AEO query map for top 5 clusters (5 min)
   → Are these questions our audience actually asks?
   → Is our site positioned to answer them credibly?
4. Spot-check 2–3 "borderline" clusters flagged by agent (5 min)
   → Are these worth investing content effort in?
5. Make decisions and record in feedback log (5 min)
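The approval-threshold rule in the next subsection can be expressed as a small decision function. Below is a minimal sketch in Python; the function name, the return labels, and the choice to count only explicit APPROVE decisions toward the approved share are illustrative assumptions, not part of the documented process.

```python
def gate_1_decision(decisions):
    """Map per-cluster decisions to a gate-level outcome.

    Mirrors gate_1_threshold: >=70% approved -> approve the batch,
    30-70% -> revise (reorder and return for a re-run), <30% ->
    reject and flag 02-Target-Definition for revision.
    Labels and the counting rule are illustrative.
    """
    if not decisions:
        raise ValueError("no cluster decisions recorded")
    approved = sum(1 for d in decisions if d == "APPROVE") / len(decisions)
    if approved >= 0.70:
        return "APPROVE"
    if approved >= 0.30:
        return "REVISE"
    return "REJECT"
```

Whether REPRIORITIZE counts toward the approved share is a judgment call; this sketch counts only plain APPROVE.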
### Approval Threshold

```yaml
gate_1_threshold:
  approve: "≥70% of clusters approved without major changes"
  revise: "30–70% approved → reorder and return to agent for re-run"
  reject: "<30% approved → flag [[02-Target-Definition]] for revision"
```

### Feedback Format
```json
{
  "gate": "strategy_review",
  "reviewed_by": "CMO",
  "reviewed_at": "2026-04-02",
  "decisions": [
    {
      "cluster_id": "C001",
      "decision": "APPROVE",
      "priority": 1,
      "notes": ""
    },
    {
      "cluster_id": "C004",
      "decision": "REJECT_CLUSTER",
      "reason": "Too generic – not aligned with BaC positioning",
      "notes": "Consider replacing with 'AI-executable SOP' angle"
    }
  ],
  "calendar_adjustment": "Move C007 to week 1 – timely topic",
  "new_topics_added": ["AI agents for operations teams"],
  "overall_decision": "APPROVE_WITH_CHANGES"
}
```

## Gate #2 – Content Review
Trigger: 05-Content-Optimization-Engine + 06-AEO-Structuring-Agent + 07-Technical-SEO-Agent deliver approved_publish_batch
Stage: S7
Frequency: Weekly (each content batch)
Time budget: ~30 minutes
### Input Package
The agent delivers:
- 10% random sample of the content batch (or all pages with AEO score < 75)
- Per-page summary: word count, SurferSEO score, AEO score, schema status, changes made
- Flagged pages: unsourced claims, AEO score misses, schema failures
- Technical fix specs from 07-Technical-SEO-Agent (for awareness, not action)
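One way to read the sampling rule above (a 10% random sample, with low-AEO pages always sent to review) is sketched below. The page dictionaries, the `aeo_score` key, and the decision to add flagged pages on top of the random sample are assumptions about how the agent assembles the package.

```python
import random

def select_review_sample(pages, rate=0.10, aeo_floor=75, seed=None):
    """Every page below the AEO floor goes to human review; the
    remainder is sampled at `rate`, with at least one page sampled."""
    rng = random.Random(seed)
    flagged = [p for p in pages if p["aeo_score"] < aeo_floor]
    rest = [p for p in pages if p["aeo_score"] >= aeo_floor]
    # At least one page, capped by how many non-flagged pages exist.
    k = min(len(rest), max(1, round(len(pages) * rate)))
    return flagged + rng.sample(rest, k)
```

For an 8-page batch with no flagged pages this yields a single sampled page, matching the `pages_sampled` value in the example feedback record below.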
### Decision Framework
| What you're seeing | Decision |
|---|---|
| Content is accurate, brand-consistent, AEO structure intact | APPROVE → publish batch |
| Minor issues in sample – small edits needed | EDIT_AND_RESAMPLE → edit flagged pages, re-sample from batch |
| Significant quality issues in sample | PAUSE_BATCH → do not publish; return to content engine |
| One specific piece has a factual error or brand risk | FLAG_PIECE → remove that page from batch; publish rest |
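Read together with the thresholds further down, the table above implies a precedence: hallucinations and widespread critical issues dominate, an isolated critical issue flags a single piece, and minor issues trigger a re-sample. A sketch of that ordering follows; how "critical" and "minor" issues are tallied from the checklist, and the precedence itself, are assumptions.

```python
def gate_2_decision(critical, minor, hallucination):
    """Batch-level outcome per gate_2_threshold. Precedence is an
    assumption: pause dominates, then per-piece flags, then edits."""
    if hallucination or critical >= 3:
        return "PAUSE_BATCH"       # do not publish; return to content engine
    if critical > 0:
        return "FLAG_PIECE"        # remove affected page(s), publish the rest
    if minor > 0:
        return "EDIT_AND_RESAMPLE" # fix flagged pages, re-sample the batch
    return "APPROVE"
```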
### Quality Checklist (per sample page)

```yaml
content_review_checklist:
  accuracy:
    - "All factual claims are correct to your knowledge"
    - "No outdated statistics or superseded information"
    - "No hallucinated citations or non-existent source links"
  brand:
    - "Tone is authoritative but accessible – not academic, not casual"
    - "BaC terminology used consistently"
    - "No competitor names used in unflattering ways"
  aeo_structure:
    - "First paragraph of each major section answers its heading's question directly"
    - "FAQ section is present and questions are phrased naturally"
    - "Key terms are defined on first use"
  seo:
    - "Title and meta description present and relevant"
    - "H1 is present and matches topic"
    - "Internal links are contextual and anchor text is descriptive"
  technical:
    - "Schema markup is present (visible in page source or confirmed by agent)"
    - "No broken links in the content"
```

### Approval Threshold
```yaml
gate_2_threshold:
  approve: "Sample passes checklist with 0 critical issues"
  edit_resample: "1–2 minor issues in sample → edit + re-sample"
  pause_batch: "≥3 critical issues OR any hallucinated facts"
  flag_piece: "One page has brand/accuracy issue → remove, publish rest"
```

### Feedback Format
```json
{
  "gate": "content_review",
  "reviewed_by": "CMO",
  "reviewed_at": "2026-04-09",
  "batch_id": "batch_2026-04-07",
  "pages_in_batch": 8,
  "pages_sampled": 1,
  "decisions": [
    {
      "page_id": "page_C001",
      "url": "/what-is-business-as-code",
      "decision": "FLAG_PIECE",
      "reason": "Stat in paragraph 3 is from 2022 – needs updating",
      "action": "Update with 2025 data from Gartner report"
    }
  ],
  "batch_decision": "APPROVE_WITH_EXCEPTIONS",
  "flagged_pages": ["page_C001"],
  "publish_remaining": true,
  "feedback_to_agents": {
    "content_optimization_engine": "Ensure statistics are <2 years old",
    "aeo_structuring_agent": "FAQ on page C003 has duplicate questions – tighten deduplication"
  }
}
```

## Weekly Time Budget
| Activity | Time | Frequency |
|---|---|---|
| Gate #1 – Strategy Review | 20 min | Monthly (≈5 min/week averaged) |
| Gate #2 – Content Review | 30 min | Weekly |
| Metrics & Improvement Review | 30 min | Weekly |
| **Total per week (avg)** | **~65 min** | – |
### When to escalate
If Gate #2 consistently surfaces the same issue (e.g., outdated stats, wrong tone), escalate to the agent prompt, not the gate. The gate should catch exceptions, not patterns; patterns belong in 10-Metrics-and-Self-Improvement.
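That escalation rule can be made mechanical by tagging each gate finding and counting how many distinct weekly reviews a tag appears in. A minimal sketch; the tag vocabulary and the three-week cutoff are assumptions.

```python
from collections import Counter

def recurring_issues(weekly_feedback, min_weeks=3):
    """Return issue tags seen in at least `min_weeks` distinct weekly
    reviews: candidates for an agent-prompt fix, not more gate-catching."""
    weeks_per_issue = Counter()
    for week in weekly_feedback:       # each entry: iterable of issue tags
        for tag in set(week):          # count each tag once per week
            weeks_per_issue[tag] += 1
    return [tag for tag, n in weeks_per_issue.items() if n >= min_weeks]
```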
## Related Files
- 04-Keyword-and-Topic-Research-Agent – Delivers input for Gate #1
- 05-Content-Optimization-Engine – Delivers content for Gate #2
- 06-AEO-Structuring-Agent – AEO scores inform Gate #2 sample selection
- 07-Technical-SEO-Agent – Technical specs reviewed at Gate #2 (awareness)
- 10-Metrics-and-Self-Improvement – Gate feedback feeds improvement proposals
- 01-Process-Manifest – S3, S7 stage definitions