# API Reference

This page provides the complete command-line interface reference for all three GEO Optimizer scripts, the full JSON output schema for `geo_audit.py`, and a ready-to-use GitHub Actions workflow for automated GEO auditing.
## geo_audit.py
Performs a comprehensive GEO compliance audit on a target URL and returns a score from 0 to 100.
### Flags
| Flag | Type | Required | Default | Description |
|---|---|---|---|---|
| `--url` | string | Yes | -- | Target website URL to audit |
| `--format` | string | No | `text` | Output format: `text` or `json` |
| `--output` | string | No | stdout | File path to write results to |
| `--verbose` | flag | No | off | Show detailed raw data for each check |
### Examples

```bash
# Basic text audit
./geo scripts/geo_audit.py --url https://example.com

# JSON output for programmatic use
./geo scripts/geo_audit.py --url https://example.com --format json

# Verbose debugging
./geo scripts/geo_audit.py --url https://example.com --verbose

# Save JSON report to file
./geo scripts/geo_audit.py --url https://example.com --format json --output audit.json

# Combined: verbose JSON to file
./geo scripts/geo_audit.py --url https://example.com --format json --verbose --output debug-report.json
```
### JSON Output Schema

When `--format json` is used, the output follows this structure:
```json
{
  "url": "https://example.com",
  "timestamp": "2026-02-24T10:30:00.000Z",
  "score": 82,
  "band": "good",
  "checks": {
    "robots_txt": {
      "score": 18,
      "max": 20,
      "passed": true,
      "details": {
        "found": true,
        "citation_bots_ok": true,
        "bots_allowed": ["OAI-SearchBot", "ClaudeBot", "PerplexityBot", "Googlebot"],
        "bots_blocked": [],
        "bots_missing": ["GPTBot"]
      }
    },
    "llms_txt": {
      "score": 14,
      "max": 20,
      "passed": true,
      "details": {
        "found": true,
        "word_count": 1247,
        "has_h1": true,
        "section_count": 4,
        "link_count": 2
      }
    },
    "schema_jsonld": {
      "score": 20,
      "max": 25,
      "passed": false,
      "details": {
        "schemas_found": ["WebSite", "FAQPage"],
        "schemas_missing": ["WebApplication"],
        "website_valid": true,
        "faqpage_valid": true,
        "webapp_valid": false
      }
    },
    "meta_tags": {
      "score": 18,
      "max": 20,
      "passed": true,
      "details": {
        "title": "Example Corp - Data Analytics Platform",
        "title_length": 42,
        "description": "Enterprise solutions for real-time data analytics and business intelligence.",
        "description_length": 73,
        "has_canonical": true,
        "has_og_title": true,
        "has_og_description": true,
        "has_og_image": false,
        "has_og_url": true
      }
    },
    "content": {
      "score": 12,
      "max": 15,
      "passed": true,
      "details": {
        "has_h1": true,
        "h1_text": "Enterprise Data Analytics",
        "heading_count": 12,
        "word_count": 2840,
        "stat_references": 8,
        "external_links": 0
      }
    }
  },
  "recommendations": [
    "Add WebApplication JSON-LD schema for tool/utility pages",
    "Add external citation links to authoritative sources",
    "Include og:image meta tag for social sharing",
    "Explicitly allow GPTBot in robots.txt"
  ]
}
```
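The example above suggests the top-level `score` is the sum of the per-check scores (18 + 14 + 20 + 18 + 12 = 82) and that the `max` values sum to 100. Assuming that weighting holds, a parsed report can be sanity-checked before it is used to gate a build. The `validate_report` helper below is an illustrative sketch, not part of the toolkit:

```python
def validate_report(report: dict) -> list[str]:
    """Collect consistency problems in a parsed geo_audit.py JSON report.

    Assumes (from the example schema) that the top-level score equals the
    sum of the per-check scores and that the max values sum to 100.
    """
    problems = []
    checks = report.get("checks", {})
    # The five check sections should account for exactly 100 points.
    if sum(c["max"] for c in checks.values()) != 100:
        problems.append("check 'max' values do not sum to 100")
    # Assumption: the top-level score is the sum of the section scores.
    if sum(c["score"] for c in checks.values()) != report.get("score"):
        problems.append("top-level score != sum of section scores")
    # Each section score must stay within its own range.
    for name, c in checks.items():
        if not 0 <= c["score"] <= c["max"]:
            problems.append(f"{name}: score {c['score']} outside 0..{c['max']}")
    return problems
```

An empty return value means the report is internally consistent under those assumptions.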
### Score Bands
| Band | Range | JSON Value |
|---|---|---|
| Excellent | 91-100 | `"excellent"` |
| Good | 71-90 | `"good"` |
| Foundation | 41-70 | `"foundation"` |
| Critical | 0-40 | `"critical"` |
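The band thresholds translate directly into a lookup helper; this `score_band` function mirrors the table above (it is a convenience sketch, not something the scripts export):

```python
def score_band(score: int) -> str:
    """Map a 0-100 GEO score to its band name, per the Score Bands table."""
    if score >= 91:
        return "excellent"
    if score >= 71:
        return "good"
    if score >= 41:
        return "foundation"
    return "critical"
```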
## generate_llms_txt.py

Auto-generates an `/llms.txt` file from a website's sitemap.
### Flags
| Flag | Type | Required | Default | Description |
|---|---|---|---|---|
| `--base-url` | string | Yes | -- | Root URL of the website |
| `--output` | string | No | stdout | File path to save the generated llms.txt |
| `--sitemap` | string | No | auto-detect | Manual sitemap URL |
| `--site-name` | string | No | -- | Custom site name for the header |
| `--description` | string | No | -- | Site description (displayed as a blockquote) |
| `--fetch-titles` | flag | No | off | Fetch actual page titles via HTTP requests |
| `--max-per-section` | int | No | 20 | Maximum number of URLs per category |
### Examples

```bash
# Auto-detect sitemap and output to stdout
./geo scripts/generate_llms_txt.py --base-url https://example.com

# Full options with output file
./geo scripts/generate_llms_txt.py \
  --base-url https://example.com \
  --site-name "Example Corp" \
  --description "Enterprise data analytics" \
  --fetch-titles \
  --max-per-section 15 \
  --output public/llms.txt

# Manual sitemap
./geo scripts/generate_llms_txt.py \
  --base-url https://example.com \
  --sitemap https://example.com/post-sitemap.xml \
  --output llms.txt
```
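For reference, the generated file follows the llms.txt convention that `geo_audit.py` checks for: an H1 site name, a blockquote description, and H2 sections containing link lists. The sketch below is purely illustrative; actual section names and page titles come from your sitemap:

```markdown
# Example Corp

> Enterprise data analytics

## Documentation

- [Getting Started](https://example.com/docs/getting-started)
- [API Reference](https://example.com/docs/api)

## Blog

- [Why GEO Matters](https://example.com/blog/why-geo-matters)
```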
## schema_injector.py
Generates, validates, analyzes, and injects JSON-LD structured data into HTML files.
### Flags
| Flag | Type | Required | Default | Description |
|---|---|---|---|---|
| `--file` | string | Depends | -- | HTML file to analyze or modify |
| `--analyze` | flag | No | off | Analyze existing schemas (read-only) |
| `--inject` | flag | No | off | Inject generated schema into the file |
| `--type` | string | No | -- | Schema type: `website`, `webapp`, `faq`, `article`, `organization`, `breadcrumb` |
| `--name` | string | No | -- | Site or application name |
| `--url` | string | No | -- | Site URL |
| `--description` | string | No | -- | Description text |
| `--author` | string | No | -- | Author name |
| `--logo-url` | string | No | -- | URL to logo image |
| `--faq-file` | string | No | -- | JSON file containing FAQ items |
| `--auto-extract` | flag | No | off | Auto-detect FAQ content from HTML |
| `--astro` | flag | No | off | Output Astro-compatible snippet |
| `--no-backup` | flag | No | off | Skip `.bak` backup creation |
| `--no-validate` | flag | No | off | Skip schema validation before injection |
| `--verbose` | flag | No | off | Show full schema JSON during analysis |
### Examples

```bash
# Analyze what schemas exist in a file
./geo scripts/schema_injector.py --file dist/index.html --analyze

# Generate WebSite schema to stdout
./geo scripts/schema_injector.py \
  --type website \
  --name "Example Corp" \
  --url https://example.com \
  --description "Enterprise analytics"

# Inject Organization schema into HTML
./geo scripts/schema_injector.py \
  --file dist/about.html \
  --inject \
  --type organization \
  --name "Example Corp" \
  --url https://example.com \
  --logo-url https://example.com/logo.png

# FAQ with auto-extraction
./geo scripts/schema_injector.py \
  --file dist/faq.html \
  --inject \
  --type faq \
  --auto-extract

# Astro-compatible snippet
./geo scripts/schema_injector.py \
  --type webapp \
  --name "GEO Audit Tool" \
  --url https://example.com/tools/audit \
  --astro
```
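The exact format expected by `--faq-file` is not specified on this page. The sketch below assumes a JSON array of question/answer objects, which is the simplest shape that maps onto FAQPage `mainEntity` items; verify against the script before relying on it:

```json
[
  {
    "question": "What does the audit score measure?",
    "answer": "Compliance with GEO best practices across five weighted checks."
  },
  {
    "question": "What formats can geo_audit.py output?",
    "answer": "Plain text or JSON, selected with the --format flag."
  }
]
```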
## CI/CD Integration
### GitHub Actions Workflow
Add automated GEO auditing to your CI/CD pipeline. This workflow runs on every push to main and fails the build if the GEO score drops below a configurable threshold.
```yaml
name: GEO Audit

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

env:
  GEO_MIN_SCORE: 70
  SITE_URL: https://example.com

jobs:
  geo-audit:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.12'

      - name: Install GEO Optimizer
        run: |
          git clone https://github.com/auriti-web-design/geo-optimizer-skill.git /tmp/geo-optimizer
          cd /tmp/geo-optimizer
          python -m venv .venv
          source .venv/bin/activate
          pip install -r requirements.txt

      - name: Run GEO Audit
        id: audit
        run: |
          cd /tmp/geo-optimizer
          source .venv/bin/activate
          python scripts/geo_audit.py --url ${{ env.SITE_URL }} --format json --output /tmp/geo-report.json
          SCORE=$(python -c "import json; print(json.load(open('/tmp/geo-report.json'))['score'])")
          echo "score=$SCORE" >> $GITHUB_OUTPUT
          echo "GEO Score: $SCORE"

      - name: Check Score Threshold
        run: |
          if [ ${{ steps.audit.outputs.score }} -lt ${{ env.GEO_MIN_SCORE }} ]; then
            echo "::error::GEO score (${{ steps.audit.outputs.score }}) is below minimum threshold (${{ env.GEO_MIN_SCORE }})"
            exit 1
          fi
          echo "GEO score (${{ steps.audit.outputs.score }}) meets minimum threshold (${{ env.GEO_MIN_SCORE }})"

      - name: Upload Report
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: geo-audit-report
          path: /tmp/geo-report.json
          retention-days: 30

      - name: Comment on PR
        if: github.event_name == 'pull_request'
        uses: actions/github-script@v7
        with:
          script: |
            const fs = require('fs');
            const report = JSON.parse(fs.readFileSync('/tmp/geo-report.json', 'utf8'));
            const badge = report.score >= 91 ? '🟢' : report.score >= 71 ? '🟡' : report.score >= 41 ? '🟠' : '🔴';
            const body = `## GEO Audit Report ${badge}\n\n` +
              `**Score: ${report.score}/100** (${report.band})\n\n` +
              `| Section | Score |\n|---------|-------|\n` +
              `| robots.txt | ${report.checks.robots_txt.score}/${report.checks.robots_txt.max} |\n` +
              `| llms.txt | ${report.checks.llms_txt.score}/${report.checks.llms_txt.max} |\n` +
              `| JSON-LD | ${report.checks.schema_jsonld.score}/${report.checks.schema_jsonld.max} |\n` +
              `| Meta Tags | ${report.checks.meta_tags.score}/${report.checks.meta_tags.max} |\n` +
              `| Content | ${report.checks.content.score}/${report.checks.content.max} |\n\n` +
              (report.recommendations.length > 0 ?
                `**Recommendations:**\n${report.recommendations.map(r => `- ${r}`).join('\n')}` : '');
            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body
            });
```
### Using the JSON Output Programmatically
The JSON output is designed for machine consumption. You can parse it in any language:
```python
import json
import sys

with open("geo-report.json") as f:
    report = json.load(f)

# Gate deployment on score
if report["score"] < 70:
    print(f"GEO score too low: {report['score']}/100")
    sys.exit(1)

# Check specific sections
if not report["checks"]["robots_txt"]["details"]["citation_bots_ok"]:
    print("WARNING: Citation bots are not properly configured in robots.txt")
```
```bash
# Extract score and band with jq
SCORE=$(jq '.score' geo-report.json)
BAND=$(jq -r '.band' geo-report.json)
echo "GEO Score: $SCORE ($BAND)"
```
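Building on the same report structure, a small helper can render a human-readable summary for logs or chat notifications. Field names follow the schema documented above; the `summarize` function itself is an illustrative sketch, not part of the toolkit:

```python
def summarize(report: dict) -> str:
    """Render a compact text summary of a geo_audit.py JSON report."""
    lines = [f"GEO Score: {report['score']}/100 ({report['band']})"]
    # One line per check: pass/fail status plus points earned.
    for name, check in report["checks"].items():
        status = "PASS" if check["passed"] else "FAIL"
        lines.append(f"  [{status}] {name}: {check['score']}/{check['max']}")
    # Append any actionable recommendations.
    for rec in report.get("recommendations", []):
        lines.append(f"  -> {rec}")
    return "\n".join(lines)
```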
Next: Architecture -- the 9 Princeton GEO methods, scoring algorithm, and AI bot ecosystem.