
Contributing

GEO Optimizer is open source under the MIT License. Contributions are welcome -- whether fixing a bug, adding a feature, improving documentation, or expanding test coverage.

Repository: github.com/auriti-web-design/geo-optimizer-skill

Development Setup

Clone and Install

git clone https://github.com/auriti-web-design/geo-optimizer-skill.git
cd geo-optimizer-skill

# Create virtual environment
python3 -m venv .venv
source .venv/bin/activate

# Install dependencies
pip install -r requirements.txt

# Install test dependencies
pip install pytest pytest-cov

Verify Installation

# Run the test suite
pytest tests/ -v

# Run a quick audit to confirm scripts work
python scripts/geo_audit.py --url https://example.com

Running Tests

Full Test Suite

pytest tests/ -v

The test suite includes:

| Category | Count | Files |
| --- | --- | --- |
| Unit tests | 67 | test_audit.py, test_http_utils.py, test_schema_validator.py |
| Integration tests | 13 | test_integration.py |
| Validation tests | 9 | test_schema_validator.py |
| Total | 89 | |
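New tests follow the same layout: plain pytest functions with descriptive names. A minimal sketch of what a unit test looks like (`CheckResult` and `calculate_score` are illustrative stand-ins modeled on the example in the Code Standards section, not verified project APIs):

```python
from dataclasses import dataclass

@dataclass
class CheckResult:
    """Illustrative stand-in for one audit check's result."""
    score: int

def calculate_score(checks: dict[str, CheckResult]) -> int:
    """Sum per-check scores into a total (illustrative)."""
    return sum(check.score for check in checks.values())

def test_calculate_score_sums_individual_checks():
    checks = {"title": CheckResult(score=10), "meta": CheckResult(score=5)}
    assert calculate_score(checks) == 15
```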

With Coverage Report

pytest tests/ --cov=scripts --cov-report=term-missing -v

Sample output:

---------- coverage: platform linux, python 3.12.0 -----------
Name Stmts Miss Cover Missing
------------------------------------------------------------
scripts/geo_audit.py 245 32 87% 112-118, 245-260
scripts/generate_llms_txt.py 180 45 75% ...
scripts/schema_injector.py 210 52 75% ...
------------------------------------------------------------
TOTAL 635 129 80%

============ 89 passed in 12.34s ============

Coverage Targets

| Metric | Target | Current |
| --- | --- | --- |
| Business logic | 85%+ | 87% |
| Total coverage | 70%+ | 70% |
| All tests passing | 100% | 100% (89/89) |

Business Logic vs. Total Coverage

Business logic coverage (87%) measures the core scoring, parsing, and generation functions. Total coverage (70%) includes CLI argument handling, file I/O, and network code that is harder to test in isolation. The project prioritizes business logic coverage.
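One common way to keep both numbers meaningful is coverage.py's `# pragma: no cover` marker, which excludes the marked line (and, when placed on a `def` line, the whole function body) from the report. A hypothetical sketch, not taken from the project's sources:

```python
import sys

def score_page(word_count: int) -> int:
    """Business logic: a pure function, easy to cover with unit tests."""
    return min(100, word_count // 10)

def main() -> None:  # pragma: no cover
    """CLI glue: reads argv and prints; excluded from the coverage denominator."""
    print(score_page(int(sys.argv[1])))
```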

Run Specific Test Files

# Audit tests only
pytest tests/test_audit.py -v

# Integration tests only
pytest tests/test_integration.py -v

# Schema validation tests only
pytest tests/test_schema_validator.py -v

# HTTP utility tests only
pytest tests/test_http_utils.py -v

Run Tests Matching a Pattern

# All tests related to robots.txt
pytest tests/ -v -k "robots"

# All tests related to JSON output
pytest tests/ -v -k "json"
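`-k` performs a case-insensitive substring match on test IDs, so descriptive names double as selectors. With hypothetical names like these, `pytest -k "robots"` would run the first two tests and deselect the third:

```python
# Hypothetical test names, illustrating how -k selects by substring.
def test_robots_txt_is_served():
    assert True

def test_robots_allows_ai_crawlers():
    assert True

def test_audit_json_output():
    assert True
```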

Quality Score

GEO Optimizer tracks a composite quality score across six dimensions, evaluated on a 0--10 scale with weighted categories. The same rubric is used for every version to ensure consistent, comparable measurements.

Scoring Rubric

| Dimension | Weight | What It Measures |
| --- | --- | --- |
| Idea & Positioning | 15% | Market uniqueness, problem-solution fit |
| Code Structure | 20% | Architecture, modularity, Python standards |
| Documentation | 20% | README quality, examples, inline docs, changelog |
| Robustness & Testing | 25% | Test coverage, CI/CD, error handling, edge cases |
| UX & Usability | 10% | CLI intuitiveness, output readability, install simplicity |
| Growth Potential | 10% | Roadmap, extensibility, community readiness |
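The composite score is the weighted sum of the six 0--10 dimension scores, using the weights above. A worked sketch with hypothetical per-dimension scores (the dimension keys and example values are illustrative):

```python
# Weights from the rubric table; they sum to 1.0.
WEIGHTS = {
    "idea": 0.15, "code": 0.20, "docs": 0.20,
    "testing": 0.25, "ux": 0.10, "growth": 0.10,
}

def composite_score(scores: dict[str, float]) -> float:
    """Weighted sum of 0-10 dimension scores, rounded to two decimals."""
    return round(sum(WEIGHTS[d] * s for d, s in scores.items()), 2)

# Hypothetical dimension scores, for illustration only:
example = {"idea": 9.0, "code": 9.5, "docs": 9.0,
           "testing": 9.5, "ux": 9.0, "growth": 9.0}
```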

Score Scale

| Range | Meaning |
| --- | --- |
| 9.0--10.0 | Reference-quality, industry-leading |
| 8.0--8.9 | Production-ready, professional grade |
| 7.0--7.9 | Solid, functional, some rough edges |
| 6.0--6.9 | Usable but needs improvement |
| Below 6.0 | Significant work required |
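The bands map directly to score thresholds; a small helper expressing the table (the function is illustrative, not part of the project):

```python
def score_band(score: float) -> str:
    """Map a 0-10 quality score to its band from the scale table."""
    if score >= 9.0:
        return "Reference-quality, industry-leading"
    if score >= 8.0:
        return "Production-ready, professional grade"
    if score >= 7.0:
        return "Solid, functional, some rough edges"
    if score >= 6.0:
        return "Usable but needs improvement"
    return "Significant work required"
```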

Version History

| Version | Date | Score | Key Changes |
| --- | --- | --- | --- |
| v1.0.0 | 2026-02-18 | 7.20 | Initial release: 3 scripts, AI context files, docs |
| v1.1.0 | 2026-02-21 | -- | GitHub Actions CI, schema_injector v2 (FAQ auto-extract), CONTRIBUTING.md |
| v1.2.0 | 2026-02-21 | 9.20 | JSON output format, 22 unit tests, 66% coverage |
| v1.3.0 | 2026-02-21 | 9.40 | Network retry logic (exponential backoff), 67 tests, 87% business logic |
| v1.4.0 | 2026-02-21 | 9.15 | Schema validation (jsonschema), 13 integration tests, Codecov integration |
| v1.5.0 | 2026-02-21 | 9.25 | Verbose mode (--verbose), documentation cleanup, 89 total tests |

Consistent Rubric

The scoring rubric does not change between versions. As noted in the project's rubric document: "version-to-version comparison uses the same rubric (no moving goalposts)." This ensures score changes reflect genuine improvements.


Code Standards

Python Style

  • PEP 8 compliant (checked by flake8)
  • 120 character line limit (not 79)
  • Type hints encouraged for all function signatures
  • Docstrings required for all public functions

def calculate_score(checks: dict[str, CheckResult]) -> int:
    """Calculate the total GEO score from individual check results.

    Args:
        checks: Dictionary mapping check names to their results.

    Returns:
        Integer score from 0 to 100.
    """
    return sum(check.score for check in checks.values())

Import Order

Follow this organization, with alphabetical sorting within each group:

# 1. Standard library
import json
import sys
from pathlib import Path

# 2. Third-party packages
import requests
from bs4 import BeautifulSoup

# 3. Local imports
from scripts.geo_audit import run_audit

Commit Messages

Follow Conventional Commits with project-specific scopes:

feat(audit): add JSON output format
fix(llms): handle malformed XML sitemap gracefully
test(schema): add validation tests for FAQPage
docs(readme): update installation instructions
ci(actions): add Python 3.12 to test matrix

Valid scopes: audit, llms, schema, install, docs, ci
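Message format can be checked mechanically. A hypothetical validator built from the types shown in the examples and the valid scopes above (the regex is an illustration, not part of the project's tooling):

```python
import re

# type(scope): subject -- types and scopes taken from this guide's examples.
COMMIT_RE = re.compile(
    r"^(feat|fix|test|docs|ci)"                 # commit type
    r"\((audit|llms|schema|install|docs|ci)\)"  # project-specific scope
    r": \S.*$"                                  # non-empty subject
)

def is_valid_commit(message: str) -> bool:
    """Return True if the subject line follows the commit convention."""
    subject = message.splitlines()[0] if message else ""
    return COMMIT_RE.match(subject) is not None
```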


Pull Request Process

1. Fork and Branch

# Fork the repository on GitHub, then:
git clone https://github.com/YOUR-USERNAME/geo-optimizer-skill.git
cd geo-optimizer-skill
git checkout -b feature/your-feature-name

2. Make Changes

  • Write your code following the standards above
  • Add tests for new features and bug fixes
  • Run the full test suite to ensure nothing is broken

3. Run Quality Checks

# Run all tests
pytest tests/ -v

# Check coverage hasn't dropped
pytest tests/ --cov=scripts --cov-report=term-missing

# Lint (if flake8 is installed)
flake8 scripts/ --max-line-length=120

4. Update Documentation

When adding features, update the relevant files:

| Change Type | Files to Update |
| --- | --- |
| New script flag | README.md, relevant docs/ page |
| New feature | README.md, CHANGELOG.md (under [Unreleased]) |
| Bug fix | CHANGELOG.md (under [Unreleased]) |
| New AI context | SKILL.md, ai-context/ directory |

5. Push and Open PR

git add .
git commit -m "feat(scope): description of the change"
git push origin feature/your-feature-name

Then open a Pull Request on GitHub against the main branch.

PR Guidelines

  • One feature per PR -- keep changes focused for easier review
  • Include test output -- paste the pytest summary in the PR description
  • Describe the "why" -- explain what problem the change solves, not just what code changed
  • Link related issues -- reference any GitHub issues with Fixes #123 or Closes #123

Review Timeline

Maintainers review pull requests within 3--5 business days. You may be asked for changes before merging.


Roadmap

Planned features for future versions (from the CHANGELOG):

  • HTML report output -- visual audit reports for non-technical stakeholders
  • Batch audit mode -- audit multiple URLs in a single run
  • PyPI distribution -- install via pip install geo-optimizer
  • GitHub Actions reusable workflow -- one-line CI integration

If you would like to work on any of these, open an issue first to discuss the approach.


License

GEO Optimizer is licensed under the MIT License. By contributing, you agree that your contributions will be licensed under the same terms.