AI Commands
AI-powered analysis commands for the VertaaUX CLI including suggest, explain, triage, fix-plan, patch-review, release-notes, compare, and doc
AI commands use server-side LLM processing to provide intelligent analysis of audit results. All AI commands require authentication (vertaa login or VERTAAUX_API_KEY).
Common Patterns
All AI commands accept input via:
- Stdin pipe: `vertaa audit --json | vertaa <command>`
- File: `vertaa <command> --file audit.json`
- Job ID: `vertaa <command> --job <audit-job-id>`
All support --format json for machine-readable output and --verbose for expanded detail.
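A downstream tool following this same input convention might resolve its source in a fixed order. The sketch below is illustrative: the helper names and the precedence (explicit file, then job ID, then piped stdin) are assumptions, and the job fetch is stubbed out since it requires the VertaaUX API.

```python
import json
import sys

def load_audit(file=None, job=None, stdin=None):
    """Resolve audit JSON from --file, --job, or piped stdin (in that order)."""
    if file:
        with open(file) as fh:                # --file <path>
            return json.load(fh)
    if job:
        return fetch_job(job)                 # --job <audit-job-id>
    return json.load(stdin or sys.stdin)      # piped `vertaa audit --json`

def fetch_job(job_id):
    # Placeholder: the real CLI fetches stored results from the VertaaUX API.
    raise NotImplementedError(f"would fetch job {job_id} from the API")
```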
Authentication
AI commands call the VertaaUX API, which requires authentication. Set up credentials with:
# Interactive login (device code flow)
vertaa login
# CI/non-interactive login
vertaa login --token <api-key>
# Or set environment variable
export VERTAAUX_API_KEY=your-key-here
suggest
Convert natural language intent into exact CLI commands.
Synopsis
vertaa suggest <intent...>
Description
The suggest command takes a natural language description of what you want to do and returns the exact CLI command(s) to accomplish it. It first checks a local command catalog for fast matching, then falls back to the API for complex or ambiguous intents.
Unlike other AI commands, suggest does not require audit data input -- it only needs your intent as a positional argument.
Options
| Option | Description | Default |
|---|---|---|
| -f, --format <format> | Output format: human, json | human |
Examples
# Basic intent
vertaa suggest "check accessibility"
# Multi-word intent
vertaa suggest "audit my site for CI"
# Complex workflow
vertaa suggest "compare two pages and get a report"
# Setup tasks
vertaa suggest "set up CI quality gate"
# JSON output for tooling
vertaa suggest "what failed in my audit" --format json
Output
Human format shows each suggested command with an explanation:
$ vertaa audit https://example.com --mode deep
Run a comprehensive UX and accessibility audit
$ vertaa audit https://example.com --fail-on error
Audit with CI quality gate
JSON format returns structured suggestions with confidence scores:
{
"suggestions": [
{
"command": "vertaa audit https://example.com --mode deep",
"explanation": "Run a comprehensive UX and accessibility audit",
"source": "local",
"confidence": 85
}
]
}
How It Works
- Your intent is fuzzy-matched against a local command catalog
- If a strong match is found (score >= 0.2), local results are returned immediately
- If no strong local match, the intent is sent to the API for LLM-powered suggestion
- If the API is unavailable, partial local matches are returned as a fallback
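The lookup order above can be sketched roughly as follows. Only the >= 0.2 threshold is taken from the documented behavior; the catalog entries, the difflib-based scoring, and the stubbed API fallback are illustrative assumptions.

```python
from difflib import SequenceMatcher

# Tiny stand-in for the local command catalog (real entries ship with the CLI).
CATALOG = {
    "vertaa audit <url> --mode deep": "run a comprehensive ux and accessibility audit",
    "vertaa audit <url> --fail-on error": "audit with a ci quality gate",
    "vertaa diff --job-a <a> --job-b <b>": "compare two audit runs",
}

def local_matches(intent, threshold=0.2):
    """Fuzzy-match the intent against catalog descriptions, best first."""
    scored = [
        (SequenceMatcher(None, intent.lower(), desc).ratio(), cmd)
        for cmd, desc in CATALOG.items()
    ]
    return sorted([(s, c) for s, c in scored if s >= threshold], reverse=True)

def suggest(intent):
    hits = local_matches(intent)
    if hits:                       # strong local match: answer immediately
        return [cmd for _, cmd in hits]
    return api_suggest(intent)     # otherwise fall back to the LLM endpoint

def api_suggest(intent):
    # Placeholder for the server-side fallback (requires authentication).
    return []
```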
explain
AI-powered audit summary, or evidence bundle for a specific finding.
Synopsis
# AI mode -- full audit summary
vertaa explain [options]
# Evidence mode -- single finding
vertaa explain <finding-id> [options]
Description
The explain command operates in two modes:
AI mode (no finding-id): Accepts full audit JSON and produces a 3-bullet summary plus per-issue explanations with fix suggestions. Use --verbose to include full evidence (selectors, WCAG references) for each issue.
Evidence mode (with finding-id): Shows the full evidence bundle for a specific finding including rule description, impact, screenshots, DOM snapshots, and suggested fixes. This is backward compatible with the original explain behavior.
Options
| Option | Description | Default |
|---|---|---|
| --job <job-id> | Job ID (fetch audit data or specific finding) | -- |
| --file <path> | Load audit JSON from file | -- |
| -f, --format <format> | Output format: human, json | human |
| --verbose | Show full evidence per issue (AI mode only) | false |
Examples
# AI summary from piped audit
vertaa audit https://example.com --json | vertaa explain
# AI summary with verbose evidence
vertaa explain --job abc123 --verbose
# AI summary from file
vertaa explain --file audit.json
# Evidence for a specific finding (legacy mode)
vertaa explain color-contrast-001 --job abc123
# Evidence from file
vertaa explain axe-label --file results.json
# JSON output
vertaa audit https://example.com --json | vertaa explain --format json
Output (AI Mode)
Human format shows a summary and per-issue breakdown:
Summary
> 12 accessibility issues found, 3 critical
> Color contrast failures on 5 elements
> Missing alt text on hero images
Issues
ERROR Missing alt text (img-alt)
Hero image lacks descriptive alt text
Fix: Add alt="Description of hero image" to <img>
WARNING Low contrast (color-contrast)
Body text has 3.2:1 ratio (needs 4.5:1)
Fix: Change text color to #333 or darken background
triage
Prioritize audit findings into P0/P1/P2 buckets with effort estimates.
Synopsis
vertaa triage [options]
Description
The triage command analyzes audit findings and organizes them into priority buckets:
- P0 Critical -- Must fix immediately (accessibility blockers, security issues)
- P1 Important -- Should fix soon (significant UX issues)
- P2 Nice to Have -- Fix when possible (minor improvements)
Each item includes an effort estimate (trivial, small, medium, large) and a quick-wins list for items fixable in under 5 minutes.
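A minimal sketch of that bucketing, assuming each finding carries severity and effort fields. The severity-to-priority mapping below is an illustrative guess (the real rules run server-side); the effort scale and the trivial-effort quick-win rule follow the text above.

```python
# Illustrative severity -> priority mapping; not the CLI's actual rules.
PRIORITY = {"critical": "P0", "serious": "P0", "moderate": "P1", "minor": "P2"}

def triage(findings):
    """Sort findings into P0/P1/P2 buckets and collect quick wins."""
    buckets = {"P0": [], "P1": [], "P2": []}
    quick_wins = []
    for f in findings:
        buckets[PRIORITY.get(f["severity"], "P2")].append(f["title"])
        if f.get("effort") == "trivial":   # fixable in under ~5 minutes
            quick_wins.append(f["title"])
    return buckets, quick_wins
```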
Options
| Option | Description | Default |
|---|---|---|
| --job <job-id> | Fetch audit data from a job ID | -- |
| --file <path> | Load audit JSON from file | -- |
| -f, --format <format> | Output format: human, json | human |
| --verbose | Expand each bucket with full details | false |
Examples
# Triage from piped audit
vertaa audit https://example.com --json | vertaa triage
# Verbose output with full details
vertaa triage --job abc123 --verbose
# From file with JSON output
vertaa triage --file audit.json --format json
# Compact summary (default)
vertaa audit https://example.com --json | vertaa triage
Output
Default (compact) shows bucket counts and titles:
P0 Critical (3)
Missing alt text, Keyboard trap, Missing form labels
P1 Important (5)
Low contrast, Missing landmarks, Heading order, ...
P2 Nice to Have (4)
Redundant links, Missing skip link, ...
Quick Wins (< 5 min each)
* Add alt text to hero image
* Add aria-label to search form
Verbose expands each item with reasoning and effort:
P0 Critical (3)
> Missing alt text (img-alt)
Screen readers cannot convey image content
Effort: trivial
fix-plan
Generate a structured remediation plan from audit findings.
Synopsis
vertaa fix-plan [options]
Description
The fix-plan command produces an ordered remediation plan with:
- Prioritized steps ordered by severity and impact
- Effort estimates per item (trivial, small, medium, large)
- Fix type classification (code, config, content, design)
- Step-by-step instructions for each fix
- Code hints where applicable
- Estimated total effort for the full plan
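The "ordered by severity and impact" step could look like this sketch; the rank tables and the item shape are assumptions for illustration, not the CLI's actual scoring.

```python
# Illustrative rank tables: lower rank sorts earlier in the plan.
SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}
EFFORT_RANK = {"trivial": 0, "small": 1, "medium": 2, "large": 3}

def order_plan(items):
    """Most severe first; among equals, cheapest fixes first."""
    return sorted(
        items,
        key=lambda i: (SEVERITY_RANK[i["severity"]], EFFORT_RANK[i["effort"]]),
    )
```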
Options
| Option | Description | Default |
|---|---|---|
| --job <job-id> | Fetch audit data from a job ID | -- |
| --file <path> | Load audit JSON from file | -- |
| -f, --format <format> | Output format: human, json | human |
Examples
# Fix plan from piped audit
vertaa audit https://example.com --json | vertaa fix-plan
# From job ID
vertaa fix-plan --job abc123
# JSON output for tooling integration
vertaa fix-plan --file audit.json --format json
# Pipe to jq for filtering
vertaa audit https://example.com --json | vertaa fix-plan --format json | jq '.data.items[] | select(.effort == "trivial")'
Output
Human format shows a numbered remediation plan:
Remediation Plan (8 items)
Estimated total effort: 4-6 hours
1. [critical] Add alt text to images
Effort: trivial Type: code
1) Find all <img> tags without alt attributes
2) Add descriptive alt text based on image context
3) Use alt="" for decorative images
Hint: document.querySelectorAll('img:not([alt])')
2. [high] Fix color contrast ratios
Effort: small Type: code
1) Identify elements with contrast below 4.5:1
2) Adjust text color or background color
Hint: Use #333 for body text on white backgrounds
patch-review
Review a patch/diff against audit findings for safety.
Synopsis
# From piped diff
<diff-source> | vertaa patch-review [options]
# From file
vertaa patch-review --diff-file <path> [options]
Description
The patch-review command reads a diff (patch) and evaluates it against audit findings to determine if the changes are safe. It returns one of three verdicts:
- SAFE -- Changes address findings without introducing issues
- UNSAFE -- Changes introduce new problems or worsen existing ones
- NEEDS_REVIEW -- Changes are ambiguous and need human review
Optionally provide audit findings via --job or --findings for context-aware review.
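In CI you may want the three verdicts mapped onto distinct exit codes rather than a boolean pass/fail. A hedged sketch, relying only on the documented `.data.verdict` field of the JSON output; the exit-code mapping itself is an illustrative choice.

```python
import json

# Illustrative CI convention: pass, flag for humans, hard-fail.
EXIT_CODES = {"SAFE": 0, "NEEDS_REVIEW": 1, "UNSAFE": 2}

def gate(payload: str) -> int:
    """Map `vertaa patch-review --format json` output to a CI exit code."""
    verdict = json.loads(payload)["data"]["verdict"]
    return EXIT_CODES[verdict]
```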
Options
| Option | Description | Default |
|---|---|---|
| --job <job-id> | Fetch findings from a job ID | -- |
| --findings <path> | Load findings JSON from file | -- |
| --diff-file <path> | Load diff from file instead of stdin | -- |
| -f, --format <format> | Output format: human, json | human |
| --dry-run | Show what would be analyzed without calling the API | false |
Examples
# Review a GitHub PR diff
gh pr diff 123 | vertaa patch-review --job abc123
# Review a git diff
git diff HEAD~1 | vertaa patch-review --job abc123
# From diff file with findings file
vertaa patch-review --diff-file fix.patch --findings audit.json
# Dry run to check inputs
gh pr diff 123 | vertaa patch-review --job abc123 --dry-run
# JSON output for CI integration
gh pr diff 123 | vertaa patch-review --job abc123 --format json
Output
Human format shows verdict, summary, and details:
Verdict: SAFE (confidence: 92%)
Patch correctly adds alt attributes to 3 images and fixes heading hierarchy.
Concerns:
info @ line 42: Added alt text is generic ("image") -- consider more descriptive text
Findings addressed (3):
+ img-alt
+ heading-order
+ landmark-main
Findings remaining (2):
- color-contrast
- link-name
CI Integration
Use patch-review in GitHub Actions to gate PRs:
- name: Review patch safety
run: |
gh pr diff ${{ github.event.pull_request.number }} \
| vertaa patch-review --job $AUDIT_JOB_ID --format json \
| jq -e '.data.verdict == "SAFE"'
release-notes
Generate developer and PM release notes from audit diff data.
Synopsis
vertaa release-notes [options]
Description
The release-notes command takes diff data (new vs fixed findings between two audits) and generates two sets of release notes:
- Developer Notes -- Technical details about what changed
- PM / User-Facing Notes -- Non-technical summary for stakeholders
Input can be piped from vertaa diff, loaded from a file, or fetched by providing two job IDs.
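The prose in both note sets is LLM-written, but the diff payload the command consumes (new vs fixed findings) can be rendered mechanically to see how the split works. All field names in this sketch are illustrative assumptions.

```python
def render_notes(diff):
    """Turn new/fixed finding lists into the two documented note sets."""
    dev = [f"- Fixed {f['rule']} ({f['detail']})" for f in diff["fixed"]]
    dev += [f"- Regression: {f['rule']}" for f in diff["new"]]
    pm = [f"- {f['user_impact']}" for f in diff["fixed"]]
    return "\n".join(["## Developer Notes", *dev, "## User-Facing Notes", *pm])
```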
Options
| Option | Description | Default |
|---|---|---|
| --file <path> | Load diff JSON from file | -- |
| --job-a <id> | First audit job ID (baseline) | -- |
| --job-b <id> | Second audit job ID (current) | -- |
| -f, --format <format> | Output format: human, json, markdown | markdown |
Examples
# From piped diff
vertaa diff --job-a abc --job-b def --json | vertaa release-notes
# From two job IDs directly
vertaa release-notes --job-a abc --job-b def
# From file
vertaa release-notes --file diff.json
# Human-readable output
vertaa release-notes --file diff.json --format human
# Save markdown to file
vertaa release-notes --job-a abc --job-b def > release-notes.md
Output
Markdown format (default) produces a ready-to-use document:
# Accessibility Improvements - Sprint 42
## Developer Notes
- Fixed 5 color contrast violations (WCAG 2.1 AA)
- Added landmark roles to main navigation
- Resolved keyboard trap in modal dialog
## User-Facing Notes
- Improved readability across the application
- Better screen reader navigation support
- Fixed modal dialog accessibility
compare
Before/after audit comparison with score deltas and LLM narrative.
Synopsis
# URL mode (runs audits)
vertaa compare <urlA> <urlB> [options]
# File mode (LLM analysis)
vertaa compare --before <file> --after <file> [options]Description
The compare command operates in two modes:
URL mode: Takes two URLs, runs audits on both, and shows a score/category delta table. This is the legacy behavior, equivalent to running two audits and diffing the results.
File mode (with --before/--after): Sends both audit JSONs to the LLM compare endpoint for a narrative analysis including score deltas, improvements, regressions, and a written summary. Use --verbose for category breakdowns and the full narrative.
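The delta arithmetic underneath both modes is simple subtraction. This sketch assumes each audit JSON exposes an overall score and per-category scores; the field names are illustrative, not the CLI's actual schema.

```python
def score_deltas(before, after):
    """Overall and per-category deltas between two audit results."""
    overall = after["score"] - before["score"]
    categories = {
        name: score - before["categories"].get(name, 0)
        for name, score in after["categories"].items()
    }
    return overall, categories
```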
Options
| Option | Description | Default |
|---|---|---|
| --before <path> | Baseline audit JSON file (LLM mode) | -- |
| --after <path> | Current audit JSON file (LLM mode) | -- |
| --mode <mode> | Audit depth: basic, standard, deep (URL mode) | basic |
| --wait | Wait for audits to complete (URL mode) | false |
| --timeout <ms> | Wait timeout in milliseconds | 60000 |
| --fail-on-score <n> | Exit non-zero if score below n | -- |
| -f, --format <format> | Output format: human, json | human |
| --verbose | Show category deltas and full narrative | false |
Examples
# URL comparison (runs two audits)
vertaa compare https://staging.example.com https://prod.example.com --wait
# File-based LLM comparison
vertaa compare --before baseline.json --after current.json
# Verbose with category breakdown
vertaa compare --before baseline.json --after current.json --verbose
# JSON output
vertaa compare --before baseline.json --after current.json --format json
# CI gate: fail if score drops below 80
vertaa compare https://a.com https://b.com --wait --fail-on-score 80
Output (File Mode)
Human format shows headline, deltas, and changes:
Accessibility Score Improved After Navigation Refactor
Overall delta: +12
Improvements (5)
+ Fixed color contrast on navigation links
+ Added skip-to-content link
+ Improved heading hierarchy
Regressions (1)
- New modal missing focus trap
Unchanged: 8
Verbose adds category deltas and a full narrative analysis.
doc
Generate a Team Playbook document from recurring audit findings.
Synopsis
vertaa doc [options]
Description
The doc command analyzes audit findings to produce a Team Playbook -- a structured document covering:
- Recurring patterns across findings
- Root causes behind common issues
- Correct implementations with code examples
- Copy/paste checklists for developers
Use the --team flag to customize the playbook header for a specific team.
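The "recurring patterns" step amounts to grouping findings by rule. A minimal sketch, assuming each finding carries a rule id (an assumption about the finding shape):

```python
from collections import Counter

def recurring_rules(findings, min_count=2):
    """Rules that appear at least min_count times across the audit."""
    counts = Counter(f["rule"] for f in findings)
    return {rule: n for rule, n in counts.items() if n >= min_count}
```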
Options
| Option | Description | Default |
|---|---|---|
| --job <job-id> | Fetch audit data from a job ID | -- |
| --file <path> | Load audit JSON from file | -- |
| --team <name> | Team name for the playbook header | -- |
| -f, --format <format> | Output format: json, markdown | markdown |
Examples
# Generate playbook from piped audit
vertaa audit https://example.com --json | vertaa doc
# From job ID with team name
vertaa doc --job abc123 --team "Frontend Team"
# From file
vertaa doc --file audit.json --team "Design System"
# Save to file
vertaa doc --job abc123 --team "Mobile" > playbook.md
# JSON for programmatic use
vertaa doc --file audit.json --format json
Output
Markdown format (default) produces a ready-to-use playbook:
# Frontend Team Playbook
## Color Contrast
### Pattern
Body text and interactive elements frequently fail WCAG AA contrast ratios.
### Root Cause
Design tokens use light grays (#999) for secondary text.
### Correct Implementation
Use minimum 4.5:1 ratio for normal text, 3:1 for large text.
### Checklist
- [ ] Verify all text colors meet AA contrast ratios
- [ ] Test with browser contrast checker
- [ ] Update design tokens for secondary text
Related
- All Commands -- Full command reference
- Pipelines -- Composable pipeline workflows
- Configuration -- Configure defaults with .vertaaux.yml