How we certify every remediation.
Five independent validators — three open-source rule engines, one rendered-browser harness, and an AI semantic reviewer — plus an immutable ledger that records every validation event. The methodology behind every OctoComply Document Remediation Service certification.
Most accessibility vendors don't show their work.
The accessibility vendor landscape is full of tools that promise compliance without explaining how they verify it. Overlays self-validate against their own internal scoring. Auto-taggers report their own success rate. Many remediation services hand you a finished file and an invoice and ask you to trust the result.
That posture doesn't survive a serious compliance review. When DOJ or HHS OCR investigators ask "How do you know this is accessible?", your answer has to be more than "the vendor said so."
OctoComply's answer is documented: five independent validators, each catching what the others miss, plus an immutable ledger that records every validation event with the date, tool version, and result. The good-faith effort is visible in the artifact itself.
Each one catches what the others miss.
No single tool catches every WCAG violation. Each validator below has documented strengths and blind spots. We run all five on every remediated document, in sequence, until each one reports zero issues — and we record every run in the ledger.
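In code terms, the loop is simple. The sketch below is illustrative only (the type names, `appendToLedger`, and `remediate` are hypothetical stand-ins, not our internals), but it shows the shape: run every validator, record every result, and stop only when a full pass comes back clean.

```ts
// Illustrative sketch: validator types and ledger API are hypothetical stand-ins.
interface ValidationEvent {
  tool: string;          // e.g. "axe-core"
  version: string;       // e.g. "4.10"
  issueCount: number;
  timestamp: string;     // ISO 8601
}

type Validator = (documentUrl: string) => Promise<ValidationEvent>;

async function certify(
  documentUrl: string,
  validators: Validator[],
  appendToLedger: (event: ValidationEvent) => Promise<void>,
  remediate: (documentUrl: string) => Promise<void>,
): Promise<void> {
  while (true) {
    let clean = true;
    for (const validate of validators) {
      const event = await validate(documentUrl);
      await appendToLedger(event);        // every run is recorded, pass or fail
      if (event.issueCount > 0) clean = false;
    }
    if (clean) return;                    // all validators reported zero issues
    await remediate(documentUrl);         // fix, then re-run the full sequence
  }
}
```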
axe-core 4.10
Deque Systems · open source
Industry-standard WCAG rule engine
What it catches
- ✓ WCAG 2.0 / 2.1 / 2.2 Level AA automated rules
- ✓ Color contrast violations
- ✓ Form label associations
- ✓ Image alt text presence
- ✓ ARIA usage correctness
- ✓ Semantic HTML structure
Why we use it
axe-core is the most-trusted automated WCAG validator in the accessibility industry. Its results are routinely accepted as evidence in compliance reviews and litigation. We run the full Level AA rule set on every page of every remediated document.
What it does NOT catch
Static analysis cannot judge content meaning, reading-order correctness, or whether alt text is actually descriptive. That's why we don't rely on it alone.
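For readers who want to see what a pass looks like mechanically, here is a minimal sketch using the open-source @axe-core/playwright integration; the URL and the exact tag list are illustrative assumptions, not our production configuration.

```ts
import { chromium } from 'playwright';
import { AxeBuilder } from '@axe-core/playwright';

// Sketch: run the axe-core WCAG 2.x Level A/AA rules against a rendered page.
const browser = await chromium.launch();
const page = await browser.newPage();
await page.goto('https://example.com/remediated-document.html'); // illustrative URL

const results = await new AxeBuilder({ page })
  .withTags(['wcag2a', 'wcag2aa', 'wcag21aa', 'wcag22aa']) // Level A + AA rule tags
  .analyze();

console.log(`axe-core violations: ${results.violations.length}`); // must be 0 to pass
await browser.close();
```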
Google Lighthouse 12
Google · open source
Document structure and mobile-accessibility audit
What it catches
- ✓ Touch target sizes (WCAG 2.5.5, 2.5.8)
- ✓ Document title and lang attribute
- ✓ Valid HTML structure
- ✓ ARIA attribute usage
- ✓ Viewport configuration
- ✓ Tab order
Why we use it
Lighthouse catches a different cross-section of accessibility issues than axe-core — particularly mobile and touch-target concerns. Running both independently means an issue has to slip past two distinct rule engines to ship.
What it does NOT catch
Lighthouse's accessibility category covers fewer WCAG criteria than axe-core overall. It's a complement, not a replacement.
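A minimal sketch of an accessibility-only Lighthouse run through its Node API looks like this; the URL and Chrome flags are illustrative.

```ts
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

// Sketch: run only Lighthouse's accessibility category against a rendered page.
const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
const result = await lighthouse('https://example.com/remediated-document.html', {
  port: chrome.port,
  onlyCategories: ['accessibility'],
  output: 'json',
});

// The individual audits carry more detail than the headline category score.
console.log('Accessibility score:', result?.lhr.categories.accessibility.score);
chrome.kill();
```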
pa11y · HTML_CodeSniffer
Squiz · open source
Independent third rule engine
What it catches
- ✓ WCAG 2.1 AA via the HTML_CodeSniffer ruleset
- ✓ Rules implemented differently from axe-core
- ✓ Form and table accessibility patterns
- ✓ Heading hierarchy issues
Why we use it
Two independent rule engines reaching the same conclusion is meaningfully stronger evidence than either alone. pa11y drives Squiz's HTML_CodeSniffer, a codebase and ruleset entirely separate from Deque's axe-core. When both engines agree, we have high confidence.
What it does NOT catch
Shares the fundamental blind spots of all static analysis: can't judge meaning, intent, or accuracy.
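Mechanically, a pa11y pass is only a few lines. The sketch below is illustrative (URL assumed); it uses pa11y's WCAG2AA standard with the HTML_CodeSniffer runner.

```ts
import pa11y from 'pa11y';

// Sketch: third, independent rule-engine pass using HTML_CodeSniffer via pa11y.
const results = await pa11y('https://example.com/remediated-document.html', {
  standard: 'WCAG2AA',   // HTML_CodeSniffer's WCAG 2 Level AA ruleset
  runners: ['htmlcs'],   // explicit, though htmlcs is pa11y's default runner
});

console.log(`pa11y issues: ${results.issues.length}`); // must be 0 to pass
```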
Playwright (Chromium)
Microsoft · open source
Real-browser rendered checks
What it catches
- ✓ Reflow behavior at 320px viewport (WCAG 1.4.10)
- ✓ Touch target size as actually rendered (WCAG 2.5.8)
- ✓ Focus indicator visibility (WCAG 2.4.7)
- ✓ Color-only information conveyance (WCAG 1.4.1)
Why we use it
These checks require rendering — they can't be done by parsing HTML alone. We launch a real Chromium browser, render each page, and inspect the result. This is the closest automated equivalent to a human user testing the page.
What it does NOT catch
Doesn't evaluate content semantics or accuracy. Catches only what a sighted keyboard user would notice; issues that surface solely in assistive technology, such as screen readers, remain outside its scope.
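As one concrete example, the reflow check above (WCAG 1.4.10) can be sketched as: open the page in a real Chromium at a 320 px-wide viewport and confirm that nothing forces horizontal scrolling. The URL is illustrative.

```ts
import { chromium } from 'playwright';

// Sketch: WCAG 1.4.10 reflow check at a 320px-wide viewport.
const browser = await chromium.launch();
const page = await browser.newPage({ viewport: { width: 320, height: 640 } });
await page.goto('https://example.com/remediated-document.html');

// Content must not require horizontal scrolling at 320px.
const overflows = await page.evaluate(
  () => document.documentElement.scrollWidth > document.documentElement.clientWidth,
);

console.log(overflows ? 'FAIL: horizontal scroll at 320px' : 'PASS: reflows at 320px');
await browser.close();
```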
Claude semantic review
Anthropic · proprietary AI
Qualitative review where judgment matters
What it catches
- ✓ Alt text quality (is it descriptive?)
- ✓ Link text quality (does it make sense out of context?)
- ✓ Heading hierarchy semantics (do levels match content?)
- ✓ Reading order coherence
- ✓ Color-conveyance patterns
- ✓ Content clarity for general audiences
Why we use it
Automated tools can verify that alt text exists. Only a reviewer with language understanding can verify that it is meaningful. The same is true for link text, heading semantics, and reading order. Claude provides this qualitative layer at a per-document cost that makes it economical to run on every remediation.
What it does NOT catch
AI semantic review is not a substitute for human review on high-stakes documents. We treat it as a strong qualitative pass, not the final word — which is why the certification ledger has an explicit Human Review slot.
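For the curious, the qualitative pass boils down to calls like the sketch below, made with the Anthropic TypeScript SDK. The model name, prompt wording, and alt-text payload are all illustrative, not our production prompt.

```ts
import Anthropic from '@anthropic-ai/sdk';

// Sketch: ask the model whether an image's alt text is genuinely descriptive.
const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

const response = await client.messages.create({
  model: 'claude-sonnet-4-5',   // illustrative model choice
  max_tokens: 512,
  messages: [{
    role: 'user',
    content:
      'Image context: bar chart of project funding by category.\n' +
      'Alt text: "chart.png"\n' +
      'Is this alt text descriptive enough for a screen-reader user? ' +
      'Answer PASS or FAIL with one sentence of reasoning.',
  }],
});

console.log(response.content); // reviewer verdict, recorded in the ledger
```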
Human review
Optional · explicitly recorded
Final-pass sign-off when stakes warrant it
For high-stakes documents, an accessibility specialist reviews the remediation output by hand. The reviewer's name, date, and findings are recorded in the certification ledger as a separate entry.
When human review hasn't been performed yet, the certification page says so explicitly — with a "pending" entry and a note that the slot will be filled when review is performed. We don't claim human sign-off we haven't actually done.
Every event is timestamped. Nothing is ever deleted.
The certification page shipped with each remediated document is not a static badge — it's a living ledger. Every validation run, every iteration, every human sign-off, every periodic re-validation gets appended as a new entry. The history is the evidence.
Immutable by design
Past entries are never edited, never deleted. If a finding turns out to be a misclassification, we write a correction entry — we don't rewrite history. The ledger's integrity is part of what makes it credible.
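In data terms, that means the ledger is an append-only list, and a correction is itself a new entry that points at the entry it amends. A minimal sketch, with illustrative field names rather than our actual schema:

```ts
// Illustrative schema sketch, not the production ledger format.
interface LedgerEntry {
  id: string;
  timestamp: string;                 // ISO 8601, set once, never edited
  kind: 'validation' | 'human-review' | 'correction';
  tool?: string;                     // e.g. "axe-core 4.10" for validation entries
  issueCount?: number;
  note?: string;
  corrects?: string;                 // id of the entry this correction amends
}

class CertificationLedger {
  private readonly entries: LedgerEntry[] = [];

  append(entry: LedgerEntry): void {
    this.entries.push(entry);        // the only mutation allowed: appending
  }

  history(): readonly LedgerEntry[] {
    return this.entries;             // past entries are never edited or deleted
  }
}
```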
Iterative refinement is preserved
If a document goes through multiple rounds before it's clean (say, 11 errors → 2 errors → 0), the ledger shows the path. No magic, no hidden retries. The good-faith effort is documented in the artifact itself.
Appendable forever
Periodic re-validation, human reviews, accessibility consultant sign-offs, and post-launch updates all get appended as new entries. A certification record that's 18 months old can still show fresh validation events.
Linkable and auditable
Each certification page has a permanent URL. You can share it with counsel, attach it to a procurement response, or point a federal reviewer at it. The verification doesn't require logging into our platform.
Broward MPO FY 2026-2030 TIP — see the artifact.
This is the real certification record for the Broward Metropolitan Planning Organization's FY 2026-2030 Transportation Improvement Program — a 60+ page transportation planning document with complex tables, financial projections, and project data. Click through and inspect: the ledger shows the full iteration history (including the rounds where the projects table failed validation before reaching zero issues).
The four files that ship together
cdn.octocomply.com/docs/browardmpo.org/tip-2026-2030/
What other approaches don't show.
Accessibility overlays
AccessiBe · AudioEye · UserWay
Self-validate against their own scoring. No third-party validators. No ledger. No iteration history. The validation evidence is the vendor's claim. Widely criticized by the accessibility community — not accepted as compliance evidence in most enforcement contexts.
PDF auto-taggers
Acrobat batch · CommonLook · NetCentric
Add accessibility tags to PDFs and report their own scoring. Complex tables, multi-column layouts, charts, maps, and footnotes routinely break. The validation comes from the same tool that did the tagging — there's no independent check.
Traditional remediation services
$7–$11 per page, 10-day turnaround
Manual remediation by an accessibility specialist. Final deliverable is a remediated file — typically without a detailed validation record showing what was checked, by which tools, and when. Buyers trust the deliverable because they trust the specialist, not because the evidence is visible.
OctoComply Document Remediation Service
Five independent validators. Immutable ledger. Public artifact.
The validation evidence is visible to anyone with the certification URL — buyers, counsel, federal reviewers. The iteration history is preserved. The methodology is published on this page. You don't have to trust us; you can verify.
See your own site against this methodology.
Run a free 10-page scan. Every issue it surfaces is one that a remediated version would have to clear with all five validators. No credit card required.