The Digital Accessibility Time Bomb: Leveraging AI to Accelerate Compliance

Summary

  • For many planning agencies, manually remediating all inaccessible documents would likely take years.
  • However, planners may be able to leverage existing AI tools to clear some of the biggest accessibility bottlenecks.
  • To start, try using a vision model to draft alt text from document images and an AI assistant to assess PDF structure issues and help identify workflow changes.

The previous two parts in "The Digital Accessibility Time Bomb" series established the scope of the accessibility problem (Part 1) and the institutional barriers that prevent easy solutions (Part 2). This final part turns to what might actually work: AI tools that can help planners create and remediate accessible documents at a scale that manual processes cannot match.

This isn't about AI as a buzzword or a future possibility. The tools exist now, and planners can begin experimenting with them immediately. But understanding what these tools can and cannot do — and how they fit into a broader accessibility workflow — is essential before diving in.

Why AI Is Necessary

The math problem is straightforward. A planning department might have thousands of PDFs on its website, accumulated over 15 or 20 years. Each document that fails accessibility standards needs remediation: proper tagging, logical reading order, text alternatives for images, corrected heading structures, etc. A skilled specialist might remediate a few documents per day, depending on complexity. At that rate, clearing the backlog takes years. The DOJ deadline is just around the corner.
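The arithmetic can be made concrete with a back-of-the-envelope estimate. The figures below are illustrative assumptions, not measured data: a 5,000-document backlog, three remediations per specialist per day, and 230 working days per year.

```python
# Illustrative backlog math. All figures are assumed for the sake of
# the example: backlog size, remediation rate, and working days vary
# widely by agency and document complexity.
def years_to_clear(backlog_docs, docs_per_day=3, workdays_per_year=230, specialists=1):
    """Estimate years needed to manually remediate a PDF backlog."""
    docs_per_year = docs_per_day * workdays_per_year * specialists
    return backlog_docs / docs_per_year

print(round(years_to_clear(5000), 1))                 # one specialist: 7.2 years
print(round(years_to_clear(5000, specialists=3), 1))  # three specialists: 2.4 years
```

Even tripling staff leaves a multi-year timeline, and the estimate ignores the new documents published every week.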

Meanwhile, new documents appear constantly without accessibility review. Staff reports, meeting minutes, public notices, plan amendments, and so on. Any remediation strategy that addresses only the backlog will be underwater again almost immediately.

Manual processes cannot solve a problem at this scale and velocity. AI can: not perfectly, and not without human oversight, but at a speed and volume that makes compliance achievable rather than theoretical.

Current AI Capabilities

Several categories of AI tools are relevant to document accessibility.

Document Analysis and Remediation Platforms

Document analysis and remediation platforms use machine learning to evaluate PDF structure, identify accessibility failures, and apply corrections. Unlike older automated tagging tools that simply guessed at structure, newer AI systems can interpret document layouts with greater sophistication, recognizing tables, distinguishing headings from body text, and identifying reading order in multi-column layouts. The best of these systems flags low-confidence corrections for human review rather than applying changes blindly.
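The human-in-the-loop pattern these platforms use can be sketched in a few lines. The data shape below is hypothetical; real platforms expose this logic through their own interfaces rather than as a function you write.

```python
# Sketch of confidence-based triage: AI-proposed corrections below a
# threshold go to a human review queue instead of being applied
# automatically. The correction records here are hypothetical examples.
def triage(corrections, threshold=0.85):
    """Split proposed corrections into auto-apply and human-review queues."""
    auto, review = [], []
    for c in corrections:
        (auto if c["confidence"] >= threshold else review).append(c)
    return auto, review

proposed = [
    {"fix": "tag page 1 title as H1", "confidence": 0.97},
    {"fix": "reading order on two-column page 4", "confidence": 0.62},
]
auto, review = triage(proposed)
print(len(auto), len(review))  # 1 1
```

The threshold is a policy decision: lower it and humans see more noise; raise it and more AI mistakes ship unreviewed.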

Vision Language Models

Vision language models (VLMs) can generate text alternatives for images. When a planning document contains a site photo, a map, or a diagram, Web Content Accessibility Guidelines (WCAG) require a text description that conveys the meaningful content to users who can't see the image. Writing these alternative ("alt") text descriptions manually is time-consuming, but VLMs can generate draft descriptions in seconds. The quality varies — complex maps or technical diagrams may need human revision — but for straightforward images, the output is often usable immediately.
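As a rough sketch, here is how a script might construct a request asking a vision model to draft alt text. The model name and message format follow a common chat-completions style but are assumptions; check your provider's current API documentation before sending anything, and note that no network call is made here.

```python
import base64

# Hedged sketch of an alt-text request payload. The model name and
# message structure are assumptions modeled on common vision-capable
# chat APIs; verify against your provider's docs before use.
def alt_text_request(image_bytes, context, model="gpt-4o"):
    """Build a draft-alt-text request payload for a vision model."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    prompt = (
        f"Draft concise alt text (under 125 characters) for this image "
        f"from a planning document. Context: {context}. "
        "Describe the meaningful content, not decorative details."
    )
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{encoded}"}},
            ],
        }],
    }

req = alt_text_request(b"\x89PNG...", "site photo in a rezoning staff report")
```

Supplying document context in the prompt matters: a bare image of a parking lot reads very differently when the model knows it illustrates a variance request.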

Large Language Models

Large language models (LLMs) can assist with content restructuring. When a document's logical structure is unclear — headings that don't follow hierarchy, paragraphs that should be lists, unclear reading order — an LLM can suggest reorganization. This is particularly useful for legacy documents created without accessibility in mind, where the remediation task involves not just tagging but rethinking how information is presented.
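One of the structural problems mentioned above, headings that skip levels, is simple enough to check programmatically before any AI is involved. A minimal sketch:

```python
# Minimal check for skipped heading levels (e.g., H1 jumping straight
# to H3), which break the document outline that screen-reader users
# navigate by. Input is a list of heading levels in document order.
def heading_skips(levels):
    """Return (position, from_level, to_level) for each skipped level."""
    problems = []
    for i in range(1, len(levels)):
        if levels[i] > levels[i - 1] + 1:
            problems.append((i, levels[i - 1], levels[i]))
    return problems

print(heading_skips([1, 2, 3, 2, 3]))  # [] -- valid hierarchy
print(heading_skips([1, 3, 4]))        # [(1, 1, 3)] -- H1 jumps to H3
```

Checks like this catch the mechanical failures; the LLM's value is in the harder judgment calls, such as recognizing that a run of paragraphs is really a list.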

Optical Character Recognition

Optical character recognition (OCR) enhanced by AI can recover text from scanned documents. Many older planning documents exist only as image-based PDFs: scans of paper originals with no text layer at all. Modern OCR powered by machine learning achieves high accuracy even on degraded scans, making it possible to create accessible versions of documents that would otherwise require complete manual retyping.
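Before running OCR at scale, it helps to identify which pages actually lack a text layer. The heuristic below is a sketch: it assumes text has already been extracted per page upstream (for example, with a PDF library such as pypdf), and flags pages whose text layer is too thin to be real content.

```python
# Rough heuristic for spotting image-only pages: a page whose extracted
# text layer is nearly empty is likely a scan that needs OCR. The
# threshold is an assumption to tune per document collection.
def pages_needing_ocr(page_texts, min_chars=25):
    """Return indices of pages whose text layer is too thin to be real text."""
    return [i for i, text in enumerate(page_texts)
            if len(text.strip()) < min_chars]

extracted = ["", "   ", "Chapter 2: Land Use Element. The plan designates..."]
print(pages_needing_ocr(extracted))  # [0, 1] -- first two pages are likely scans
```

A pass like this lets an agency size the OCR workload before committing to a tool or vendor.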

Pros and Cons of AI Approaches

The primary advantage of AI tools is scale. They can process document volumes that would be impossible to handle manually, reducing backlogs from years to weeks or months. They can also be integrated into ongoing workflows, checking new documents before publication rather than discovering problems after the fact.

AI tools also provide consistency. Human remediators vary in skill and attention; automated systems apply the same standards to every document. This consistency is valuable both for quality and for demonstrating compliance.

The primary disadvantage is accuracy. AI systems make mistakes — sometimes confidently wrong mistakes that are harder to catch than obvious errors. A VLM might describe an image inaccurately. An automated tagging system might misidentify a heading level or mangle a complex table. These errors may not surface in automated accessibility checkers but will affect real users.

This is why human oversight remains essential. The most effective workflow uses AI to handle volume and flag uncertainty, while humans review edge cases and verify quality. AI handles the scale problem; humans handle the judgment problem.

Cost is another consideration. Some AI tools require significant investment in software licensing or cloud computing resources. Others are available through existing platforms that planners may already use. The cost-benefit calculation depends on document volume — a department with a few hundred PDFs might manage with simpler tools, while one with 10,000 needs industrial-strength solutions.

Finally, there's the question of integration. AI tools that exist as standalone applications require manual document handling — downloading PDFs, processing them, and re-uploading. Tools that integrate with content management systems can automate more of the workflow. When evaluating options, planners should consider how the tool fits into their agency's actual publication process, not just its remediation capabilities in isolation.
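The integrated workflow described above amounts to a pre-publication gate: documents reach the upload step only after passing basic checks. The sketch below uses hypothetical placeholder checks; real validators would inspect the actual PDF.

```python
# Sketch of a pre-publication accessibility gate. The individual check
# functions are hypothetical placeholders; real ones would inspect the
# PDF's tags, title metadata, alt text, and so on.
def publication_gate(doc, checks):
    """Run each check; return (ok, list of failure messages)."""
    failures = [msg for check in checks if (msg := check(doc))]
    return len(failures) == 0, failures

def has_title(doc):
    return None if doc.get("title") else "missing document title"

def is_tagged(doc):
    return None if doc.get("tagged") else "PDF is untagged"

ok, failures = publication_gate(
    {"title": "Staff Report 24-101", "tagged": False},
    [has_title, is_tagged],
)
print(ok, failures)  # False ['PDF is untagged']
```

Even a crude gate like this shifts accessibility from after-the-fact remediation to a routine publication step.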

Two Experiments Planners Can Run Today

Planners don't need organizational buy-in or a significant budget to begin exploring AI capabilities. Two low-cost experiments can demonstrate what's possible.

Experiment 1: Alt Text Generation With a VLM

Take a planning document with several images (e.g., a staff report with site photos or a plan chapter with maps and diagrams). Upload the images to an AI assistant that supports image input (such as Claude or ChatGPT with vision capabilities) and ask it to generate alt text descriptions. Compare the AI-generated descriptions to what you would write manually. For straightforward images, you'll likely find the AI output is usable with minor edits. For complex graphics, you'll see where human judgment is still required. This experiment takes 30 minutes and costs nothing.

Experiment 2: Document Structure Analysis

Take a PDF that you know has accessibility problems — a scanned document or one with poor tagging — and upload it to an AI assistant (such as Microsoft Copilot). Ask it to describe the document's structure: What are the main sections? What's the logical heading hierarchy? Are there tables, and what do they contain? The AI's interpretation will show you how machine-readable your document actually is, and its structural suggestions may reveal organization improvements you hadn't considered. This experiment surfaces the gap between visual presentation and logical structure that's at the heart of most accessibility failures.

These experiments won't remediate your backlog, but they'll build intuition about AI capabilities and limitations. That intuition is valuable when evaluating vendor claims, making the case for resources, or designing workflows that incorporate AI tools effectively.

Building Toward Real Capacity

Individual experiments are a starting point, not a solution. The accessibility challenge facing planning departments requires institutional capacity that doesn't yet exist in most local jurisdictions: systematic document inventory, integrated remediation workflows, ongoing quality assurance, and staff who understand both accessibility requirements and the tools available to meet them.

AI makes this capacity buildable in a way it wasn't before. The scale problem that made accessibility feel impossible becomes manageable when the right tools are applied systematically. But tools alone aren't enough — they need to be embedded in processes, supported by training, and governed by clear standards.

The April 2026 deadline is a forcing function. It creates urgency that accessibility has lacked for decades. Planners who understand what AI can do, and who've begun building familiarity through small experiments, will be better positioned to help their organizations respond. The path forward requires both the technology and the institutional will to use it. The technology is ready. The institutional will is what planners can help create.

Top image: D3Damon / iStock / Getty Images Plus


About the Authors
Ivis García, PhD, AICP, is an associate professor in the Department of Landscape Architecture and Urban Planning at Texas A&M University and president of the Association of Collegiate Schools of Planning (ACSP).
Jason Holt is a full-stack developer for Async Education, LLC, specializing in building accessible, evidence-based learning platforms that translate complex professional knowledge into effective study tools.

April 6, 2026

By Ivis García, AICP