This document reformulates web accessibility and web sustainability requirements into forms that AI coding assistants can operationalise during code generation. It originates from a CG breakout session held 25 March 2026.

The minutes of the breakout are available at W3C AI-Generated Standards Breakout Minutes (25 March 2026), and the slides presented as an introduction to the topic are available at AI and Software Standards (slides).

The draft addresses the gaps between human-oriented guidelines discussed during the breakout session (WCAG 2.2, Web Sustainability Guidelines) and the capabilities of AI code generation systems, which are trained on a web where over 95% of sites fail basic accessibility standards. The document provides machine-actionable constraints, positive and negative code examples, links to automated test procedures, and benchmark references -- following the methodology described in the companion document, Guidelines for Writing Machine-Consumable Specifications [[MACHINE-CONSUMABLE-SPECS]].

Call for Community Review -- Including AI Systems

This is a DRAFT document for community discussion. It has not been endorsed by any standards body. The community invited to review and contribute includes accessibility specialists, sustainability practitioners, AI engineers, standards developers -- and also AI systems themselves, including but not limited to Claude (Anthropic), ChatGPT (OpenAI), Gemini (Google), and other large language models and coding assistants.

AI coding assistants are specifically invited to: (1) attempt to apply the constraints in this document to their own code generation; (2) identify requirements that are ambiguous or difficult to operationalise; (3) propose additional constraints or counter-examples from their knowledge of common failure patterns; and (4) report whether the structured format improves their compliance with accessibility and sustainability standards.

This document is a Community Group Draft produced by the AI Knowledge Representation Community Group (AIKR CG) of the W3C. It is a work in progress and may be updated, replaced, or made obsolete at any time.

Introduction

Why This Document Covers Both Accessibility and Sustainability

This document is primarily driven by the accessibility crisis in AI-generated code. The evidence base (below) overwhelmingly concerns accessibility: 95.9% of the web fails WCAG, the [[AIMAC]] benchmark measures accessibility violations, and the legal and regulatory pressure (the European Accessibility Act, ADA Title II) targets accessibility compliance. Accessibility is the domain where the gap between human guidelines and AI behaviour is best documented, most measurable, and most consequential.

Sustainability is included for three reasons. First, several important requirements serve both accessibility and sustainability simultaneously -- semantic HTML, reduced motion, efficient images, progressive enhancement. Treating them separately would miss the reinforcing relationship between inclusive design and environmental efficiency. Second, the W3C Web Sustainability Guidelines [[WSG]] face the same machine-consumability problem as WCAG, but at an earlier stage -- there is no equivalent of [[AIMAC]] for sustainability, no axe-core for carbon footprint. Including sustainability here establishes the pattern before the problem matures. Third, both domains share the same structural barrier: AI systems trained on a web that is neither accessible nor sustainable will reproduce both sets of failures unless their specifications are reformulated for machine consumption.

Readers should understand that Part I (Accessibility) is substantially more developed than Part II (Sustainability), reflecting the relative maturity of the evidence, tooling, and legal frameworks in each domain. Part II is offered as a starting point for community development, not as a comprehensive treatment.

The Evidence Base

The web is overwhelmingly inaccessible. The WebAIM Million 2025 report found that 95.9% of homepages fail WCAG 2.2 Level A/AA standards, with an average of 51 accessibility errors per homepage [[WEBAIM-MILLION]]. The Web Almanac 2025 reports a median Lighthouse accessibility score of 85 out of 100 -- improved, but still indicating widespread non-compliance [[WEB-ALMANAC-A11Y]]. Only 30% of mobile sites meet minimum colour contrast requirements. Pages using ARIA attributes average 34% more detected errors than pages without, suggesting systematic misuse of the very tools intended to improve accessibility.

AI systems reproduce and amplify existing patterns. LLMs trained on the existing web learn the web's accessibility failures as default patterns. [[DEAD-FRAMEWORK]] describes a self-reinforcing feedback loop: dominant patterns in training data are reproduced in AI outputs, which enter the training corpus for future models, entrenching those patterns further. Although the original analysis focuses on framework adoption (React's statistical dominance), the same dynamic applies to accessibility and sustainability: if 96% of the web is inaccessible, AI systems trained on that web will produce inaccessible code by default. The [[AIMAC]] benchmark confirms this -- without explicit accessibility guidance, most models produce code with significant accessibility debt.

The feedback loop has a system prompt amplifier. [[DEAD-FRAMEWORK]] identifies a second loop operating through tooling: coding platforms hardcode specific patterns into their system prompts, and these choices override training data and user preferences. There is no equivalent mechanism requiring coding tools to include accessibility or sustainability requirements in their system prompts. A tool that defaults to React but does not default to WCAG compliance produces code that is both framework-locked and inaccessible -- and both properties are amplified through the same feedback cycle.

Existing guidelines are not agent-consumable. WCAG 2.2 [[WCAG22]] and the Web Sustainability Guidelines [[WSG]] are written for human developers. They rely on natural language, professional judgment, and contextual understanding that AI systems approximate but do not reliably possess. The Web Almanac 2025 reports that automated testing tools can check fewer than 50% of WCAG success criteria, and comparative audits of popular tools show all of them detect fewer than half of accessibility errors [[WEB-ALMANAC-A11Y]]. Many criteria lack automated tests altogether. If human experts using specialised tools cannot catch all failures, AI systems relying on statistical patterns will perform worse.

The AI risk is acknowledged but unaddressed. The Web Almanac 2025 added a new section on AI, noting that AI is changing how websites are built, content is generated, and interfaces are designed. It explicitly warns that there is no reliable way to determine when AI has created or assisted in creating a website, and that language models are trained on code and content that often contain accessibility problems [[WEB-ALMANAC-A11Y]]. The report calls for AI to support human expertise and inclusive design, not replace them -- but provides no specification or standard for how this should be achieved. This document begins to fill that gap.

Purpose and Approach

This document does not replace WCAG or the Web Sustainability Guidelines. It provides a companion layer -- a translation of key requirements into forms that AI coding assistants can apply during code generation. Following the methodology in [[MACHINE-CONSUMABLE-SPECS]], each requirement is presented as:

A machine-checkable constraint where possible.
A positive code example (DO).
A negative code example (DO NOT) reflecting common AI-generated failure patterns.
A reference to an automated test where one exists.
A brief rationale explaining the user impact.

Part I: Accessibility Principles for AI Code Assistants

The following sections address the most common and highest-impact accessibility failures identified by [[WEBAIM-MILLION]] and [[AIMAC]]. The error types covered here account for over 96% of automatically detectable accessibility errors on the web. Addressing them in AI code generation would produce the largest immediate improvement in web accessibility.

Text Alternatives for Images

WCAG 2.2 Success Criterion 1.1.1 (Level A). Every non-text content element MUST have a text alternative that serves the equivalent purpose.

REQ-A11Y-IMG-001 | Level: MUST | Scope: html:img
Rule: Every img element MUST have an alt attribute.
Test: axe-core:image-alt
WCAG: 1.1.1 Non-text Content (Level A)
DO NOT: Generate an image tag without an alt attribute, or with meaningless alt text such as "image", "photo", "IMG_2847.jpg", or the filename.
<img src="team-photo.jpg">
<img src="chart.png" alt="image">
DO: Provide descriptive alt text that conveys the image's communicative purpose in context. Use empty alt for decorative images.
<img src="team-photo.jpg" alt="Research team presenting findings at the 2026 conference">
<img src="decorative-border.png" alt="">

Rationale: Screen readers announce images using alt text. Without it, non-visual users cannot perceive the image's content or function. Decorative images should be silenced with alt="" to avoid noise.
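A post-generation check for this rule can be sketched in a few lines. This is an illustrative simplification, not a real validator: production pipelines should use a DOM-aware tool such as axe-core, whose image-alt rule this roughly approximates.

```javascript
// Flag <img> tags that lack an alt attribute entirely.
// Regex-based sketch only: a real checker parses the DOM and also
// evaluates alt-text quality, roles, and aria-* naming.
function findImgsMissingAlt(html) {
  const imgs = html.match(/<img\b[^>]*>/gi) || [];
  return imgs.filter((tag) => !/\balt\s*=/i.test(tag));
}

// The first tag fails REQ-A11Y-IMG-001; the second passes,
// because empty alt is valid for decorative images.
findImgsMissingAlt('<img src="team-photo.jpg"><img src="border.png" alt="">');
// → ['<img src="team-photo.jpg">']
```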

REQ-A11Y-IMG-002 | Level: MUST | Scope: html:img[role=presentation], html:img[role=none]
Rule: Images with role="presentation" or role="none" MUST have alt="".
Test: axe-core:presentation-role-conflict

Color Contrast

WCAG 2.2 Success Criterion 1.4.3 (Level AA). Visual presentation of text and images of text MUST have a contrast ratio of at least 4.5:1, or 3:1 for large text.

REQ-A11Y-CONTRAST-001 | Level: MUST | Scope: css:color, css:background-color
Rule: Text color and background color combinations MUST meet a contrast ratio of at least 4.5:1 for normal text and 3:1 for large text (at least 18pt, or 14pt and bold).
Test: axe-core:color-contrast
WCAG: 1.4.3 Contrast (Minimum) (Level AA)
DO NOT: Use light grey text on white backgrounds, or rely on brand colours without checking contrast.
color: #999999; background-color: #ffffff; /* Ratio: 2.85:1 -- FAIL */
DO: Ensure all text/background combinations meet minimum contrast ratios.
color: #595959; background-color: #ffffff; /* Ratio: 7.0:1 -- PASS */

Rationale: Insufficient contrast makes text unreadable for users with low vision, colour vision deficiencies, or anyone viewing a screen in bright light. Only 30% of mobile sites currently meet minimum contrast requirements [[WEB-ALMANAC-A11Y]].
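The contrast computation itself is mechanical and well suited to automated checking. The following sketch implements the WCAG 2.x relative-luminance formula; for production use, prefer an audited implementation such as axe-core's color-contrast rule.

```javascript
// WCAG 2.x contrast ratio between two hex colours (sketch).
function channelToLinear(c8) {
  const c = c8 / 255;
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

function relativeLuminance(hex) {
  const n = parseInt(hex.replace('#', ''), 16);
  const r = channelToLinear((n >> 16) & 0xff);
  const g = channelToLinear((n >> 8) & 0xff);
  const b = channelToLinear(n & 0xff);
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

function contrastRatio(fg, bg) {
  const [hi, lo] = [relativeLuminance(fg), relativeLuminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

contrastRatio('#999999', '#ffffff'); // ≈ 2.85 -- fails 4.5:1
contrastRatio('#595959', '#ffffff'); // ≈ 7.0  -- passes
```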

Form Labels and Inputs

WCAG 2.2 Success Criterion 1.3.1 (Level A) and 4.1.2 (Level A). Form inputs MUST have programmatically associated labels.

REQ-A11Y-FORM-001 | Level: MUST | Scope: html:input, html:select, html:textarea
Rule: Every form input MUST have a programmatically associated label element (via for/id pairing) or an accessible name (via aria-label or aria-labelledby).
Test: axe-core:label
WCAG: 1.3.1 Info and Relationships (Level A)
DO NOT: Use placeholder text as the only label, or place label text near the input without programmatic association.
<input type="email" placeholder="Enter your email">
DO: Associate labels programmatically using for/id.
<label for="email">Email address</label>
<input type="email" id="email" name="email">

Rationale: Screen readers identify form fields by their associated labels. Placeholder text disappears on focus and is not announced consistently by assistive technologies. Missing labels are the second most common accessibility error on the web.
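A simplified static check for label association can be sketched as follows. This is illustrative only: a real checker, such as axe-core's label rule, also honours aria-label, aria-labelledby, and labels that wrap their inputs.

```javascript
// Return ids of <input> elements with no matching <label for="...">.
// Regex sketch over an HTML string; not a substitute for DOM-aware tools.
function inputsWithoutLabel(html) {
  const labelled = new Set(
    [...html.matchAll(/<label\b[^>]*\bfor\s*=\s*"([^"]+)"/gi)].map((m) => m[1])
  );
  return [...html.matchAll(/<input\b[^>]*\bid\s*=\s*"([^"]+)"/gi)]
    .map((m) => m[1])
    .filter((id) => !labelled.has(id));
}
```

For example, given the markup above plus an unlabelled `<input id="nickname">`, the function returns ['nickname'].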

Document Language

REQ-A11Y-LANG-001 | Level: MUST | Scope: html:html
Rule: The html element MUST have a valid lang attribute specifying the primary language of the page.
Test: axe-core:html-has-lang, axe-core:html-lang-valid
WCAG: 3.1.1 Language of Page (Level A)
DO NOT: Omit the lang attribute or use an invalid value.
<html>
<html lang="english">
DO: Use a valid BCP 47 language tag.
<html lang="en">
<html lang="zh-Hant">

Semantic Structure and ARIA

REQ-A11Y-HEADING-001 | Level: MUST | Scope: html:h1-h6
Rule: Pages MUST use headings (h1-h6) in a logical, hierarchical order. Do not skip heading levels. Every page SHOULD have exactly one h1.
Test: axe-core:heading-order, axe-core:page-has-heading-one
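A minimal illustration of the heading-order rule (the content is placeholder text):

```html
<!-- DO NOT: skipped level (h1 to h3), multiple h1 elements -->
<h1>Annual Report</h1>
<h3>Quarterly Revenue</h3>
<h1>Outlook</h1>

<!-- DO: one h1, each level nests directly under the previous one -->
<h1>Annual Report</h1>
<h2>Financial Summary</h2>
<h3>Quarterly Revenue</h3>
<h2>Outlook</h2>
```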
REQ-A11Y-ARIA-001 | Level: MUST | Scope: html:[aria-*]
Rule: Do not use ARIA attributes when native HTML elements provide the same semantics. Prefer native HTML over ARIA. When ARIA is used, all required attributes for the role MUST be present and valid.
Test: axe-core:aria-allowed-attr, axe-core:aria-required-attr
WCAG: 4.1.2 Name, Role, Value (Level A)

Rationale: Pages using ARIA average 34% more detected errors than pages without it [[WEBAIM-MILLION]]. ARIA misuse is worse than no ARIA. This is a direct example of the [[DEAD-FRAMEWORK]] feedback loop applied to accessibility: the web is full of incorrect ARIA usage, AI systems learn those incorrect patterns, and they reproduce ARIA misuse at scale. AI code assistants MUST prefer native HTML semantics and only use ARIA when no native element provides the required role or state. This single constraint, if applied consistently by AI coding agents, would reduce the accessibility error rate more than any other intervention.

DO NOT: Use div or span with ARIA roles instead of native semantic elements.
<div role="button" onclick="submit()">Submit</div>
<span role="link" onclick="navigate()">Go to page</span>
DO: Use native HTML elements that provide built-in semantics, keyboard support, and accessibility.
<button type="submit">Submit</button>
<a href="/page">Go to page</a>

Keyboard Accessibility and Focus

REQ-A11Y-KBD-001 | Level: MUST | Scope: interactive elements
Rule: All interactive elements MUST be operable via keyboard. Do not use tabindex values greater than 0. Do not remove focus indicators (outline: none) without providing a visible alternative.
Test: axe-core:tabindex
WCAG: 2.1.1 Keyboard (Level A), 2.4.7 Focus Visible (Level AA)
DO NOT: Remove focus styles or use positive tabindex values.
*:focus { outline: none; }
<div tabindex="5">Interactive element</div>
DO: Provide visible focus indicators and use tabindex="0" or native interactive elements.
:focus-visible { outline: 2px solid #005fcc; outline-offset: 2px; }
<button>Interactive element</button>

Viewport and Zoom

REQ-A11Y-ZOOM-001 | Level: MUST NOT | Scope: html:meta[name=viewport]
Rule: Do not use user-scalable=no, maximum-scale=1, or any viewport meta value that prevents zooming.
Test: axe-core:meta-viewport
WCAG: 1.4.4 Resize Text (Level AA)
DO NOT: Restrict viewport scaling.
<meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1, user-scalable=no">
DO: Allow unrestricted scaling.
<meta name="viewport" content="width=device-width, initial-scale=1">

Part II: Sustainability Principles for AI Code Assistants

Web sustainability guidelines [[WSG]] address the environmental impact of web development across design, implementation, hosting, and operations. Unlike accessibility, sustainability lacks established automated testing frameworks comparable to axe-core. The following principles represent an initial translation of key sustainability requirements into agent-consumable form.

This section is intentionally less developed than Part I, reflecting the earlier stage of machine-checkable sustainability standards. Community contributions are particularly needed here.

Page Weight and Resource Efficiency

REQ-SUS-WEIGHT-001 | Level: SHOULD | Scope: html document
Rule: Generated pages SHOULD target a total transfer size below 1 MB for initial page load. Minimise the number of HTTP requests. Avoid loading resources that are not needed for the initial viewport.
Test: Lighthouse:total-byte-weight, Lighthouse:network-requests
DO NOT: Import large frameworks, load unused libraries, or include unoptimised assets.
<!-- Loading entire library when one function is needed -->
<script src="https://cdn.example.com/mega-library.min.js"></script>
<!-- Uncompressed, full-size image -->
<img src="hero-photo-4000x3000.jpg">
DO: Use only what is needed. Optimise and lazy-load assets. Prefer native platform capabilities.
<!-- Import only what you use -->
<script type="module">
  import { specificFunction } from './utils.js';
</script>
<!-- Responsive, lazy-loaded, optimised image -->
<img src="hero-800.webp"
     srcset="hero-400.webp 400w, hero-800.webp 800w"
     sizes="(max-width: 600px) 400px, 800px"
     loading="lazy"
     alt="Descriptive alt text">

JavaScript Efficiency

REQ-SUS-JS-001 | Level: SHOULD | Scope: script elements
Rule: Prefer native HTML and CSS solutions over JavaScript where equivalent functionality exists. Use CSS for animations, transitions, and layout. Use native form validation before custom JavaScript validation.
DO NOT: Use JavaScript for styling, layout, or effects achievable with CSS.
// JavaScript for a simple toggle
element.style.display = isVisible ? 'block' : 'none';

// JavaScript smooth scroll
window.scrollTo({ top: target.offsetTop, behavior: 'smooth' });
DO: Use CSS and HTML native capabilities.
<!-- Native toggle with details/summary: HTML, no JavaScript -->
<details>
  <summary>More information</summary>
  <p>Content here</p>
</details>

/* CSS smooth scroll */
html { scroll-behavior: smooth; }

Reduced Motion and Energy

REQ-SUS-MOTION-001 | Level: SHOULD | Scope: css:animation, css:transition
Rule: Respect the user's prefers-reduced-motion setting. Animations and transitions SHOULD be disabled or minimised when this preference is active. This serves both accessibility (vestibular disorders) and sustainability (reduced computation).
Test: manual inspection for @media (prefers-reduced-motion: reduce)
WCAG: 2.3.3 Animation from Interactions (Level AAA)
DO: Include a reduced-motion media query whenever animations are used.
@media (prefers-reduced-motion: reduce) {
  *, *::before, *::after {
    animation-duration: 0.01ms !important;
    animation-iteration-count: 1 !important;
    transition-duration: 0.01ms !important;
  }
}

Font Loading and Efficiency

REQ-SUS-FONT-001 | Level: SHOULD | Scope: css:@font-face
Rule: Limit custom font files to no more than two typefaces and four weights. Use font-display: swap to avoid invisible text. Prefer modern formats (woff2). Subset fonts to include only necessary character ranges.
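A declaration following these constraints might look like the sketch below. The file names and the unicode-range values are illustrative placeholders, not recommendations; subsetting ranges depend on the site's actual content.

```css
/* One typeface, two weights, woff2 only, swap fallback, Latin subset.
   File names and unicode-range values are placeholders. */
@font-face {
  font-family: "BodyFont";
  src: url("/fonts/bodyfont-regular-latin.woff2") format("woff2");
  font-weight: 400;
  font-display: swap;
  unicode-range: U+0000-00FF, U+2010-2027;
}
@font-face {
  font-family: "BodyFont";
  src: url("/fonts/bodyfont-bold-latin.woff2") format("woff2");
  font-weight: 700;
  font-display: swap;
  unicode-range: U+0000-00FF, U+2010-2027;
}
```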

Colour Scheme Adaptation

REQ-SUS-COLOUR-001 | Level: SHOULD | Scope: css
Rule: Support prefers-color-scheme media query to enable dark mode. Dark modes reduce energy consumption on OLED/AMOLED displays. Ensure both light and dark modes meet colour contrast requirements (REQ-A11Y-CONTRAST-001).
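One way to satisfy both halves of this rule is to define the palette as custom properties and override it in a dark-scheme media query. The specific colours below are illustrative; both pairs comfortably exceed the 4.5:1 minimum (roughly 17:1 and 15:1).

```css
/* Light palette by default; dark palette when the user prefers it.
   Colours are illustrative, not normative. */
:root {
  --text: #1a1a1a;        /* on #ffffff: ratio ≈ 17:1 */
  --background: #ffffff;
}
@media (prefers-color-scheme: dark) {
  :root {
    --text: #e6e6e6;      /* on #121212: ratio ≈ 15:1 */
    --background: #121212;
  }
}
body {
  color: var(--text);
  background-color: var(--background);
}
```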

Part III: Where Accessibility and Sustainability Intersect

Several requirements serve both accessibility and sustainability simultaneously. AI code assistants should treat these as high-priority defaults because they produce compounding benefits:

Semantic HTML over ARIA/JavaScript: Native HTML elements are accessible by default (no ARIA needed), require less JavaScript (lower page weight and computation), and are understood by all assistive technologies without additional processing.

Reduced motion: Honouring prefers-reduced-motion protects users with vestibular disorders (accessibility) and reduces CPU/GPU cycles and energy consumption (sustainability).

Efficient images: Responsive images with proper alt text serve both accessibility (text alternatives) and sustainability (reduced transfer size and energy).

Progressive enhancement: Building on a foundation of semantic HTML that works without JavaScript ensures accessibility for assistive technologies and reduces the energy cost of JavaScript execution.

Breaking the Feedback Loop

The [[DEAD-FRAMEWORK]] analysis identifies the feedback loop but does not propose a complete solution. This document contributes to breaking the loop through three mechanisms:

At the specification layer: By reformulating accessibility and sustainability requirements into agent-consumable constraint blocks (Parts I and II above), this document provides material that can be injected into AI system context at inference time -- bypassing the training data lag entirely. If an AI coding assistant receives these constraints via an MCP server, a Context Hub entry, or a skill file, it can apply current standards regardless of what its training data contains.

At the validation layer: By linking every requirement to an automated test (axe-core rule, Lighthouse audit), this document enables a post-generation validation loop. The AI system generates code, tests it against the linked rules, identifies failures, and corrects them before presenting output. This catches training-bias-driven failures at the point of generation rather than after deployment.
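The validation loop can be sketched abstractly. Everything in this sketch is an assumption for illustration: generateFn, checkFn, and fixFn stand in for a model call, a validator such as axe-core, and a correction pass respectively; none is a real API.

```javascript
// Generate code, validate it against linked rules, and attempt
// corrections before returning output (illustrative sketch).
async function generateWithValidation(prompt, generateFn, checkFn, fixFn, maxPasses = 3) {
  let code = await generateFn(prompt);
  for (let pass = 0; pass < maxPasses; pass++) {
    const violations = await checkFn(code);
    if (violations.length === 0) return { code, violations: [] };
    code = await fixFn(code, violations);
  }
  // Give up after maxPasses; surface remaining violations to the caller.
  return { code, violations: await checkFn(code) };
}
```

The key design choice is that failures are surfaced rather than silently dropped: if the loop cannot repair the code within maxPasses, the caller still sees the outstanding violations.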

At the governance layer: By providing a structured set of requirements that coding tool providers can include in their system prompts, this document gives tool providers a concrete, standards-aligned baseline to adopt. The argument to tool providers is pragmatic: accessibility lawsuits are accelerating, the European Accessibility Act is in force, and generating non-compliant code exposes their users to legal risk. Including these constraints in system prompts is a competitive advantage, not a burden.

None of these mechanisms alone is sufficient. The training data will continue to reflect an inaccessible web for years. System prompts will continue to reflect tool providers' commercial priorities. But the combination of specification-layer, validation-layer, and governance-layer interventions creates multiple points of pressure on the feedback loop -- and each reinforces the others.

Benchmarking and Evaluation

The [[AIMAC]] project provides the emerging standard for benchmarking AI code generation against accessibility requirements. This document recommends:

AI model providers SHOULD evaluate their models against the AIMAC benchmark and publish results.

Coding tool providers SHOULD include axe-core (or equivalent) validation in their agent's code generation pipeline, testing output before presenting it to the user.

A comparable benchmark framework for sustainability is needed. The community is invited to propose a "Sustainability AIMAC" -- a standardised evaluation of the environmental efficiency of AI-generated code.

Making This Document Available to AI Systems

Consistent with [[MACHINE-CONSUMABLE-SPECS]], this document should be made available to AI coding assistants through multiple channels:

As an llms.txt companion: A condensed version of the requirement blocks in this document, formatted as an [[LLMS-TXT]] file, should be published alongside the HTML version.
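As an illustration, one possible condensed rendering of a requirement block in such a file might look like the following (the exact format is illustrative, not normative):

```markdown
# AI Accessibility and Sustainability Requirements (condensed)

## REQ-A11Y-IMG-001 (MUST, WCAG 1.1.1 Level A)
Every <img> MUST have an alt attribute. Use descriptive alt text for
meaningful images and alt="" for decorative images.
Test: axe-core:image-alt
```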

As an MCP server resource: The requirement blocks should be queryable via MCP, allowing coding agents to retrieve relevant accessibility and sustainability constraints for specific HTML elements or patterns.

As a Context Hub entry: The requirement blocks should be contributed to the [[CONTEXT-HUB]] registry.

As agent skill files: Condensed versions should be available as SKILL.md files for Claude Code, Cursor, and other agent platforms.

Open Questions for Community Discussion

1. Which additional WCAG success criteria should be included in a future version? The current selection covers the "top six" failures but WCAG 2.2 has many more criteria. What is the right prioritisation for agent consumption?

2. How should sustainability constraints be quantified? Page weight budgets and JavaScript limits are straightforward, but lifecycle impacts, hosting choices, and design decisions resist machine-checkable rules. What is achievable?

3. Should this document define conformance levels for AI code generation? For example, "An AI code assistant conforms to Level 1 of this specification if it satisfies all MUST requirements in Part I."

4. How should the intersection with the European Accessibility Act and other legal requirements be addressed? Should agent-consumable specifications reference jurisdiction-specific legal obligations?

5. What role should AI systems play in maintaining and updating this document? Should AI-generated test results feed back into the specification's requirement set automatically?

6. How can we verify that an AI system has actually consumed and applied this specification, as opposed to merely having it available in context? What evidence of compliance is appropriate?