Meeting Minutes
Wendy Reid: All right, why don't we get started, now that we have a quick update on some stuff that's been going on in the background. Very first thing: if you're paying really close attention to the GitHub, you'll notice that I'm going to have to take down the draft of the spec. This is a short-term thing; there are some licensing considerations with W3C document usage. Today I'm going to be sending an email to the team to get that addressed. In the meantime, I have to take down the copy that we have online, just so we're all above board. Once that's approved, we can put it back up and start working on it, but for now, I have to take it down. So if you notice that and are wondering why, that is why. The other piece of business is the charter; I haven't received any additional comments on it as it is. I'll send out a formal email, so we don't have to do it right now, asking everyone to basically approve or disapprove of the charter, and I'll put all the language in the email. Keep an eye out for that, and we'll run it for one week. Barring anyone having any strong opposition, we will have an official charter. Just keep an eye out for that in your emails. The topic I wanted to discuss today is research about ATAG. Among the deliverables we talk about in the charter, one is essentially a document that covers any research that we do about ATAG. I think this is an opportunity for us to research the current landscape, any challenges that have led to why we're in the place we are today, and then also research looking ahead at possible features or considerations that we need to make in future authoring tool standards.
Jutta Treviranus: Yeah, so for anyone not familiar with ASC, usually what happens is they fund research to inform standards, so they have two arms. One is reaching out to the community, because they want community-informed standards, and one of their commitments is "nothing without us," so there is a funding arm that funds research projects that do not actually develop the standards, but create research that informs the standards. Then there's a separate arm of ASC that formally recruits the committees and actually creates the standards. That also has a commitment to representation from the disability community, or the people who are going to be impacted by the standards.
One of the things that frequently happens within those committees is to review the research that has happened to inform the standard. Now, I don't think there is very much within ASC's research that is specific to this particular activity, but I know from having participated in other ASC committees that we also do a jurisdictional scan, or an environmental scan, of all of the research available to inform the activity that we have. So, one of the things that we will likely do before the ASC component of this begins is to create that jurisdictional scan, with a bibliography and a lit review of all the research projects that have… or papers, or publications, or activities globally that have happened relevant to the activity, or to developing the standard.
Lisa Liskovoi: Just to add to that: as part of this project specifically, well, as part of a lot of our work, but as part of this project also, we have some co-designs that we've proposed to do around authoring tools. So we could potentially have a number of sessions around different topics that are related to this, and that can include this group and others, and we can invite people from various communities as well. Ned, I don't know if you want to add anything to that.
Ned Zimmerman: Yeah, we have a vague outline of four different co-design topics that could be used as a starting point: looking at using authoring tools, accessibility of AI-generated content, processes for creating accessible content, and then a fourth one broadly framed as imagining inclusive tools for authoring, which brings in learnings from the other three sessions. So that's a very rough starting point for what we had proposed, but I think that's something we could also discuss and see how we might want to refine those ideas for co-design, and turn them into sessions that this group and others could participate in.
Charles Hall: Just to make sure everyone is aware of something that was very useful in WCAG3 and would be super useful here: the research we started with was on the document itself, the current version. Which is basically what we're doing right now; we're proving a point that ATAG needs to be revisited, and research helps prove that point. Some of the research we did on WCAG2 led to the requirements of WCAG3, to fix the things that were found to be problematic. So, specifically, set up research to find out the gaps, the shortcomings, the problems that we're trying to solve in the current document.
Wendy Reid: Yeah, that's a great point. The foundational research for WCAG3 has proven incredibly useful over the years. The work of the Silver Task Force was essential.
Charles Hall: Yeah, it was great to participate in that.
Wendy Reid: I suppose something for us to consider is how we might conduct some of that ourselves. Maybe we can look at the work from Silver as a foundation for how to do the same thing on our side. Charles, maybe you can talk a little more about some of the activities, but I know there were things like surveys, interviews with different stakeholders, a lot of different activities. I think even some targeted outreach to specific types of roles to get their input on the usage of WCAG2.
Charles Hall: Yeah, there were several research methodologies used like that: surveys, polls, heuristic analysis, and usability studies. There was academic research that we were given the raw data from, to pull our own insights. It was pretty broad.
Wendy Reid: One thing for us to consider, speaking of academic research, is doing something like a literature review. In the 10 years since ATAG2 was published, has there been any research on it directly, or on related matters of authoring on the web? What were the findings of that research? Can any of it help inform us?
Lisa Liskovoi: I'm wondering if it would be helpful for us to start a GitHub thread, just to document all of the different methodologies that we're considering using, and then I'm sure folks have examples of each one of those that we can reference.
Wendy Reid: Yeah, for sure. Let me open that up right now.
Jutta Treviranus: And just to add to some of the topics we might want to explore: there's quite a bit that's recently been published relating to how much content is authored by AI and the trends there, and some critique of what is happening. Some of the studies or reports have shown that it's more than 50% of content. I've been attending a bunch of AI summits, and yesterday there was a report that found that over 76% of content was either fully created using AI or had some AI involvement.
Shivaji Kumar: And that's likely to increase, right?
Jutta Treviranus: Yeah.
Wendy Reid: I've created a discussion thread where we can collect different methodologies or approaches, or resources. I'll dig up the link for Silver and add them here as a reference point.
Lisa Liskovoi: Do we know if the methodology in Silver is documented, or is it just the outcomes?
Charles Hall: You'll need to get direct access from the chairs. All of the actual research lives in the original Silver Google Drive folder. Both the research and the outcomes of the research were published in Google Docs for the task force to work from, and those informed the requirements, which are then public.
Wendy Reid: Speaking of research, what are some areas that we're interested in looking at? Especially, what kind of open questions do people feel like they have?
Charles Hall: I think we should reframe the questions that we've already asked. Why was it not successful in being adopted by policy around the world?
Mike Gifford (CivicActions): Or, for that matter, procurement, even more important.
Wendy Reid: I'm curious about the relationship between policy and downstream considerations like procurement. My theory, my hypothesis, is that a big reason ATAG is not as adopted as it could be is that it's not in policy. So organizations, when they do their risk assessments and their budgeting, and they're thinking "what do we need to focus on?", see that there's nothing requiring them to do something, and it's like, cool, we either don't have to do that, or we can push it down the road to a day when maybe we've got extra bandwidth. And of course, people almost never have extra bandwidth.
Shivaji Kumar: Yeah, and also, there's not enough awareness about ATAG among policy makers and managers. That's one big barrier, right? We talk about WCAG 2.1, 2.2, but no one ever talks about ATAG at the enterprise level.
MJ - Mary Ann Jawili: So, I'm actually having this conversation at my new employer right now. A customer asked about ATAG conformance, and then my team was like, oh, maybe we should have a document that covers ATAG conformance. I talked them out of it. I said, well, we have the ACR, and that has sections for Section 508 and EN 301 549 that talk about authoring tools, and that should be enough to cover ATAG, at least the main things about being able to create accessible content. What I'd like for us to talk about is: do we see people eventually creating or including ATAG as its own section in a conformance report or something?
Jutta Treviranus: Yeah, so, if we were to analyze the reason why ATAG was not seen or given as much attention within policy, or attention by the community at large, it can get very political, right? In large part, within W3C, what we're seeing is some very, very powerful authoring tool developers who, of course, do not wish to be regulated. But in addition to that, we're also seeing that WCAG conformance creates an entire industry, by virtue of WCAG being the dominant standard. The responsibility for creating WCAG-compliant or accessible content is pushed to the consumer, or to the organizations creating the web content, rather than to the browsers or the authoring tool developers, who are very, very large lobbying groups. Then added to that, we have the entire accessibility industry, which is dependent upon funding to evaluate and repair these sites, and thereby also benefits from a highly technical WCAG. This is a very cynical view, but I think to some extent it is somewhat realistic: if we had a strong ATAG within policy, within procurement, then the accessibility industry would not be as large, but also, conforming to ATAG would mean that those powerful companies and lobbyists would need to adhere to a set of regulations, rather than being able to push that to the consumer or to the organizations that are creating the web content. And that's a very cynical view, but I think that is part of the reality.
Mike Gifford (CivicActions): I think you're right, Jutta, in terms of the incentives. It's all about the incentives, and producing reports and highlighting errors is where the accessibility industry is right now. As a parallel, I've been really keen on trying to get accessibility fixed in open source projects, because if you fix it upstream, then you're solving millions of errors downstream with a single fix; you can resolve many different problems. But it's really hard to get anyone to put any money into fixing problems upstream. There's no money in it. There's lots of money in fixing the problem a thousand times, but none for fixing it once for everyone. In terms of the incentives, I shared in the chat the ACR editor tool that I helped to build with CivicActions, because this is an effort where we've actually got U.S. government policy, around Section 508, aimed at trying to remove the VPAT, because the VPAT is a very flawed process. But many people here probably aren't even aware of what an open ACR document is, because it hasn't gotten enough attention within the accessibility industry, even though it's on the Section508.gov website. And the reason it hasn't gotten any attention isn't because it isn't in policy, because it is. It's because it's not in procurement, because procurements aren't asking for it. If procurement documents were asking for it, if you needed to be able to provide an open ACR document, then yes, absolutely, we would see the industry responding to that, but they're not. I mean, Deque's got a session in the next week or two talking about VPATs, and it's like, why are we still talking about VPATs? This is a legacy dinosaur format that's bad; there are better ways to do this that are recommended by Section508.gov. So again, how do we change the incentives? That involves following the money.
Charles Hall: I just wanted to respond quickly to Jutta's point: I agree with the political bias challenge. But with the question of adoption, I didn't mean, literally, that we do research on that; it was an example of some of the questions we had asked previously. My recommendation was to take the questions we've already started to ask and frame them as research.
Ned Zimmerman: Something that a few of these comments have made me think of, and it's a bit of an assumption on my part, but I think it might hold true: in projects at the IDRC that I've worked on around accessibility legislation and other things in this arena, there tends to be a lot of emphasis placed on accessibility for consumers, and I'm using that term to mean, in the context of the web, people who are accessing web content. Not necessarily creating it, even though many quote-unquote consumers are using authoring tools to create content. And so I wonder if there may be a built-in bias around the idea that we need to make websites accessible for the people who are visiting them, but an assumption on the part of enterprises, in some contexts, that the people who are using the authoring tools within their own organizations, their employees, aren't the people they're thinking about when they're thinking about accessibility requirements. We've seen this a lot around some of the accessibility legislation: there needs to be that emphasis on accessibility for people working within organizations, making sure that they are given the same accessibility supports and accessibility affordances that consumers would be. But there is so much emphasis on the consumer that I think sometimes that can be missed, and I feel like that difference between content authoring and content consumption is a false binary, but it is a binary that people still think about. I have a bit of experience, having worked on an authoring tool before I worked at the IDRC, and the emphasis was always on whether the consumer side of it is accessible, not whether the authoring experience is accessible, at least from vendors or people who were contacting my old employer as potential customers. So anyway, just wanted to leave it there.
Miriam Fukushima: Part of the problem is really that the workflow is more convoluted in ATAG than in WCAG, and that a lot of it is behind closed doors. Every website on the open web can be critiqued, but a lot of the time I hear the argument: yeah, but there's a login, only our people see it, and our people don't need it. It's hard to argue with that. You say, but maybe at some point you have an employee who needs it, and then it should conform, and they're like, yeah, but we know we don't need it, we know our target group. A lot of it is behind closed doors, or is for a certain target group, and that goes back to where the money flows, and for how many people, and how lucrative it is. If you want to regulate it, it has to be somehow open for regulation and controllable. And if a lot of it is behind logins or accounts, and not on the open web, accessible for everyone to see and judge, then it's a lot harder to enforce these kinds of regulations, and also a lot harder to determine who uses it and for what purposes, whether it's for the output or for creating content. I think these are the three main aspects that make it really hard: the workflow in itself, the application, and the regulation. And in one of the first sessions, the first two or three, we talked about this a lot and had a lot of good questions too, and that's where I agree with Charles: we should really put those questions up for research.
Wendy Reid: I run into this now at work every day, ironically, as a direct result of AI. I've been quite vocal with my colleagues about the rise of slide presentations completely generated by AI. I get it: making slides is annoying, making them look nice is hard, and these tools make beautiful diagrams and things. But they inevitably end up being a list of flat images with the text embedded, and there's the obvious argument of, hey, this isn't accessible; someone going in with a screen reader or other tools won't be able to parse the text. The other thing I lean on, just because I think it unfortunately resonates with people a little bit more, is: these slides are not indexable. Our internal search engines that collect the data can't read your images, so if you want your training deck to be available for people, it's not, because it's just 12 images. Accessibility plays into everything; it's important for everything. There is a huge gulf internally around needing to make it accessible for the outside. I do like the idea of us putting together, maybe in the same thread, or in a different thread, the questions that we have. Even if they're rough, they don't have to be perfectly framed into research-ready hypotheses or anything, but what are our questions? And then, looking at those questions, how do we phrase them in ways that make them researchable?
Latasha Willis: Sure, I can do that. Let me explain. I work for a healthcare organization that the state government oversees, I'll put it like that. So that makes us a government entity, and the April 22nd deadline, as far as accessibility, applies to us, right? So our developer is in the process of trying to make sure that anything that us regular web content people can't edit deals with, you know, ADA issues. He's working on this, and he's run into this issue with hero videos at the top of the screen. We use a service called Siteimprove to check for ADA issues, and there's this error message that has to do with being able to play and pause a background video, because people who have attention deficit issues may need to play and pause in order to see what's in the video. But it's decorative; that's all I'm trying to figure out. It's video, but it's decorative. He did update the ARIA label to say what it is. The main content is overlaid on top of it; it was just playing in the background, you know what I mean? I can see if I can find that website so you can see what I'm referring to, but it's a little weird to me.
Wendy Reid: So there's actually a secondary use case. There's obviously the attention use case, but people with vestibular disorders can also find videos quite disconcerting, depending on the content of the video, to the point of causing migraines or seizures. Migraines or motion sickness are usually the more common side effects. That's why that play-pause requirement is in there, for a number of reasons.
Latasha Willis: Yeah, on that page I'm talking about. I'll see if I can drop it in the chat, so you can see what I'm talking about.
Wendy Reid: Yeah.
Charles Hall: Latasha, is your question how can this be authored?
Latasha Willis: Well, it's really a two-part question, and that is part of the question: if someone was trying to add pre-roll video, like, if they had to be the ones to do it.
Charles Hall: I see. So someone internal is using an internal authoring tool to add this video to web content.
Latasha Willis: Right.
Mike Gifford (CivicActions): I mean, I think that any application that allows you to upload a hero image with a video should, by default, have a pause; it should not start playing by default, and it should have the controls built into it. But that's the responsibility of the tool builder. It's not even the responsibility, as I see it, of the authoring tool… sorry, it's not the authoring interface, it's the implementation of it. What does the CMS that you're using do when a video is uploaded? How is it restricting videos that are played in hero images? The author doesn't really have any direct role in that, other than needing to upload the video and have the developer make sure it's processed in a way that is accessible.
Wendy Reid: Yeah, I was thinking in the ATAG context, which is your second question. A lot of this can be automated, right? Because upon upload of a video file, you know that it's a video.
Latasha Willis: Yeah.
Wendy Reid: You know the length, the duration of it. So you should have a rule built in that's like: if longer than 5 seconds, you must have this. And as an organization, you may say we should always have a play-pause button, that's just better user experience, and that's built in, right? Anytime a video is present, you put a play-pause button on it. A lot of the work can be done in an automated fashion, and then the person doing the editing, their side of it might be that they also need to upload a caption file, or mark it as decorative, or give alt text if that is more appropriate in that case. From the ATAG perspective, on the tool side, a lot would be: this is a video, it's detected, I have some things I have to do.
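[Editor's note] The kind of automated rule Wendy describes could be sketched as follows. This is a minimal illustration, not an ATAG requirement: the 5-second threshold comes from her example, and the function name, parameters, and task strings are all hypothetical.

```python
# Illustrative sketch of automated checks a CMS might run on video upload.
# The 5-second threshold follows Wendy's example; everything else is hypothetical.

def video_upload_requirements(duration_seconds: float, has_audio: bool,
                              is_decorative: bool) -> list[str]:
    """Return the author-facing tasks the tool should prompt for."""
    tasks = []
    # Organizational rule: every video gets player controls.
    tasks.append("add play/pause control")
    if duration_seconds > 5:
        # Longer background motion needs a pause mechanism and review.
        tasks.append("confirm video does not autoplay, or can be paused")
    if has_audio:
        tasks.append("upload a caption file")
    if not is_decorative:
        tasks.append("provide a text alternative or description")
    return tasks

print(video_upload_requirements(30, has_audio=True, is_decorative=False))
```

The point of the sketch is that most of these checks need no author judgment at all; only the last two tasks hand work back to the person doing the editing.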
Latasha Willis: Yeah, that's what I'm really trying to boil it down to. I put a link to the page in question that I'm asking about, as far as how that should be approached.
Wendy Reid: ATAG requires output to be accessible, or for it to be possible to create output which is accessible. Both is maybe the best description?
Charles Hall: Yeah, I think we said output can be exposed to the author during the authoring process.
Ned Zimmerman: My read on this was that there's some nuance around the accessibility of content produced by an authoring tool. Just as an example, in the content auto-generation during the authoring session section, there are different ways to meet that criterion. One is that the content that's created is accessible. One is that authors are prompted during the process for required accessibility information. One is that automatic checking is performed after the content is authored. And the fourth is that checking is suggested. So it seems to me that there is some leeway there for how you meet that requirement, because it says at least one of those has to be true. You could have a situation where there's a prompt to check for accessibility after the content is authored, but the content that's been authored may not actually be accessible. If I'm understanding that correctly, but others may have a better interpretation.
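[Editor's note] The "at least one of the following is true" pattern Ned describes can be made concrete with a small sketch. The option names below paraphrase his summary of the clause, not the spec's wording, and the function is purely illustrative.

```python
# Sketch of the "at least one of the following is true" conformance pattern
# Ned describes for auto-generated content. Option names paraphrase his
# summary; they are not quotations from ATAG.

AUTO_GENERATION_OPTIONS = {
    "accessible": "generated content is accessible without author input",
    "prompting": "authors are prompted for required accessibility information",
    "automatic_checking": "automatic checking is performed after authoring",
    "checking_suggested": "the tool suggests the author perform checking",
}

def meets_auto_generation_requirement(satisfied: set[str]) -> bool:
    """True if at least one recognized option is satisfied."""
    return bool(satisfied & AUTO_GENERATION_OPTIONS.keys())

# A tool that only *suggests* checking still satisfies the clause,
# which is exactly the leeway Ned points out: the published content
# itself may or may not be accessible.
print(meets_auto_generation_requirement({"checking_suggested"}))
print(meets_auto_generation_requirement(set()))
```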
Evelyn Wightman: So, you could upload a video, and it would say, hey, you should maybe do these things. And you could say, no, I don't want to do that, and publish the video hero.
Ned Zimmerman: Yeah, that's how I understand it, but maybe that's something that could be revisited. This is something Lisa and I were discussing with respect to AI, and it's been in my head for the last few weeks: when you're authoring a document fully manually, typically it's a process that is, at least in some fashion, linear and piece by piece. You may not be authoring it all at once, all in top-to-bottom or left-to-right order, but you're going through each piece of content and creating it yourself, inserting images, inserting media, and so forth. If you're getting an LLM or an AI tool to generate a whole document for you based on a prompt, you don't have the opportunity to make sure that each piece is being created accessibly as you go through it. It back-loads all of that accessibility checking onto the author once the LLM or the AI tool has finished generating the document. And that fundamentally shifts the way this works, because now you don't have the opportunity to review it as you're creating it, piece by piece; it's a whole big thing that you have to go through top to bottom, not having seen the individual parts of it as they were authored. So that's something that's been in my head as a big change that an update to ATAG would need to address.
Wendy Reid: I've been thinking about the same thing as well. In the prior paradigm, you had what the tool could do and what the author was responsible for, and there was some automation possible. Like in the video example, it could detect that the video was a certain length, it could detect that it was a video in the first place, and it could implement some things, but also prompt the user and say, hey, I need a captions file. And the user could say, I don't care about any of that, but at least it was prompted for. In the new era, there's a very real workflow where, even if it's user-generated content, say that same CMS where I upload the video, it says: that's a video, it has audio, it's this length, I've got to do these things. It can now say: I've generated a captions file for you, I've written a description for you, would you like to check it? And obviously, the person can still say, ah, I don't care, and skip, and there might be inaccuracies, and the author is responsible for those inaccuracies. But we at least possibly have a better situation.
Shivaji Kumar: If I can chime in, I implemented a somewhat similar workflow for alt text generation, using an API from Azure. When we developed our internal process, the alt text was generated by Azure, but when it appeared in our platform, we built in a check, which basically prompted the author: hey, do you want to accept this alt text description, yes or no? If you said yes, then it moved forward. If not, then it opened an edit box, so you could edit it and then move forward. The idea was to have a human in the loop, no matter what.
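[Editor's note] The human-in-the-loop acceptance step Shivaji describes could be sketched like this. The `generate_alt_text` function is a placeholder standing in for the Azure API call he mentions; all names here are hypothetical.

```python
# Human-in-the-loop alt text review, as Shivaji describes: a machine-generated
# suggestion is never published without the author accepting or editing it.
# `generate_alt_text` is a placeholder for the real Azure API call.

from typing import Callable

def generate_alt_text(image_path: str) -> str:
    # Placeholder: a real implementation would call an image-description API.
    return f"auto-generated description for {image_path}"

def review_alt_text(image_path: str,
                    accept: Callable[[str], bool],
                    edit: Callable[[str], str]) -> str:
    """Return the alt text that will actually be published."""
    suggestion = generate_alt_text(image_path)
    if accept(suggestion):      # "do you want to accept this alt text?"
        return suggestion
    return edit(suggestion)     # open an edit box prefilled with the suggestion

# Author rejects the suggestion and rewrites it:
final = review_alt_text("hero.jpg",
                        accept=lambda s: False,
                        edit=lambda s: "Nurse greeting a patient at reception")
print(final)
```

The design point is that the edit path receives the machine suggestion as its starting text, so the author corrects rather than starts from scratch, but the published value always passes through a human decision.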
Miriam Fukushima: Yeah, that's kind of like how we did it in our CMS as well, just to make sure that editors are not suddenly overwhelmed with lots of stuff they have to backfill. We made some parts not required at first, but then teach them over time and say, in a year or two you should be done with your revision of content, and then we make certain fields required, like alt text and whatnot. You'll have learned by then, but not so that, overnight, all images fail because fields are required and not filled out, or something like that.
Wendy Reid: Saif, I saw your question, and it's something I've also been thinking about: will browsers and AI chat apps soon have inherent AI-based capabilities to make everything accessible? I think this is one of the hottest topics since the LLMs came on the scene: people talking about the possibility of things like generative UI, a web interface that can adapt to your needs immediately, so you no longer rely on what's created by platforms now. I think the most interesting part about that is the massive privacy considerations, because you essentially have to give your information to an agent to say, hey, I have the following needs, adapt the web to those. So you have to give out personal information. The other thing I've been thinking about lately, as I'm doing research on search interfaces and such, is that the average user may know their needs, but may not know how to articulate them at the level of specificity needed to translate them into interfaces, into generating interfaces. Do they know the names of different parts of a website? Do they know how to ask for radio buttons instead of checkboxes? Do they know that they prefer things appearing in modals, or dialogs, and not on the screen itself? Probably not. So you still need expertise to bridge the gap. Which is a very interesting thing, or at least I think it is.
Miriam Fukushima: I think that also plays a bit into other people assuming how a certain impairment plays out. There are so many different nuances, in vision impairments and otherwise, and to have a company assume how that is going to play out for me, because they make the interface and then set certain things for me, with no way for me to fine-tune that for myself. I find it problematic to have other people assume, on a broad level, how that plays out for me, like, all screen reader users need this.
Lisa Liskovoi: I've also been thinking about this a lot from the perspective of: presumably, for a user agent to communicate the information it needs, it still needs to make certain assumptions about the meaning of the content, the functionality of the content, how it operates. It seems like quite a leap to me to think that you can put together content unintentionally, and then AI will magically communicate it to everybody, knowing what my intentions were without me being explicit about them. Ned and I had a look at the OpenAI guidance for developers, and it said: if you want this AI browser to work well, it relies on ARIA. So it's interesting to see that you ultimately still have to communicate some of the information as an author, and what layer that comes in at, and also that a lot of it falls back to things like ARIA, and probably semantic markup, and all the things we know are good for being clear about that multimodality, or whatever it is, in terms of user preferences.
Ned Zimmerman: Yes, that's right, Lisa. It was the guidance that came out when OpenAI released their Atlas browser that basically said: if you want the browser to be able to interact with your website, make sure you use appropriate ARIA tags in it. Which also emphasized ARIA over writing accessible content that doesn't require ARIA before you reach for ARIA, so I thought that was an interesting note. But on the generative UI piece, this is maybe a little tangential, but I read something really interesting a few months ago from someone who basically said they didn't think AGI was imminent, and the reason was research around human "core knowledge": basically, an understanding of objects as, and I'll just quote from this, "bounded, cohesive, spatiotemporally continuous entities." This is something that even infants understand, including infants with sensory differences. This researcher was talking about how there are differences, but both blind and sighted infants have that same understanding of objects and how they work in the world. And the only way you can train a generative AI tool on that is basically giving it 3D models and telling it how they work. So I think there's a limitation. When we're talking about interacting with LLMs, we're talking about interacting via text, speech, and some vision, but there's a whole sensory layer that lots of people, both disabled and non-disabled, use and rely upon to interact with everything, including web interfaces. I'll drop the link in. That's my current feeling about that, but it's definitely an interesting conversation and an interesting thing to watch.
Miriam Fukushima: Yeah, I think that kind of proves the point that in ATAG we should make it clear that, even if the agent doing the authoring is AI, the human is responsible for getting the AI to fix the accessibility issues. ATAG should include both kinds of agents in its definitions and explanations, and consider the new technology, but the responsibility for making sure that ATAG is applied stays with the human.