Introduction
In this lesson, you will explore accessibility in AI use, use AI to generate accessible content, and reflect on how well AI accomplished the task.
Read
Read “Inclusivity in Generative AI Should Be an Attribute, Not an Add-On” (~10 minutes).
Activity
Visually impaired individuals rely on alt-text to experience digital images. Alt-text is a short text-based description (usually 1-2 sentences) that communicates the most important aspects of an image. Here’s an example of an image with corresponding alt-text:

Imagine that you are designing a web site for new employee orientation. Log in to Microsoft Copilot (https://copilot.microsoft.com) using your W&M credentials. Find an image that you think might be useful for this orientation, then upload it and prompt Copilot to generate alternative text for the image.
Example prompt: “You are designing an accessible web site for new employees. Generate 1-2 sentences of alternative text for the uploaded image. Highlight the most important aspects of the image.”
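If you later add the generated description to the orientation site yourself, it belongs in the image element’s alt attribute, which is what screen readers announce. Below is a minimal sketch in TypeScript using the browser DOM; the file path, element ID, and description are hypothetical placeholders, and the alt-text should be whatever wording you settle on after reviewing Copilot’s suggestion.

    // Minimal sketch: attaching alt-text to an image on a web page (TypeScript + browser DOM).
    // The file path, element ID, and description are hypothetical placeholders, not Copilot output.
    const orientationPhoto = document.createElement("img");
    orientationPhoto.src = "images/orientation-welcome.jpg"; // hypothetical image path
    orientationPhoto.alt =
      "New employees gather in the lobby during orientation."; // paste the reviewed alt-text here
    document.getElementById("welcome-section")?.appendChild(orientationPhoto);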
Discuss
Does the AI-generated alt-text accurately capture the image and its elements? Is anything missing that should be highlighted?
Replies to “thing 7: Whose voices? Accessibility in AI”
The alternative text generated was very basic. The group of people in the image I uploaded were actually walking away from the Wren Building, but the generated alt-text said they were walking toward it, which was interesting. It also gave only limited details about the Wren Building itself, noting just that it is historic.
I uploaded an image of the reception area of the Bee McLeod Recreation Center with the example prompt, and it perfectly told me what was in the image. It highlighted the high ceilings, the information on the walls, and the images in the floor emblem. However, I needed to give it more context because it did not know the significance or purpose of the building. The image is indexed on Google Maps, so I am surprised it didn’t recognize the location.