Your AI Can't See 16% of Humanity. Here's the 5-Minute Check That Proves It.

6 min read

Based on AI News & Strategy Daily | Nate B Jones's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

AI image and language systems have improved in disability representation, but prompt sensitivity and training-data gaps still lead to disability being erased or distorted.

Briefing

AI accessibility is improving fast—but it still erases disability when models are trained on incomplete data and when products treat accessibility as a checklist. Elsa Honison, a deaf-blind disability advocate and writer, describes how early image-generation tools repeatedly “corrected” her into a non-disabled appearance, while newer systems like ChatGPT and Claude increasingly respect disability traits in outputs—an advance that matters because representation affects inclusion, community belonging, and whether disabled people can safely use AI without feeling misseen.

Honison’s turning point came from experimenting with early Microsoft Copilot image features. When she asked for a selfie-style image that included her hearing aids and cataracted eye, the first outputs were distorted and unsettling; later versions could render blindness in children but not in adults, and the system kept apologizing for her disability. More importantly, it refused to render her “blind eye” as she experiences it, repeatedly replacing it with two non-disabled eyes. That pattern—disability being treated as an error to fix—became a prompt for deeper testing and advocacy.

Over the past 24 months, Honison says the tone and accuracy have shifted. ChatGPT and Claude no longer apologize for disability, and Claude can answer questions about disability language choices, including person-first versus identity-first preferences. She also cites a BBC article about a prosthetic wearer using AI to draw prosthetics; after trying again, she found AI could “mostly” render her cataracted eye and hearing aids depending on the prompt, without trying to convert her into a non-disabled person. The implication is not just better graphics, but a growing ability to treat disability as a normal visual and informational attribute rather than something to remove.

The conversation widens from image generation to the broader risk that AI systems lack disabled people in their training data. Honison points to MIT’s Moral Machine, which tested trolley-problem ethics using bodies crossing streets but reportedly included no disabled people in its data—no wheelchairs, no white canes. That gap forces models to reason about “disabled” outcomes through proxies like old age or children, revealing how missing representation can become dangerous.

At the same time, AI can be a powerful adaptive aid when it puts control in the hands of disabled users. Honison describes practical uses: taking pictures of distant text and having AI read it as large print; zeroing in on specific handwriting within a packet; reading prescription bottles; and replacing volunteer-dependent tools like Be My Eyes with AI-driven assistance that avoids privacy and safety concerns. For ADHD, she notes people use AI agents to track medications and plan next steps. The common thread is autonomy: AI can “fill in the spaces” without requiring constant help from another person.
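
A minimal sketch of what such a self-serve reading aid could look like in code, assuming a vision-capable model reached through the official "openai" Node SDK; the model name, prompt wording, and file handling are illustrative rather than anything shown in the video:

```typescript
// Illustrative sketch: send a photo of small or distant text to a
// vision-capable model and get back a faithful transcription that can
// be displayed as large print. Assumes OPENAI_API_KEY is set; the model
// name below is an assumption, not taken from the video.
import OpenAI from "openai";
import { readFileSync } from "node:fs";

const client = new OpenAI();

async function readPhotoAsLargePrint(imagePath: string): Promise<string> {
  const imageBase64 = readFileSync(imagePath).toString("base64");

  const response = await client.chat.completions.create({
    model: "gpt-4o-mini", // assumed vision-capable model
    messages: [
      {
        role: "user",
        content: [
          {
            type: "text",
            text:
              "Transcribe every piece of text in this photo exactly as written. " +
              "Do not summarize or correct it; it will be shown as large print.",
          },
          {
            type: "image_url",
            image_url: { url: `data:image/jpeg;base64,${imageBase64}` },
          },
        ],
      },
    ],
  });

  return response.choices[0].message.content ?? "";
}

// Example: read a prescription bottle photographed with a phone camera.
readPhotoAsLargePrint("prescription-bottle.jpg").then(console.log);
```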

For builders, Honison argues accessibility can’t be solved by WCAG-style checkboxes or “AI overlays” that assume the system will catch everything later. She warns that accessibility tech debt is real and that “fix it in post” often becomes permanent. With local inference on the horizon—glasses-like always-on layers and LLM-friendly chips—there’s a temptation for app makers to punt accessibility to the overlay, but that approach still fails when users’ needs vary and when AI can’t be forced to adapt for every disability. She also calls out the limitations of screenshot-based accessibility critiques: an LLM may detect contrast but miss whether links are properly labeled or whether content is readable by screen readers.

Finally, she urges AI-assisted development tools, including vibe coding platforms like Lovable, to bake accessibility “nuts and bolts” into the workflow so new builders learn the skills rather than relying on everyone to guess what disabled users need. Her closing advice is blunt: don’t trust ChatGPT to write perfectly accessible code—double-check it—and keep disabled people visible in the systems being built.
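
As one concrete illustration of what "double-check it" means in practice, the sketch below shows a pattern worth catching in generated UI code: a clickable div that looks like a button but is invisible to keyboards and screen readers. It is a generic example, not code discussed in the video:

```typescript
// Illustrative sketch of an accessibility gap to double-check in generated UI code.

// Pattern worth flagging: looks like a button on screen, but has no
// keyboard focus, no role, and no accessible name for screen readers.
function inaccessibleDeleteControl(onDelete: () => void): HTMLElement {
  const el = document.createElement("div");
  el.className = "icon-btn";
  el.textContent = "🗑"; // icon only: no accessible name, not focusable
  el.onclick = onDelete;
  return el;
}

// The fix: a native button with an explicit label gets keyboard and
// screen-reader support essentially for free.
function accessibleDeleteControl(onDelete: () => void): HTMLButtonElement {
  const btn = document.createElement("button");
  btn.className = "icon-btn";
  btn.textContent = "🗑";
  btn.setAttribute("aria-label", "Delete item"); // name announced by screen readers
  btn.addEventListener("click", onDelete); // Enter and Space work natively
  return btn;
}
```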

Cornell Notes

Elsa Honison describes how AI accessibility has improved, especially in how newer models handle disability in image and language tasks. Early image tools often “corrected” her into a non-disabled appearance and even apologized for her disability, while newer systems like ChatGPT and Claude increasingly render disability traits and support disability-aware language. She argues that missing disabled representation in training data can create real-world harm, citing MIT’s Moral Machine as an example of disabled people being absent from training bodies. At the same time, AI can be a major adaptive aid when it gives disabled users autonomy—reading text, interpreting images, and supporting ADHD routines without requiring a human volunteer. For builders, accessibility can’t be a checkbox or an afterthought; screenshot-based critiques and “fix it later” approaches miss critical issues like screen-reader compatibility.

Why did early image-generation tools struggle to represent Honison’s disability, and what changed over time?

Honison tested early Microsoft Copilot image generation by prompting for a selfie-like image that included her hearing aids and cataracted eye. The first outputs were distorted, and later versions could render blindness in children but not in adults. More consistently, the model refused to render her “blind eye,” repeatedly converting her into a two-eyed appearance and apologizing for her disability. Over the following 24 months, she reports that ChatGPT and Claude stopped apologizing and began respecting disability traits more reliably; she also notes that after seeing a BBC prosthetics-related example, she could prompt AI to draw her cataracted eye and hearing aids with mostly accurate results depending on the prompt.

What does the Moral Machine example illustrate about accessibility and AI training data?

Honison points to MIT’s Moral Machine, which tested trolley-problem ethics with different kinds of bodies crossing streets. The reported issue was that disabled people were absent from the training data—no wheelchairs, no white canes. That absence forces the system to reason about “disabled” outcomes using proxies like old age, children, or even a dog, showing how missing representation can turn into a lack of information and potentially unsafe decisions.

How can AI act as an accessibility tool without requiring constant human assistance?

Honison describes using AI for tasks that would otherwise require another person. For vision-related needs, she uses AI to read distant text from a picture and convert it into large print, to locate specific handwriting within a packet, and to read prescription bottles. She contrasts this with Be My Eyes, which relies on interacting with a volunteer; she disliked it because it requires sharing identity and location context with strangers, which can feel unsafe in situations like navigating an airport alone or needing a stranger to read a credit card. AI removes that dependency by providing assistance directly.

What principle guides Honison’s view of accessibility beyond one-size-fits-all solutions?

She argues accessibility should be driven by the disabled person’s needs and autonomy rather than external forces. Many adaptive aids require continuous reliance on something outside the user—like guide dogs that must be used and managed daily. In contrast, she frames AI as enabling control: it can provide the missing support when needed and disappear when not, helping users access information and experiences without constant third-party involvement. She also warns that not every product can be equally accessible for every disability, so builders must understand their specific audience and design for equal access to information and experience.

Why are screenshot-based accessibility critiques limited, even when an LLM can judge contrast?

Honison says she tried handing an LLM a screenshot and asked for an accessibility critique. The model recognized some issues like contrast, but it missed others because screenshots don’t contain the underlying structure needed for assistive technologies. Examples she gives: it couldn’t detect whether a link was an accessible link (because that requires HTML semantics, not just pixels), and it couldn’t determine whether the page could be read by a screen reader because that information isn’t available in an image.
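
To make that distinction concrete, the sketch below shows the kind of check that needs the DOM rather than pixels: it flags links whose accessible name is empty or generic. This is a rough heuristic for illustration only (real audits use dedicated tooling such as axe-core), not an approach described in the video:

```typescript
// Illustrative sketch: a link-labeling check that a screenshot cannot support,
// because accessible names live in markup, not in pixels.
function findPoorlyLabeledLinks(root: Document = document): HTMLAnchorElement[] {
  const genericNames = new Set(["click here", "here", "read more", "link", "more"]);

  return Array.from(root.querySelectorAll<HTMLAnchorElement>("a[href]")).filter((link) => {
    // Rough accessible name: aria-label, then the aria-labelledby target, then visible text.
    const labelledById = link.getAttribute("aria-labelledby");
    const name = (
      link.getAttribute("aria-label") ??
      (labelledById ? root.getElementById(labelledById)?.textContent : null) ??
      link.textContent ??
      ""
    )
      .trim()
      .toLowerCase();

    return name.length === 0 || genericNames.has(name);
  });
}

// Example: list targets of links a screenshot-based review would likely miss.
console.log(findPoorlyLabeledLinks().map((link) => link.href));
```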

What does Honison’s warning about “AI overlays” and “fix it in post” mean for product teams?

Honison worries that always-on intelligence layers—enabled by local inference on laptops or glasses-like overlays—could tempt builders to delay accessibility work. The idea would be: the overlay will “see the screen” and handle accessibility later. She argues this creates expensive tech debt and doesn’t solve the full range of accessibility needs. She also extends the warning to web accessibility: overlays and after-the-fact fixes don’t replace building accessibility into the product from the start.

Review Questions

  1. What specific behaviors in early image-generation outputs led Honison to conclude that disability was being erased, and how did newer models differ?
  2. How does missing disabled representation in training data (as in the Moral Machine) translate into real accessibility or safety failures?
  3. Why can an LLM miss screen-reader and link-structure issues when given only a screenshot?

Key Points

  1. AI image and language systems have improved in disability representation, but prompt sensitivity and training-data gaps still lead to disability being erased or distorted.

  2. Missing disabled people in AI training data can produce unsafe or incomplete outputs, illustrated by MIT’s Moral Machine lacking wheelchairs and white canes.

  3. AI can function as an adaptive aid when it gives disabled users autonomy—reading text, interpreting images, and supporting routines without requiring a human volunteer.

  4. Accessibility design must be driven by understanding the target users’ abilities and constraints; WCAG-style checklists and one-size-fits-all solutions don’t cover the full disability spectrum.

  5. Screenshot-based accessibility critiques are inherently limited because they can’t reliably detect semantic issues needed for screen readers (like link structure).

  6. “Fix it in post” and “punt to an always-on AI overlay” strategies create accessibility tech debt that becomes costly and incomplete.

  7. Accessibility support should be built into AI-assisted development tools (including vibe coding platforms) so new builders learn the fundamentals rather than guessing.

Highlights

  • Early Microsoft Copilot image generation repeatedly converted Honison into a two-eyed appearance and apologized for her disability—an example of models treating disability as an error to correct.
  • Claude’s disability-aware capabilities include answering questions about disability language choices (person-first vs identity-first), signaling progress beyond visual rendering.
  • Honison’s autonomy argument: AI can provide support without requiring constant reliance on another person, unlike volunteer-based tools such as Be My Eyes.
  • Accessibility tech debt is a recurring risk: delaying accessibility work or relying on always-on overlays can leave critical issues unsolved.
  • Screenshot-based LLM critiques can miss key accessibility failures because assistive-technology-relevant structure isn’t present in pixels.

Topics

  • AI Accessibility
  • Disability Representation
  • Adaptive Aids
  • Accessibility Testing
  • Vibe Coding

Mentioned