Your AI Can't See 16% of Humanity. Here's the 5-Minute Check That Proves It.
Based on AI News & Strategy Daily | Nate B Jones's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
AI image and language systems have improved in disability representation, but prompt sensitivity and training-data gaps still lead to disability being erased or distorted.
Briefing
AI accessibility is improving fast—but it still erases disability when models are trained on incomplete data and when products treat accessibility as a checklist. Elsa Honison, a deaf-blind disability advocate and writer, describes how early image-generation tools repeatedly “corrected” her into a non-disabled appearance, while newer systems like ChatGPT and Claude increasingly respect disability traits in outputs—an advance that matters because representation affects inclusion, community belonging, and whether disabled people can safely use AI without feeling misseen.
Honison’s turning point came from experimenting with early Microsoft Copilot image features. When she asked for a selfie-style image that included her hearing aids and cataracted eye, the first outputs were distorted and unsettling; later versions would render blindness in children but not in adults, and the system kept apologizing for her disability. More importantly, it refused to render her “blind eye” as she experiences it, repeatedly replacing it with two sighted eyes. That pattern of disability being treated as an error to fix became the prompt for deeper testing and advocacy.
Over the past 24 months, Honison says the tone and accuracy have shifted. ChatGPT and Claude no longer apologize for disability, and Claude can answer questions about disability language choices, including person-first versus identity-first preferences. She also cites a BBC article about a prosthetic wearer using AI to draw prosthetics; after trying again, she found AI could “mostly” render her cataracted eye and hearing aids depending on the prompt, without trying to convert her into a non-disabled person. The implication is not just better graphics, but a growing ability to treat disability as a normal visual and informational attribute rather than something to remove.
The conversation widens from image generation to the broader risk that AI systems lack disabled people in training data. Honison points to MIT’s Morality Machine (2019), which tested trolley-problem ethics with scenarios of people crossing streets but reportedly had no disabled people in its training data, meaning wheelchairs and white canes were absent. That gap forces models to reason about “disabled” outcomes using proxies like old age or children, revealing how missing representation can become dangerous.
At the same time, AI can be a powerful adaptive aid when it puts control in the hands of disabled users. Honison describes practical uses: taking pictures of distant text and having AI read it as large print; zeroing in on specific handwriting within a packet; reading prescription bottles; and replacing volunteer-dependent tools like Be My Eyes with AI-driven assistance that avoids privacy and safety concerns. For ADHD, she notes people use AI agents to track medications and plan next steps. The common thread is autonomy: AI can “fill in the spaces” without requiring constant help from another person.
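None of these workflows depends on a particular product, but the pattern behind them is simple: photograph the thing you cannot read, send it to a vision-capable model, and ask for the text back in a form you can use. The sketch below is only an illustration of that pattern, not a tool Honison names; it assumes the OpenAI Python SDK, an OPENAI_API_KEY in the environment, and placeholder model and file names.

```python
# Minimal sketch: send a photo of hard-to-read text (e.g. a prescription bottle)
# to a vision-capable model and get it back in large-print-friendly sentences.
# Assumes the OpenAI Python SDK (pip install openai); model name and file path
# are placeholders, not recommendations from the talk.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def read_photo_as_large_print(image_path: str) -> str:
    with open(image_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder: any vision-capable model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Read every piece of text in this photo and return it "
                         "as short plain sentences, suitable for large print "
                         "or a screen reader."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{encoded}"}},
            ],
        }],
    )
    return response.choices[0].message.content

print(read_photo_as_large_print("prescription_bottle.jpg"))  # hypothetical file
```

The point of the example is the autonomy Honison describes: the check happens on the user’s own device and schedule, with no volunteer on the other end of the camera.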
For builders, Honison argues accessibility can’t be solved by WCAG-style checkboxes or “AI overlays” that assume the system will catch everything later. She warns that accessibility tech debt is real and that “fix it in post” often becomes permanent. With local inference on the horizon (glasses-like always-on layers and LLM-friendly chips), there’s a temptation for app makers to punt accessibility to the overlay, but that approach still fails when users’ needs vary and when AI can’t be forced to adapt for every disability. She also calls out the limitations of screenshot-based accessibility critiques: an LLM may detect contrast but miss whether links are properly labeled or whether content is readable by screen readers.
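As a concrete illustration of why a screenshot is not enough, the checks below operate on markup rather than pixels. This is a minimal sketch, not a tool from the talk: it assumes the beautifulsoup4 package and a locally saved page.html, and it covers only a few of the semantic checks (link names, alt text, form labels) that a contrast-only review would miss.

```python
# Semantic accessibility spot-checks that require the DOM, not a screenshot.
# Assumes: pip install beautifulsoup4, and a saved copy of the page as page.html.
from bs4 import BeautifulSoup

with open("page.html", encoding="utf-8") as f:
    soup = BeautifulSoup(f.read(), "html.parser")

issues = []

# Links need an accessible name: visible text, aria-label, or a title.
for link in soup.find_all("a"):
    name = link.get_text(strip=True) or link.get("aria-label") or link.get("title")
    if not name:
        issues.append(f"Link without an accessible name: {link.get('href')}")

# Images need an alt attribute (empty alt is acceptable only when decorative).
for img in soup.find_all("img"):
    if img.get("alt") is None:
        issues.append(f"Image missing alt attribute: {img.get('src')}")

# Form fields should be tied to a <label for=...> or carry an aria-label.
labelled_ids = {label.get("for") for label in soup.find_all("label")}
for field in soup.find_all(["input", "textarea", "select"]):
    if field.get("type") in ("hidden", "submit", "button"):
        continue
    if field.get("id") not in labelled_ids and not field.get("aria-label"):
        issues.append(f"Form field without a label: {field}")

print("\n".join(issues) or "No obvious semantic issues found.")
```

Even a short script like this surfaces structural problems that no amount of visual inspection of a screenshot will reveal, which is exactly the screen-reader gap Honison describes.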
Finally, she urges AI-assisted development tools, including vibe coding platforms like Lovable, to bake accessibility “nuts and bolts” into the workflow so new builders learn the skills rather than relying on everyone to guess what disabled users need. Her closing advice is blunt: don’t trust ChatGPT to write perfectly accessible code—double-check it—and keep disabled people visible in the systems being built.
Cornell Notes
Elsa Honison describes how AI accessibility has improved, especially in how newer models handle disability in image and language tasks. Early image tools often “corrected” her into a non-disabled appearance and even apologized for her disability, while newer systems like ChatGPT and Claude increasingly render disability traits and support disability-aware language. She argues that missing disabled representation in training data can create real-world harm, citing MIT’s Morality Machine as an example of disabled people being absent from training bodies. At the same time, AI can be a major adaptive aid when it gives disabled users autonomy—reading text, interpreting images, and supporting ADHD routines without requiring a human volunteer. For builders, accessibility can’t be a checkbox or an afterthought; screenshot-based critiques and “fix it later” approaches miss critical issues like screen-reader compatibility.
Why did early image-generation tools struggle to represent Honison’s disability, and what changed over time?
What does the Morality Machine example illustrate about accessibility and AI training data?
How can AI act as an accessibility tool without requiring constant human assistance?
What principle guides Honison’s view of accessibility beyond one-size-fits-all solutions?
Why are screenshot-based accessibility critiques limited, even when an LLM can judge contrast?
What does Honison’s warning about “AI overlays” and “fix it in post” mean for product teams?
Review Questions
- What specific behaviors in early image-generation outputs led Honison to conclude that disability was being erased, and how did newer models differ?
- How does missing disabled representation in training data (as in Morality Machine) translate into real accessibility or safety failures?
- Why can an LLM miss screen-reader and link-structure issues when given only a screenshot?
Key Points
1. AI image and language systems have improved in disability representation, but prompt sensitivity and training-data gaps still lead to disability being erased or distorted.
2. Missing disabled people in AI training data can produce unsafe or incomplete outputs, illustrated by MIT’s Morality Machine lacking wheelchairs and white canes.
3. AI can function as an adaptive aid when it gives disabled users autonomy—reading text, interpreting images, and supporting routines without requiring a human volunteer.
4. Accessibility design must be driven by understanding the target users’ abilities and constraints; WCAG-style checklists and one-size-fits-all solutions don’t cover the full disability spectrum.
5. Screenshot-based accessibility critiques are inherently limited because they can’t reliably detect semantic issues needed for screen readers (like link structure).
6. “Fix it in post” and “punt to an always-on AI overlay” strategies create accessibility tech debt that becomes costly and incomplete.
7. Accessibility support should be built into AI-assisted development tools (including vibe coding platforms) so new builders learn the fundamentals rather than guessing.