
How synthetic personas are changing accessibility research

20 March 2026

Accessibility research has always had a recruitment problem. Finding screen reader users, low-vision users, or elderly first-time smartphone users for moderated research sessions is logistically complex, ethically sensitive, and slow.

The result? Most accessibility testing is automated — tools like axe and Lighthouse catch technical WCAG failures but miss experiential ones.

The experience gap

A WCAG audit can tell you that your contrast ratio is 4.3:1 (below the 4.5:1 AA threshold for normal text). It can't tell you that a 72-year-old first-time smartphone user will abandon your form because it requires too many cognitive steps.
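The contrast check above is pure arithmetic, which is exactly why automated tools handle it well. A minimal sketch of the WCAG 2.x formula in Python (colors are illustrative):

```python
def _channel(c8: int) -> float:
    """Linearize one sRGB channel (0-255) per the WCAG 2.x formula."""
    c = c8 / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(hex_color: str) -> float:
    """Relative luminance of a #rrggbb color."""
    h = hex_color.lstrip("#")
    r, g, b = (int(h[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * _channel(r) + 0.7152 * _channel(g) + 0.0722 * _channel(b)

def contrast_ratio(fg: str, bg: str) -> float:
    """WCAG contrast ratio between two colors, ranging from 1:1 to 21:1."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# #767676 on white just clears the 4.5:1 AA threshold for normal text
print(round(contrast_ratio("#767676", "#ffffff"), 2))  # → 4.54
```

A ratio like 4.3:1 fails this check mechanically; whether the text is *readable* to a specific user in a specific context is the experiential question the formula can't answer.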

Synthetic accessibility personas

AI-generated accessibility personas bridge this gap. They simulate the experience of using your product with specific impairments and assistive technologies.

A screen-reader persona navigates your interface and flags when ARIA labels are missing, when focus order is illogical, or when interactive elements lack keyboard support.

An ADHD persona flags cognitive overload: too many form fields, unclear progress indicators, distracting animations.

An elderly persona identifies reading complexity issues, small touch targets, and confusing navigation patterns.
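To make the screen-reader example concrete: the structural half of what that persona flags (interactive elements with no accessible name) can be approximated mechanically. Here is a deliberately naive, hypothetical sketch using only Python's standard library; real tooling like axe covers far more cases, and the persona's value is the experiential judgment layered on top:

```python
from html.parser import HTMLParser

INTERACTIVE = {"a", "button"}

class MissingNameChecker(HTMLParser):
    """Flag interactive elements that expose no accessible name.

    Naive heuristic: an <a> or <button> passes if it carries an aria-label,
    contains text, or contains an <img> with alt text; otherwise it is flagged.
    """

    def __init__(self):
        super().__init__()
        self.stack = []    # [tag, has_label, has_text] for open elements
        self.flagged = []  # tags that closed without an accessible name

    def handle_starttag(self, tag, attrs):
        if tag in INTERACTIVE:
            has_label = any(k == "aria-label" and v for k, v in attrs)
            self.stack.append([tag, has_label, False])
        elif tag == "img" and self.stack:
            # An image with alt text names its enclosing link/button
            if any(k == "alt" and v for k, v in attrs):
                self.stack[-1][1] = True

    def handle_data(self, data):
        if self.stack and data.strip():
            self.stack[-1][2] = True

    def handle_endtag(self, tag):
        if self.stack and self.stack[-1][0] == tag:
            t, has_label, has_text = self.stack.pop()
            if not (has_label or has_text):
                self.flagged.append(t)

checker = MissingNameChecker()
checker.feed('<button aria-label="Close">x</button>'
             '<button><img src="gear.png"></button>'  # no name: flagged
             '<a href="/help">Help</a>')
print(checker.flagged)  # → ['button']
```

A check like this catches the icon-only button with no label; it cannot tell you that the focus order feels illogical or that the labels, though present, are meaningless when read aloud. That gap is what the persona simulates.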

Not a replacement — a complement

Synthetic accessibility personas don't replace real user research. They supplement it. They catch the failures that automated tools miss, before you recruit real users for validation.

Try this with Synthia

Generate accessibility personas and run research studies for any of the scenarios discussed in this article.

Start researching for free