Pixel art showing a smart home control app being used simultaneously by diverse users: screen reader user, keyboard-only user, person in bright sunlight, voice control user, elderly user with large text, and person on older phone with weak signal. A curb cut ramp in the foreground shows diverse people benefiting from the same accommodation.

CS 3100: Program Design and Implementation II

Lecture 28: Accessibility and Inclusivity

©2026 Jonathan Bell, CC-BY-SA

Learning Objectives

After this lecture, you will be able to:

  1. Distinguish between accessibility and inclusivity, and explain why both matter
  2. Identify how invisible assumptions about users exclude people by gender, SES, ability, and context
  3. Apply the POUR framework (Perceivable, Operable, Understandable, Robust) to evaluate software
  4. Describe what assistive technologies need from your code to function
  5. Critically evaluate accessibility claims and identify what counts as real evidence

We Designed for Users — But Which Ones?

In L24 we learned to evaluate usability. In L27 we learned to design with users in mind.

But when we design, we make assumptions about who our users are:

  • They can see the screen
  • They can use a mouse
  • They have a reliable device with fast internet
  • They're comfortable with technology jargon
  • They trust that they can undo mistakes

These assumptions are so embedded in our mental models that we don't notice them — until someone who doesn't fit them tries to use our software and can't.

Running Example: Adding a New Device

Let's follow one user task through this entire lecture: adding a new smart light to SceneItAll.

The flow seems simple:

  1. Click "Add Device" button
  2. App scans the network and discovers a new light
  3. Name the device and assign it to a room
  4. Set initial brightness and color
  5. Save and confirm the device appears in your dashboard

Now imagine this flow used by:

  • A blind user navigating with a screen reader
  • A user with tremors who can't drag a brightness slider precisely
  • A user on a shared tablet at an assisted living facility with 10 minutes left
  • A user whose first language isn't English, trying to parse "Zigbee pairing timeout"

Same feature. Same five steps. Completely different experience — depending on who the user is.

Accessibility ↔ Inclusivity: A Spectrum, Not a Binary

Accessibility

Ensuring software works for users with disabilities. Often a legal requirement (ADA, WCAG).

Inclusivity

Ensuring software works across all dimensions of human diversity: age, gender, culture, language, SES, cognitive style.

Where does each scenario fall on the spectrum?

  • A blind user navigating with a screen reader → Accessibility
  • An elderly user struggling with small touch targets → Both
  • A user in bright sunlight who can't see their phone → Situational accessibility
  • A non-native English speaker confused by idiomatic button labels → Inclusivity

This Is Not Optional: Robles v. Domino's Pizza

The situation: Guillermo Robles, who is blind, wants to order a pizza. Not a complicated request. He uses a screen reader — the same tool millions of blind people use to navigate the web every day. But Domino's website and app are unusable: images have no alt text, links have no descriptions, and redundant navigation traps his screen reader in loops. He can't browse the menu. He can't use a coupon. He can't place an order.

The timeline:

  • 2016: Robles sues under the ADA.
  • 2017: District court dismisses: "the ADA doesn't apply to websites."
  • 2019: Ninth Circuit reverses — the ADA does apply, because Domino's website is a "service of a public accommodation" connected to its physical restaurants.
  • 2019: Domino's petitions the Supreme Court. The Court declines to hear the case.
  • 2021: Judge orders Domino's to comply with WCAG 2.0 accessibility standards.
  • 2022: Domino's settles. Six years of litigation over fixes that would have taken a developer days.

A man wanted to order a pizza. It took six years and the Supreme Court to make that possible.

Sources: BOIA: The Robles v. Domino's Settlement · Eater: Domino's and the Supreme Court

The Curb Cut Effect: Design for the Margins, Improve It for Everyone

Left: curb cut ramp being used by wheelchair user, parent with stroller, traveler with luggage, cyclist, delivery person, with historical timeline from 1945 to today. Right: software equivalents — captions help deaf users and people in noisy places; keyboard nav helps motor-impaired and power users; plain language helps cognitive disabilities and ESL speakers; high contrast helps low vision and outdoor use.

Beyond Ability: Software Also Excludes by Cognitive Style

Accessibility focuses on sensory and motor abilities. But software also excludes people through assumptions about how they think and behave:

  • Risk tolerance — Will users experiment freely, or stick to what's safe because they can't afford to lose work?
  • Self-efficacy — When something goes wrong, do they blame the software or themselves?
  • Relationship to authority — Do they see an error message as a suggestion to work around, or a verdict they can't challenge?
  • Communication literacy — Can they parse jargon, idioms, and complex instructions?
  • Access to technology — Own device with fast internet, or shared library computer with 30 minutes?

Research frameworks like GenderMag (Burnett et al.) and SESMag study how these facets vary by gender and socioeconomic status — and how software designed for one end of the spectrum systematically excludes the other.

These aren't deficits in the user — they're assumptions in the software.

Same Feature, Different Experience: Adding a Device

Fee (high self-efficacy, high risk tolerance)

  1. Clicks "Add Device" — immediately starts network scan
  2. Sees "Zigbee pairing timeout" — shrugs, moves the device closer to the hub and retries
  3. Notices the default room is wrong — changes it to "Bedroom" without hesitation
  4. Drags the brightness slider to 70%, hits "Save" without reading the confirmation
  5. Total time: 60 seconds

Dav (low self-efficacy, low risk tolerance)

  1. Looks for "Add Device" — not sure which button it is (icon only, no label)
  2. Sees "Zigbee pairing timeout" — thinks they broke something. Doesn't know what Zigbee is. Doesn't retry.
  3. Wants to change the room but the dropdown says "Assign to Area" — not sure what "Area" means in this context
  4. Reads the confirmation dialog. Sees "Override existing device configuration?" — panics and clicks Cancel
  5. Gives up. Device not added.

The interface didn't change. The user did. Every design decision — icon-only buttons, technical error messages, jargon in dialogs — is a filter that selects for one kind of user and excludes another.

What's Different Between These Two Interfaces?

Side-by-side comparison: a less accessible device setup interface (icon-only buttons, slider-only input, technical error messages, color-only status, jargon labels) versus a more accessible version (labeled buttons, slider plus text input, plain-language errors, text+color status, friendly labels, undo option).

Both interfaces add a device. Both have the same features. Look at the two versions and identify: what assumptions does the left version make about its users that the right version doesn't?

How Do Users With Disabilities Use Software?

Before we talk about how to design for accessibility, let's understand what users are actually working with. These are not exotic tools — they ship with the devices you already own.

Assistive Tech: Vision

Screen readers convert the visual interface to audio. They announce text, buttons, headings, form fields, and their states — all read aloud. Users navigate entirely by keyboard commands, not by looking at the screen.

Built in: VoiceOver (Mac/iPhone), TalkBack (Android), Narrator (Windows)

Screen magnifiers enlarge a portion of the screen (2x–16x). Users see only a small area at a time and pan around — they never see your whole interface at once.

Built in: Zoom (Mac/iPhone), Magnifier (Windows/Android)

High contrast / color adjustments change the color palette for users who are colorblind or have low vision. Your interface still needs to make sense when the colors shift.

Built in: Display settings on every major OS

Assistive Tech: Motor & Hearing

Keyboard-only navigation — Tab between elements, Enter to activate, arrow keys within components. No mouse, no touchpad. Used by people with motor impairments, repetitive strain injuries, and power users who prefer keyboard shortcuts.

Voice control — Users speak commands: "Click Import," "Press Tab," "Scroll down." The software maps spoken words to UI elements by their labels. If your button doesn't have a label, voice control can't target it.

Switch devices — A single button (or sip-and-puff tube) that cycles through interface elements one at a time. Interaction is extremely slow — every unnecessary focusable element adds real time. Poor focus order isn't just annoying; it's exclusionary.

Captions and transcripts — Text alternatives for audio content. Used by deaf and hard-of-hearing users, but also by anyone in a noisy environment or watching without sound.

POUR: A Framework for Accessible Software

The Web Content Accessibility Guidelines (WCAG) organize accessibility around four principles:

Perceivable 👁️

Information must be presentable in ways users can sense

Operable ⌨️

Interface must be usable through multiple input methods

Understandable 🧠

Information and operation must make sense to users

Robust ⚙️

Must work reliably with assistive technologies

While originally for web content, POUR applies to any software interface — desktop apps, mobile apps, anything with a UI.

Perceivable: Can the User Sense It?

Never use color alone to convey meaning.

Wordle (before colorblind mode)

🟩🟨⬜ — Green = correct, Yellow = wrong position, Gray = not in word

~8% of men are red-green colorblind. Green and yellow squares were indistinguishable.

Wordle (with high-contrast mode)

🟧🟦⬜ — Orange = correct, Blue = wrong position

Switched to orange and blue: colors that differ in both hue and brightness, so they remain distinguishable for colorblind users.

Text alternatives: Every image needs alt text. Every icon needs a label. Instagram auto-generates: "Photo may contain: 2 people, outdoor, smiling." Without it, a screen reader just says "image."
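The "insufficient contrast" failures that automated tools flag come from a precise formula in WCAG 2.x. Here is a sketch of that computation; the constants are WCAG's, but the function names are ours:

```typescript
// Sketch of the WCAG 2.x contrast-ratio computation, the mechanical
// check behind "insufficient color contrast" warnings.

// Linearize one sRGB channel (0-255), per the WCAG relative-luminance definition.
function linearize(channel: number): number {
  const c = channel / 255;
  return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

// Relative luminance of an sRGB color.
function luminance(r: number, g: number, b: number): number {
  return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b);
}

// Contrast ratio: (L_lighter + 0.05) / (L_darker + 0.05), ranging 1:1 to 21:1.
function contrastRatio(a: [number, number, number], b: [number, number, number]): number {
  const [l1, l2] = [luminance(a[0], a[1], a[2]), luminance(b[0], b[1], b[2])].sort((x, y) => y - x);
  return (l1 + 0.05) / (l2 + 0.05);
}

// Black on white: the maximum possible ratio.
console.log(contrastRatio([0, 0, 0], [255, 255, 255]).toFixed(1)); // "21.0"
// Light gray on white: fails the 4.5:1 minimum for normal-size text.
console.log(contrastRatio([170, 170, 170], [255, 255, 255]) >= 4.5); // false
```

WCAG requires at least 4.5:1 for normal-size text (3:1 for large text), which is why this is one of the few accessibility properties a tool can verify with certainty.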

Operable: Can the User Interact With It?

Every action possible with a mouse must also be possible with a keyboard.

The volume slider problem:

Many media players have a volume slider that works perfectly with a mouse — click and drag. But try it with a keyboard:

  • Some sliders: arrow keys adjust volume in steps ✅
  • Some sliders: Tab skips right over it — no keyboard interaction at all ❌
  • Some sliders: you can focus it but the arrow keys scroll the page instead ❌

Who needs this?

  • Users with motor impairments who can't use a mouse
  • Users with tremors who can't perform precise drag gestures
  • Power users who prefer keyboard shortcuts
  • Anyone whose hands are full, wet, or injured

The fix is almost always: use standard UI components. They get keyboard support for free.
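If you do build a custom slider, "arrow keys adjust volume in steps" reduces to a small pure function. The key bindings below follow the ARIA slider pattern; the names and structure are our sketch, not a library API:

```typescript
// Keyboard handling a custom slider owes its users: key in, new value out.
// (A real widget would attach this to a keydown event handler.)
type SliderState = { value: number; min: number; max: number; step: number };

function sliderKeydown(state: SliderState, key: string): number {
  const { value, min, max, step } = state;
  const clamp = (v: number) => Math.min(max, Math.max(min, v));
  switch (key) {
    case "ArrowUp":
    case "ArrowRight": return clamp(value + step);       // nudge up
    case "ArrowDown":
    case "ArrowLeft":  return clamp(value - step);       // nudge down
    case "PageUp":     return clamp(value + step * 10);  // larger jump
    case "PageDown":   return clamp(value - step * 10);
    case "Home":       return min;                       // jump to minimum
    case "End":        return max;                       // jump to maximum
    default:           return value;                     // ignore other keys
  }
}

const volume: SliderState = { value: 50, min: 0, max: 100, step: 5 };
console.log(sliderKeydown(volume, "ArrowUp")); // 55
console.log(sliderKeydown(volume, "End"));     // 100
```

A standard slider component ships with exactly this behavior, which is why "use standard components" is almost always the cheaper fix.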

Understandable: Does It Speak the User's Language?

Developer language

  • Error 403: Forbidden
  • NullPointerException in Parser.parse() at line 247
  • detached HEAD state
  • Syncing... (retry 3/5, exponential backoff)

User language

  • "You don't have permission to view this page. Try logging in."
  • "We couldn't read that image. Try a clearer photo."
  • "You're looking at an older version. Switch to the latest?"
  • "Reconnecting... this may take a moment."

Remember Nielsen's Heuristic #2: Match between system and the real world. Speak the user's language, not yours.
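One way to keep developer language out of the UI is to translate known failures at a single boundary, so raw codes and stack traces never reach the user. A minimal sketch; the error codes and messages here are invented for illustration:

```typescript
// Map internal error codes to user language at the UI boundary.
// (Hypothetical codes for the SceneItAll example; not a real API.)
const userMessages: Record<string, string> = {
  HTTP_403: "You don't have permission to view this page. Try logging in.",
  PAIRING_TIMEOUT: "We couldn't find your light. Try moving it closer to the hub, then retry.",
  STALE_VERSION: "You're looking at an older version. Switch to the latest?",
};

function userFacingMessage(code: string): string {
  // Fall back to something actionable; never show a raw code or stack trace.
  return userMessages[code] ?? "Something went wrong. Please try again.";
}

console.log(userFacingMessage("PAIRING_TIMEOUT"));
```

Dav from the earlier example never needs to know what Zigbee is; the boundary translates "Zigbee pairing timeout" into a next step they can actually take.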

Robust: Does It Work With Assistive Technology?

Your app doesn't exist in isolation. It communicates with the operating system's accessibility API, which communicates with assistive technologies:

Standard UI components (buttons, text fields, checkboxes) participate in this chain automatically.
Custom widgets built from scratch are invisible to assistive tech unless you wire them up manually.

What You Build vs. What Assistive Tech Sees

Back to our running example — the "Add Device" button in SceneItAll:

Standard button component

What the user sees: a clickable "Add Device" button

What a screen reader announces: "Add Device, button"

What a keyboard user experiences: Tab highlights it, Enter activates it

What voice control hears: "Click Add Device" → works

Custom-built fake button

What the user sees: something that looks identical

What a screen reader announces: ...nothing. It's invisible.

What a keyboard user experiences: Tab skips right over it. No way to activate it.

What voice control hears: "Click Add Device" → "No matching element found"

They look the same on screen. They are completely different to assistive technology.
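Everything a real button element does for free, the fake one must reconstruct by hand. A sketch of the minimum wiring (role, tabindex, and aria-label are real HTML/ARIA attribute names; the helper function is ours):

```typescript
// The manual wiring a div-as-button needs before assistive tech can
// see it. A standard button component does all of this automatically.
function fakeButtonWiring(label: string): Record<string, string | number> {
  return {
    role: "button",      // so a screen reader announces "button"
    tabindex: 0,         // so keyboard users can Tab to it
    "aria-label": label, // so voice control can target "Click Add Device"
    // Still missing even with these: keydown handling so Enter and Space
    // activate it, disabled/pressed state exposure, and visible focus styling.
  };
}

console.log(fakeButtonWiring("Add Device"));
```

The comment block is the real lesson: even after three attributes, you have reimplemented only part of what the standard component gives you for free.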

Five Things That Make Software Work With Assistive Tech

  1. Use standard components, not custom fakes.
    A real button component is automatically announced as "button" by a screen reader. Something that merely looks like a button but isn't one is silent.
  2. Everything visible needs a text label.
    Icons, images, status indicators — if a sighted user gets information from it, there must be a text equivalent.
  3. Every mouse action has a keyboard equivalent.
    Tab to navigate, Enter to activate, Escape to dismiss. Arrow keys within composite widgets.
  4. When something changes on screen, announce it.
    If content updates dynamically (loading complete, error appeared, list filtered), screen readers need to be told.
  5. Focus goes where the user expects.
    Dialog opens → focus moves in. Dialog closes → focus returns to the button that opened it. Never leave users stranded.

Keyboard Navigation: The Hidden Contract

The standard keyboard navigation contract:

  • Tab: Move to next interactive element
  • Shift+Tab: Move to previous interactive element
  • Enter: Activate the focused element (click a button, follow a link)
  • Space: Toggle (checkboxes, expand/collapse)
  • Arrow keys: Navigate within a component (menu items, radio buttons, tabs)
  • Escape: Close/dismiss (dialogs, dropdowns, menus)

Focus must be visible. Users need to see which element is currently focused. If you remove the focus outline (a common CSS sin), keyboard users are flying blind.
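Inside a modal dialog the contract adds one rule: Tab from the last element wraps to the first, and Shift+Tab from the first wraps to the last, so focus never escapes the dialog. The wrap logic itself is tiny; a real implementation would query the DOM for the dialog's focusable elements, but here we assume the count is given:

```typescript
// Sketch of focus-trap cycling for a modal dialog: which element
// receives focus next when Tab (or Shift+Tab) is pressed.
function nextFocusIndex(current: number, count: number, shiftHeld: boolean): number {
  const delta = shiftHeld ? -1 : 1;
  return (current + delta + count) % count; // wraps at both ends
}

// Dialog with 3 focusable elements: name field, room dropdown, Save button.
console.log(nextFocusIndex(2, 3, false)); // 0: Tab from last wraps to first
console.log(nextFocusIndex(0, 3, true));  // 2: Shift+Tab from first wraps to last
```

The hard parts in practice are finding the focusable elements and remembering where focus came from so it can be restored when the dialog closes.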

"We're Accessible!" — But Are They?

Companies love to claim accessibility compliance. But claims vary enormously in credibility.

How would you evaluate whether an app is actually accessible?

An ironic scene: a shiny 'WCAG Compliant' badge on a software interface that has tiny text, no focus indicators, unlabeled images, and mouse-only interactions — clearly not accessible despite the badge.

Three Tiers of Accessibility Evaluation

Three-tier pyramid: bottom is Automated Testing (catches 30%), middle is Expert Evaluation, top is Testing with Disabled Users (gold standard). Arrow shows increasing rigor upward.

What Automated Tools Catch vs. Miss

Catches

  • Missing alt text on images
  • Insufficient color contrast ratios
  • Missing form labels
  • Missing document language
  • Duplicate IDs

Mechanical checks with clear right/wrong answers

Misses

  • Alt text that says "image.png"
  • Tab order that makes no sense
  • Error messages nobody understands
  • Keyboard traps you can't escape
  • Whether the app actually makes sense to use

Anything requiring human judgment

Passing automated tests ≠ accessible. It means you cleared the lowest bar.
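A sketch of the kind of mechanical check an automated tool runs, and of its limits. The types and the filename heuristic below are ours, invented for illustration:

```typescript
// Flag images with missing alt text, the classic automated check.
type Img = { src: string; alt?: string };

function auditAltText(images: Img[]): string[] {
  const problems: string[] = [];
  for (const img of images) {
    if (img.alt === undefined || img.alt.trim() === "") {
      problems.push(`${img.src}: missing alt text`); // tools catch this reliably
    } else if (/\.(png|jpe?g|gif|svg)$/i.test(img.alt)) {
      problems.push(`${img.src}: alt text looks like a filename`); // a heuristic at best
    }
    // Whether the alt text is *accurate and useful* requires a human.
  }
  return problems;
}

console.log(auditAltText([
  { src: "hub.png" },                                            // flagged: missing
  { src: "light.png", alt: "light.png" },                        // flagged: filename
  { src: "scene.png", alt: "Living room scene with two lamps" }, // passes
]));
```

Note that alt text reading "photo" or "click here" sails through both checks, which is exactly the "catches vs. misses" gap above.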

What Doesn't Count as Accessibility Evaluation

🚩 Compliance checklist nobody tested

"We went through the WCAG checklist and checked every box." Checking boxes doesn't demonstrate that software is accessible — only that someone believes guidelines were followed.

🚩 Empathy simulation

"Everyone on the team tried using the app blindfolded for 10 minutes." Valuable for building awareness, but tells you how sighted people experience artificial vision loss, not how blind users experience your software. Blind users have years of screen reader expertise you don't.

🚩 One testimonial without methodology

"A blind user said they could use it." Without systematic tasks, multiple participants, and consideration of diverse experiences, individual testimonials don't generalize.

Exercise: Google Maps, Keyboard Only (~8 min)

Setup: Close your trackpad. Put your mouse aside. Open Google Maps in your browser.

Try these three tasks using only your keyboard:

  1. Search for a restaurant near campus
  2. Get walking directions from Snell Library to that restaurant
  3. Switch to satellite view

Hints (Google Maps does have keyboard support — but can you find it?):

  • Press Ctrl + / to see all available shortcuts
  • Arrow keys move the map; + / - to zoom
  • Tab until the map is focused (look for a highlighted square)

As you work, note:

  • Where did you get stuck?
  • Which POUR principle was violated?
  • What worked surprisingly well — and how long did it take you to discover it?

Reference: Google Maps Accessibility Guide

Debrief: What Did You Find?

Let's hear what you experienced:

  1. Where did you get stuck? Which task was hardest?
  2. What worked well? Anything surprisingly smooth?
  3. Which POUR principle was violated most often?
  4. Would an automated testing tool have caught the problems you found?

Google has a massive accessibility team and billions of users. It's still genuinely hard. This is why real evaluation matters.

What This Means for Your Group Project

Your GA0 accessibility plan wasn't hypothetical. In GA1, your feature must be usable without a mouse.

Minimum bar for your feature:

  • ✅ Use standard UI components (buttons, text fields, dropdowns) — they get keyboard support for free
  • ✅ Every action is reachable via Tab and activatable via Enter/Space
  • ✅ Focus order matches visual order — Tab moves logically through your interface
  • ✅ Focus is always visible — never remove focus indicators
  • ✅ Dialogs and popups manage focus correctly (trap focus in, return focus out)

When reviewing a teammate's PR:

Try their feature keyboard-only before approving. If you can't complete the core workflow without a mouse, request changes.

Key Takeaways

  1. Design for the margins, improve it for everyone. The curb cut effect applies to software: accessibility features benefit all users.
  2. Exclusion isn't always about ability. SES, gender, culture, and cognitive style shape how people use software. If you only design for your own experience, you exclude a lot of people.
  3. POUR is your framework. Perceivable, Operable, Understandable, Robust — use it to evaluate any interface.
  4. Color alone is never enough. Always provide redundant cues (text, shape, position).
  5. Keyboard operability is non-negotiable. Every mouse action must have a keyboard equivalent.
  6. Automated testing catches ~30%. Real evaluation requires humans trying real tasks.
  7. Try your own feature without a mouse before you ship it.

Looking Ahead

Next up: GUI Programming (L29)

  • UI components, layouts, event handling
  • Everything from today applies: standard components, keyboard navigation, focus management

Your group project:

  • GA0 (due Mar 26): Your accessibility plan should reference POUR
  • GA1: Your feature implementation must support keyboard navigation
  • Code reviews: Try your teammate's feature keyboard-only

For more depth:

  • Consider enrolling in CS 4973: "Accessibility and Disability"
  • Watch Crip Camp (streaming on Netflix) — a documentary about the disability rights movement that led to the ADA

Today we learned to see the users we've been missing. Next, we start building interfaces they can actually use.