Pixel art showing the evolution of a study room booking app through three UCD stages: paper sketch with user feedback and crossed-out ideas, wireframe mockup with more feedback, and polished final app with happy user. Tagline: Test Early, Change Cheap.

CS 3100: Program Design and Implementation II

Lecture 27: User-Centered Design

©2026 Jonathan Bell, CC-BY-SA

Learning Objectives

After this lecture, you will be able to:

  1. Describe the value of user-centered design in software development
  2. Describe the UCD process: prototyping and evaluation for usability
  3. Apply UCD as a requirements elicitation technique

Recall from L24: What Usability Means

Five aspects that trade off against each other

Key concepts from L24:

  • Marcus & Dorothy — power user vs. occasional user personas with conflicting needs
  • Mental models — users and designers think about the same interface differently
  • Nielsen's 10 Heuristics — systematic expert evaluation checklist
  • Stakeholder trade-offs — you can't optimize for everyone

L24 taught us to evaluate usability. Today: how to design for it from the start.

Heuristic Evaluation Catches Violations, Not Misunderstandings

Split comparison: Left shows a UX expert approving a study room booking app's building-first navigation on all 10 heuristics. Right shows a stressed student who just wants a room at 3pm, confused by the 'Select Building' screen.

Connection to L24: Heuristic evaluation is valuable — but experts evaluating against principles are not the same as real users trying to accomplish real goals. Use both: heuristics catch obvious violations quickly and cheaply; UCD catches deeper mental model mismatches that experts can't see.

The Designer's Mental Model Is Not the User's Mental Model

A study room 'Book Room' button with two diverging thought bubbles: the designer imagines a building-floor-room spatial hierarchy, while the student imagines entering a time and seeing available rooms. A gap between them is labeled 'invisible until you watch real users.'

Neither model is wrong. But when the interface assumes the designer's model, users who think differently get lost — and no amount of expert review catches this.

Connection to L24: Remember Marcus and Dorothy? Marcus (power user) might think building-first because he understands system architecture. Dorothy (visiting grandparent) thinks time-first — she just wants a room at 3pm. Same gap: expert vs. casual user mental models.

Building the Wrong Thing Is the Most Expensive Mistake in Software

Paper prototype fix: minutes   $

Mockup fix: hours   $$

Code fix: days   $$$

Redesign after shipping: weeks   $$$$

Users abandon your product: everything   $$$$$

Every team that skipped user feedback and built for 6 months has the same story: "We thought we knew what users wanted."

UCD Activity → L24 Aspect It Tests

  • Paper prototyping → Learnability — can users figure it out?
  • Think-aloud testing → Effectiveness — can they complete tasks?
  • Iteration with users → Satisfaction — do they enjoy using it?
  • Testing after a time gap → Retainability — can they remember how?

You Can't Think Your Way to Good Usability

What teams assume

  • "We're smart engineers"
  • "We use the domain ourselves"
  • "We read the requirements carefully"
  • "We applied all 10 heuristics"
  • "We'll get it right"

What actually happens

  • Designer assumed building-first navigation → Students wanted time-first search
  • Designer assumed students know which buildings have whiteboards → Students had no idea
  • Designer assumed "Reserve" was self-explanatory → Students searched for "Book" or "Get a room" (H2 violation: terminology doesn't match user language)
  • Designer assumed 30-minute fixed slots → Students wanted custom time ranges like "3pm to 4:30pm" (H1 violation: system constraints not visible)

These aren't edge cases. These are the primary workflow. Connection to L24: Experts applying Nielsen's heuristics are still experts — they share the designer's mental model. Heuristic evaluation catches violations; only real users reveal misunderstandings.

Design With Users, Not For Users

Extractive (L9)

  • "What features do you need?"
  • Requirements document
  • Build for months
  • Acceptance test at the end

Participatory (UCD)

  • "Show me how you find a room today"
  • Iterate on prototypes together
  • Feedback at every stage
  • Users are design partners

Connection to L9: We introduced the participatory approach for requirements. UCD extends it into the entire design and development process.

The Timing Paradox: We Need Feedback Early but Evidence Comes Late

Cost to change: low ✓ early (design phase), high ✗ late (production).

Quality of user evidence: low ✗ early (design phase), high ✓ late (production).

UCD's answer: iterate with prototypes of increasing fidelity — get user feedback when changes are still cheap.

UCD Is an Iterative Cycle, Not a One-Time Consultation

Each iteration increases fidelity — from paper sketches to working software. Users aren't consulted once at the beginning and again at the end. They're involved continuously.

Prototype Fidelity Should Match Your Current Level of Uncertainty

  • Paper sketches — low fidelity; minutes to create; zero cost to change
  • Interactive mockups — medium fidelity; hours to create; low cost to change
  • Working prototypes — high fidelity; days to create; medium cost to change
  • Production software — full fidelity; weeks or months to create; high cost to change

High uncertainty? Use low-fidelity prototypes — don't invest in details until the concept is right. Concept validated? Increase fidelity to test interaction details.

Paper Prototypes: Minutes to Create, Zero Cost to Throw Away

A paper prototype of a study room booking app's building-first flow: five hand-drawn screens arranged left to right showing Building List → Floor Selection → Room List → Time Slots → Confirmation. A student's finger hovers uncertainly over Screen 1. Pencil, eraser, and crumpled v1 nearby.

How it works: Draw each screen on paper. A facilitator "plays computer" — when the user "taps" a button, swap in the next paper screen.

Why it works: Users feel comfortable criticizing paper. Fast to modify during the session. Forces focus on concepts, not visual polish.

Paper Prototypes Reveal Conceptual Confusion Before You Write Code

Facilitator (shows paper Screen 1: building list): "You have a group meeting at 3pm today. Book a study room."

Student: "Um... which building should I pick? I don't care which building. Is there a way to just see what's free at 3?"

Facilitator: "What would you expect to see first?"

Student: "A time picker? Or just... a list of rooms that are available at 3. I don't want to check every building one by one."

The entire navigation concept is wrong — discovered in 5 minutes with paper. Not in 5 sprints with code.

Wizard-of-Oz Prototypes: Real Interface, Human Behind the Curtain

Split scene: Left shows a student happily using a time-first room booking app that instantly shows available rooms. Right shows a facilitator behind a curtain frantically cross-referencing building spreadsheets to simulate the availability backend. Sign reads: 'No backend — just me checking spreadsheets fast.'

The interface looks real. The responses look real. But a human is simulating the hard parts. Test the user experience before building the technology.
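One way to sketch the Wizard-of-Oz idea in code (all names here — `find_rooms`, `wizard` — are illustrative assumptions, not the course's actual setup): the UI calls what looks like an availability API, but the answers come from a callable that a human facilitator fills in.

```python
# Wizard-of-Oz sketch: the UI believes it is calling an availability API,
# but the "backend" is a function a human facilitator answers live.
# All names are illustrative assumptions.

def find_rooms(time_slot, wizard):
    """Ask the 'backend' which rooms are free at time_slot.

    In a live session, `wizard` would prompt the facilitator
    (e.g., via input()) while the participant waits.
    """
    answer = wizard(f"Which rooms are free at {time_slot}?")
    return [room.strip() for room in answer.split(",") if room.strip()]

# Live session wizard would be:  lambda q: input(q + " ")
# Here we script the facilitator's reply instead.
scripted_wizard = lambda question: "Room 205, Room 310"

rooms = find_rooms("3pm", scripted_wizard)
print(rooms)  # → ['Room 205', 'Room 310']
```

Because the participant only sees the interface, the experience is testable long before any real availability service exists.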

Working Prototypes Reveal What Only Real Interaction Can

Time Picker Feel

Does the time selector feel snappy on mobile?

Can students quickly jump between days?

Paper and mockups can't test this.

Availability Updates

Does the room list update smoothly when the time changes?

Is the loading state clear?

This is about milliseconds and feedback.

Keyboard Navigation

Can students tab through the time picker and room list?

Do screen readers announce availability?

Accessibility requires real code.

How it works: The UI is fully implemented and responsive. But instead of a real availability API, it returns predefined room lists. Instead of a real booking backend, it uses mock data.

The UI is real. The backend can be mocked. Test the interaction, not the infrastructure.
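A minimal sketch of a mocked backend (the class name, data, and method are illustrative assumptions, not the lecture's actual code): the UI layer calls a service interface, and the mock returns predefined room lists.

```python
# Mocked backend for a working prototype: the UI calls this service
# as if it were the real availability API, but responses are canned.
# All names and data are illustrative assumptions.

class MockRoomService:
    """Stands in for the real availability API during prototype testing."""

    # Predefined availability: time slot -> rooms free at that time
    _availability = {
        "15:00": [{"room": "205", "seats": 8}, {"room": "310", "seats": 4}],
        "16:00": [{"room": "310", "seats": 4}],
    }

    def rooms_free_at(self, time_slot):
        # A real implementation would query the booking backend here.
        return self._availability.get(time_slot, [])

service = MockRoomService()
print(service.rooms_free_at("15:00"))  # two rooms free at 3pm
print(service.rooms_free_at("09:00"))  # nothing predefined -> []
```

Swapping the mock for the real service later only requires implementing the same interface, so interaction testing and backend work can proceed independently.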

AI Accelerates Prototype Creation, Not User Understanding

What AI can do

  • Generate UI code quickly: "Create a room booking interface with a time picker and room list"
  • Create realistic sample data: "Generate 50 study rooms across 5 buildings with capacities and amenities"
  • Produce design variations: "Show me three different layouts for the available rooms list"
  • Write Wizard-of-Oz scripts: "Write a server that returns mock room availability data"

What AI cannot do

  • Replace actual user testing: AI generates prototypes, but can't tell you if students understand them
  • Know your specific users: AI produces "average" designs — your users (commuters? residents? grad students with lab access?) have specific needs
  • Predict user confusion: The whole point of UCD is that you can't predict user behavior from first principles

AI builds prototypes in seconds. Only users can tell you if they work.

Launch PollEverywhere

Please open PollEverywhere and join the poll.

5 scenarios — apply L24 concepts (personas, mental models, heuristics) to UCD situations involving campus dining apps, room booking design, and user testing with prototypes.

Think-Aloud Protocol: Hear the User's Mental Model in Real Time

"As you use this, please tell me what you're thinking. What are you looking at? What are you trying to do?"

"OK, I need a room at 3pm... I see the time picker, that makes sense..."

"I'll select 3pm today... nice, it's showing me rooms..."

"Room 310, 4 seats... wait, there are 5 of us. Does '4 seats' mean 4 max, or is that just the table?"

"OK I'll pick Room 205, 8 seats, that's safe. Booking... confirmed!"

"Wait, it says 3:00 to 3:30? We need at least an hour. How do I change that?" (H1: system status not visible — user didn't know about fixed slots)

These quotes show what to listen for: confidence, hesitation, success, and problems discovered. Notice how UCD reveals issues that map back to Nielsen's heuristics — but only real users reveal which heuristics matter most.

Running a User Test: Practical Tips

Do

  • "What are you thinking?"
  • "What do you expect to happen?"
  • "Walk me through what you're trying to do"
  • Wait through silence — let them work it out
  • Take notes on behavior, not just words

Don't

  • "Did you see the time picker?" ← leading
  • "You need to click there" ← helping
  • "That's not how it works" ← correcting
  • Explain the design before they try it
  • Test with only people on your team

A 25-minute session: 5 min warmup ("think aloud as you go") → 15 min tasks (3-4 concrete tasks) → 5 min debrief ("what surprised you?"). 3-5 users catches most major problems.

Task Completion Testing: Measure Where Users Succeed and Fail

Study room booking tasks given to 10 test users:

Task success rates:

  • Book a room at 3pm — 90%
  • Find a room with a whiteboard — 80%
  • Book for longer than 30 min — 65%
  • Cancel and rebook a room — 30%

Metrics to record:

  • Success rate — completed the task?
  • Time on task — how long did it take?
  • Error rate — how many wrong paths were tried?
  • Assistance needed — did they ask for help?
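These four metrics can be tabulated from raw session records with a few lines of code (the field names and sample data below are illustrative assumptions):

```python
# Tabulating task-completion metrics from raw session records.
# Field names and sample data are illustrative assumptions.

sessions = [
    {"task": "Book a room at 3pm", "completed": True,  "seconds": 42,  "wrong_paths": 0, "asked_help": False},
    {"task": "Book a room at 3pm", "completed": True,  "seconds": 55,  "wrong_paths": 1, "asked_help": False},
    {"task": "Cancel and rebook",  "completed": False, "seconds": 180, "wrong_paths": 3, "asked_help": True},
    {"task": "Cancel and rebook",  "completed": True,  "seconds": 120, "wrong_paths": 2, "asked_help": True},
]

def metrics_for(task, records):
    """Summarize success rate, time on task, errors, and assistance."""
    runs = [r for r in records if r["task"] == task]
    n = len(runs)
    return {
        "success_rate": sum(r["completed"] for r in runs) / n,
        "avg_seconds": sum(r["seconds"] for r in runs) / n,
        "avg_wrong_paths": sum(r["wrong_paths"] for r in runs) / n,
        "assistance_rate": sum(r["asked_help"] for r in runs) / n,
    }

print(metrics_for("Cancel and rebook", sessions))
# success_rate 0.5, avg_seconds 150.0, avg_wrong_paths 2.5, assistance_rate 1.0
```

Low success rates like the 30% on "Cancel and rebook" flag where to focus the next design iteration.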

Findings Require Root Cause Analysis, Not Just Symptom Fixing

Finding: "Students can't book for longer than 30 minutes"

Same symptom, three different root causes, three different fixes. Connection to L24: Root causes often map to specific heuristics (H1: visibility, H2: user language) — but UCD reveals which root cause is actually affecting your users.

How Many Iterations? It Depends — and That's OK

  • Move to higher fidelity when major conceptual issues are resolved
  • Iterate at the same fidelity when you're still finding fundamental problems
  • Stop when additional testing reveals only minor, diminishing issues

There's no magic number of iterations. The goal is confidence that the concept is right before investing in higher-fidelity work. CS 2484 and 4530 explore this process in much greater depth.

Prototypes Reveal What Interviews Miss

Interview (L9 extractive approach)

"What features do you need in a room booking app?"

  • "Search for rooms"
  • "Book a room"
  • "See my bookings"

3 basic features described.

Interviews capture what users think to mention.

Prototype testing (UCD)

Watch a student use a paper prototype to book a room.

  • "I think about time, not building" ← conceptual model
  • "I need this room every Tuesday" ← recurring booking
  • "Does '4 seats' mean max?" ← ambiguous labeling
  • "Can I extend if we run over?" ← edge case
  • "Can I see where my friends booked?" ← social feature

5+ requirements and conceptual insights discovered.

Prototypes reveal what users actually do.

Interviews discover features. Prototypes discover requirements — including the organizing principles and workflows that make features usable. Connection to L24: Prototypes test all five usability aspects directly — can users learn it, be effective, stay productive?

Users Interacting With Prototypes Reveal Requirements You Never Imagined

A prototype testing session where a student generates unexpected requirements in speech bubbles: recurring bookings, extending reservations, unclear capacity labeling, social features. A facilitator across the table frantically takes notes. Each bubble is tagged as a different type of requirement discovery.

UCD isn't just a usability technique — it's a requirements elicitation technique. Real users with real tasks reveal functional requirements, edge cases, and workflow mismatches that no requirements document would capture.

Translating Observations Into Requirements

After a prototype session, document what you learned:

User said: "We need this room every Tuesday"  →  Requirement: Recurring weekly booking   (Functional)

User did: Ignored building list, looked for time picker  →  Requirement: Time-first navigation as default   (Conceptual model)

User asked: "Does '4 seats' mean max?"  →  Requirement: Capacity must distinguish comfortable vs. maximum   (Labeling)

User struggled: Couldn't extend past 30 min  →  Requirement: Custom duration input, not fixed slots   (Workflow mismatch)

Every observation has a source (what the user said/did), a requirement (what the system needs), and a type (functional, conceptual, labeling, workflow). This is how UCD produces requirements, not just usability feedback.
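That source → requirement → type pattern can be captured in a simple record so findings feed directly into the backlog (the class, fields, and categories below are illustrative assumptions):

```python
# Recording prototype-session observations as structured requirements.
# Fields mirror the pattern in the text: source (what the user said/did),
# requirement (what the system needs), and kind (requirement type).
# All names and categories are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Observation:
    source: str        # what the user said or did
    requirement: str   # what the system needs
    kind: str          # functional / conceptual / labeling / workflow

findings = [
    Observation("Said: 'We need this room every Tuesday'",
                "Recurring weekly booking", "functional"),
    Observation("Did: ignored building list, looked for time picker",
                "Time-first navigation as default", "conceptual"),
    Observation("Asked: 'Does 4 seats mean max?'",
                "Distinguish comfortable vs. maximum capacity", "labeling"),
    Observation("Struggled: couldn't extend past 30 min",
                "Custom duration input, not fixed slots", "workflow"),
]

# Group requirements by type for triage.
by_kind = {}
for f in findings:
    by_kind.setdefault(f.kind, []).append(f.requirement)
print(sorted(by_kind))  # → ['conceptual', 'functional', 'labeling', 'workflow']
```

Keeping the source attached to each requirement preserves the evidence trail: when a requirement is questioned later, the team can point to what a real user said or did.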

UCD Reduces All Three Dimensions of Requirements Risk

Understanding Risk

Instead of interpreting requirements documents, you watch users interpret your interface.

Having a representative set of users test your prototype directly reduces this risk.

Scope Risk

Prototyping reveals hidden complexity. The "simple" room booking turns out to need recurring slots, duration control, capacity clarification, and social features.

Better to discover this expansion during paper prototyping than during implementation.

Volatility Risk

Early user feedback lets you pivot before committing.

If prototype testing reveals students fundamentally misunderstand your booking workflow, you can redesign the concept before building it.

Connection to L9: Same three risk dimensions we identified in requirements analysis — now with a concrete mitigation strategy at every stage of development.

Better to Discover Hidden Scope on Paper Than in Production

Two project timelines: Top (With UCD) shows requirements discovered as small sticky notes during paper prototyping, leading to smooth delivery. Bottom (Without UCD) shows same requirements discovered as explosions during implementation, causing rework and delays.

Your Design Sprint: GA0 Starts Today

A design sprint is a time-boxed period where a team applies UCD before writing production code. Here's yours:

Each team member (15 pts)

  1. User Persona — a realistic user of your feature (goals, pain points, context)
  2. Low-fidelity wireframes — 3-5 hand-drawn screens showing key interactions
  3. Accessibility considerations — keyboard nav, screen readers, WCAG

As a team (15 pts)

  1. Architecture diagram — ViewModels ↔ Services
  2. Integrated wireframes — navigation flow between features
  3. UI terminology table — consistent user-facing labels
  4. Feature Buffet selection — pick 2-3 for GA2
  5. "Our Feature" concept — your original idea (designed, not built)

Due Thursday March 26. Full spec: GA0: Design Sprint. Assign features, then start sketching.

Key Takeaways: UCD Creates a Continuous Feedback Loop Between Users and Implementation

  1. Expert evaluation has limits — heuristic evaluation catches violations, but only real users reveal the gap between your mental model and theirs

  2. Building the wrong thing is the most expensive mistake — 2 hours of paper prototyping can save weeks of rework

  3. Iterate with increasing fidelity — paper → mockups → working prototypes, testing with users at each stage. Move up when the concept is validated.

  4. Prototypes reveal what interviews miss — interviews discover features; prototypes discover requirements, mental models, and workflows

  5. Document findings as requirements — every observation has a source (what the user said/did), a requirement (what the system needs), and a type (functional, conceptual, labeling, workflow)

  6. UCD reduces understanding, scope, and volatility risk (L9) — by making requirements visible through user behavior, not documents

Looking Ahead

Today: Assign features to team members, start sketching wireframes

Early next week: TA mentor meeting — bring your feature assignments and initial sketches for feedback

Thursday March 26: GA0 Design Sprint due

Next lecture: Accessibility and Inclusivity — designing for the full range of human ability

Today we learned to design with users. Next, we'll ask: which users — and are we designing for all of them?