Pixel art of a student at a crossroads with four signposts: Debugging (HW4), Architecture (HW5), Testing, and Other Topics. Architectural diagrams and code in the background.

CS 3100: Program Design and Implementation II

Lecture 25: Exam 2 Review

©2026 Jonathan Bell, CC-BY-SA


Today's Agenda

  1. Debugging — four approaches, HW4 design issues (~15 min)
  2. Architecture — HW5 decisions, hexagonal, ADRs, monolith, serverless (~18 min)
  3. Poll — 5 MC questions, 5 min to answer + 5 min discussion (~10 min)
  4. Testing — test double concepts, testability best practices, anti-patterns (~8 min)
  5. Rapid-fire — requirements, GRASP, networks, teams/AI/OSS (~12 min)
  6. Exam logistics + Q&A (~2 min)

Exam scope: L9-L23.

Section 1: Debugging a Codebase You Don't Own

HW4 focus — the skills that separate debugging from guessing

"I Don't Understand This Code" Is the Starting Point, Not the Problem

Unproductive path

  1. See error
  2. Ask AI: "fix this"
  3. Get new code
  4. See different error
  5. Repeat until deadline

Result: you can't explain your code in a TA meeting — or on the exam

Productive path

  1. See error
  2. Ask: "what should this code do?"
  3. Trace: control flow → data flow
  4. Identify the gap between expected and actual
  5. Fix with understanding

Result: you understand the code you submit

The rubber duck principle: if you can't explain it out loud, you don't understand it yet.

Reading Unfamiliar Code: Three Steps Before You Touch Anything

  1. Read the public interface first. What can callers do? What does this class promise? The internals follow from the contract.
  2. Trace the call chain. Pick one method. Follow it: what does it call? What does it return? What are the preconditions it assumes?
  3. Ask: what would break this? Null input? Empty collection? Duplicate entries? These are your test cases — and your debugging suspects.

For HW4: before writing a test, you must be able to state what the method is supposed to do — in your own words.
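Those "what would break this?" questions turn directly into probe tests. A minimal sketch with a hypothetical totalServings helper (the class name and its null-rejecting contract are assumptions for illustration, not HW4 code):

```java
import java.util.Collections;
import java.util.List;

// Hypothetical method under study: sums servings across recipes.
class RecipeMath {
    static int totalServings(List<Integer> servingsPerRecipe) {
        if (servingsPerRecipe == null) {
            throw new IllegalArgumentException("servings list must not be null");
        }
        return servingsPerRecipe.stream().mapToInt(Integer::intValue).sum();
    }
}

class EdgeCaseProbes {
    public static void main(String[] args) {
        // Probe 1: empty collection — should be 0, not an exception
        System.out.println(RecipeMath.totalServings(Collections.emptyList()));
        // Probe 2: null input — does the contract reject it loudly?
        try {
            RecipeMath.totalServings(null);
        } catch (IllegalArgumentException e) {
            System.out.println("null rejected: " + e.getMessage());
        }
    }
}
```

Each probe is both a test case and a debugging suspect: if the real method's behavior differs from your stated expectation, you have found the gap.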

Four Debugging Approaches — Know When to Use Which

Rubber Duck Debugging

Explain the code out loud to an imaginary listener, line by line. Forces you to articulate what you think the code does — the gap between your explanation and reality is the bug.

Best for: logic errors you can't see by staring; understanding code you didn't write.

Print-Statement Debugging

Insert System.out.println (or logging) at key points to observe actual runtime values. Answers: "what is this actually holding at this moment?"

Best for: confirming or refuting assumptions about data flow; quick feedback when a debugger is inconvenient.

Scientific Method

  1. Observe the failure
  2. Form a hypothesis ("I think X causes Y")
  3. Predict what you'd see if the hypothesis is true
  4. Run an experiment (one change at a time)
  5. Conclude: hypothesis confirmed or falsified → repeat

Best for: complex, non-obvious bugs where you need to rule out causes systematically.
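The hypothesis loop can be made concrete as a minimal reproduction. Everything below is hypothetical (a suspected integer-truncation bug in a scaling helper); the point is one change per experiment:

```java
// Hypothesis: scaling produces wrong quantities because of integer truncation.
// Prediction: if true, scaling qty 3 by 0.5 yields 1.0 instead of 1.5.
// Experiment: a minimal reproduction that changes exactly one thing (the cast).
class ScaleExperiment {
    static double scaleBuggy(int qty, double factor) {
        return (int) (qty * factor);   // suspected bug: cast truncates 1.5 to 1
    }

    static double scaleFixed(int qty, double factor) {
        return qty * factor;           // same expression, no cast
    }

    public static void main(String[] args) {
        System.out.println(scaleBuggy(3, 0.5)); // 1.0 — prediction held: hypothesis confirmed
        System.out.println(scaleFixed(3, 0.5)); // 1.5 — the fix, with understanding
    }
}
```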

Trial-and-Error Debugging

Make a change, run, observe. Repeat until it works.

Risk: you may fix the symptom without understanding the cause — the bug returns in a different form, or you introduce a new one.

When acceptable: known, well-understood environment with fast feedback loops.

Two Design Issues That Appeared Frequently in HW4

God Class — class-level issue

One class that knows and does too much. It owns data that belongs in other classes, contains logic for multiple concerns, and becomes the bottleneck for any change.

class RecipeLibrary {
    // parses JSON
    // manages ingredients
    // handles scaling
    // persists to disk
    // formats output
}

Symptom: methods 30+ lines long; adding any feature requires touching this class.
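One way to dismantle a god class like this: give each commented concern its own class and shrink RecipeLibrary to a thin coordinator. The class names and String-based store below are illustrative stand-ins, not the HW4 design:

```java
import java.util.HashMap;
import java.util.Map;

// Each former "region" of RecipeLibrary becomes its own small class.
class RecipeStore {                       // manages recipes only
    private final Map<String, String> recipes = new HashMap<>();
    void put(String name, String body) { recipes.put(name, body); }
    String get(String name) { return recipes.get(name); }
}

class RecipeFormatter {                   // formats output only
    String format(String name, String body) { return name + ": " + body; }
}

// The old god class shrinks to a coordinator that delegates.
// Adding a feature now touches one small class, not this one.
class RecipeLibrary {
    private final RecipeStore store = new RecipeStore();
    private final RecipeFormatter formatter = new RecipeFormatter();

    void add(String name, String body) { store.put(name, body); }
    String display(String name) { return formatter.format(name, store.get(name)); }
}
```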

Under-decomposed Methods — method-level issue

A method that does three things should be three methods. Long methods with multiple steps, comments separating logical phases, or deeply nested logic are the tell.

public void addRecipe(Recipe r) {
    // step 1: validate
    if (r == null) throw ...
    if (r.name.isEmpty()) throw ...
    // step 2: normalize
    r.name = r.name.trim().toLowerCase();
    // step 3: store
    recipes.put(r.name, r);
}

Each "step" is a candidate for its own private method.
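A sketch of that extraction, assuming the Recipe fields and recipes map from the slide (the surrounding RecipeBook class and exception choices are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

class Recipe {
    String name;
    Recipe(String name) { this.name = name; }
}

class RecipeBook {
    private final Map<String, Recipe> recipes = new HashMap<>();

    // The public method now reads like the comments it replaced.
    public void addRecipe(Recipe r) {
        validate(r);
        normalize(r);
        store(r);
    }

    private void validate(Recipe r) {
        if (r == null) throw new IllegalArgumentException("recipe must not be null");
        if (r.name.isEmpty()) throw new IllegalArgumentException("name must not be empty");
    }

    private void normalize(Recipe r) {
        r.name = r.name.trim().toLowerCase();
    }

    private void store(Recipe r) {
        recipes.put(r.name, r);
    }

    public Recipe get(String name) { return recipes.get(name); }
}
```

Notice the step comments disappear: the method names carry that information now.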

Exercise 1: Identify the Issue (3 min)

public class CookbookManager {
    private Map<String, List<Recipe>> categories = new HashMap<>();
    private ObjectMapper mapper = new ObjectMapper();

    public void loadFromFile(String path) throws IOException {
        JsonNode root = mapper.readTree(new File(path));
        for (JsonNode cat : root.get("categories")) {
            String name = cat.get("name").asText();
            List<Recipe> recipes = new ArrayList<>();
            for (JsonNode r : cat.get("recipes")) {
                Recipe recipe = new Recipe(
                    r.get("title").asText(),
                    r.get("servings").asInt()
                );
                for (JsonNode ing : r.get("ingredients")) {
                    recipe.addIngredient(ing.get("name").asText(),
                        ing.get("qty").asDouble(), ing.get("unit").asText());
                }
                recipes.add(recipe);
            }
            categories.put(name, recipes);
        }
    }

    public List<Recipe> getByCategory(String cat) { ... }
    public void saveToFile(String path) throws IOException { ... }
    public Recipe scale(Recipe r, double factor) { ... }
    public String formatRecipe(Recipe r) { ... }
}

Identify: (1) a class-level design issue, and (2) a method-level design issue. Are they the same issue or separate?

Section 2: Architecture Tradeoffs for HW5

The skill: make a design decision and justify the tradeoff

What Makes a Decision Architectural?

The heuristic: will this be expensive to change later?

Architectural — expensive to change

  • Does your domain model depend on your CLI?
  • Is business logic in your service layer or in your command handlers?
  • Does your service layer expose a single registry or separate adapters?
  • Where do you draw the port boundary?

Affect multiple components. Hard to reverse once you have callers.

Design — cheap to change

  • Method names in your service class
  • Whether RecipeService or CookbookService is the right name
  • Whether you use a for loop or stream().filter()
  • Field ordering within a class

Local to one class. Easy to refactor with IDE support.

Architectural decisions shape how components communicate and what they depend on. Once callers exist, reversing them means changing callers too.

The Most-Asked HW5 Design Question

"Should I have a single service registry, or separate service adapters?"

Option A: Single Registry

// CLI gets one object
ServiceRegistry services = ...;
services.getRecipeService().scale(...);
services.getLibraryService().add(...);

  • CLI only needs one dependency
  • All services discoverable in one place
  • Registry interface grows over time (Hyrum's Law risk)

Prefer when: entry points are many (GUI + CLI + API all need services) and you want a single wiring point. Simplicity of wiring outweighs coupling risk.

Option B: Separate Adapters

// CLI gets what it needs
RecipeService recipeOps = ...;
LibraryService libraryOps = ...;
recipeOps.scale(...);
libraryOps.add(...);

  • CLI is explicit about what it depends on
  • Each service can evolve independently
  • More constructor parameters / wiring

Prefer when: testability matters (each command can inject only what it uses) or services evolve at different rates and you want to limit what each caller can see.

Neither is universally right. The exam will give you a goal ("prioritize testability" / "minimize wiring complexity") and ask you to choose and justify.

Hexagonal Architecture: The Mental Model for HW5

Key question: does your domain core import anything from the HTTP layer or the real clock? If yes, the dependencies are backwards.
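That dependency direction can be sketched with the EnergyPriceService example used later in these slides. The off-peak rule, the 0.10 threshold, and FixedPriceAdapter are hypothetical; the point is that the domain core sees only the port and an injected Clock:

```java
import java.time.Clock;
import java.time.Instant;
import java.time.LocalTime;
import java.time.ZoneId;

// Port: defined BY the domain, in the domain's own package.
interface EnergyPriceApi {
    double currentPricePerKwh();
}

// Domain core: depends only on the port and an injected Clock.
// No HTTP imports, no Clock.systemDefaultZone() hidden inside.
class EnergyPriceService {
    private final EnergyPriceApi prices;
    private final Clock clock;

    EnergyPriceService(EnergyPriceApi prices, Clock clock) {
        this.prices = prices;
        this.clock = clock;
    }

    boolean shouldCharge() {
        int hour = LocalTime.now(clock).getHour();
        boolean offPeak = hour < 6;                      // hypothetical off-peak window
        return offPeak && prices.currentPricePerKwh() < 0.10;
    }
}

// An adapter lives OUTSIDE the core and implements the port.
// (A real one would make an HTTP call; this one is a stand-in.)
class FixedPriceAdapter implements EnergyPriceApi {
    public double currentPricePerKwh() { return 0.05; }
}
```

If EnergyPriceService compiled without the adapter on the classpath, the arrows point the right way.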

Quality Attribute Tradeoffs: The Three That Matter Most for HW5

Attribute      What it means for HW5                                       In tension with
Changeability  Can I swap CLI for GUI without touching domain logic?       Simplicity (more indirection)
Testability    Can I test domain logic without a real CLI or real files?   Simplicity (more interfaces)
Simplicity     Is the codebase easy to understand and navigate?            Changeability + Testability

Adding ports and adapters increases changeability and testability — but adds indirection. That tradeoff is worth making when you have a real reason: multiple entry points, testability requirements, or known future change.

The flexibility trap: adding interfaces you'll never swap is the worst of both worlds — complexity without benefit.

Finding Service Boundaries: Four Heuristics Applied to HW5

  1. Rate of change: What changes weekly (CLI commands, output format)? What changes rarely (recipe scaling logic)? Separate things that change at different rates.
  2. Actor ownership: Who interacts with what? The user interacts with the CLI. The domain model is owned by the business logic. Different actors → different components.
  3. Interface segregation: Don't expose methods to components that don't use them. A RecipeScalingService that also has persistToFile() is giving CLI callers access to storage concerns they shouldn't touch.
  4. Testability: Can each component be tested without deploying the others? If testing your service requires a running CLI, you've broken this heuristic.

When multiple heuristics point to the same boundary, that's a strong signal you've found a natural seam.

Exercise 2: Identify the Architectural Issue (3 min)

public class RecipeCommandHandler {
    // Handles CLI command: "scale <recipe> <factor>"
    public void handleScale(String[] args) {
        String recipeName = args[1];
        double factor = Double.parseDouble(args[2]);

        // Load from file
        Recipe r = new ObjectMapper()
            .readValue(new File("recipes/" + recipeName + ".json"), Recipe.class);

        // Scale the recipe
        Recipe scaled = new Recipe(r.name + " (x" + factor + ")");
        for (Ingredient ing : r.ingredients) {
            scaled.addIngredient(ing.name, ing.quantity * factor, ing.unit);
        }

        // Print result
        System.out.println(scaled.name);
        for (Ingredient ing : scaled.ingredients) {
            System.out.printf(" %s: %.2f %s%n", ing.name, ing.quantity, ing.unit);
        }
    }
}

Which heuristic does this violate? What quality attribute does that affect? How would you fix it?

Monolith, Partitioning, and Serverless: The Architectural Spectrum

Monolith

Single deployment unit, shared memory, unified codebase.

Strengths: simplicity ★★★, responsiveness ★★★, easy debugging

Weaknesses: scalability ★☆☆, deployability ★☆☆, fault tolerance ★☆☆

Fix a typo → redeploy everything. Crash in one module → whole system down.

Modular monolith: same operational simplicity, but enforced internal boundaries (modules with public APIs). Probably what you've been building.

Technical vs. Domain Partitioning

Technical:          Domain:
controllers/        grading/
services/             GradeController
repositories/         GradeService
models/             submissions/
                      SubmissionController

Technical: group by role. Makes sense when teams own technical layers (frontend/backend/DBA).

Domain: group by business capability. Changes to a feature stay in one folder. Generally preferred.

Conway's Law: architecture mirrors team structure — teams that own technical layers produce technical partitions.

Serverless / FaaS

"Technical partitioning with a vendor."

You manage: your function code, event triggers, environment variables

Provider manages: servers, OS, runtime, scaling, networking, redundancy

// Your whole "server" is:
public class ScaleHandler
        implements RequestHandler<S3Event, String> {
    public String handleRequest(
            S3Event event, Context ctx) {
        // 15 lines of business logic
    }
}

Best for: event-driven, stateless, bursty load. Not for: long-running, sustained high load, real-time.

Architecture Decision Records (ADRs): Capturing the Why

Diagrams show what the architecture is. ADRs capture why it is that way — and what you gave up.

Three required elements:

  1. Context — what situation drove this decision? What constraints or forces were in play?
  2. Decision — what did you choose, and what alternatives did you consider?
  3. Consequences — what do you gain? What do you lose? What becomes harder?

An ADR that only lists benefits isn't doing its job.

Example (Pawtograder security):

Context: Grading scripts contain instructor solutions. Student code runs on the same infrastructure. Students could potentially exfiltrate them.

Decision: Download grading scripts at runtime over an authenticated channel rather than bundling them in the runner image.

Consequences:

  • ✓ Students can't inspect the runner image for secrets
  • ✓ Scripts can be updated without rebuilding the runner
  • ✗ Adds a network dependency that can fail (reliability risk)
  • ✗ Requires authenticated download infrastructure

Mid-Lecture Poll — 5 minutes

Answer at pollev.com/jbell

Section 3: Testing — The Question That Matters

Not: "is this a spy or a fake?" — But: "what is this test double doing?"

Test Doubles: Two Questions That Actually Matter

The exam won't ask you to label a test double. It will ask you to reason about what a test is doing.

Question 1: Should this test provide fake data to the SUT?

Use a fake/controlled dependency when the real one is unpredictable, slow, or unavailable — and the test's goal is to verify what the system under test does with that input.

// Real question: "does service recommend
// charging when price is low?"
// We don't care about the real API —
// just make it return something predictable.
EnergyPriceApi stubApi = () -> 0.05; // cheap!
Clock fixedClock = Clock.fixed(
    Instant.parse("2026-03-16T03:00:00Z"),
    ZoneId.of("UTC"));

EnergyPriceService svc =
    new EnergyPriceService(stubApi, fixedClock);

// Assert on the SUT's output:
assertThat(svc.shouldCharge()).isTrue();

Question 2: Should this test verify the SUT made specific calls?

Use a verifying double when the correctness of the code is that it called a collaborator correctly, not just what value it returned. The dependency IS the observable behavior.

// Real question: "does service send an alert?"
// The return value tells us nothing —
// we need to know the notifier was called.
AlertService mockAlerts =
    mock(AlertService.class);

EnergyPriceService svc =
    new EnergyPriceService(stubApi, fixedClock,
        mockAlerts);
svc.checkAndAlert();

// Assert the SUT's BEHAVIOR, not output:
verify(mockAlerts)
    .sendAlert(argThat(a -> a.level == HIGH));

Ports and Adapters → Testability

The whole point: at each port, you can swap the real adapter for a test adapter.

Production:
  EnergyPriceService → EnergyPriceApi (port)
                         ↑ implemented by HttpPriceAdapter (adapter — real HTTP call)

Test:
  EnergyPriceService → EnergyPriceApi (same port)
                         ↑ implemented by StubPriceApi (test adapter — returns fixed $0.05)

This only works if EnergyPriceService depends on the port (interface), not the adapter (concrete class). If it has new HttpClient() inside it, you cannot swap it.

Exam question type: "Here's a class that's hard to test. What architectural change makes it testable?"

Designing for Testability: Best Practices and Anti-Patterns

Best practices ✓

  • Inject dependencies through the constructor — type them as interfaces, not concrete classes
  • One responsibility per class — small classes with clear purposes are easy to test in isolation
  • Pure functions where possible — no side effects, same output for same input; trivial to test
  • Ports and adapters — keep I/O at the boundary; domain logic has no filesystem or network calls
  • Avoid global state — static fields and singletons make test order matter

Anti-patterns ✗

public class EnergyPriceService {
    // Anti-pattern 1: hardwired concrete dependency
    private EnergyPriceApi api =
        new HttpPriceAdapter(); // can't swap

    public boolean shouldCharge() {
        // Anti-pattern 2: new inside method
        HttpClient client = HttpClient.newHttpClient();

        // Anti-pattern 3: direct System.out
        System.out.println("Checking price...");

        // Anti-pattern 4: static/global state
        return GlobalConfig.get("threshold") > 0.10;
    }
}

Each makes it impossible to test without a live HTTP server or capturing stdout.
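A hedged rewrite that removes all four anti-patterns in one pass (the Logger interface and constructor shape are illustrative choices, not a prescribed HW design):

```java
// Both collaborators are interfaces, so tests can supply lambdas.
interface EnergyPriceApi { double currentPricePerKwh(); }
interface Logger { void log(String msg); }

class EnergyPriceService {
    private final EnergyPriceApi api;     // injected, typed as interface (fixes 1 and 2)
    private final Logger logger;          // injected sink instead of System.out (fixes 3)
    private final double threshold;       // passed in, no global lookup (fixes 4)

    EnergyPriceService(EnergyPriceApi api, Logger logger, double threshold) {
        this.api = api;
        this.logger = logger;
        this.threshold = threshold;
    }

    public boolean shouldCharge() {
        logger.log("Checking price...");
        return api.currentPricePerKwh() < threshold;
    }
}
```

In a test, every collaborator is now a one-line lambda: new EnergyPriceService(() -> 0.05, msg -> {}, 0.10). No HTTP server, no stdout capture.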

Section 4: Rapid-Fire Wrap

L12, L13, L20, L22–24 — key concepts, exam-relevant framing

Requirements Analysis and Domain Modeling (L9, L12)

Extractive vs. Participatory Requirements

                  Extractive                      Participatory
Design power      Analyst                         Shared with stakeholders
Stakeholder role  Subject (interviewed)           Partner (co-designs)
Risk              Analyst misunderstands domain   Slower; conflicting views

When participatory matters: complex domains where analysts lack expertise, or where user buy-in affects adoption.

Domain Modeling: captures real-world entities, relationships, and constraints. Vocabulary should match what stakeholders say — not RecipeDTO, just Recipe.

Representational Gap

The distance between how the domain looks in reality and how it's modeled in code.

Small gap → good: Recipe has ingredients, mirroring the real world. Changes to domain thinking translate naturally to code changes.

Large gap → bad: DataRecord with a String type field — domain structure lost in abstraction.

Goal: keep the gap small. Design choices that create unnecessary layers between the real domain and the code make the system harder to reason about and change.
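A minimal contrast, with hypothetical classes on both sides:

```java
import java.util.ArrayList;
import java.util.List;

// Small gap: code vocabulary mirrors the kitchen.
class Ingredient {
    final String name; final double qty; final String unit;
    Ingredient(String name, double qty, String unit) {
        this.name = name; this.qty = qty; this.unit = unit;
    }
}

class Recipe {
    final String title;
    final List<Ingredient> ingredients = new ArrayList<>();
    Recipe(String title) { this.title = title; }
}

// Large gap: the same information with the domain structure erased.
// "Is this a recipe or an ingredient?" now lives in a string field
// that the compiler cannot check.
class DataRecord {
    String type;      // "recipe" or "ingredient"
    String payload;   // everything else flattened into text
}
```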

GRASP Patterns: Assigning Responsibilities (L12, L17)

Information Expert

Give a responsibility to the class that has the information needed to fulfill it.

// Who calculates total calories?
// → Recipe has the ingredients
class Recipe {
    List<Ingredient> ingredients;

    // ✓ Recipe IS the expert
    public int totalCalories() {
        return ingredients.stream()
            .mapToInt(Ingredient::calories).sum();
    }
}
// ✗ Not RecipeService — it would
// need to reach into Recipe's data

Creator

B should create A when B: aggregates A, closely uses A, or has the initializing data for A.

// Who creates Ingredient instances?
// Recipe aggregates Ingredients
class Recipe {
    public void addIngredient(
            String name, double qty, String unit) {
        ingredients.add(
            new Ingredient(name, qty, unit)); // ✓
    }
}
// ✗ Not RecipeService — it doesn't
// aggregate Ingredients

Controller

Sits between UI and domain. Receives system events, delegates to domain objects. Contains no business logic.

class ScaleCommand implements Command {
    private final RecipeService service;

    public void execute(String[] args) {
        String name = args[1];
        double factor = Double.parseDouble(args[2]);

        // Just delegates — no logic here
        Recipe r = service.scale(name, factor);
        System.out.println(format(r));
    }
}

Thin controller: translate, delegate, done.

Networks and Distributed Systems (L20): The Eight Fallacies

The fallacies (what you assume wrongly):

  1. The network is reliable
  2. Latency is zero ← the chatty API killer
  3. Bandwidth is infinite
  4. The network is secure ← CIA triad
  5. Topology doesn't change
  6. There is one administrator ← Palo Alto Networks
  7. Transport cost is zero
  8. The network is homogeneous

Patterns that address them:

Problem             Pattern
Unreliable network  Retry + exponential backoff + jitter
Retry idempotency   Idempotency key
Cascading failure   Circuit breaker
Partial success     Graceful degradation
Too many calls      Chunky vs. chatty APIs

The visceral number: 100 API calls × 100ms latency = 10 seconds. Chatty APIs don't scale.
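The first two rows of the table can be sketched together. The attempt limit, base delay, and jitter range below are illustrative choices, and anything retried this way must be idempotent, since the call may have succeeded before the response was lost:

```java
import java.util.Random;
import java.util.function.Supplier;

// Retry with exponential backoff plus jitter: 100ms, 200ms, 400ms, ...
// each with a random 0–49ms added so synchronized clients don't stampede.
class Retry {
    static <T> T withBackoff(Supplier<T> op, int maxAttempts)
            throws InterruptedException {
        Random jitter = new Random();
        long baseDelayMs = 100;
        for (int attempt = 1; ; attempt++) {
            try {
                return op.get();
            } catch (RuntimeException e) {
                if (attempt == maxAttempts) throw e;          // give up
                long delay = (baseDelayMs << (attempt - 1))   // exponential
                           + jitter.nextInt(50);              // jitter
                Thread.sleep(delay);
            }
        }
    }
}
```

A circuit breaker is the complementary pattern: where retry keeps trying, the breaker stops trying once failures persist, so a struggling service isn't hammered into a cascading failure.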

Teams, AI and OSS

Teams (L22)

  • Brooks' Law: Adding people to a late project makes it later — each new person adds O(n) communication paths
  • Conway's Law: Systems mirror team structure — teams that own technical layers produce technical partitions; teams that own features produce domain partitions
  • HRT: Humility, Respect, Trust — the behavioral foundation; team project failures are almost always collaboration, not technical skill

AI Best Practices (L13, syllabus)

AI is appropriate when: generating boilerplate you understand, exploring unfamiliar APIs, asking for explanations, speeding up tasks you could do yourself.

AI crosses the line when: you submit code you cannot explain, you let AI make architectural decisions without understanding them, or you use it to bypass the learning the assignment targets.

The test: Can you walk a TA through every line of your submission and explain why it's there? If not, you've used AI in a way that undermines your own learning — and that will show on the exam.

OSS (L23)

  • Dependency risk: One implementation line → 10 JARs from 4 organizations you're now trusting
  • Licensing: GPL propagates (copyleft) — including a GPL library may require your project to become GPL. MIT/Apache do not propagate.
  • Log4Shell: A logging library let attacker-controlled log messages trigger remote code execution. Transitive dependencies carry full security risk.

Exam 2: What to Expect Wednesday

Details

Date             Wednesday, March 18 — class time, same room
Format           Written; one cover sheet (same as Exam 1)
Length           More questions than Exam 1 — budget your time
Scope            L9–L23 (Domain Modeling through Open Source)
Question style   Conceptual reasoning, identify issues in code, evaluate tradeoffs
Not on the exam  Labeling test doubles by exact name (spy/fake/stub/mock)
Headphones       Not permitted — earplugs available on request