A pixel art road stretching left to right. Far left: Big Ball of Mud swamp. Off-ramps lead to Layered, Hexagonal, Monolith, and Modular Monolith architectures. An inset radar chart shows nine quality attributes at varying levels with the question: Which tradeoffs can you live with? The road continues into fog: Distributed Systems Ahead, Here Be Dragons. Tagline: How Do We Organize Our Code?

CS 3100: Program Design and Implementation II

Lecture 19: Architectural Styles — From Hexagons to Monoliths

©2026 Jonathan Bell, CC-BY-SA

Announcements

Team Formation Survey Released!

  • Starting Week 10: teams of 4 for CookYourBooks GUI project
  • Tell us your preferences + availability
  • Due Friday 2/26 @ 11:59 PM
  • Complete the Survey →

HW4 Due Thursday Night

Learning Objectives

After this lecture, you will be able to:

  1. Define quality attributes that architectural styles affect: maintainability, scalability, deployability, fault tolerance, and more
  2. Distinguish between architectural styles and architectural patterns
  3. Recognize and compare architectural styles like Hexagonal, Layered, Pipelined, and Monolithic
  4. Explain the tradeoffs of monoliths, modular monoliths, and microservices
  5. Analyze how architectural choices affect quality attributes differently for specific scenarios

Important framing: You are NOT expected to become master architects by the end of this lecture. The goal is to understand systems that use these styles and reason about how architectural decisions impact quality attributes. When you encounter a hexagonal or layered architecture in the wild, you'll be able to read it — not necessarily design it from scratch.

How Do We Organize Our Code?

This is the question at the heart of every architectural decision — from your first class project to production systems serving millions of users. Every pattern and style we study today is an answer to this question.

Big Ball of Mud — a tangled sprawl of buildings connected by spaghetti wires, labeled with mixed concerns like 'Grading + User Auth + Email'. A developer stands in the middle asking 'Where does this go?' Warning signs everywhere: Global State, Duplicated Logic, Circular Dependencies. Quote from Foote & Yoder 1997.

The Monolith: Where Most Projects Start

Here's a secret: you've been building monoliths your entire programming career. Every Java project and every Python script you've written in a single codebase is a monolith. The term only becomes meaningful in contrast to systems that aren't monoliths.

A monolith is a system deployed as a single unit. All functionality — user interface, business logic, data access — lives in one codebase, compiles into one artifact, and runs in one process. This is where we start — and for many successful systems, where we stay.

Single Deployment

One build. One deploy. One running process.

Shared Memory

Components talk via method calls, not networks.

Unified Codebase

One repo, one build system, one language.

What "Single Deployment Unit" Really Means

In a monolith, everything ships together. One git push, one CI pipeline, one artifact, one deploy.

This means:

  • Fix a typo in the grading UI? Redeploy the whole app.
  • Update a dependency for course management? Redeploy the whole app.
  • Every change goes through the same pipeline.

The consequence:

  • You can't deploy grading fixes without also deploying whatever else changed
  • A broken test in course management blocks a grading deploy
  • Deployment frequency is limited by the slowest-moving part

What "Shared Memory" Really Means

In a monolith, components communicate by calling methods on objects that live in the same process. This is so natural you've never had to think about it:

What a monolith feels like (illustrative)

// All in one process, one memory space
Course course = courseRepo.findById(courseId);
Assignment assignment = course.createAssignment(name, dueDate);

// This is a method call — nanoseconds, guaranteed
Grader grader = GraderFactory.buildFor(assignment, config);

// One database transaction wraps everything
transaction(() -> {
    assignment.setGrader(grader);
    for (Registration reg : course.getRegistrations())
        notificationService.notifyNewAssignment(reg, assignment);
});
// If ANY step fails, ALL steps roll back

What you get for free:

  • Speed: Method calls take nanoseconds
  • Reliability: If you call a method, it runs
  • Transactions: Wrap multiple operations in one atomic unit — all succeed or all roll back
  • Objects by reference: Pass an Assignment object around; everyone sees the same data
  • Debugging: Set a breakpoint, step through the entire flow in one debugger session

These guarantees are invisible until you lose them. When components move to different processes or different machines, every one of these guarantees disappears.

What "Unified Codebase" Really Means

In a monolith, everyone works in the same repository, with the same language, the same build system, and the same dependency tree.

Benefits of one codebase

  • Refactoring is easy: rename a method and your IDE finds every caller
  • Code sharing is free: import any class from any package
  • Consistency: one style guide, one set of linters, one test framework
  • Onboarding: new developers learn ONE system, not twelve

Costs of one codebase

  • Merge conflicts: without strong enforcement of modularity, it's easy to step on each other's toes
  • Slow builds: the whole app rebuilds even for small changes
  • Technology lock-in: The whole system uses one language, one framework
  • Blast radius: a bad commit affects everything

Monolith: Quality Attribute Profile

Where Monoliths Excel

  • Simplicity ★★★ — One thing to build, test, deploy, monitor
  • Responsiveness ★★★ — In-process calls are orders of magnitude faster than network calls
  • Testability ★★☆ — One environment to set up, but may need full infrastructure
  • Changeability ★★☆ — IDE refactoring across entire codebase, but changes may ripple

Where Monoliths Struggle

  • Scalability ★☆☆ — Must scale the entire app; heavy work competes with everything else
  • Deployability ★☆☆ — Every deploy is all-or-nothing; a bug anywhere blocks everything
  • Fault Tolerance ★☆☆ — A crash in any component takes down the entire process
  • Modularity ★☆☆ — Boundaries are conventions, not enforcement (without discipline → Big Ball of Mud)

Notice the modularity problem: without enforced boundaries, monoliths tend toward the Big Ball of Mud we saw earlier. Is there a way to get monolith simplicity WITH better modularity?

The Modular Monolith: Best of Both Worlds?

A modular monolith keeps the simplicity of a single deployment but adds enforced internal boundaries. All the operational simplicity of a monolith, with intentional structure to prevent the Big Ball of Mud.

Simplicity ★★★

Still one deploy, one build

Modularity ★★★

Enforced internal boundaries

Changeability ★★☆

Changes isolated to modules

Scalability ★☆☆

Still one process to scale
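What does an "enforced internal boundary" look like in code? One low-tech approximation — a sketch, not a prescription — is to give each module a single public interface and keep everything else behind it. Real projects often back this up with package visibility, `module-info.java`, or architecture tests (e.g., ArchUnit). All names below are hypothetical:

```java
// Illustrative modular-monolith boundary: one narrow API per module.
public class ModularMonolithSketch {
    // The grading module's ONLY public surface — other modules depend on this...
    interface GradingApi {
        int gradeSubmission(String submissionId);
    }

    // ...while the implementation stays internal to the grading module
    static class GradingModule implements GradingApi {
        public int gradeSubmission(String submissionId) {
            return runTests(submissionId) ? 100 : 0; // internal detail, free to change
        }
        private boolean runTests(String submissionId) {
            return !submissionId.isEmpty(); // stand-in for real test execution
        }
    }

    // The courses module holds a reference to the interface, never to GradingModule
    static class CourseService {
        private final GradingApi grading;
        CourseService(GradingApi grading) { this.grading = grading; }
        int finalizeGrade(String submissionId) {
            return grading.gradeSubmission(submissionId);
        }
    }
}
```

Everything still compiles into one artifact and runs in one process — the boundary is logical, not physical, which is exactly what distinguishes a modular monolith from microservices.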

Organizing Modules: Technical or Domain Partitioning?

Whether you're building a modular monolith or just organizing packages, there's a fundamental choice: group code by technical role or by domain capability?

Technical Partitioning

autograder/
├── controllers/
│   └── SubmissionController.java
├── services/
│   └── GradingService.java
├── repositories/
│   └── SubmissionRepository.java
├── models/
│   ├── Submission.java
│   └── Grade.java
└── views/
    └── submission_result.html

Organized by technical role — controllers together, models together

Domain Partitioning

autograder/
├── grading/
│   ├── GradingService.java
│   ├── Grade.java
│   └── GradeRepository.java
├── submissions/
│   ├── SubmissionController.java
│   ├── Submission.java
│   └── SubmissionRepository.java
└── courses/
    ├── CourseController.java
    ├── Course.java
    └── CourseRepository.java

Organized by business capability — everything for grading together

Partitioning Tradeoffs

| Question | Technical | Domain |
|---|---|---|
| "How does Java grading work?" | Jump between controllers/, services/, models/ | Everything in grading/java/ |
| Adding Rust support? | New files in controllers/, services/, models/ | All changes in grading/rust/ |
| Team independence? | Every feature touches multiple packages | "Rust support team" owns their vertical slice |

Connection to L18 heuristics:

  • Actor Ownership → Domain partitioning aligns with who owns what
  • Rate of Change → Technical partitioning separates things that change together — a warning sign under this heuristic
  • The "right" choice depends on your team structure and change patterns

Conway's Law (L22 preview): Organizations design systems that mirror their communication structure. If you have a "frontend team" and "backend team," you'll get technical partitioning. If you have a "grading team" and "courses team," you'll get domain partitioning.

Quality Attributes: The "-ilities"

How do we decide if an architecture is "good"? Not by how it looks — by how it behaves. Quality attributes are the measurable properties of a system that stakeholders care about.

You've already met some of these. Today we'll define a full vocabulary and use it throughout the lecture to evaluate every architectural style we encounter.

Simplicity · Modularity · Testability

Maintainability · Changeability · Deployability

Scalability · Responsiveness · Fault Tolerance

Specifying Quality Attributes: Scenarios

In L9, we learned that vague requirements are dangerous. Quality attributes have the same problem — "the system should be scalable" is as useless as "the system should be fair."

We use a common form — a quality attribute scenario — to make every attribute testable and unambiguous.

Six-part quality attribute scenario framework flowing left to right: Source (who causes it) → Stimulus (what event) → Environment (under what conditions) → Artifact (what part of the system) → Response (what should happen) → Measure (how do we know it's good enough). Callout: One framework describes every quality attribute.

Why "Scalable" Isn't Specific Enough

Imagine someone says: "The grading system should be scalable." What does that actually mean? Consider three very different situations Pawtograder might face:

Scenario A: Spike

200 students submit all at once at 11:59pm deadline

What happens?

  • 200 parallel GitHub Actions runners spin up
  • Each builds, tests, parses, scores independently
  • All are accepted before the deadline, complete in ~30 minutes
  • API receives 200 results simultaneously

Scenario B: Sustained

1800 students submit over 1 hour during an exam

What happens?

  • ~30 new runners start every minute
  • ~30 complete every minute (steady state)
  • Load is spread over time
  • API handles ~30 results/min continuously

Scenario C: Trickle

1800 students submit over 24 hours for a homework

What happens?

  • ~1-2 runners at any time
  • Never more than a handful concurrent
  • Minimal system stress
  • API barely notices

All three scenarios involve "grading many submissions" — but they place completely different demands on the system. A system that handles Scenario C perfectly might completely fail at Scenario A. This is why we need a vocabulary for being specific about what we mean.
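A quick back-of-the-envelope check makes the difference concrete. Little's Law says average concurrency = arrival rate × time in the system. The ~1.5-minute per-submission grading time below is an assumption for illustration, not a measured Pawtograder number:

```java
// Little's Law sanity check for the three load scenarios.
// Assumption (illustrative): each grading run takes ~1.5 minutes.
public class LoadSketch {
    // L = lambda * W: average concurrent runners = submissions/minute * minutes per run
    static double concurrentRunners(double submissions, double windowMinutes,
                                    double gradeMinutes) {
        double arrivalsPerMinute = submissions / windowMinutes;
        return arrivalsPerMinute * gradeMinutes;
    }
}
```

With these assumptions, Scenario C (1800 submissions over 24 hours) needs fewer than 2 concurrent runners, while Scenario B (1800 over one hour) needs roughly 45 — same total work, an order-of-magnitude difference in concurrent demand, and Scenario A's spike needs all ~200 at once.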

Review: Quality Attributes You Already Know

You've already encountered these quality attributes in earlier lectures. They're foundational — and they apply at the architectural level too:

Simplicity

How easy is the system to understand and reason about? Fewer moving parts, fewer deployment units, fewer technologies.

L7: Low coupling and high cohesion make code easier to understand. Readable code is understandable code. These principles scale up to architecture.

Modularity

How well is the system divided into independent, interchangeable components? High cohesion within modules, low coupling between them.

L7: Coupling and cohesion at the class level. Now we scale it up to system components and deployment boundaries.

Testability

How easily can we verify the system behaves correctly? Can components be tested in isolation? Do we need real infrastructure to run tests?

L16: Hexagonal architecture was motivated by this — domain logic testable without real databases or APIs.

We've already seen a tension between simplicity and modularity: adding interfaces and abstractions increases modularity but decreases simplicity. This tension continues at the architectural level.

New Attributes: Deployability, Responsiveness, Fault Tolerance

These three attributes become critical when comparing monoliths vs. distributed systems. We'll explore them in depth in L20 — for now, just the vocabulary:

Deployability

How easily can we release changes to production?

  • High: Independent deploys, small blast radius, quick rollback
  • Low: All-or-nothing deployment, coordinate across teams

Monoliths: one deploy = everything. Distributed: deploy pieces independently.

Responsiveness

How quickly does the system respond to requests?

  • High: In-process calls (nanoseconds), shared memory
  • Lower: Network calls (milliseconds), serialization overhead

Monoliths win here — no network between components.

Fault Tolerance

How does the system behave when something fails?

  • High: Failed component doesn't crash others, graceful degradation
  • Low: One crash = entire system down

Distributed systems isolate failures — but introduce NEW failure modes.

Notice the pattern: Monoliths tend to have better responsiveness but worse deployability and fault tolerance. Distributed systems flip this. These tradeoffs are central to L20.

Umbrella Attribute: Maintainability

Maintainability is the umbrella term for how easily a system can be changed over time. It decomposes into attributes we've already defined — simplicity, changeability, testability, and modularity.

This is the "big picture" attribute — we'll see how styles affect it throughout this lecture and L20.

When someone says "this system is hard to maintain," ask: Is it hard to understand (simplicity)? Hard to change safely (changeability)? Hard to test (testability)? Hard to modify without affecting other parts (modularity)? Decomposing "maintainability" gives us precision.

New Attribute: Scalability

How does the system handle growth in load, data, or users?

Scaling strategies vary by architecture — we'll explore this in depth in L20.

Vertical Scaling

"Buy a bigger server" — more CPU, more RAM.

  • Simple to implement
  • No code changes required
  • Has a ceiling (biggest server available)
  • Heavy work STILL competes for shared resources

Horizontal Scaling

"Add more instances" — offload work to independent workers.

  • No theoretical ceiling
  • Heavy work happens elsewhere
  • Core system stays responsive
  • Requires architecture that supports it

Key insight: Scalability isn't just "can the system handle more load?" It's "does the rest of the system stay responsive while handling that load?" We'll return to this distinction in L20-21.

Quality Attributes Trade Off Against Each Other

Here's the uncomfortable truth: you can't maximize every quality attribute. They're in tension with each other.

Simplicity vs. Modularity

Adding interfaces and abstractions → more modular → less simple

We saw this in L7-L8: ISP means more interfaces to understand.

Simplicity vs. Scalability

Horizontal scaling requires workers, queues, coordination → more complexity

A monolith is simple but hits a ceiling. Distributed systems scale but aren't simple.

Deployability vs. Responsiveness

Independent services → independent deploys → network calls → more latency

High deployability often means more service boundaries = more network overhead.

Fault Tolerance vs. Simplicity

Isolation requires boundaries → boundaries add complexity

A monolith is simpler but a single point of failure. Distributed systems isolate failures but add coordination complexity.

Architecture is choosing which attributes matter most for your system. If someone tells you their architecture maximizes everything, they're selling something.

Architectural Styles vs. Patterns

Architects use two terms that sound similar but mean different things:

Architectural Style — The Shape

A bundle of characteristics about a system:

  • How components are organized
  • How they communicate
  • How the system is deployed
  • Where data lives

"Microservices" or "monolith" — a name for a whole worldview

Architectural Pattern — The Solution

A contextualized solution to a recurring problem:

  • Service Locator for dependency resolution
  • Repository for data access abstraction
  • Strategy for extensible behavior

Patterns are used WITHIN a style

Styles describe the overall shape; patterns are reusable solutions you apply within that shape.

Continuing from L18: Two Systems, Same Problem

In L18, we identified component boundaries for Pawtograder and compared them to Bottlenose. Both solve the same problem — grade student code — but make different architectural choices.

Pawtograder

  • "Thick action" architecture
  • Grading Action normalizes results
  • Sends through a narrow API
  • Leverages GitHub Actions infrastructure

Bottlenose

  • Web application monolith
  • Platform-driven grading logic
  • Delegates execution to Orca (Docker)
  • All-in-one deployment

Both must: accept submissions, run tests, compute scores, report feedback. Today we'll see how architectural styles help us understand WHY they made different choices — and what those choices cost.

Recap: Hexagonal Architecture (from L16)

You already know one key style from L16: Hexagonal Architecture (Ports and Adapters). Let's quickly review its core idea before introducing more styles:

Domain Core

Business logic, rules, entities — knows nothing about infrastructure

Ports

Interfaces that define how the domain interacts with the outside world

Adapters

Implementations that connect ports to real technologies
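The three pieces above fit together in a few lines of Java. The names here (FeedbackPort, Grader, InMemoryFeedback) are illustrative, not Pawtograder's actual classes:

```java
// Hexagonal (Ports and Adapters) sketch — hypothetical names throughout.
import java.util.ArrayList;
import java.util.List;

public class HexagonalSketch {
    // Port: an interface owned by the domain, describing what it needs
    interface FeedbackPort {
        void submit(String submissionId, int score);
    }

    // Domain core: depends only on the port, knows nothing about HTTP or databases
    static class Grader {
        private final FeedbackPort feedback;
        Grader(FeedbackPort feedback) { this.feedback = feedback; }
        int grade(String submissionId, int testsPassed, int testsTotal) {
            int score = (100 * testsPassed) / testsTotal;
            feedback.submit(submissionId, score);
            return score;
        }
    }

    // Adapter: here a test double; a production adapter would wrap a real API
    static class InMemoryFeedback implements FeedbackPort {
        final List<String> submitted = new ArrayList<>();
        public void submit(String id, int score) { submitted.add(id + "=" + score); }
    }
}
```

Because the domain sees only the port, the same Grader runs unchanged against an in-memory adapter in tests and a network-backed adapter in production — the testability payoff from L16.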

Layered Architecture

The layered architecture organizes code into horizontal strata, each with a distinct responsibility. The classic formulation has four layers: Presentation, Application, Domain, and Infrastructure.

The key rule: dependencies flow downward. Presentation can call Application, Application can call Domain, Domain can call Infrastructure — but never the reverse.
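The downward-only rule can be made concrete in a small sketch. These are hypothetical names, not either running example's real code — the point is only that each class holds a reference to the layer below it, never above:

```java
// Layered sketch: dependencies flow downward only (hypothetical names).
public class LayeredSketch {
    // Infrastructure layer: persistence details
    static class SubmissionStore {
        String load(String id) { return "code-for-" + id; }
    }

    // Domain layer: business rules; calls downward into infrastructure
    static class GradingDomain {
        private final SubmissionStore store;
        GradingDomain(SubmissionStore store) { this.store = store; }
        int grade(String id) { return store.load(id).length(); } // stand-in rule
    }

    // Application layer: orchestrates a use case; calls only the domain
    static class GradeUseCase {
        private final GradingDomain domain;
        GradeUseCase(GradingDomain domain) { this.domain = domain; }
        String run(String id) { return "Grade: " + domain.grade(id); }
    }

    // Presentation layer: formats for the user; calls only the application layer
    static class Cli {
        static String handle(String id) {
            return new GradeUseCase(new GradingDomain(new SubmissionStore())).run(id);
        }
    }
}
```

Reversing any arrow — say, SubmissionStore calling back into Cli — would break the rule and re-couple the stable layers to the volatile ones.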

Layered Architecture Emerges from Heuristics

The same L18 heuristics that led to hexagonal architecture can also lead to layered — depending on what they reveal:

Rate of Change → Layers Separate Volatility

| Layer | Stability |
|---|---|
| Presentation | Changes often (UI redesigns) |
| Application | Changes moderately (new workflows) |
| Domain | Changes rarely (core rules are stable) |
| Infrastructure | Changes when tech changes |

Dependency direction protects stable layers from volatile ones

Actor Ownership → Layers Map to Roles

| Actor | Primary Layer |
|---|---|
| UI/UX designer | Presentation |
| Product owner | Application (use cases) |
| Domain expert | Domain |
| DevOps | Infrastructure |

Different expertise naturally falls into different layers

When do heuristics lead to layers vs. hexagons?

  • Layered emerges when responsibilities stack vertically (UI → logic → data) and teams map to technical roles
  • Hexagonal emerges when the domain needs multiple entry points (web, CLI, tests) and multiple exit points (DB, API, files)
  • They're not mutually exclusive — many systems exhibit BOTH perspectives

Layered Architecture: Quality Attributes

Why organize into layers? Because it directly serves several quality attributes:

Separation of Concerns

Each layer has one job. The Domain layer doesn't know if it's being called from a web UI, a CLI, or a test harness. You can swap your database without touching business rules.

Testability

Test each layer in isolation. Domain logic can be tested with no database. Application logic can use stub infrastructure. Presentation can be tested against a mock service layer.

Replaceability

Change your UI framework without rewriting business logic. Swap PostgreSQL for MongoDB at the Infrastructure layer. Add a REST API alongside your web UI — both call the same Application layer.

The pitfall: Changes that span layers — adding a new field that flows from the UI through services into the database — require touching every layer. This "layer tax" is the cost of separation. It's worth it for large systems, but can feel heavy for small ones.

Layers in Our Running Examples

Both Pawtograder and Bottlenose exhibit layers — with different technologies at each level:

Pawtograder's Grading Action

Bottlenose (Monolith)

Layered vs. Hexagonal: Both separate domain from infrastructure. Layered emphasizes horizontal strata; Hexagonal emphasizes dependency direction (domain at center). You'll often see both lenses applied to the same system.

Pipelined Architecture (Pipes and Filters)

Data flows through stages. Each stage transforms its input into output for the next. Pawtograder's grading pipeline is a perfect example:

Benefits

  • Each stage testable independently
  • Adding mutation testing = insert a stage between "Run Tests" and "Grade Units"
  • Classic examples: compilers, Unix pipes, ETL

Constraints

  • Works best when data flows one direction
  • Awkward for interactive/bidirectional workflows
  • Cross-cutting concerns may touch every stage

Testability ★★★

Each stage tested in isolation

Changeability ★★★

Insert, remove, or reorder stages

Simplicity ★★☆

Linear flow, easy to follow

Fault Tolerance ★☆☆

Stage failure stops the pipeline

Pipelined Architecture Emerges from Heuristics

When does applying heuristics lead to a pipeline? When the problem has a natural transformation flow:

Rate of Change → Stage Independence

| Stage | When It Changes |
|---|---|
| File overlay | Rarely (mechanism stable) |
| Build runner | Per language (Gradle → Cargo) |
| Test runner | Rarely (JUnit is JUnit) |
| Report parsers | When tool versions change |
| Scoring logic | When rubric structure changes |

Each stage changes for different reasons — natural seams for separation

Testability → Stage-Level Testing

testOverlay() → known input    → expected output
testBuild()   → sample project → BuildResult
testParser()  → sample XML     → TestResult[]
testScoring() → TestResult[]   → GradedPart[]

Each stage is a pure function: input → output. Perfect for unit testing.
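The "pure function per stage" idea can be sketched directly. The types and rubric here are hypothetical (a flat 10 points per passing test), not Pawtograder's actual parsers:

```java
// Pipeline sketch: each stage is a pure function, composed in order.
// Types, report format, and scoring rule are all illustrative assumptions.
import java.util.List;

public class PipelineSketch {
    record TestResult(String name, boolean passed) {}
    record GradedPart(String name, int points) {}

    // Stage: parse raw report lines ("name:PASS" / "name:FAIL") into structured results
    static List<TestResult> parse(List<String> reportLines) {
        return reportLines.stream()
            .map(l -> new TestResult(l.split(":")[0], l.endsWith("PASS")))
            .toList();
    }

    // Stage: score results against a flat rubric (assumed: 10 points per passing test)
    static List<GradedPart> score(List<TestResult> results) {
        return results.stream()
            .map(r -> new GradedPart(r.name(), r.passed() ? 10 : 0))
            .toList();
    }

    // Compose stages in order; adding mutation testing would mean one more call in this chain
    static List<GradedPart> run(List<String> reportLines) {
        return score(parse(reportLines));
    }
}
```

Each stage is trivially unit-testable with a handful of literal inputs, and inserting a stage is a one-line change to `run` — exactly the changeability the star ratings above claim.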

When does a pipeline emerge?

Apply heuristics and ask: "Does data flow one direction? Is each transformation independently testable? Do stages change for different reasons?"

If yes → pipeline structure emerges naturally.

Styles Emerge from Heuristics

We've now seen three styles: Hexagonal, Layered, and Pipelined. Here's the key insight: these styles aren't arbitrary choices — they emerge naturally when you apply the L18 heuristics consistently.

Diagram showing how heuristics lead to different styles: Rate of Change influences all styles; Testability and ISP strongly lead to Hexagonal; Actor Ownership strongly leads to Layered; Testability and Rate of Change lead to Pipelined. Callout: Different problem characteristics lead to different styles.

But Wait: Where Do Those Answers Come From?

The heuristics ask great questions: What changes at different rates? Who owns what? Where do we need test seams? But you can only answer those questions if you understand your domain.

Without Domain Understanding

"We might need to support multiple databases..."

  • Adds abstraction layers now
  • Increases cognitive overhead
  • Makes simple queries harder to optimize
  • Pays flexibility tax EVERY DAY

Building for imaginary changes = real complexity for fantasy benefits

With Domain Understanding (L12)

Pawtograder's domain analysis revealed:

  • Config files change weekly → declarative YAML
  • Grading logic changes monthly → isolate in adapters
  • Database vendor change unlikely → couple tightly, it's fine

Invest flexibility where change actually happens

The L18 heuristics are powerful tools — but they only give good answers when applied to real domain knowledge, not hypothetical scenarios.

How Pawtograder's Architecture Emerged

We didn't start by saying "let's use hexagonal architecture." We started with domain understanding, applied heuristics, and the structure emerged:

1. Domain Understanding (L12)

| Question | Answer |
|---|---|
| What changes most? | Config files (weekly) |
| What's stable? | API contract, core grading logic |
| Who are the actors? | Instructors, action maintainers, sysadmins |
| What's unlikely to change? | Database vendor, GitHub Actions platform |

2. Heuristics Applied (L18)

| Heuristic | Result |
|---|---|
| Rate of Change | Config ↔ Action ↔ API boundaries |
| Actor Ownership | Instructor owns config, maintainer owns action |
| ISP | Narrow ports: Builder, FeedbackAPI |
| Testability | Domain testable without real API |

3. The Pattern That Emerged → Hexagonal + Pipelined

  • Domain core (grading logic) at center — testable without infrastructure
  • Ports define contracts — Builder, Parser, FeedbackAPI
  • Adapters implement ports — GradleBuilder, SurefireParser, SupabaseAPI
  • Data flows through a pipeline — overlay → build → test → parse → grade → submit

We call it "hexagonal" because that's what the community named this shape. We DISCOVERED it; we didn't CHOOSE it.

The Complete Picture: L12 → L18 → L19

Architecture isn't about picking from a menu. It's a discovery process:

The style names (hexagonal, layered, pipelined) are vocabulary for communication — not a catalog to shop from. You discover architecture by understanding your domain and asking the right questions.

The Two Big Families: Monolith vs. Microservices

At the highest level, systems fall into two categories based on how they're deployed:

Monolith — One Deployment Unit

  • All code lives in a single codebase
  • Deployed as a single artifact (JAR, binary, container)
  • Components communicate via method calls
  • One database, one process, one server (typically)

Everything we've discussed so far — layered, hexagonal, modular monolith, pipelined — are variations within this family.

Microservices — Many Deployment Units

  • Code split across separate services
  • Each service deployed independently
  • Services communicate via network (HTTP, messages)
  • Separate databases, processes, servers

This is where industry has been moving — and it's the focus of L20-L21.

The Network Changes Everything

In a monolith, method calls are instant, reliable, and traceable. Over a network:

Monolith (Bottlenose)

submission.computeGrade();
// ✅ Executes in nanoseconds
// ✅ Always succeeds or throws
// ✅ Full stack trace on error
// ✅ Wrapped in a DB transaction

Distributed (Pawtograder)

feedbackApi.submit(submissionId, feedback);
// ⚠️ Might take ms... or seconds... or ∞
// ⚠️ Server might be down or overloaded
// ⚠️ Request succeeds, response lost
// ⚠️ Retry = accidentally grade twice?
// ⚠️ No cross-system transactions

Pawtograder's SupabaseAPI actually implements retry logic with exponential backoff — complexity that simply doesn't exist in a monolith. "Microservices" really means "distributed systems" — and distributed systems are hard.
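Here is a generic sketch of that retry-with-exponential-backoff pattern — the shape of the idea, not Pawtograder's actual implementation:

```java
// Generic retry with exponential backoff (illustrative, not the project's real code).
import java.util.function.Supplier;

public class RetrySketch {
    // Retries the call up to maxAttempts times, doubling the wait after each failure.
    static <T> T withRetry(Supplier<T> call, int maxAttempts, long baseDelayMs) {
        long delay = baseDelayMs;
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.get();
            } catch (RuntimeException e) {
                last = e; // remember the failure; maybe the next attempt succeeds
                if (attempt < maxAttempts) {
                    try {
                        Thread.sleep(delay); // back off before retrying
                    } catch (InterruptedException ie) {
                        Thread.currentThread().interrupt();
                    }
                    delay *= 2; // exponential: base, 2x, 4x, ...
                }
            }
        }
        throw last; // all attempts exhausted
    }
}
```

Notice what this sketch does NOT solve: if the request succeeded but the response was lost, retrying grades twice. Real systems pair retries with idempotency keys or deduplication — more of the complexity that a method call in a monolith never needed.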

Monolith vs. Microservices: Quality Attribute Tradeoffs

These are general tradeoffs — not specific to Pawtograder or Bottlenose. Any time you're deciding between these styles, this is the tension:

| Quality Attribute | Monolith | Microservices |
|---|---|---|
| Simplicity | ★★★ One process, one deploy, one mental model | ★☆☆ Many services, network complexity, distributed debugging |
| Modularity | ★☆☆ Boundaries are conventions — easy to violate | ★★★ Boundaries enforced by network — can't cheat |
| Testability | ★★☆ One environment, but need full infrastructure | ★★★ Each service testable in isolation |
| Deployability | ★☆☆ All-or-nothing deploy, slowest part limits frequency | ★★★ Independent deploys per service |
| Changeability | ★★☆ IDE refactoring is powerful, but changes can ripple | ★★☆ Isolated changes easy; cross-service changes expensive |
| Responsiveness | ★★★ In-process calls: nanoseconds | ★☆☆ Network calls: milliseconds, retries, timeouts |
| Scalability | ★☆☆ Vertical only — heavy work competes with everything else | ★★★ Horizontal — offload work to independent services |
| Fault Tolerance | ★☆☆ One crash takes down everything | ★★☆ Failures can be isolated (but new failure modes) |

Notice the pattern: almost every row is a direct tradeoff. What monoliths win on simplicity and responsiveness, microservices win on modularity and scalability. This is why "which is better?" is the wrong question.

Applying Quality Attributes: Pawtograder vs. Bottlenose

Those were general tradeoffs. Now let's see how they play out concretely in our two running examples:

| Quality Attribute | Pawtograder | Bottlenose |
|---|---|---|
| Simplicity | Multiple services, HTTP boundaries | Single deployment (mostly) — easier to reason about |
| Modularity | High — hex arch, clean port/adapter boundaries | Lower — monolith couples concerns |
| Testability | Grading runs locally without infrastructure | Needs database, Orca for integration tests |
| Deployability | Action and API deploy independently | All-or-nothing deploy of core app |
| Changeability | Add a language = one adapter class | Add a language = changes across layers |
| Scalability | Heavy work offloaded to GitHub Actions runners; API stays responsive | All work in one process — grading eats CPU/DB connections, whole platform slows |
| Fault Tolerance | API down? Action retries. Action fails? API unaffected. | Bug in grading can affect course management |

Neither system "wins" — they optimize for different priorities. Pawtograder optimizes for modularity, testability, and scalability. Bottlenose optimizes for simplicity and responsiveness.

Comparing Maintainability: Change by Change

The same change hits these two architectures very differently:

| Change | Pawtograder Impact | Bottlenose Impact |
|---|---|---|
| Add Rust language support | Add one RustCargoBuilder class; no API changes | Add RustGrader subclass + UI views + Docker image + registration |
| Change scoring calculation | Modify OverlayGrader; no API or config changes | Modify Submission.computeGrade(); affects all graders |
| Add new feedback format | Modify AutograderFeedback record; requires API coordination | Add fields to InlineComment; database migration |

The tradeoffs are real:

  • Pawtograder's "thick action, narrow API" isolates most changes to a single component
  • Bottlenose's monolith can optimize across components but changes ripple more widely
  • Neither is inherently better — the tradeoffs depend on which changes are most frequent and which teams own which components

The Tradeoffs Are Real

A balance scale comparing Pawtograder (hexagonal, distributed) and Bottlenose (monolith). Dashboard gauges show: Pawtograder wins on maintainability and scalability; Bottlenose wins on simplicity. Callout: No free lunch.

Key Takeaway: Architecture Is Discovered, Not Chosen

Domain understanding (L12) + heuristics (L18) + style recognition (L19) = a complete approach:

The Heuristics (L18)

Questions that reveal structure

  1. Rate of Change: What changes together? What changes independently?
  2. Actor: Who owns what? Whose changes should stay isolated?
  3. ISP: What does each client actually need?
  4. Testability: Where do we need seams for testing?

The Emergent Styles (L19)

Patterns that have names

  • Hexagonal: Domain isolation with swappable adapters
  • Layered: Stacked responsibilities, downward dependencies
  • Pipelined: Transformation flow, stage independence
  • Monolithic: Single deployment, shared memory
  • Modular Monolith: Enforced boundaries in one deployment

The Process:

Understand the domain (L12) → Apply heuristics (L18) → Boundaries emerge → Recognize the style (L19) → Communicate it

You don't pick "hexagonal" from a menu. You discover it by understanding what actually matters.

Looking Forward: Where These Ideas Go Next

| Concept from Today | Where It Goes |
|---|---|
| "The Network Changes Everything" | L20: Fallacies of Distributed Computing, client-server architecture, security across trust boundaries |
| Monolith vs. Microservices | L20-21: Distributed architecture styles, when to break the monolith, serverless |
| Quality Attribute Tradeoffs | L21: How platform constraints (serverless, containers) shape architecture — like GitHub Actions shaped Pawtograder |
| Heuristics → Emergent Styles | L22: Conway's Law — team structure is another heuristic that shapes architecture |

We opened with "how do we organize our code?" — and now you have styles, quality attributes, and tradeoff vocabulary to answer it. Next: what happens when your boundaries cross a network.