Architectural Styles: From Hexagons to Monoliths (AM)

Opening Survey
- How's Assignment 4 going?
- Do you know how to use the Java debugger in VS Code?
- What's one question you have?
New policy: I'm leaving the poll open for students who arrive late, but they must still be in the room (or attending remotely by invitation) to complete it.

Text espertus to 22333 if the URL isn't working for you.
Announcements
Team Formation Survey Released!
- Starting Week 10: teams of 4 for CookYourBooks GUI project
- Tell us your preferences + availability
- Due Friday 2/26 @ 11:59 PM
- Complete the Survey →
HW4 Due Thursday Night
- Deadline is GitHub Action time, not submission time
Booking TA Video Calls
- Lucia (evenings and weekends)
- New: Many TAs
- Links are on Oakland Canvas
Facade Design Pattern
A simple interface to a complex implementation.

Facade and Hexagonal Architecture in Assignment 4
CS 3100: Program Design and Implementation II
Lecture 19: Architectural Styles — From Hexagons to Monoliths
©2026 Jonathan Bell & Ellen Spertus, CC-BY-SA
Learning Objectives
After this lecture, you will be able to:
- Define quality attributes that architectural styles affect: maintainability, scalability, deployability, fault tolerance, and more
- Distinguish between architectural styles and architectural patterns
- Recognize and compare architectural styles like Hexagonal, Layered, Pipelined, and Monolithic
- Explain the tradeoffs of monoliths, modular monoliths, and microservices
- Analyze how architectural choices affect quality attributes differently for specific scenarios
Important framing: You are NOT expected to become master architects by the end of this lecture. The goal is to understand systems that use these styles and reason about how architectural decisions impact quality attributes. When you encounter a hexagonal or layered architecture in the wild, you'll be able to read it — not necessarily design it from scratch.
How Do We Organize Our Code?
This is the question at the heart of every architectural decision — from your first class project to production systems serving millions of users. Every pattern and style we study today is an answer to this question.

The Origin of Spaghetti Code: goto
The most popular hobbyist language in the 1970s was BASIC.
10 PRINT "Choose an option:"
20 PRINT "1. Say Hello"
30 PRINT "2. Say Goodbye"
40 PRINT "3. Exit"
50 INPUT CHOICE
60 IF CHOICE = 1 THEN GOTO 100
70 IF CHOICE = 2 THEN GOTO 120
80 IF CHOICE = 3 THEN GOTO 140
90 GOTO 10
100 PRINT "Hello!"
110 GOTO 10
120 PRINT "Goodbye!"
130 GOTO 10
140 END
How many paths lead to line 10?
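Structured programming replaces those jumps with loops and conditionals. A minimal sketch of the same menu in Java (class and method names are ours, not from the slide):

```java
import java.util.Scanner;

// The BASIC menu above, rewritten with structured control flow:
// one loop, one structured exit point, no goto.
public class Menu {
    // Pure helper: maps a menu choice to its message (null for anything else).
    static String respond(int choice) {
        return switch (choice) {
            case 1 -> "Hello!";
            case 2 -> "Goodbye!";
            default -> null;
        };
    }

    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        while (true) {                        // the single "path back to line 10"
            System.out.println("Choose an option:");
            System.out.println("1. Say Hello");
            System.out.println("2. Say Goodbye");
            System.out.println("3. Exit");
            int choice = in.nextInt();
            if (choice == 3) break;           // one exit, no GOTO maze
            String msg = respond(choice);
            if (msg != null) System.out.println(msg);
        }
    }
}
```

Instead of counting GOTO targets, the control flow is visible in the shape of the code: one loop, one exit.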
Spaghetti Code
Dangers of Unstructured Programming

"It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration."
Photo: Hamilton Richards, CC BY-SA 3.0 ● Quote: EWD498 (1975)
xkcd on goto
![XKCD cartoon 292: goto
[Sideways view of Cueball sitting at computer, thinking.]
Cueball: I could restructure the program's flow
Cueball: or use one little 'GOTO' instead.
[Cueball starts typing.]
Cueball: Eh, screw good practice. How bad can it be?
Text on computer: goto main_sub3;
*Compile*
[We now have a view from behind Cueball. Cueball looks at the computer.]
[A raptor jumps into the panel, pushing Cueball off his chair.]](/cs3100-public-resources/img/lectures/web/l19-xkcd-goto-bad.png)
xkcd #292 "goto" by Randall Munroe, CC BY-NC 2.5
Very Large Codebases

source: information is beautiful
From Hundreds to Millions of Lines
Languages & Paradigms
- structured programming
- object-oriented programming
- type systems and generics
Tools & Infrastructure
- compilers, linkers, IDEs
- version control software
- package managers
- static analysis
Patterns and Principles
- design patterns
- SOLID principles
- architecture styles and patterns
Quality Practices
- automated unit & integration testing
- test-driven development (TDD)
- code review
- continuous integration
🏛️ Architectural Patterns — today's topic: how do we structure entire systems?
The Starting Point: Monoliths
A system deployed as a single unit
Single Deployment
One build. One deploy. One running process.
Shared Memory
Components talk via method calls, not networks.
Unified Codebase
One repo, one build system, one language.

Calling an External API Doesn't Change the Architecture
// You deploy ONE jar — this is still a monolith
public class GradeNotifier {
    private final DiscordClient discordClient;

    public void notify(Student s, Grade g) {
        // Calling an external API doesn't change your deployment topology
        discordClient.sendMessage(s.getDiscordId(), g.toString());
    }
}
The API is a dependency, not part of your deployment. You don't build it, deploy it, or scale it — just like a database.
What "Single Deployment Unit" Really Means
In a monolith, everything ships together. One git push, one CI pipeline, one artifact, one deploy.
This means:
- Fix a typo in the grading UI? Redeploy the whole app.
- Update a dependency for course management? Redeploy the whole app.
- Every change goes through the same pipeline.
The consequence:
- You can't deploy grading fixes without also deploying whatever else changed
- A broken test in course management blocks a grading deploy
- Deployment frequency is limited by the slowest-moving part
"Shared Memory" Means Communication through Objects and Method Calls
// All in one process, one memory space
Course course = courseRepo.findById(courseId);
Assignment assignment = course.createAssignment(name, dueDate);
// Method call
Grader grader = GraderFactory.buildFor(assignment, config);

// One database transaction wraps everything
transaction(() -> {
    assignment.setGrader(grader);
    for (Registration reg : course.getRegistrations()) {
        notificationService.notifyNewAssignment(reg, assignment);
    }
}); // If ANY step fails, ALL steps roll back
What you get for free:
- Speed: Method calls take nanoseconds
- Reliability: If you call a method, it runs
- Transactions: Wrap multiple operations in one atomic unit — all succeed or all roll back
- Objects by reference: Pass an Assignment object around; everyone sees the same data
- Debugging: Set a breakpoint, step through the entire flow in one debugger session
⚠️ When components move to different processes or different machines, every one of these guarantees disappears.
What "Unified Codebase" Really Means
All code is in the same repository, with the same build system and the same dependency tree.
Benefits of one codebase
- Refactoring is easy: rename a method and your IDE finds every caller
- Code sharing is free: import any class from any package
- Consistency: one style guide, one set of linters, one test framework
- Onboarding: new developers learn ONE system, not twelve
Costs of one codebase
- Merge conflicts: unless modularity is strongly enforced, it's easy to step on each other's toes
- Slow builds: the whole app rebuilds even for small changes
- Technology lock-in: The whole system uses one language, one framework
- Blast radius: a bad commit affects everything
Monolith: Quality Attribute Profile
Where Monoliths Excel
- Simplicity ★★★ — One thing to build, test, deploy, monitor
- Responsiveness ★★★ — In-process calls are orders of magnitude faster than network calls
- Testability ★★☆ — One environment to set up, but may need full infrastructure
- Changeability ★★☆ — IDE refactoring across entire codebase, but changes may ripple
Where Monoliths Struggle
- Scalability ★☆☆ — Must scale the entire app; heavy work competes with everything else
- Deployability ★☆☆ — Every deploy is all-or-nothing; a bug anywhere blocks everything
- Fault Tolerance ★☆☆ — A crash in any component takes down the entire process
- Modularity ★☆☆ — Boundaries are conventions, not enforcement (without discipline → Big Ball of Mud)
Notice the modularity problem: without enforced boundaries, monoliths tend toward the Big Ball of Mud we saw earlier. Is there a way to get monolith simplicity WITH better modularity?
The Modular Monolith: Best of Both Worlds?
A modular monolith keeps the simplicity of a single deployment but adds enforced internal boundaries. All the operational simplicity of a monolith, with intentional structure to prevent the Big Ball of Mud.
Simplicity ★★★
Still one deploy, one build
Modularity ★★★
Enforced internal boundaries
Changeability ★★☆
Changes isolated to modules
Scalability ★☆☆
Still one process to scale
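What "enforced boundaries" can look like in code, sketched with hypothetical module names: each module exposes one entry point, and other modules depend only on that interface, never on internals. (In practice, tools like the Java module system or ArchUnit enforce this at build time.)

```java
// A minimal modular-monolith sketch; module and class names are illustrative.
// Each module exposes ONE public surface; internals stay hidden.

// --- grading module: the public surface ---
interface GradingModule {
    int scoreSubmission(String submissionId);
    static GradingModule create() { return new GradingModuleImpl(); }
}

// --- grading module: internal (would be package-private in a real layout) ---
class GradingModuleImpl implements GradingModule {
    public int scoreSubmission(String submissionId) {
        return runTests(submissionId);       // internal detail, not reachable outside
    }
    private int runTests(String submissionId) { return 100; }
}

// --- courses module depends only on the grading interface ---
class CourseModule {
    private final GradingModule grading;
    CourseModule(GradingModule grading) { this.grading = grading; }

    String gradeReport(String submissionId) {
        // Still an in-process method call: monolith speed, module boundaries.
        return "score=" + grading.scoreSubmission(submissionId);
    }
}
```

In a real codebase these would live in separate packages (e.g. `grading/` and `courses/`), with only the interface public, so "reaching into" another module fails to compile.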
Organizing Modules: Technical or Domain Partitioning?
Whether you're building a modular monolith or just organizing packages, there's a fundamental choice: group code by technical role or by domain capability?
Technical Partitioning
autograder/
├── controllers/
│   ├── CourseController.java
│   └── SubmissionController.java
├── services/
│   └── GradingService.java
├── repositories/
│   ├── CourseRepository.java
│   ├── GradeRepository.java
│   └── SubmissionRepository.java
└── models/
    ├── Submission.java
    ├── Course.java
    └── Grade.java
Organized by technical role — controllers together, models together
Domain Partitioning
autograder/
├── grading/
│   ├── GradingService.java
│   ├── Grade.java
│   └── GradeRepository.java
├── submissions/
│   ├── SubmissionController.java
│   ├── Submission.java
│   └── SubmissionRepository.java
└── courses/
    ├── CourseController.java
    ├── Course.java
    └── CourseRepository.java
Organized by business capability — everything for grading together
Partitioning Tradeoffs
| Question | Technical | Domain |
|---|---|---|
| "How does Java grading work?" | Jump between controllers/, services/, models/ | Everything in grading/java/ |
| Adding Rust support? | New files in controllers/, services/, models/ | All changes in grading/rust/ |
| Team independence? | Every feature touches multiple packages | "Rust support team" owns their vertical slice |
Connection to L18 heuristics:
- Actor Ownership → Domain partitioning aligns with who owns what
- Rate of Change → Domain partitioning groups things that change together; technical partitioning scatters them
- The "right" choice depends on your team structure and change patterns
Conway's Law (L22 preview): Organizations design systems that mirror their communication structure. If you have a "frontend team" and "backend team," you'll get technical partitioning. If you have a "grading team" and "courses team," you'll get domain partitioning.
Discord is not a Monolithic System
When you send a message on Discord, you're interacting with many independent systems (microservices), not a monolith.
It is also a distributed system, running on multiple computers.
Deployment Topology: Two Axes
single unit
multiple units
One codebase, one process, one machine.
Most class projects
Multiple processes on one machine, communicating over localhost.
Common in local development
Same codebase on multiple servers behind a load balancer.
Horizontal scaling — still a monolith architecturally
Independent services across multiple servers.
Discord, Netflix, Amazon
Monolith vs. microservices is about how code is divided.
Local vs. distributed is about where it runs.
Quality

Quality Attributes: The "-ilities"
We judge an architecture by quality attributes, measurable properties important to stakeholders.
Poll: What would it mean for Pawtograder to be scalable?

Specifying Quality Attributes: Scenarios
As you know from assignments, vague requirements are dangerous. We need something more specific than "the system should be scalable".
We use a common form — a quality attribute scenario — to make every attribute testable and unambiguous.

Why "Scalable" Isn't Specific Enough
Imagine someone says: "The grading system should be scalable." What does that actually mean? Consider three very different situations Pawtograder might face:
Scenario A: Spike
200 students submit all at once at 11:59pm deadline
What happens?
- 200 parallel GitHub Actions runners spin up
- Each builds, tests, parses, scores independently
- All are accepted before the deadline and complete in ~30 minutes
- API receives 200 results simultaneously
Scenario B: Sustained
1800 students submit over 1 hour during an exam
What happens?
- ~30 new runners start every minute
- ~30 complete every minute (steady state)
- Load is spread over time
- API handles ~30 results/min continuously
Scenario C: Trickle
1800 students submit over 24 hours for a homework
What happens?
- ~1-2 runners at any time
- Never more than a handful concurrent
- Minimal system stress
- API barely notices
All three scenarios involve "grading many submissions" — but they place completely different demands on the system. A system that handles Scenario C perfectly might completely fail at Scenario A. This is why we need a vocabulary for being specific about what we mean.
New Attributes: Deployability, Responsiveness, Fault Tolerance
These three attributes become critical when comparing monoliths vs. distributed systems. We'll explore them in depth in L20 — for now, just the vocabulary:
Deployability
How easily can we release changes to production?
- High: Independent deploys, small blast radius, quick rollback
- Low: All-or-nothing deployment, coordinate across teams
Monoliths: one deploy = everything. Distributed: deploy pieces independently.
Responsiveness
How quickly does the system respond to requests?
- High: In-process calls (nanoseconds), shared memory
- Lower: Network calls (milliseconds), serialization overhead
Monoliths win here — no network between components.
Fault Tolerance
How does the system behave when something fails?
- High: Failed component doesn't crash others, graceful degradation
- Low: One crash = entire system down
Distributed systems isolate failures — but introduce NEW failure modes.
Poll: Which attributes most contribute to maintainability?
Maintainability refers to how easily a system can be changed over time.
A. simplicity
B. modularity
C. testability
D. maintainability
E. changeability
F. deployability
G. scalability
H. responsiveness
I. fault tolerance

Umbrella Attribute: Maintainability
Maintainability is the umbrella term for how easily a system can be changed over time. It decomposes into the other attributes:
This is the "big picture" attribute — we'll see how styles affect it throughout this lecture and L20.
When someone says "this system is hard to maintain," ask: Is it hard to understand (simplicity)? Hard to change safely (changeability)? Hard to test (testability)? Hard to modify without affecting other parts (modularity)? Decomposing "maintainability" gives us precision.
New Attribute: Scalability
How does the system handle growth in load, data, or users?
Vertical Scaling

Horizontal Scaling

Key insight: Scalability isn't just "can the system handle more load?" It's "does the rest of the system stay responsive while handling that load?" We'll return to this distinction in L20-21.
Poll: What attributes most conflict with simplicity?
A. modularity
B. testability
C. maintainability
D. changeability
E. deployability
F. scalability
G. responsiveness
H. fault tolerance

Quality Attributes Trade Off Against Each Other
Here's the uncomfortable truth: you can't maximize every quality attribute. They're in tension with each other.
Simplicity vs. Modularity
Adding interfaces and abstractions → more modular → less simple
We saw this in L7-L8: ISP means more interfaces to understand.
Simplicity vs. Scalability
Horizontal scaling requires workers, queues, coordination → more complexity
A monolith is simple but hits a ceiling. Distributed systems scale but aren't simple.
Deployability vs. Responsiveness
Independent services → independent deploys → network calls → more latency
High deployability often means more service boundaries = more network overhead.
Fault Tolerance vs. Simplicity
Isolation requires boundaries → boundaries add complexity
A monolith is simpler but a single point of failure. Distributed systems isolate failures but add coordination complexity.
Architecture is choosing which attributes matter most for your system. If someone tells you their architecture maximizes everything, they're selling something.
Architectural Styles vs. Patterns
Architects use two terms that sound similar but mean different things:
| | Style | Pattern |
|---|---|---|
| Architecture | Monolith, Microservices, Layered | Repository, Service Locator |
| Design | Object-oriented, Functional | Strategy, Builder, Adapter |
Styles describe the overall shape; patterns are reusable solutions you apply within that shape.
Architectural Patterns and Styles
Deployment style — how the system is deployed and divided
Monolithic
Single process, shared memory
Microservices
Separate processes, network communication
Internal organization style — how code is organized within a deployment unit
Hexagonal
Isolates core logic from external dependencies
Layered
Organizes code into horizontal tiers
Pipelined
Chains processing steps sequentially
Recap: Hexagonal Architecture (from L16)
Layered Architecture
The layered architecture organizes code horizontally with distinct responsibilities. The classic formulation has four layers:
The key rule: dependencies flow downward.
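The downward-dependency rule can be sketched in a few lines of Java (class names are ours, purely illustrative): each layer knows only the layer below it, and the domain layer knows nothing above it.

```java
// Dependencies flow downward: Presentation -> Application -> Domain.
// Hypothetical names; the domain layer has no upward references.

// Domain layer: core rules, no knowledge of UI or database.
class Grade {
    final int points;
    Grade(int points) { this.points = points; }
    boolean isPassing() { return points >= 60; }   // a stable business rule
}

// Application layer: orchestrates use cases, depends only on Domain.
class GradingService {
    Grade gradeSubmission(int earned) { return new Grade(earned); }
}

// Presentation layer: formats for display, depends only on Application.
class GradeView {
    private final GradingService service = new GradingService();

    String render(int earned) {
        Grade g = service.gradeSubmission(earned);
        return g.points + (g.isPassing() ? " (pass)" : " (fail)");
    }
}
```

Note what's absent: `Grade` never imports or mentions `GradeView`, so a UI rewrite cannot break the business rule.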
Layered Architecture Emerges from Heuristics
The same L18 heuristics that led to hexagonal architecture can also lead to layered — depending on what they reveal:
Rate of Change → Layers Separate Volatility
| Layer | Stability |
|---|---|
| Presentation | Changes often (UI redesigns) |
| Application | Changes moderately (new workflows) |
| Domain | Changes rarely (core rules are stable) |
| Infrastructure | Changes when tech changes |
Dependency direction protects stable layers from volatile ones
Actor Ownership → Layers Map to Roles
| Actor | Primary Layer |
|---|---|
| UI/UX designer | Presentation |
| Product owner | Application (use cases) |
| Domain expert | Domain |
| DevOps | Infrastructure |
Different expertise naturally falls into different layers
When do heuristics lead to layers vs. hexagons?
- Layered emerges when responsibilities stack vertically (UI → logic → data) and teams map to technical roles
- Hexagonal emerges when the domain needs multiple entry points (web, CLI, tests) and multiple exit points (DB, API, files)
- They're not mutually exclusive — many systems exhibit BOTH perspectives
Layered Architecture: Quality Attributes
Why organize into layers? Because it directly serves several quality attributes:
Separation of Concerns
Each layer has one job. The Domain layer doesn't know if it's being called from a web UI, a CLI, or a test harness. You can swap your database without touching business rules.
Testability
Test each layer in isolation. Domain logic can be tested with no database. Application logic can use stub infrastructure. Presentation can be tested against a mock service layer.
Replaceability
Change your UI framework without rewriting business logic. Swap PostgreSQL for MongoDB at the Infrastructure layer. Add a REST API alongside your web UI — both call the same Application layer.
The pitfall: Changes that span layers — adding a new field that flows from the UI through services into the database — require touching every layer. This "layer tax" is the cost of separation. It's worth it for large systems, but can feel heavy for small ones.
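The testability claim above is concrete: domain logic with no infrastructure dependency can be unit-tested directly, no database required. A small sketch with a hypothetical rule:

```java
// Domain-layer class with zero infrastructure dependencies:
// testable with plain assertions, no database or web framework.
class LatePenalty {
    // Illustrative business rule: 10% off per day late, capped at 50%.
    static int apply(int rawScore, int daysLate) {
        int penalty = Math.min(daysLate * 10, 50);
        return rawScore * (100 - penalty) / 100;
    }
}
```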
Layers in Pawtograder
Layered vs. Hexagonal: Both separate domain from infrastructure. Layered emphasizes horizontal strata; Hexagonal emphasizes dependency direction (domain at center). You'll often see both lenses applied to the same system.
Pipelined Architecture (Pipes and Filters)
Data flows through stages. Each stage transforms its input into output for the next. Pawtograder's grading pipeline is a perfect example:
Benefits
- Each stage testable independently
- Adding mutation testing = insert a stage between "Run Tests" and "Grade Units"
- Classic examples: compilers, Unix pipes, ETL
Constraints
- Works best when data flows one direction
- Awkward for interactive/bidirectional workflows
- Cross-cutting concerns may touch every stage
Testability ★★★
Each stage tested in isolation
Changeability ★★★
Insert, remove, or reorder stages
Simplicity ★★☆
Linear flow, easy to follow
Fault Tolerance ★☆☆
Stage failure stops the pipeline
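The stage-chaining idea can be sketched as pure functions composed with `Function.andThen` (stage names and the rubric are illustrative, not Pawtograder's actual classes):

```java
import java.util.List;
import java.util.function.Function;

// Each stage is a pure function: input -> output. Composing them yields
// the pipeline; inserting a new stage is just another andThen().
public class GradingPipeline {
    record BuildResult(boolean ok) {}
    record TestResult(String name, boolean passed) {}

    static BuildResult build(String projectDir) { return new BuildResult(true); }

    static List<TestResult> runTests(BuildResult b) {
        return b.ok() ? List.of(new TestResult("testAdd", true),
                                new TestResult("testDiv", false))
                      : List.of();
    }

    static int score(List<TestResult> results) {
        // Illustrative rubric: 50 points per passing test
        return 50 * (int) results.stream().filter(TestResult::passed).count();
    }

    public static void main(String[] args) {
        Function<String, Integer> pipeline =
            ((Function<String, BuildResult>) GradingPipeline::build)
                .andThen(GradingPipeline::runTests)
                .andThen(GradingPipeline::score);
        System.out.println(pipeline.apply("submission/"));   // prints 50
    }
}
```

Because each stage is a plain function, reordering or inserting stages is a one-line change to the composition.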
Pipelined Architecture Emerges from Heuristics
When does applying heuristics lead to a pipeline? When the problem has a natural transformation flow:
Rate of Change → Stage Independence
| Stage | When It Changes |
|---|---|
| File overlay | Rarely (mechanism stable) |
| Build runner | Per language (Gradle → Cargo) |
| Test runner | Rarely (JUnit is JUnit) |
| Report parsers | When tool versions change |
| Scoring logic | When rubric structure changes |
Each stage changes for different reasons — natural seams for separation
Testability → Stage-Level Testing
testOverlay() → known input → expected output
testBuild() → sample project → BuildResult
testParser() → sample XML → TestResult[]
testScoring() → TestResult[] → GradedPart[]
Each stage is a pure function: input → output. Perfect for unit testing.
When does a pipeline emerge?
Apply heuristics and ask: "Does data flow one direction? Is each transformation independently testable? Do stages change for different reasons?"
If yes → pipeline structure emerges naturally.
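One such stage test, sketched with hypothetical types: the scoring stage consumes parsed test results and produces a score, so testing it needs no build, no test runner, and no network.

```java
import java.util.List;

// Hypothetical stage types: scoring as a pure function over parsed results,
// unit-testable with hand-built inputs (no build, no JUnit runner, no API).
public class ScoringStage {
    record TestResult(String name, boolean passed, int points) {}

    static int scoreTests(List<TestResult> results) {
        return results.stream()
                .filter(TestResult::passed)
                .mapToInt(TestResult::points)
                .sum();
    }
}
```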
Styles Emerge from Heuristics

But Wait: Where Do Those Answers Come From?
The heuristics ask great questions: What changes at different rates? Who owns what? Where do we need test seams? But you can only answer those questions if you understand your domain.
Without Domain Understanding
"We might need to support multiple databases..."
- Adds abstraction layers now
- Increases cognitive overhead
- Makes simple queries harder to optimize
- Pays flexibility tax EVERY DAY
Building for imaginary changes = real complexity for fantasy benefits
With Domain Understanding (L12)
Pawtograder's domain analysis revealed:
- Config files change weekly → declarative YAML
- Grading logic changes monthly → isolate in adapters
- Database vendor change unlikely → couple tightly, it's fine
Invest flexibility where change actually happens
The L18 heuristics are powerful tools — but they only give good answers when applied to real domain knowledge, not hypothetical scenarios.
How Pawtograder's Architecture Emerged
We didn't start by saying "let's use hexagonal architecture." We started with domain understanding, applied heuristics, and the structure emerged:
1. Domain Understanding (L12)
| Question | Answer |
|---|---|
| What changes most? | Config files (weekly) |
| What's stable? | API contract, core grading logic |
| Who are the actors? | Instructors, action maintainers, sysadmins |
| What's unlikely to change? | Database vendor, GitHub Actions platform |
2. Heuristics Applied (L18)
| Heuristic | Result |
|---|---|
| Rate of Change | Config ↔ Action ↔ API boundaries |
| Actor Ownership | Instructor owns config, maintainer owns action |
| ISP | Narrow ports: Builder, FeedbackAPI |
| Testability | Domain testable without real API |
3. The Pattern That Emerged → Hexagonal + Pipelined
- Domain core (grading logic) at center — testable without infrastructure
- Ports define contracts — Builder, Parser, FeedbackAPI
- Adapters implement ports — GradleBuilder, SurefireParser, SupabaseAPI
- Data flows through a pipeline — overlay → build → test → parse → grade → submit
We call it "hexagonal" because that's what the community named this shape. We DISCOVERED it; we didn't CHOOSE it.
The Complete Picture: L12 → L18 → L19
Architecture isn't about picking from a menu. It's a discovery process:
The style names (hexagonal, layered, pipelined) are vocabulary for communication — not a catalog to shop from. You discover architecture by understanding your domain and asking the right questions.
The Two Big Families: Monolith vs. Microservices
At the highest level, systems fall into two categories based on how they're deployed:
Monolith — One Deployment Unit
- All code lives in a single codebase
- Deployed as a single artifact (JAR, binary, container)
- Components communicate via method calls
- One database, one process, one server (typically)
Everything we've discussed so far — layered, hexagonal, modular monolith, pipelined — are variations within this family.
Microservices — Many Deployment Units
- Code split across separate services
- Each service deployed independently
- Services communicate via network (HTTP, messages)
- Separate databases, processes, servers
This is where industry has been moving — and it's the focus of L20-L21.
Poll: What type of architecture is Pawtograder?
A. Monolith
B. Microservices
C. I have no idea

Pawtograder Architecture (Simplified)
The Network Changes Everything
In a monolith, method calls are instant, reliable, and traceable. Over a network:
Monolith (Bottlenose)
submission.computeGrade();
// ✅ Executes in nanoseconds
// ✅ Always succeeds or throws
// ✅ Full stack trace on error
// ✅ Wrapped in a DB transaction
Distributed (Pawtograder)
feedbackApi.submit(submissionId, feedback);
// ⚠️ Might take ms... or seconds... or ∞
// ⚠️ Server might be down or overloaded
// ⚠️ Request succeeds, response lost
// ⚠️ Retry = accidentally grade twice?
// ⚠️ No cross-system transactions
Pawtograder's SupabaseAPI actually implements retry logic with exponential backoff — complexity that simply doesn't exist in a monolith.
Pawtograder, like most microservice architectures, is distributed — and distributed systems are hard.
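Retry with exponential backoff can be sketched in a few lines (our illustration, not Pawtograder's actual SupabaseAPI code):

```java
import java.util.concurrent.Callable;

// Retry a flaky network call, doubling the wait after each failure.
// Illustrative sketch, not Pawtograder's actual implementation.
public class Retry {
    static <T> T withBackoff(Callable<T> call, int maxAttempts, long baseDelayMs)
            throws Exception {
        for (int attempt = 1; ; attempt++) {
            try {
                return call.call();                        // success: done
            } catch (Exception e) {
                if (attempt == maxAttempts) throw e;       // out of attempts: give up
                long delay = baseDelayMs << (attempt - 1); // base, 2x, 4x, ...
                Thread.sleep(delay);
            }
        }
    }
}
```

Note what this still can't fix: if the request succeeded but the response was lost, retrying may record the grade twice. That is one of the new failure modes distribution introduces, and a preview of L20.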
Poll: Where are Monoliths Superior to Microservices?
A. Simplicity
B. Modularity
C. Testability
D. Changeability
E. Responsiveness
F. Scalability

Monolith vs. Microservices: Quality Attribute Tradeoffs
Any time you're deciding between these styles, this is the tension:
| Quality Attribute | Monolith | Microservices |
|---|---|---|
| Simplicity | ★★★ One process, one deploy, one mental model | ★☆☆ Many services, network complexity, distributed debugging |
| Modularity | ★☆☆ Boundaries are conventions — easy to violate | ★★★ Boundaries enforced by network — can't cheat |
| Testability | ★★☆ One environment, but need full infrastructure | ★★★ Each service testable in isolation |
| Deployability | ★☆☆ All-or-nothing deploy, slowest part limits frequency | ★★★ Independent deploys per service |
| Changeability | ★★☆ IDE refactoring is powerful; but changes can ripple | ★★☆ Isolated changes easy; cross-service changes expensive |
| Responsiveness | ★★★ In-process calls: nanoseconds | ★☆☆ Network calls: milliseconds, retries, timeouts |
| Scalability | ★☆☆ Vertical only — heavy work competes with everything else | ★★★ Horizontal — offload work to independent services |
| Fault Tolerance | ★☆☆ One crash takes down everything | ★★☆ Failures can be isolated (but new failure modes) |
Notice the pattern: almost every row is a direct tradeoff. What monoliths win on simplicity and responsiveness, microservices win on modularity and scalability. This is why "which is better?" is the wrong question.
Key Takeaway: Architecture Is Discovered, Not Chosen
Domain understanding (L12) + heuristics (L18) + style recognition (L19) = a complete approach:
The Heuristics (L18)
Questions that reveal structure
- Rate of Change: What changes together? What changes independently?
- Actor: Who owns what? Whose changes should stay isolated?
- ISP: What does each client actually need?
- Testability: Where do we need seams for testing?
The Emergent Styles (L19)
Patterns that have names
- Hexagonal: Domain isolation with swappable adapters
- Layered: Stacked responsibilities, downward dependencies
- Pipelined: Transformation flow, stage independence
- Monolithic: Single deployment, shared memory
- Modular Monolith: Enforced boundaries in one deployment
The Process:
Understand the domain (L12) → Apply heuristics (L18) → Boundaries emerge → Recognize the style (L19) → Communicate it
You don't pick "hexagonal" from a menu. You discover it by understanding what actually matters.
Looking Forward: Where These Ideas Go Next
| Concept from Today | Where It Goes |
|---|---|
| "The Network Changes Everything" | L20: Fallacies of Distributed Computing, client-server architecture, security across trust boundaries |
| Monolith vs. Microservices | L20-21: Distributed architecture styles, when to break the monolith, serverless |
| Quality Attribute Tradeoffs | L21: How platform constraints (serverless, containers) shape architecture — like GitHub Actions shaped Pawtograder |
| Heuristics → Emergent Styles | L22: Conway's Law — team structure is another heuristic that shapes architecture |
We opened with "how do we organize our code?" — and now you have styles, quality attributes, and tradeoff vocabulary to answer it. Next: what happens when your boundaries cross a network.
Bonus Slide
