# Architecture
## System Context

```mermaid
C4Context
    title System Context — Magpie within joel.holmes.haus

    Person(admin, "Admin", "Manages resources, tags, and labels via the UI")

    Boundary(platform, "joel.holmes.haus Platform") {
        System(ui, "joel.holmes.haus", "Go-app WASM admin SPA")
        System(magpie, "Magpie", "Central resource index — stores and fans out resource metadata")
        System(lynx, "Lynx", "Web archiving — publishes website resources")
        System(owl, "Owl", "Library — publishes book and paper resources")
        System(greyseal, "Grey Seal", "RAG service — consumes resources for conversation context")
        System(shrike, "Shrike", "Search — receives resource events for keyword indexing")
    }

    SystemDb(postgres, "PostgreSQL", "Resources, tags, labels, resource_tags, resource_labels")
    SystemQueue(kafka, "Kafka", "magpie.v1.Resource (inbound + outbound) · greyseal.v1.Resource")

    Rel(admin, ui, "Uses")
    Rel(ui, magpie, "ConnectRPC")
    Rel(lynx, magpie, "Publishes magpie.v1.Resource")
    Rel(owl, magpie, "Publishes magpie.v1.Resource")
    Rel(magpie, postgres, "Reads / writes")
    Rel(kafka, magpie, "Inbound magpie.v1.Resource")
    Rel(magpie, kafka, "Outbound greyseal.v1.Resource")
    Rel(kafka, greyseal, "greyseal.v1.Resource")
    Rel(kafka, shrike, "magpie.v1.Resource")
```
## Container Diagram

```mermaid
C4Container
    title Magpie — Internal Containers

    Boundary(magpie, "Magpie") {
        Container(api, "cmd/api", "Go / ConnectRPC h2c :9000", "5 domain handlers: Resource · Tag · Label · ResourceTag · ResourceLabel")
        Container(worker, "cmd/worker", "Go / Kafka", "5 domain consumers; ResourceConsumer also republishes to the grey-seal topic")
        Container(ui, "cmd/ui", "Go-app WASM :8000", "Browser SPA — resource list, tag/label management")
        Container(resourceSvc, "resource.Service", "Go", "Create · List · Get · Delete")
        Container(tagSvc, "tag.Service", "Go", "CRUD for tags")
        Container(labelSvc, "label.Service", "Go", "CRUD for labels")
        Container(rtSvc, "resource_tag.Service", "Go", "Manage tag associations")
        Container(rlSvc, "resource_label.Service", "Go", "Manage label associations")
        ContainerDb(resourceRepo, "ResourceRepo", "PostgreSQL / squirrel", "resources table (ON CONFLICT uuid DO NOTHING)")
        ContainerDb(tagRepo, "TagRepo + LabelRepo + join repos", "PostgreSQL / squirrel", "tags · labels · resource_tags · resource_labels")
    }

    SystemDb(postgres, "PostgreSQL", "")
    SystemQueue(kafka, "Kafka", "consumer group app-1")

    Rel(api, resourceSvc, "Delegates · publishes Kafka event on Create")
    Rel(api, tagSvc, "Delegates")
    Rel(api, labelSvc, "Delegates")
    Rel(api, rtSvc, "Delegates")
    Rel(api, rlSvc, "Delegates")
    Rel(kafka, worker, "magpie.v1.Resource")
    Rel(worker, resourceSvc, "Upsert resource")
    Rel(worker, kafka, "Publishes greyseal.v1.Resource")
    Rel(resourceSvc, resourceRepo, "CRUD")
    Rel(tagSvc, tagRepo, "CRUD")
    Rel(resourceRepo, postgres, "SQL")
    Rel(tagRepo, postgres, "SQL")
```
## System context (data flow)

Magpie sits at the centre of a wider knowledge-management ecosystem. Upstream producers (e.g. Owl, Lynx, or direct API callers) submit resources; Magpie stores them and fans events out to downstream consumers, including grey-seal (RAG / vector DB), shrike (search index), and others.

```mermaid
graph TD
    Owl["Owl"] -->|"magpie.v1.Resource"| API
    Lynx["Lynx"] -->|"magpie.v1.Resource"| API
    CLI["CLI / direct upload"] -->|"ConnectRPC"| API
    API["Magpie API :9000\nConnectRPC · HTTP/2"]
    API --> PG[("PostgreSQL")]
    API --> Kafka[("Kafka / Redpanda")]
    Kafka -->|"greyseal.v1.Resource"| GreySeal["Grey Seal\nRAG / vector"]
    Kafka -->|"magpie.v1.Resource"| Shrike["Shrike\nsearch index"]
    Kafka -->|"..."| Others["other consumers"]
```
## Process model
### api (cmd/api/main.go)

- Initialises a PostgreSQL connection and runs Goose migrations on startup.
- Creates a Kafka producer connection.
- For each of the five domains (Resource, Tag, Label, ResourceTag, ResourceLabel) it wires:
  - a `repo.*Repo` (SQL repository),
  - a domain service (`lib/magpie/<domain>/service.go`),
  - a gRPC adapter (`lib/magpie/<domain>/grpc/service.go`) that also holds a `KafkaProducer`,
  - and registers a ConnectRPC handler on the HTTP mux.
- Wraps every handler with CORS middleware.
- Serves HTTP/2 without TLS via `h2c`.
### worker (cmd/worker/main.go)

- Initialises a PostgreSQL connection (migrations skipped).
- Creates Kafka consumer and producer connections.
- For each domain it wires a domain service and starts a Kafka consumer goroutine. `ResourceConsumer` additionally re-publishes the resource to the grey-seal topic.
- All consumers share consumer group `app-1`.
### ui (cmd/ui/main.go)

- A WebAssembly application compiled with `go-app` v9.
- Registers client-side routes with regex UUID matchers.
- Pages call `lib/ui/api/client.go`, which constructs ConnectRPC HTTP clients pointed at `API_URL` (default `http://localhost:9000`).
- Serves static assets and the WASM bootstrap on `:8000`.
## Internal library layout

```
lib/
  magpie/
    resource/          # Resource domain
    tag/               # Tag domain
    label/             # Label domain
    resource_tag/      # ResourceTag join domain
    resource_label/    # ResourceLabel join domain
    repo/              # PostgreSQL repositories (squirrel query builder)
  schemas/magpie/v1/   # Generated Protobuf Go types + ConnectRPC stubs
  ui/                  # go-app WebAssembly UI
```
Each domain package follows the same five-file pattern:

| File | Purpose |
|---|---|
| `interface.go` | `*Service` interface + compile-time assertions |
| `model.go` | `*Repository` interface + compile-time assertions |
| `service.go` | `*service` struct implementing the service via `archaea/base` generics |
| `consumer.go` | Kafka consumer struct + `ConvertProto` deserialiser + `run()` goroutine |
| `grpc/service.go` | ConnectRPC handler embedding `base.GenericGRPCService` |
## Event flow

API path (write):

1. A client calls `CreateResource` via ConnectRPC.
2. `base.GenericGRPCService.Create` calls `service.Create`, which writes to Postgres.
3. `base.GenericGRPCService.Create` then calls `producer.Publish`, emitting a `magpiev1.Resource` Protobuf message to Kafka.

Worker path (consume):

1. `kafka.Consumer` reads from the `magpie.v1.Resource` topic.
2. `ConvertProto` unmarshals the Protobuf bytes.
3. `ResourceConsumer.run()` calls the resource domain's `service.Create` to upsert the record into Postgres (`ON CONFLICT (uuid) DO NOTHING`).
4. `ResourceConsumer.run()` maps the source enum and calls `resourcePublisher.Publish` with a `greysealv1.Resource` to the grey-seal topic.
## Database migrations

Migrations are embedded in the binary (`//go:embed migrations/*.sql`) and run automatically by the API process on startup using Goose. The worker process skips migrations.
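The embedded-migration startup path typically looks like the sketch below, assuming pressly/goose v3, a `migrations/` directory of `.sql` files next to `main.go`, and the `lib/pq` driver; the variable and DSN names are illustrative.

```go
package main

import (
	"database/sql"
	"embed"
	"log"

	_ "github.com/lib/pq" // assumed PostgreSQL driver
	"github.com/pressly/goose/v3"
)

//go:embed migrations/*.sql
var embedMigrations embed.FS

// migrate applies any pending migrations from the embedded filesystem,
// so the binary never depends on .sql files being present on disk.
func migrate(db *sql.DB) error {
	goose.SetBaseFS(embedMigrations)
	if err := goose.SetDialect("postgres"); err != nil {
		return err
	}
	return goose.Up(db, "migrations")
}

func main() {
	db, err := sql.Open("postgres", "postgres://localhost/magpie?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	if err := migrate(db); err != nil {
		log.Fatal(err)
	}
}
```

Running `goose.Up` only in the API process (and skipping it in the worker) avoids two replicas racing to apply the same migration on deploy.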