From 235cc41f68d332008f816835f264122980bddd44 Mon Sep 17 00:00:00 2001 From: Gabriel Radureau Date: Tue, 5 May 2026 10:42:35 +0200 Subject: [PATCH] =?UTF-8?q?=F0=9F=93=9D=20docs(adr):=20ADR-0028/0029/0030?= =?UTF-8?q?=20=E2=80=94=20passwordless=20auth=20+=20Mailpit=20+=20BDD=20em?= =?UTF-8?q?ail=20strategy=20(#58)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Co-authored-by: Gabriel Radureau Co-committed-by: Gabriel Radureau --- adr/0028-passwordless-auth-migration.md | 147 ++++++++++++++++++ adr/0029-email-infrastructure-mailpit.md | 142 +++++++++++++++++ adr/0030-bdd-email-parallel-strategy.md | 187 +++++++++++++++++++++++ adr/README.md | 3 + 4 files changed, 479 insertions(+) create mode 100644 adr/0028-passwordless-auth-migration.md create mode 100644 adr/0029-email-infrastructure-mailpit.md create mode 100644 adr/0030-bdd-email-parallel-strategy.md diff --git a/adr/0028-passwordless-auth-migration.md b/adr/0028-passwordless-auth-migration.md new file mode 100644 index 0000000..7d06568 --- /dev/null +++ b/adr/0028-passwordless-auth-migration.md @@ -0,0 +1,147 @@ +# 28. Passwordless authentication: magic link → OpenID Connect + +**Date:** 2026-05-05 +**Status:** Proposed +**Authors:** Gabriel Radureau, AI Agent + +## Context and Problem Statement + +ADR-0018 (now Implemented) shipped a username + password authentication system with bcrypt hashing, JWT tokens, admin master password, and admin-assisted password reset. It works, but it carries the cost-of-passwords : we store password hashes, support password reset flows, and maintain a credential-rotation policy. Users hate passwords ; ops and security pay for them. + +Two industry-standard alternatives exist : +1. **Magic link by email** — user enters their email, receives a one-time token in a clickable link, link consumes the token and issues a session JWT. No password stored. +2. 
**OpenID Connect Authorization Code flow** — delegate authentication to an external Identity Provider (e.g. Authelia, Keycloak, Auth0, Google) ; our app receives an `id_token` after the OIDC dance. + +We want to **migrate to passwordless** for new sign-ups while keeping the existing username/password code path operational during the transition (no flag-day breakage). The two passwordless mechanisms above complement each other : magic link is simpler for first-party users on day 1 ; OIDC is the right answer for second-party users (other ARCODANGE products, partner integrations) and for admin SSO. + +A third constraint : ARCODANGE local development must use HTTPS for OAuth callbacks to be valid (most OIDC providers reject `http://localhost` redirect URIs in their default config). `mkcert` is the canonical local-CA tool for this. + +## Decision Drivers + +* **Reduce password-related attack surface** — no hash storage, no breach-and-reuse risk, no password reset abuse vectors +* **User experience** — passwordless is faster for the user (1 click in email vs typing/remembering password) +* **Operational simplicity** — no password reset flow to maintain ; the password-reset code can be removed once migration is complete +* **Multi-product readiness** — OIDC is the prerequisite for cross-product SSO across the ARCODANGE portfolio +* **Backwards compatibility** — must not break existing tokens or BDD scenarios mid-migration +* **Local dev parity** — HTTPS in dev so OAuth flows can be tested locally without provider-specific workarounds + +## Considered Options + +### Option 1 (Chosen): Sequenced — magic link first, OIDC second + +Deliver in two phases : + +* **Phase A — Magic link** + - Add `POST /api/v1/auth/magic-link/request` (body: `{email}`) — generates token, stores it (TTL ~15 min), sends email via SMTP + - Add `GET /api/v1/auth/magic-link/consume?token=<...>` — single-use consumption, issues a JWT, returns it as cookie + JSON body + - Reuse the existing JWT issuance 
+ secret retention infrastructure (ADR-0021) + - Existing `/api/v1/auth/login` (username/password) stays operational during transition + +* **Phase B — OpenID Connect Authorization Code with PKCE** + - Add `GET /api/v1/auth/oidc/start` — generates state + PKCE verifier, redirects to provider's `authorization_endpoint` + - Add `GET /api/v1/auth/oidc/callback` — exchanges code for tokens, validates `id_token` signature against provider's JWKS, issues internal JWT + - Provider URL configurable per environment (`auth.oidc.issuer_url`, `auth.oidc.client_id`, `auth.oidc.client_secret`) + - Allow multiple providers in config (key by provider name, e.g. `arcodange-sso`) + - Local dev requires HTTPS — `mkcert` setup documented in `documentation/DEV_SETUP.md` + +* **Phase C (later, separate ADR) — Decommission password auth** + - Once all users have migrated, remove the password endpoints, remove the password_hash column, mark ADR-0018 as Superseded by this ADR + +### Option 2: All-at-once OIDC, no magic link + +Skip magic link, jump straight to OIDC. + +* Good — single migration, no intermediate state +* Bad — requires an OIDC provider operational on day 1, which we don't have configured +* Bad — magic link has zero infra dependencies (just SMTP) ; OIDC requires running an IdP or paying for one + +### Option 3: Magic link only, no OIDC + +Stop at Phase A. + +* Good — simplest implementation +* Bad — doesn't solve cross-product SSO ; we'd re-do this work later for the broader ARCODANGE portfolio + +### Option 4: Status quo (do nothing) + +Keep username + password. + +* Good — zero effort +* Bad — passwords stay forever ; ARCODANGE locks itself out of integration scenarios that expect OIDC + +## Decision Outcome + +Chosen option : **Option 1, sequenced magic link → OIDC**. 
+ +Rationale : +- Magic link is implementable today with zero infra dependencies beyond the email infrastructure (ADR-0029) +- OIDC requires running an IdP locally (Authelia or Keycloak) — that's another container in the dev stack and another ADR's worth of decision work, but the magic-link work is the natural prerequisite (token-by-email plumbing is reused) +- Sequenced delivery means we never have to roll back : Phase A works alone, Phase B layers on top, Phase C cleans up + +## Implementation Plan + +### Phase A — Magic link (target: 2-3 PRs) + +1. **A.1 — Storage** : add a `magic_link_tokens` table (id, email, token_hash, expires_at, consumed_at). Repository pattern alongside `pkg/user/postgres_repository.go`. +2. **A.2 — Token endpoint** : `POST /api/v1/auth/magic-link/request` generates a token, stores it (hashed), enqueues an email send. Rate-limited (cf. ADR-0022) by email address. +3. **A.3 — Consume endpoint** : `GET /api/v1/auth/magic-link/consume?token=...` validates + marks consumed + issues JWT. Returns `Set-Cookie` and `{token: jwt}` body. +4. **A.4 — Sign-up via magic link** : if the email is unknown, the consume endpoint creates the user record. (No separate "sign-up" flow needed — first magic link IS the sign-up.) +5. **A.5 — BDD coverage** : scenarios for happy path, expired token, double-consume, wrong-email, rate-limit. Cf. ADR-0030 for the email assertion strategy. + +### Phase B — OIDC Code flow with PKCE (target: 3-4 PRs) + +1. **B.1 — Local IdP** : choose Authelia or Keycloak for local development. Add to `docker-compose.yml` with default test configuration. +2. **B.2 — mkcert** : document local HTTPS setup in `documentation/DEV_SETUP.md`, add `make cert` target. +3. **B.3 — OIDC client** : `pkg/auth/oidc.go` — discovery, JWKS cache, code exchange with PKCE. +4. **B.4 — Endpoints** : `/oidc/start` and `/oidc/callback`. +5. **B.5 — Provider config** : `auth.oidc.providers` map in config (cf. ADR-0006 Viper) ; multi-provider supported. +6. 
**B.6 — BDD coverage** : end-to-end scenarios using a mock OIDC server (or the local Authelia instance with deterministic users).
+
+### Phase C — Decommission password (separate ADR after A+B in production)
+
+Out of scope for this ADR. Will be ADR-NNNN when migration is complete.
+
+## Pros and Cons of the Options
+
+### Option 1 (Chosen — Sequenced)
+
+* Good — incremental, no flag day, each phase shippable on its own
+* Good — reuses existing JWT infrastructure (ADR-0021 secret retention)
+* Good — magic link work is a prerequisite for OIDC anyway (email plumbing, mkcert)
+* Bad — total work spans 2 sprints, longer time-to-OIDC than Option 2
+* Mitigation: after Phase A, the team can stop if priorities shift — magic link alone is a complete improvement
+
+### Option 2 (All OIDC)
+
+* Good — single migration
+* Bad — requires IdP operational from day 1
+* Bad — local dev environment more complex than necessary for the magic link case
+
+### Option 3 (Magic link only)
+
+* Good — minimal scope
+* Bad — re-work later for SSO
+
+### Option 4 (Status quo)
+
+* Good — zero effort
+* Bad — accumulating tech debt
+
+## Consequences
+
+* `pkg/auth/` package created (currently auth code lives in `pkg/user/`) — separation is now justified by the multi-mechanism scope
+* `pkg/user/api/auth_handler.go` continues to serve username/password during transition (Phase A and B), removed in Phase C
+* `documentation/DEV_SETUP.md` becomes a load-bearing doc for new contributors (mkcert + docker-compose with mailpit + Authelia)
+* The 4 new endpoints (`magic-link/request`, `magic-link/consume`, `oidc/start`, `oidc/callback`) require their own entries in the API doc + Swagger annotations
+* Phase A's magic link plumbing depends on **ADR-0029** (email infrastructure decision) — that ADR ships first
+* BDD scenarios for Phase A depend on **ADR-0030** (email testing strategy with parallel BDD) — that ADR ships before any Phase A scenario lands
+
+## Links
+
+* Email infrastructure : 
[ADR-0029](0029-email-infrastructure-mailpit.md) +* BDD email testing strategy : [ADR-0030](0030-bdd-email-parallel-strategy.md) +* Existing user auth (to be partially superseded by Phase C) : [ADR-0018](0018-user-management-auth-system.md) +* JWT secret retention reused : [ADR-0021](0021-jwt-secret-retention-policy.md) +* Rate limiting reused : [ADR-0022](0022-rate-limiting-cache-strategy.md) +* OAuth 2.0 Authorization Code with PKCE : [RFC 7636](https://datatracker.ietf.org/doc/html/rfc7636) +* OpenID Connect Core : [OpenID Foundation](https://openid.net/specs/openid-connect-core-1_0.html) diff --git a/adr/0029-email-infrastructure-mailpit.md b/adr/0029-email-infrastructure-mailpit.md new file mode 100644 index 0000000..4d258ab --- /dev/null +++ b/adr/0029-email-infrastructure-mailpit.md @@ -0,0 +1,142 @@ +# 29. Email infrastructure: Mailpit local + production deferred + +**Date:** 2026-05-05 +**Status:** Proposed +**Authors:** Gabriel Radureau, AI Agent + +## Context and Problem Statement + +ADR-0028 (passwordless auth) requires the application to send emails — magic-link tokens specifically. Email is a substrate decision : the choice of SMTP provider, the abstraction in code, and the local development experience all depend on it. + +Two separate concerns : + +1. **Local development + BDD tests** : we need a local SMTP receiver that captures emails and exposes them for inspection. Real email providers (Gmail, SES, SendGrid) are unsuitable for local dev — they cost money, leak test data, and rate-limit aggressively. +2. **Production** : the application needs to actually deliver mail to user inboxes. This decision is deferred — see "Out of scope" below. + +ARCODANGE already has the **Mailpit** docker image pulled locally (`axllent/mailpit:latest`, 51 MB). Mailpit captures SMTP submissions on a port, stores them in-memory, exposes them via HTTP UI (default :8025) and an HTTP API (`/api/v1/messages`). 
It's the de facto choice for Go projects needing local SMTP capture.
+
+The application code needs to be **provider-agnostic** : a `pkg/email` package with a `Sender` interface, a Mailpit-compatible SMTP implementation, and a contract that production can swap for a real provider's adapter without changing call sites.
+
+## Decision Drivers
+
+* **Local dev and CI must work without internet** — emails should never leave the docker network in tests
+* **Test inspection must be programmatic** — BDD tests assert on email content, not just "an email was sent"
+* **Production decision deferred** — we don't know the volume / SLA / compliance requirements yet ; over-committing now is premature
+* **Provider portability** — `pkg/email` interface lets us swap implementations without touching auth code
+* **Cost** — Mailpit is free, runs in a container, no API quota concerns
+
+## Considered Options
+
+### Option 1 (Chosen): Mailpit for local + tests, production provider TBD
+
+* Add Mailpit to `docker-compose.yml` (SMTP :1025, HTTP API :8025)
+* `pkg/email` package with a `Sender` interface
+* Default implementation : `SMTPSender` configured against the local Mailpit in dev/CI
+* Tests query Mailpit's HTTP API to inspect captured messages
+* Production deployment will add a separate `pkg/email/<provider>_sender.go` implementing the same interface — that decision is its own ADR
+
+### Option 2: MailHog instead of Mailpit
+
+MailHog is the older, well-known alternative. Mailpit is its modern successor, written in Go, with a richer API and active maintenance.
+
+* Bad — MailHog is abandoned upstream (last commit 2020). Mailpit is the natural replacement.
+
+### Option 3: In-process mock email sender
+
+Write a `MockSender` that captures emails in a Go slice. No SMTP at all. 
+ +* Good — fastest tests, zero infra +* Bad — doesn't validate the actual SMTP wire format, the From/To/Subject headers, the encoding of multi-byte content, or the DKIM/Reply-To setup +* Bad — doesn't double as a manual-inspection tool for the developer (no UI to look at the email) + +### Option 4: Send to a real but throwaway provider (Mailtrap, Mailosaur) + +External services that capture-and-display emails. + +* Good — production-similar paths +* Bad — costs money, requires an account, leaks test data, doesn't work offline + +## Decision Outcome + +Chosen option : **Option 1 — Mailpit for local + tests, production deferred**. + +Rationale : +- Mailpit is the modern, maintained successor to MailHog ; image is already on the dev machine +- The interface-first design (`pkg/email.Sender`) means production swap is a future ADR, not a refactor +- BDD tests have a real wire-format path to assert on (cf. ADR-0030) +- Zero monthly cost in dev/CI + +## Implementation Plan + +1. **`pkg/email/sender.go`** — define the `Sender` interface : + ```go + type Sender interface { + Send(ctx context.Context, msg Message) error + } + type Message struct { + To string + From string + Subject string + BodyText string + BodyHTML string + Headers map[string]string // for trace correlation, e.g. X-Test-Scenario-ID + } + ``` +2. **`pkg/email/smtp_sender.go`** — implementation using `net/smtp` (stdlib) configured by `auth.email.smtp_host`, `smtp_port`, `smtp_username`, `smtp_password`, `smtp_use_tls`. For Mailpit defaults : `smtp_host=localhost smtp_port=1025 smtp_use_tls=false`. +3. **`pkg/email/sender_test.go`** — unit tests using `httptest`-style fake SMTP, plus a `*_integration_test.go` (build tag `integration`) hitting the live Mailpit. +4. **`docker-compose.yml`** — add the `mailpit` service : + ```yaml + mailpit: + image: axllent/mailpit:latest + ports: + - "1025:1025" # SMTP + - "8025:8025" # HTTP UI / API + environment: + MP_MAX_MESSAGES: 5000 + ``` +5. 
**`pkg/config/config.go`** — add the `auth.email.*` config keys with defaults pointing at local Mailpit. +6. **Documentation** : `documentation/EMAIL.md` covering local setup, message inspection via UI (http://localhost:8025), API queries. + +## Pros and Cons of the Options + +### Option 1 (Chosen — Mailpit) + +* Good — already locally available, free, modern, maintained +* Good — provider-agnostic interface decouples from prod choice +* Good — full SMTP wire format = realistic test path +* Good — UI for manual inspection during dev +* Bad — requires Mailpit running (one more docker-compose service) +* Bad — production decision still pending + +### Option 2 (MailHog) + +* Bad — unmaintained, choosing it would create immediate tech debt + +### Option 3 (Mock only) + +* Bad — too much abstraction loss, can't catch wire-level bugs + +### Option 4 (Mailtrap / Mailosaur) + +* Bad — cost, network dependency, account management + +## Consequences + +* New service in `docker-compose.yml` — developers run `docker compose up -d` once and Mailpit is on +* New `pkg/email` package — auth code (ADR-0028 magic link) calls `Sender.Send()` rather than direct SMTP +* New `auth.email.*` config keys, new env vars (`DLC_AUTH_EMAIL_SMTP_HOST` etc.) +* Mailpit's HTTP API becomes part of the BDD test contract — tests use it to assert messages were sent (cf. ADR-0030) +* Production sender ADR (TBD) will be a separate decision — this ADR explicitly does NOT pick a vendor for prod + +## Out of scope + +* **Production email provider selection** — separate ADR when we know volume / SLA / compliance constraints. Likely candidates: AWS SES, Postmark, SendGrid, Mailjet. Magic-link emails are transactional + low-volume — most providers handle that easily. +* **DKIM/SPF/DMARC setup** — production deliverability concern, not a local-dev concern +* **HTML email templating** — we'll start with plain-text emails ; HTML can be added with a template package (e.g. 
`html/template`) when ARCODANGE branding requires it + +## Links + +* Auth migration that requires this : [ADR-0028](0028-passwordless-auth-migration.md) +* BDD test strategy that consumes Mailpit : [ADR-0030](0030-bdd-email-parallel-strategy.md) +* Mailpit homepage : https://mailpit.axllent.org/ +* Mailpit API reference : https://mailpit.axllent.org/docs/api-v1/ diff --git a/adr/0030-bdd-email-parallel-strategy.md b/adr/0030-bdd-email-parallel-strategy.md new file mode 100644 index 0000000..dcea6ad --- /dev/null +++ b/adr/0030-bdd-email-parallel-strategy.md @@ -0,0 +1,187 @@ +# 30. BDD email assertions with parallel test execution + +**Date:** 2026-05-05 +**Status:** Proposed +**Authors:** Gabriel Radureau, AI Agent + +## Context and Problem Statement + +ADR-0028 introduces magic-link auth, which requires the application to send emails. ADR-0029 chose **Mailpit** as the local SMTP receiver for dev and BDD tests. The remaining decision : **how do BDD scenarios assert on the email content while running in parallel ?** + +Today (since [PR #35](https://gitea.arcodange.lab/arcodange/dance-lessons-coach/pulls/35)), the BDD suite runs in parallel via per-package PostgreSQL schema isolation (cf. [ADR-0025](0025-bdd-scenario-isolation-strategies.md)). Each Go test package has its own schema ; tests inside a package run serially within that schema. This works because Postgres has named schemas with strong isolation. **Mailpit has no equivalent** — there is one inbox per Mailpit instance, shared across all senders. + +A naive integration would have parallel scenarios fight over each other's emails : +- Scenario A : "request magic link for `test@example.com`" → email arrives +- Scenario B (in parallel) : "request magic link for `test@example.com`" → email arrives +- Both scenarios query Mailpit for `test@example.com` — they see each other's messages, assertions become flaky. + +We need a way to scope each scenario's emails so it only sees its own messages. 
+
+## Decision Drivers
+
+* **No regression on parallelism** — BDD-isolation Phase 3 (PR #35) achieved a 2.85x speedup ; the email-assertion solution must not undo that
+* **No new container per test** — running one Mailpit per scenario would defeat the simplicity that made us choose Mailpit
+* **Determinism** — a scenario's email assertions must succeed regardless of how many other scenarios are running
+* **Realistic SMTP path** — we still want the full SMTP wire format exercised (cf. ADR-0029) ; we don't want to bypass Mailpit
+* **Cleanup hygiene** — old messages from previous test runs must not leak into a new run
+
+## Considered Options
+
+### Option 1 (Chosen): Per-test recipient scoping with deterministic addresses
+
+Each BDD scenario generates a unique email address for its test user, derived from the scenario key + a random suffix. Examples :
+
+- Scenario `magic-link-happy-path` → `magic-link-happy-path-<8hex>@bdd.local`
+- Scenario `magic-link-expired-token` → `magic-link-expired-token-<8hex>@bdd.local`
+
+The application code accepts any email format. The BDD scenario asserts on Mailpit's HTTP API filtering by the `to` address. Two parallel scenarios with different addresses can NEVER see each other's emails.
+
+**Cleanup** : at the start of each scenario, the BDD framework calls `DELETE /api/v1/messages?query=to:<address>` on Mailpit to purge any leftover messages from prior runs.
+
+### Option 2: One Mailpit instance per Go test package
+
+Spawn a fresh Mailpit container in `TestMain` of each `features/<feature>/` package. Each gets its own port range.
+
+* Good — strong isolation
+* Bad — heavyweight (one container per package = 5+ containers running)
+* Bad — port allocation complexity (similar to existing `pkg/bdd/parallel/port_manager.go`, but applied to Mailpit)
+* Bad — slow startup (Mailpit boot is ~200ms but adds up)
+
+### Option 3: One Mailpit instance, scenario-scoped via custom SMTP header
+
+Add a custom header `X-BDD-Scenario-ID: <scenario-id>` to outgoing emails. 
Tests query Mailpit filtered on that header. + +* Good — same single Mailpit +* Bad — requires the application code to know the scenario ID at email-send time, which means a test-only path in production code +* Bad — header propagation is fragile (gets stripped by some SMTP relays — not Mailpit, but real production providers might) ; we don't want a different code path between dev and prod + +### Option 4: Sequence parallel scenarios via per-scenario Mailpit lock + +Use a mutex / queue so no two scenarios that send email run concurrently. + +* Good — minimal code change +* Bad — gives up the parallel speedup for any feature that involves email — that's most auth-related features going forward + +## Decision Outcome + +Chosen option : **Option 1 — per-test recipient scoping**. + +Rationale : +- Recipient scoping is the simplest abstraction : the address IS the identity ; Mailpit's HTTP API natively supports filtering by recipient +- Application code stays clean : it just sends to whatever address it's given. No test-mode branching. +- Parallel-safe by construction : two scenarios cannot collide if they don't share an address +- Cheap to implement : a few helper functions in `pkg/bdd/steps/email_steps.go` and a `mailpit.Client` package wrapping the HTTP API +- Cleanup is per-scenario, not global — no "delete all messages" race between scenarios + +## Implementation Plan + +### Helper package : `pkg/bdd/mailpit/client.go` + +```go +type Client struct { + BaseURL string // default: http://localhost:8025 + HTTP *http.Client +} + +// AwaitMessageTo polls Mailpit's HTTP API for a message addressed +// to the given recipient, with a deadline. Returns the most recent +// matching message or an error on timeout. +func (c *Client) AwaitMessageTo(ctx context.Context, to string, timeout time.Duration) (*Message, error) + +// PurgeMessagesTo removes all messages addressed to the given +// recipient. Idempotent and parallel-safe. 
+func (c *Client) PurgeMessagesTo(ctx context.Context, to string) error
+
+type Message struct {
+    ID      string
+    From    string
+    To      []string
+    Subject string
+    Text    string
+    HTML    string
+    Headers map[string][]string
+}
+```
+
+### Helper steps : `pkg/bdd/steps/email_steps.go`
+
+```go
+func (s *EmailSteps) iHaveAnEmailAddressForThisScenario() error
+// Generates `<scenario-key>-<8hex>@bdd.local`, stores it in the scenario state.
+
+func (s *EmailSteps) iShouldReceiveAnEmailWithSubject(subject string) error
+// Polls AwaitMessageTo on the scenario's address, asserts subject equality.
+
+func (s *EmailSteps) theEmailShouldContain(snippet string) error
+// Re-fetches the most recent message and checks for substring in body.
+
+func (s *EmailSteps) theEmailContainsAMagicLinkToken() (string, error)
+// Extracts the token from the magic-link URL via regex, returns it.
+```
+
+### Scenario lifecycle
+
+- **Before each scenario** : `iHaveAnEmailAddressForThisScenario` is called (either explicitly via Background, or implicitly via a hook). The unique address is stored in the scenario's state. PurgeMessagesTo is called to clear any leftovers from prior runs of the same address (defensive — should be impossible since the suffix is random, but cheap).
+- **During the scenario** : the application sends to that address. Tests query for it.
+- **After each scenario** : no global cleanup needed — addresses are per-scenario unique, so they don't accumulate beyond Mailpit's `MP_MAX_MESSAGES=5000` cap.
+
+### Race-free deletion
+
+Mailpit's `DELETE /api/v1/messages?query=to:<address>` is atomic per recipient. Two concurrent scenarios with different addresses cannot interfere.
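
To make the polling contract concrete, here is a compilable sketch of the `AwaitMessageTo` side of the client. The search route (`GET /api/v1/search?query=...`) and the `to:` query grammar are assumptions to verify against the Mailpit API reference ; the demo stubs Mailpit with `httptest` so the sketch runs without the container :

```go
package main

import (
	"context"
	"encoding/json"
	"errors"
	"fmt"
	"net/http"
	"net/http/httptest"
	"net/url"
	"time"
)

type Message struct {
	Subject string
}

type Client struct {
	BaseURL string
	HTTP    *http.Client
}

// buildSearchQuery encodes a per-recipient filter. The "to:" prefix is an
// assumed piece of Mailpit's search syntax, to be checked against its docs.
func buildSearchQuery(to string) string {
	return url.QueryEscape(fmt.Sprintf("to:%q", to))
}

// AwaitMessageTo polls until a message for `to` shows up or the timeout elapses.
func (c *Client) AwaitMessageTo(ctx context.Context, to string, timeout time.Duration) (*Message, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := c.HTTP.Get(c.BaseURL + "/api/v1/search?query=" + buildSearchQuery(to))
		if err != nil {
			return nil, err
		}
		var body struct{ Messages []Message }
		err = json.NewDecoder(resp.Body).Decode(&body)
		resp.Body.Close()
		if err == nil && len(body.Messages) > 0 {
			return &body.Messages[0], nil // assuming newest-first ordering
		}
		select {
		case <-ctx.Done():
			return nil, ctx.Err()
		case <-time.After(100 * time.Millisecond): // poll interval
		}
	}
	return nil, errors.New("timed out waiting for email to " + to)
}

func main() {
	// Stand-in for Mailpit: always returns one matching message.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
		json.NewEncoder(w).Encode(map[string]any{"Messages": []Message{{Subject: "Your magic link"}}})
	}))
	defer srv.Close()

	c := &Client{BaseURL: srv.URL, HTTP: srv.Client()}
	msg, err := c.AwaitMessageTo(context.Background(), "magic-link-happy-path-1a2b3c4d@bdd.local", 2*time.Second)
	if err != nil {
		panic(err)
	}
	fmt.Println(msg.Subject)
}
```

If the real Mailpit route differs, only `buildSearchQuery` and the path change ; the poll-until-deadline contract stays the same.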
+ +### Sample scenario (auth-magic-link.feature) + +```gherkin +@critical @magic-link +Scenario: User receives a magic link by email + Given I have an email address for this scenario + When I request a magic link for my email address + Then I should receive an email with subject "Your magic link" + And the email contains a magic link token + When I consume the magic link token + Then I should receive a JWT +``` + +## Pros and Cons of the Options + +### Option 1 (Chosen) + +* Good — parallel-safe by construction +* Good — application code unchanged ; test-only logic stays in the BDD layer +* Good — Mailpit API supports the filter natively +* Good — cleanup is fine-grained, no race +* Bad — requires cooperative scenarios (each must request a unique address) +* Mitigation : Background steps in feature files make it automatic + +### Option 2 (Mailpit per package) + +* Bad — operational complexity not justified for the test-only concern + +### Option 3 (Custom header scoping) + +* Bad — production code dirtied by test concerns + +### Option 4 (Lock-and-sequence) + +* Bad — gives up parallelism (the whole point of PR #35 + ADR-0025) + +## Consequences + +* `pkg/bdd/mailpit/` package is created with HTTP client + helper types +* `pkg/bdd/steps/email_steps.go` package is created and registered in `steps.go` +* `features/auth/` and any other email-using features have new BDD steps available +* The local development docker-compose must run Mailpit before BDD tests run — to be added to the BDD test runner script `scripts/run-bdd-tests.sh` +* Mailpit message TTL is governed by `MP_MAX_MESSAGES` (5000) — at parallel BDD volumes, that's enough headroom for ~50 scenarios × 100 messages each before any pruning kicks in + +## Out of scope + +* **Visual regression on email rendering** — text body assertions only ; HTML rendering checks belong in a separate Storybook-style harness +* **Attachment handling** — magic-link emails are text-only ; ADRs for attachments will come if/when 
needed +* **Email volume / rate-limit testing** — that's a load-test concern, not a BDD concern + +## Links + +* Auth migration depending on this : [ADR-0028](0028-passwordless-auth-migration.md) +* Email infrastructure choice : [ADR-0029](0029-email-infrastructure-mailpit.md) +* BDD parallelism foundation : [ADR-0025](0025-bdd-scenario-isolation-strategies.md), [PR #35](https://gitea.arcodange.lab/arcodange/dance-lessons-coach/pulls/35) +* Mailpit API : https://mailpit.axllent.org/docs/api-v1/ diff --git a/adr/README.md b/adr/README.md index 5461ecf..b650c2a 100644 --- a/adr/README.md +++ b/adr/README.md @@ -31,6 +31,9 @@ This directory contains the Architecture Decision Records (ADRs) for the dance-l | [0025](0025-bdd-scenario-isolation-strategies.md) | BDD Scenario Isolation Strategies | Implemented | | [0026](0026-composite-info-endpoint.md) | Composite Info Endpoint vs Separate Calls | Implemented | | [0027](0027-ollama-tier1-onboarding.md) | Ollama Tier 1 onboarding via meta-trainer-bootstrap | Proposed | +| [0028](0028-passwordless-auth-migration.md) | Passwordless authentication: magic link → OpenID Connect | Proposed | +| [0029](0029-email-infrastructure-mailpit.md) | Email infrastructure: Mailpit local + production deferred | Proposed | +| [0030](0030-bdd-email-parallel-strategy.md) | BDD email assertions with parallel test execution | Proposed | > **Note** : numbers `0011` and `0014` are not currently in use. Reserved for future ADRs or representing previously deleted entries.