Gabriel Radureau 54c1eefb6c feat(bdd): parallel-safe schema-per-package isolation (T12 stage 2/2)
Re-enables BDD_SCHEMA_ISOLATION=true with the foundation from PR #34
(NewPostgresRepositoryFromDSN). Achieves ~2.85x speedup on the BDD
test suite by running feature packages in parallel.

ARCHITECTURE

When BDD_SCHEMA_ISOLATION=true, each test package (process) gets its
own isolated PostgreSQL schema:
  1. testserver.Start() generates a deterministic schema name per FEATURE
  2. CREATE SCHEMA <name>
  3. Open a per-package gorm.DB with DSN search_path=<name>
  4. AutoMigrate runs in the isolated schema (creates users table)
  5. Build a per-package server.Server with this isolated repo via
     server.NewServerWithUserRepo
  6. Stop() drops the schema + closes the per-package pool

Packages then run in parallel (default Go test parallelism) without
contention because each has its own schema + connection pool.
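Steps 1–3 above can be sketched as follows. This is an illustrative sketch, not the actual testserver code: packageSchemaName and isolatedDSN are hypothetical helpers, and the DSN uses the keyword/value form that lib/pq and pgx accept for search_path.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// packageSchemaName derives a deterministic schema name from a feature
// package path, so every run of the same package maps to the same schema.
func packageSchemaName(pkgPath string) string {
	sum := sha256.Sum256([]byte(pkgPath))
	return "bdd_" + hex.EncodeToString(sum[:])[:8]
}

// isolatedDSN appends search_path so every connection opened from this DSN
// resolves unqualified table names inside the isolated schema.
func isolatedDSN(baseDSN, schema string) string {
	return baseDSN + " search_path=" + schema
}

func main() {
	schema := packageSchemaName("features/auth")
	fmt.Println(schema) // e.g. bdd_1a2b3c4d (deterministic per package)
	fmt.Println(isolatedDSN("host=localhost dbname=bdd", schema))
}
```

AutoMigrate then runs against a gorm.DB opened from this DSN, so the users table is created inside the isolated schema rather than public.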

CHANGES

- pkg/server/server.go : NEW factory NewServerWithUserRepo(cfg, ctx,
  userRepo, userService) that injects a per-test repo. Existing NewServer
  becomes a thin wrapper.
- pkg/bdd/testserver/server.go : Start() chooses isolated mode based on
  BDD_SCHEMA_ISOLATION env var. Stop() drops schema + closes pool.
- pkg/user/postgres_repository.go : Exec(sql) helper for the schema
  lifecycle (CREATE/DROP) used by testserver.
- scripts/run-bdd-tests.sh : -p 1 only when BDD_SCHEMA_ISOLATION!=true.
  When true, default Go parallelism (~ NumCPU packages concurrent).
- .gitea/workflows/ci-cd.yaml : exports BDD_SCHEMA_ISOLATION=true.
- adr/0025-bdd-scenario-isolation-strategies.md : Status to "Implemented".

VALIDATION

5x AuthBDD with isolation: 5/5 PASS, public.users count=0 after runs.

Local benchmark on the full features/... suite:
- Sequential -p 1 (no isolation):     12.87s
- Parallel + isolation (this PR):      4.51s
- Speedup: 2.85x

🤖 Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-03 19:41:20 +02:00


ADR 0025: BDD Scenario Isolation Strategies

Status: Implemented (per-package schema isolation since T12 stage 2/2 - 2026-05-03)

Context

As our BDD test suite grows, we're encountering test pollution issues where scenarios interfere with each other through shared state. This is particularly problematic for:

  1. Database state: Scenarios create users, JWT secrets, config entries that persist across scenarios
  2. JWT secret rotation: Multiple secrets accumulate, affecting subsequent scenario authentication
  3. Config file modifications: Feature flag changes persist between tests
  4. Gherkin Background steps: Data set up in Background is visible to all scenarios in the feature

Our current approach clears database tables after each scenario, but it is vulnerable to race conditions when scenarios execute concurrently.

Gherkin Background Consideration

Crucially, Gherkin's Background section runs before each scenario in a feature, not once before all scenarios. This means:

Feature: User registration
  Background:
    Given the database is empty
    And a default admin user exists
  
  Scenario: Register new user
    When I register user "alice"
    Then user "alice" should exist
  
  Scenario: Register duplicate user
    When I register user "alice"
    Then I should see error "user already exists"

This example breaks down without real isolation: the Background step "the database is empty" does not actually hold, because the first scenario's "alice" and previously created admin users persist. The second scenario then passes or fails depending on execution order rather than on its own setup. Remember that Background steps are re-executed before each scenario, not once per feature.

Decision Drivers

  • Isolation: Each scenario must start with a clean slate
  • Performance: Cleanup must be fast enough for CI/CD pipelines
  • Concurrency: Must work with parallel scenario execution
  • Compatibility: Must work with Gherkin Background steps
  • Maintainability: Solution should be simple to understand and debug

Considered Options

Option 1: Transaction Rollback (Rejected ❌)

Wrap each scenario in a database transaction, rollback at the end.

BeforeScenario: BEGIN;
AfterScenario: ROLLBACK;

Pros:

  • Simple implementation
  • Fast - transaction rollback is nearly instant
  • No data cleanup needed

Cons:

  • Fails if scenario commits: Nested transaction problem - COMMIT inside scenario releases the transaction, parent ROLLBACK has no effect
  • Cannot handle non-database state (JWT secrets in memory, config files)
  • Doesn't solve JWT secret pollution

Verdict: Not viable - Too many scenarios use database transactions internally.


Option 2: Clear Tables in Public Schema (Current ✅/⚠️)

Delete all rows from all tables after each scenario.

AfterScenario: DELETE FROM table1; DELETE FROM table2; ...

Pros:

  • Currently implemented
  • Works with any scenario code
  • Handles database state

Cons:

  • ⚠️ Race conditions: Concurrent scenarios can interleave - Scenario A deletes data while Scenario B is still using it
  • ⚠️ Slow: Must delete from all tables, reset sequences
  • Misses in-memory state: JWT secrets, config changes persist
  • Doesn't handle Background: Background data is shared across scenarios

Verdict: Partially adequate - Works for sequential execution but has parallel execution issues.


Option 3: Schema-per-Scenario ✅

Create a unique PostgreSQL schema for each scenario, drop it after.

BeforeScenario:
  schema := "test_" + sha256(scenario.Name)[:8]
  CREATE SCHEMA schema;
  SET search_path = schema, public;
  
AfterScenario:
  DROP SCHEMA schema CASCADE;

Pros:

  • True isolation: Each scenario has its own database namespace
  • Works with transactions: Scenario can commit freely - entire schema is dropped
  • Works with Background: Background runs in scenario's schema, data is isolated
  • Fast: Schema drop is instant (just metadata deletion)
  • Handles concurrent scenarios: Different schemas = no conflicts

Cons:

  • Requires CREATE/DROP SCHEMA database privileges in test environment
  • Some ORMs may hardcode public schema - need to use SET search_path carefully
  • Test DB must allow many schemas (typically fine for PostgreSQL)
  • We need to handle search_path in connection pooling (each scenario needs its own connection)

Implementation notes:

  • Use a short, deterministic PostgreSQL schema prefix: test_{hash}
  • Hash: sha256(feature_name + scenario_name)[:8] for consistency across runs
  • Execute Background steps in the scenario's schema context
  • Set search_path at the connection level, not globally
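The implementation notes above can be sketched as plain SQL generation; scenarioSchema, setupSQL, and teardownSQL are illustrative names under the assumptions in this option, not existing functions.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// scenarioSchema applies the naming rule test_{sha256(feature:scenario)[:8]},
// so the same scenario always maps to the same schema across runs.
func scenarioSchema(feature, scenario string) string {
	sum := sha256.Sum256([]byte(feature + ":" + scenario))
	return "test_" + hex.EncodeToString(sum[:])[:8]
}

// setupSQL returns the statements a BeforeScenario hook would run on the
// scenario's own connection; public stays in search_path as a fallback.
func setupSQL(schema string) []string {
	return []string{
		"CREATE SCHEMA " + schema,
		"SET search_path = " + schema + ", public",
	}
}

// teardownSQL drops the whole namespace in one metadata operation.
func teardownSQL(schema string) string {
	return "DROP SCHEMA " + schema + " CASCADE"
}

func main() {
	s := scenarioSchema("auth", "Successful user authentication")
	fmt.Println(setupSQL(s))
	fmt.Println(teardownSQL(s))
}
```

Because SET search_path is connection-scoped, each scenario must run these statements on its own dedicated connection, not through a shared pool.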

Option 4: Database-per-Feature ⚠️

Create a separate database for each feature file.

BeforeFeature: CREATE DATABASE feature_auth;
AfterFeature: DROP DATABASE feature_auth;

Pros:

  • Strong isolation between features
  • Simple implementation

Cons:

  • Doesn't isolate scenarios within a feature - Background data shared across scenarios
  • Database creation is slower than schema creation
  • Harder to manage in CI (more databases to create/cleanup)
  • Still need table clearing between scenarios within a feature

Verdict: Insufficient - Doesn't solve intra-feature pollution.


Option 5: Schema-per-Feature + Table Clearing per Scenario ⚠️

Create one schema per feature, clear tables between scenarios.

BeforeFeature: CREATE SCHEMA feature_auth;
AfterFeature: DROP SCHEMA feature_auth;
AfterScenario: DELETE FROM all_tables;

Pros:

  • Isolates features from each other
  • Simpler than per-scenario schemas

Cons:

  • Scenarios within a feature share state - Background data persists
  • Still has race conditions with concurrent scenarios in same feature
  • Requires table clearing overhead

Verdict: Better than current but still has issues.


Decision Outcome

Chosen option: Schema-per-Scenario + In-Memory State Reset + Per-Scenario Step State (Option 3 Enhanced)

We will implement schema-per-scenario because it:

  1. Provides true isolation for all database state
  2. Works with Gherkin Background - Background runs in each scenario's schema
  3. Handles concurrent execution - No race conditions
  4. Works with scenario transactions - Scenarios can commit freely
  5. Is fast - Schema operations are cheap

However, we discovered a critical limitation: PostgreSQL schemas only isolate database tables. In-memory state (application-level caches, user stores, JWT secret managers) persists across scenarios because it lives on the shared sharedServer Go instance. Schema isolation does NOT solve this.

Enhanced Strategy: Multi-Layer Isolation

To achieve complete scenario isolation, we need a 3-layer approach:

Layer  | Component                           | Strategy                      | Status
-------|-------------------------------------|-------------------------------|---------------
DB     | PostgreSQL tables                   | Schema-per-scenario           | ✅ Implemented
Memory | Server-level state (JWT secrets)    | Reset to initial state        | ✅ Implemented
Memory | Step-level state (tokens, user IDs) | Per-scenario state map        | ✅ Implemented
Memory | User store                          | Reset/clear between scenarios | ⚠️ TODO
Memory | Auth cache                          | Reset/clear between scenarios | ⚠️ TODO
Cache  | Redis/Memcached                     | Key prefix with schema hash   | ⚠️ TODO

Layer 3: Per-Scenario Step State Isolation

New insight from test failures: Step definition structs (AuthSteps, GreetSteps, etc.) maintain state in their fields:

  • lastToken, firstToken in AuthSteps
  • lastUserID in AuthSteps

This state spills across scenarios even with schema isolation, because struct fields are shared across all scenarios in a test process.

Solution: Create a ScenarioState manager with per-scenario isolation:

type ScenarioState struct {
    LastToken  string
    FirstToken string
    LastUserID uint
}

type scenarioStateManager struct {
    mu      sync.RWMutex
    states  map[string]*ScenarioState  // keyed by scenario hash
}

// Usage in step definitions:
func (s *AuthSteps) iShouldReceiveAValidJWTToken() error {
    state := steps.GetScenarioState(s.scenarioName)
    state.LastToken = extractedToken
    // ...
}

Benefits:

  • Zero code changes to step definitions (with helper functions)
  • Thread-safe (sync.RWMutex)
  • Consistent state per scenario
  • Automatic cleanup via BeforeScenario/AfterScenario hooks
  • Works with random test order

Status: Implemented in pkg/bdd/steps/scenario_state.go
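A self-contained sketch of such a manager follows; the actual pkg/bdd/steps/scenario_state.go may differ in naming and hook wiring.

```go
package main

import (
	"fmt"
	"sync"
)

// ScenarioState mirrors the fields step definitions used to keep on
// their structs (tokens, last created user ID).
type ScenarioState struct {
	LastToken  string
	FirstToken string
	LastUserID uint
}

// scenarioStateManager hands each scenario its own state value, keyed by
// the scenario's name (or hash), so parallel scenarios never share fields.
type scenarioStateManager struct {
	mu     sync.Mutex
	states map[string]*ScenarioState
}

func newManager() *scenarioStateManager {
	return &scenarioStateManager{states: map[string]*ScenarioState{}}
}

// Get lazily allocates state for a scenario; safe for concurrent use.
func (m *scenarioStateManager) Get(scenario string) *ScenarioState {
	m.mu.Lock()
	defer m.mu.Unlock()
	if s, ok := m.states[scenario]; ok {
		return s
	}
	s := &ScenarioState{}
	m.states[scenario] = s
	return s
}

// Delete is called from an AfterScenario hook so state never outlives
// its scenario.
func (m *scenarioStateManager) Delete(scenario string) {
	m.mu.Lock()
	defer m.mu.Unlock()
	delete(m.states, scenario)
}

func main() {
	m := newManager()
	m.Get("auth:login").LastToken = "jwt-abc"
	fmt.Println(m.Get("auth:login").LastToken) // same scenario sees its own state
	m.Delete("auth:login")                     // AfterScenario cleanup
}
```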

Key Insight: Cache and In-Memory Store Isolation

For caches (Redis, Memcached, in-process):

  • Use schema hash as key prefix/suffix: cache_key_{schema_hash} or {schema_hash}_cache_key
  • This ensures each scenario gets isolated cache namespace
  • Works even with external cache services
  • Consistent with schema isolation philosophy
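This prefixing could look like the following; cacheKey is a hypothetical helper, and the hash mirrors the test_{sha256(feature:scenario)[:8]} schema naming convention.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// cacheKey namespaces a logical cache key with the scenario's schema hash,
// so two scenarios using the same key never collide on a shared cache.
func cacheKey(feature, scenario, key string) string {
	sum := sha256.Sum256([]byte(feature + ":" + scenario))
	return hex.EncodeToString(sum[:])[:8] + "_" + key
}

func main() {
	fmt.Println(cacheKey("auth", "Successful user authentication", "user:newuser"))
}
```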

For in-memory stores (user repository, etc.):

  • Add Reset() methods that clear all state
  • Call in AfterScenario alongside schema teardown
  • Or use schema-prefix approach for shared stores
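A minimal sketch of the Reset() approach, assuming a hypothetical in-process user store (the real repository interface in pkg/user may differ):

```go
package main

import (
	"fmt"
	"sync"
)

// memoryUserStore is an illustrative in-process store with auto-incrementing
// IDs, the kind of state that survives schema drops.
type memoryUserStore struct {
	mu    sync.Mutex
	users map[string]uint
	next  uint
}

func newMemoryUserStore() *memoryUserStore {
	return &memoryUserStore{users: map[string]uint{}, next: 1}
}

// Add registers a user and returns its ID.
func (s *memoryUserStore) Add(name string) uint {
	s.mu.Lock()
	defer s.mu.Unlock()
	id := s.next
	s.next++
	s.users[name] = id
	return id
}

// Reset drops all accumulated state; called from AfterScenario alongside
// the schema drop so in-memory data cannot leak into the next scenario.
func (s *memoryUserStore) Reset() {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.users = map[string]uint{}
	s.next = 1
}

func main() {
	s := newMemoryUserStore()
	s.Add("alice")
	s.Reset()
	fmt.Println(s.Add("bob")) // IDs restart after Reset
}
```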

Alternative Approach: Background Explicit State Setup

Considered but rejected: Adding explicit "Given no user X exists" steps or heavy Background sections.

Pros:

  • More readable, explicit about state

Cons:

  • Error-prone (must remember for every entity)
  • Verbose (many Given steps)
  • Doesn't scale with many entities
  • Still has race conditions with concurrent scenarios

Verdict: Automated cleanup (schema drop + memory reset) is more reliable than manual Background setup.

Implementation Plan

Phase 1: Foundation (✅ Complete)

  • Add scenario-aware schema management to test server
  • Implement schema creation/drop in BeforeScenario/AfterScenario hooks
  • Handle search_path configuration for each scenario's database connection

Phase 2: In-Memory State Reset (🟡 TODO)

  • Add ResetUsers() method to clear in-memory user store
  • Add ResetCache() method for auth/rateLimiting caches
  • Call these in AfterScenario alongside JWT secret reset
  • Cache key strategy: key_{schema_hash} for all cache operations

Phase 3: Connection Pooling

  • Configure connection pool to respect per-scenario search_path
  • Each scenario gets isolated connections

Phase 4: Validation

  • Run full test suite to verify complete isolation
  • Fix any hardcoded public schema references

Schema Naming Convention

Schema name: test_{sha256(feature:scenario)[:8]}
Cache key prefix: {sha256(feature:scenario)[:8]}_

Example:

  • Feature: auth, Scenario: Successful user authentication
  • Hash: sha256("auth:Successful user authentication")[:8] = a3f7b2c1
  • Schema: test_a3f7b2c1
  • Cache key: a3f7b2c1_user:newuser instead of just user:newuser

Benefits:

  • Unique per scenario
  • Consistent across test runs (same scenario = same hash)
  • Short (8 chars) - efficient for cache keys
  • Identifiable for debugging

Pros and Cons Summary

Aspect                | Schema-per-Scenario | Current (Clear Tables) | Transaction Rollback
----------------------|---------------------|------------------------|---------------------
Isolation             | ✅ Strong           | ⚠️ Medium              | ❌ Weak
Works with Background | ✅ Yes              | ⚠️ Partial             | ❌ No
Concurrency safe      | ✅ Yes              | ❌ No                  | ❌ No
Works with TX         | ✅ Yes              | ✅ Yes                 | ❌ No
Speed                 | ✅ Fast             | ⚠️ Slow                | ✅ Fast
DB privileges         | ⚠️ Needs CREATE     | ✅ None                | ✅ None
Complexity            | ⚠️ Medium           | ✅ Low                 | ✅ Low