# ADR 0025: BDD Scenario Isolation Strategies

## Status

**Proposed** 🟡

## Context

As our BDD test suite grows, we're encountering **test pollution** issues where scenarios interfere with each other through shared state. This is particularly problematic for:

1. **Database state**: Scenarios create users, JWT secrets, and config entries that persist across scenarios
2. **JWT secret rotation**: Multiple secrets accumulate, affecting authentication in subsequent scenarios
3. **Config file modifications**: Feature flag changes persist between tests
4. **Gherkin Background steps**: Data set up in Background is visible to all scenarios in the feature

Our current approach clears database tables after each scenario, but this is **vulnerable to race conditions** when scenarios execute concurrently.

### Gherkin Background Consideration

Crucially, Gherkin's `Background` section runs **before each scenario** in a feature, not once before all scenarios. This means:

```gherkin
Feature: User registration

  Background:
    Given the database is empty
    And a default admin user exists

  Scenario: Register new user
    When I register user "alice"
    Then user "alice" should exist

  Scenario: Register duplicate user
    When I register user "alice"
    Then I should see error "user already exists"
```

Background steps are re-executed before each scenario, but re-running them does not undo data that previous scenarios created. Unless cleanup is explicit, the first scenario's "alice" is still present when the second scenario runs, so scenario outcomes end up depending on leftover state.

## Decision Drivers

* **Isolation**: Each scenario must start with a clean slate
* **Performance**: Cleanup must be fast enough for CI/CD pipelines
* **Concurrency**: Must work with parallel scenario execution
* **Compatibility**: Must work with Gherkin Background steps
* **Maintainability**: Solution should be simple to understand and debug

## Considered Options

### Option 1: Transaction Rollback (Rejected ❌)

Wrap each scenario in a database transaction and roll it back at the end.
```
BeforeScenario: BEGIN;
AfterScenario:  ROLLBACK;
```

**Pros:**

- Simple implementation
- Fast - transaction rollback is nearly instant
- No data cleanup needed

**Cons:**

- ❌ **Fails if the scenario commits**: the nested transaction problem - `COMMIT` inside the scenario releases the wrapping transaction, so the parent `ROLLBACK` has no effect
- Cannot handle non-database state (JWT secrets in memory, config files)
- Doesn't solve JWT secret pollution

**Verdict: Not viable** - Too many scenarios use database transactions internally.

---

### Option 2: Clear Tables in Public Schema (Current ✅/⚠️)

Delete all rows from all tables after each scenario.

```
AfterScenario: DELETE FROM table1; DELETE FROM table2; ...
```

**Pros:**

- Currently implemented
- Works with any scenario code
- Handles database state

**Cons:**

- ⚠️ **Race conditions**: Concurrent scenarios can interleave - Scenario A deletes data while Scenario B is still using it
- ⚠️ **Slow**: Must delete from all tables and reset sequences
- ❌ **Misses in-memory state**: JWT secrets and config changes persist
- ❌ **Doesn't handle Background**: Background data is shared across scenarios

**Verdict: Partially adequate** - Works for sequential execution but breaks down under parallel execution.

---

### Option 3: Schema-per-Scenario (Recommended ✅)

Create a unique PostgreSQL schema for each scenario and drop it afterwards.
```
BeforeScenario:
  schema := "test_" + sha256(feature.Name + "_" + scenario.Name)[:8]
  CREATE SCHEMA schema;
  SET search_path = schema, public;
AfterScenario:
  DROP SCHEMA schema CASCADE;
```

**Pros:**

- ✅ **True isolation**: Each scenario has its own database namespace
- ✅ **Works with transactions**: The scenario can commit freely - the entire schema is dropped afterwards
- ✅ **Works with Background**: Background runs in the scenario's schema, so its data is isolated
- ✅ **Fast**: Dropping a schema is nearly instant (mostly metadata deletion)
- ✅ **Handles concurrent scenarios**: Different schemas mean no conflicts

**Cons:**

- Requires `CREATE`/`DROP SCHEMA` database privileges in the test environment
- Some ORMs may hardcode the `public` schema - `SET search_path` must be used carefully
- The test database must allow many schemas (typically fine for PostgreSQL)
- `search_path` must be handled in connection pooling - each scenario needs its own connection

**Implementation notes:**

- Use the `Luego` (PostgreSQL schema prefix) approach: `test_{hash}`
- Hash: `sha256(feature_name + scenario_name)[:8]` for consistency across runs
- Execute Background steps in the scenario's schema context
- Set `search_path` at the connection level, not globally

---

### Option 4: Database-per-Feature ⚠️

Create a separate database for each feature file.

```
BeforeFeature: CREATE DATABASE feature_auth;
AfterFeature:  DROP DATABASE feature_auth;
```

**Pros:**

- Strong isolation between features
- Simple implementation

**Cons:**

- ❌ **Doesn't isolate scenarios within a feature** - Background data is shared across scenarios
- Database creation is slower than schema creation
- Harder to manage in CI (more databases to create and clean up)
- Still needs table clearing between scenarios within a feature

**Verdict: Insufficient** - Doesn't solve intra-feature pollution.

---

### Option 5: Schema-per-Feature + Table Clearing per Scenario ⚠️

Create one schema per feature and clear its tables between scenarios.
```
BeforeFeature:  CREATE SCHEMA feature_auth;
AfterFeature:   DROP SCHEMA feature_auth;
AfterScenario:  DELETE FROM all_tables;
```

**Pros:**

- Isolates features from each other
- Simpler than per-scenario schemas

**Cons:**

- ❌ **Scenarios within a feature share state** - Background data persists
- Still has race conditions between concurrent scenarios in the same feature
- Still requires table-clearing overhead

**Verdict: Better than the current approach, but still has issues.**

---

## Decision Outcome

**Chosen option: Schema-per-Scenario (Option 3)**

We will implement schema-per-scenario because it:

1. Provides **true isolation** for all database state
2. **Works with Gherkin Background** - Background runs in each scenario's schema
3. **Handles concurrent execution** - No race conditions
4. **Works with scenario transactions** - Scenarios can commit freely
5. Is **fast** - Schema operations are cheap

### Implementation Plan

**Phase 1: Foundation**

- Add scenario-aware schema management to the test server
- Implement schema creation/drop in BeforeScenario/AfterScenario hooks
- Handle `search_path` configuration for each scenario's database connection

**Phase 2: Connection Pooling**

- Configure the connection pool to respect the per-scenario `search_path`
- Each scenario gets isolated connections

**Phase 3: In-Memory State**

- Extend cleanup to handle JWT secrets (already implemented in suite.go)
- Add config reset capability

**Phase 4: Validation**

- Run the full test suite to identify ORM/schema issues
- Fix any hardcoded `public` schema references

### Schema Naming Convention

```
Schema name: test_{sha256(feature + scenario)[:8]}
```

Example:

- Feature: `auth`, Scenario: `Successful user authentication`
- Hash: `sha256("auth_Successful user authentication")[:8]` = `a3f7b2c1`
- Schema: `test_a3f7b2c1`

Benefits:

- Unique per scenario
- Consistent across test runs (the same scenario always maps to the same schema)
- Short (5-char `test_` prefix + 8 hex chars = 13 chars)
- Identifiable for debugging

## Pros and Cons Summary

| Aspect | Schema-per-Scenario | Current (Clear Tables) | Transaction Rollback |
|--------|---------------------|------------------------|----------------------|
| Isolation | ✅ Strong | ⚠️ Medium | ❌ Weak |
| Works with Background | ✅ Yes | ⚠️ Partial | ❌ No |
| Concurrency safe | ✅ Yes | ❌ No | ❌ No |
| Works with TX | ✅ Yes | ✅ Yes | ❌ No |
| Speed | ✅ Fast | ⚠️ Slow | ✅ Fast |
| DB privileges | ⚠️ Needs CREATE | ✅ None | ✅ None |
| Complexity | ⚠️ Medium | ✅ Low | ✅ Low |

## Links

* [ADR 0008: BDD Testing](adr/0008-bdd-testing.md) - Original BDD adoption decision
* [ADR 0024: BDD Test Organization and Isolation](adr/0024-bdd-test-organization-and-isolation.md) - Feature isolation strategy
* [Godog Documentation](https://github.com/cucumber/godog) - BDD framework specifics
* [PostgreSQL Schemas](https://www.postgresql.org/docs/current/ddl-schemas.html) - Schema management