dance-lessons-coach/adr/0025-bdd-scenario-isolation-strategies.md
Gabriel Radureau 25a20d4380 📝 docs: add ADR 0025 documenting BDD scenario isolation strategies
2026-04-11 07:57:46 +02:00


# ADR 0025: BDD Scenario Isolation Strategies
## Status
**Proposed** 🟡
## Context
As our BDD test suite grows, we're encountering **test pollution** issues where scenarios interfere with each other through shared state. This is particularly problematic for:
1. **Database state**: Scenarios create users, JWT secrets, config entries that persist across scenarios
2. **JWT secret rotation**: Multiple secrets accumulate, affecting subsequent scenario authentication
3. **Config file modifications**: Feature flag changes persist between tests
4. **Gherkin Background steps**: Data set up in Background is visible to all scenarios in the feature
Our current approach clears database tables after each scenario, but this has **race condition vulnerabilities** with concurrent scenario execution.
### Gherkin Background Consideration
Crucially, Gherkin's `Background` section runs **before each scenario** in a feature, not once before all scenarios. This means:
```gherkin
Feature: User registration

  Background:
    Given the database is empty
    And a default admin user exists

  Scenario: Register new user
    When I register user "alice"
    Then user "alice" should exist

  Scenario: Register duplicate user
    When I register user "alice"
    Then I should see error "user already exists"
```
Background steps are re-executed before every scenario, not once per feature. That makes the second scenario fragile: it only sees the "user already exists" error if "alice" leaks over from the first scenario, yet the re-run `Given the database is empty` step is supposed to wipe exactly that data. Whether it passes depends on execution order and on how thorough the cleanup is - Background cannot serve as one-time setup, and it does nothing to clean up what a previous scenario left behind.
## Decision Drivers
* **Isolation**: Each scenario must start with a clean slate
* **Performance**: Cleanup must be fast enough for CI/CD pipelines
* **Concurrency**: Must work with parallel scenario execution
* **Compatibility**: Must work with Gherkin Background steps
* **Maintainability**: Solution should be simple to understand and debug
## Considered Options
### Option 1: Transaction Rollback (Rejected ❌)
Wrap each scenario in a database transaction, rollback at the end.
```go
// BeforeScenario hook:
db.Exec("BEGIN")
// AfterScenario hook:
db.Exec("ROLLBACK")
```
**Pros:**
- Simple implementation
- Fast - transaction rollback is nearly instant
- No data cleanup needed
**Cons:**
- **Fails if scenario commits**: PostgreSQL has no true nested transactions - a `COMMIT` inside the scenario ends the outer transaction, so the hook's `ROLLBACK` has nothing left to undo
- Cannot handle non-database state (JWT secrets in memory, config files)
- Doesn't solve JWT secret pollution
**Verdict: Not viable** - Too many scenarios use database transactions internally.
---
### Option 2: Clear Tables in Public Schema (Current ✅/⚠️)
Delete all rows from all tables after each scenario.
```go
// AfterScenario hook:
db.Exec("DELETE FROM table1")
db.Exec("DELETE FROM table2")
// ... one statement per table, then reset sequences
```
**Pros:**
- Currently implemented
- Works with any scenario code
- Handles database state
**Cons:**
- ⚠️ **Race conditions**: Concurrent scenarios can interleave - Scenario A deletes data while Scenario B is still using it
- ⚠️ **Slow**: Must delete from all tables, reset sequences
- **Misses in-memory state**: JWT secrets, config changes persist
- **Doesn't handle Background**: Background data is shared across scenarios
**Verdict: Partially adequate** - Works for sequential execution but has parallel execution issues.
---
### Option 3: Schema-per-Scenario (Recommended ✅)
Create a unique PostgreSQL schema for each scenario, drop it after.
```go
// BeforeScenario hook:
schema := "test_" + sha256Hex(feature + ":" + scenario.Name)[:8]
db.Exec("CREATE SCHEMA " + schema)
db.Exec("SET search_path = " + schema + ", public")
// AfterScenario hook:
db.Exec("DROP SCHEMA " + schema + " CASCADE")
```
**Pros:**
- **True isolation**: Each scenario has its own database namespace
- **Works with transactions**: Scenario can commit freely - the entire schema is dropped afterwards
- **Works with Background**: Background runs in the scenario's schema, so its data is isolated
- **Fast**: Dropping a schema is a catalog-level operation, far cheaper than row-by-row deletes
- **Handles concurrent scenarios**: Different schemas = no conflicts
**Cons:**
- Requires `CREATE/DROP SCHEMA` database privileges in test environment
- Some ORMs may hardcode `public` schema - need to use `SET search_path` carefully
- Test DB must allow many schemas (typically fine for PostgreSQL)
- We need to handle `search_path` in connection pooling (each scenario needs its own connection)
**Implementation notes:**
- Prefix every test schema: `test_{hash}`
- Hash: `sha256(feature + ":" + scenario)[:8]` for consistency across runs
- Execute Background steps in the scenario's schema context
- Set `search_path` at the connection level, not globally
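A minimal sketch of what these hooks could look like using only `database/sql`; the function names (`beforeScenario`, `afterScenario`, `searchPathSQL`) and the way the schema name is passed in are assumptions for illustration, not our actual test-server API:

```go
package main

import (
	"context"
	"database/sql"
	"fmt"
)

// searchPathSQL builds the statement that scopes a connection to the
// scenario's schema, keeping public visible for extensions.
func searchPathSQL(schema string) string {
	return fmt.Sprintf("SET search_path = %s, public", schema)
}

// beforeScenario creates the scenario's schema on a dedicated
// connection, so the search_path setting cannot leak to other pooled
// connections. Schema names come from our own hash convention, so
// plain string concatenation is acceptable here.
func beforeScenario(ctx context.Context, db *sql.DB, schema string) (*sql.Conn, error) {
	conn, err := db.Conn(ctx)
	if err != nil {
		return nil, err
	}
	if _, err := conn.ExecContext(ctx, "CREATE SCHEMA "+schema); err != nil {
		conn.Close()
		return nil, err
	}
	if _, err := conn.ExecContext(ctx, searchPathSQL(schema)); err != nil {
		conn.Close()
		return nil, err
	}
	return conn, nil
}

// afterScenario drops the scenario's schema (and everything in it)
// and releases the dedicated connection back to the pool.
func afterScenario(ctx context.Context, conn *sql.Conn, schema string) error {
	defer conn.Close()
	_, err := conn.ExecContext(ctx, "DROP SCHEMA "+schema+" CASCADE")
	return err
}
```

In the real suite these free functions would be registered through godog's scenario hooks; keeping them framework-agnostic makes the search_path handling easier to test on its own.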
---
### Option 4: Database-per-Feature ⚠️
Create a separate database for each feature file.
```go
// BeforeFeature hook:
db.Exec("CREATE DATABASE feature_auth")
// AfterFeature hook:
db.Exec("DROP DATABASE feature_auth")
```
**Pros:**
- Strong isolation between features
- Simple implementation
**Cons:**
- **Doesn't isolate scenarios within a feature** - Background data shared across scenarios
- Database creation is slower than schema creation
- Harder to manage in CI (more databases to create/cleanup)
- Still need table clearing between scenarios within a feature
**Verdict: Insufficient** - Doesn't solve intra-feature pollution.
---
### Option 5: Schema-per-Feature + Table Clearing per Scenario ⚠️
Create one schema per feature, clear tables between scenarios.
```go
// BeforeFeature hook:
db.Exec("CREATE SCHEMA feature_auth")
// AfterFeature hook:
db.Exec("DROP SCHEMA feature_auth CASCADE")
// AfterScenario hook:
db.Exec("DELETE FROM ...") // every table in the feature schema
```
**Pros:**
- Isolates features from each other
- Simpler than per-scenario schemas
**Cons:**
- **Scenarios within a feature share state** - Background data persists
- Still has race conditions with concurrent scenarios in same feature
- Requires table clearing overhead
**Verdict: Better than current but still has issues**.
---
## Decision Outcome
**Chosen option: Schema-per-Scenario + In-Memory State Reset (Option 3 Enhanced)**
We will implement schema-per-scenario because it:
1. Provides **true isolation** for all database state
2. **Works with Gherkin Background** - Background runs in each scenario's schema
3. **Handles concurrent execution** - No race conditions
4. **Works with scenario transactions** - Scenarios can commit freely
5. Is **fast** - Schema operations are cheap
**However, we discovered a critical limitation:** PostgreSQL schemas only isolate **database tables**. In-memory state (application-level caches, user stores, JWT secret managers) **persists across scenarios** because they're stored in the shared `sharedServer` Go instance. Schema isolation does NOT solve this.
### Enhanced Strategy: Multi-Layer Isolation
To achieve **complete scenario isolation**, we need isolation at every layer - database, in-memory state, and external caches:
| Layer | Component | Strategy | Status |
|-------|-----------|----------|--------|
| DB | PostgreSQL tables | Schema-per-scenario | ✅ Implemented |
| Memory | User store | Reset/clear between scenarios | ⚠️ TODO |
| Memory | JWT secrets | Reset to initial state | ✅ Implemented |
| Memory | Auth cache | Reset/clear between scenarios | ⚠️ TODO |
| Cache | Redis/Memcached | Key prefix with schema hash | ⚠️ TODO |
### Key Insight: Cache and In-Memory Store Isolation
**For caches (Redis, Memcached, in-process):**
- Use **schema hash as key prefix/suffix**: `cache_key_{schema_hash}` or `{schema_hash}_cache_key`
- This ensures each scenario gets isolated cache namespace
- Works even with external cache services
- Consistent with schema isolation philosophy
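One way to enforce the prefix convention is to wrap whatever cache client is in use so scoping cannot be forgotten at call sites. A sketch under assumed names - the `Cache` interface and `scopedCache` type are illustrative, not existing types in our codebase:

```go
package main

// Cache is a minimal stand-in for any key-value cache client
// (Redis, Memcached, or an in-process map).
type Cache interface {
	Get(key string) (string, bool)
	Set(key, value string)
}

// mapCache is a toy in-process implementation used for the sketch.
type mapCache map[string]string

func (m mapCache) Get(key string) (string, bool) { v, ok := m[key]; return v, ok }
func (m mapCache) Set(key, value string)         { m[key] = value }

// scopedCache namespaces every key with the scenario's schema hash,
// per the {schema_hash}_{key} convention, so scenarios sharing one
// cache backend never see each other's entries.
type scopedCache struct {
	hash  string
	inner Cache
}

func (c scopedCache) key(k string) string { return c.hash + "_" + k }

func (c scopedCache) Get(k string) (string, bool) { return c.inner.Get(c.key(k)) }
func (c scopedCache) Set(k, v string)             { c.inner.Set(c.key(k), v) }
```

Step definitions would receive a `scopedCache` built in BeforeScenario, so production code paths stay unchanged.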
**For in-memory stores (user repository, etc.):**
- Add `Reset()` methods that clear all state
- Call in `AfterScenario` alongside schema teardown
- Or use schema-prefix approach for shared stores
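The `Reset()` idea can be sketched as follows; `UserStore` is a hypothetical stand-in for our in-memory repositories, not the actual type:

```go
package main

import "sync"

// UserStore is an illustrative in-memory store with the Reset()
// method proposed above.
type UserStore struct {
	mu    sync.Mutex
	users map[string]string // name -> email
}

func NewUserStore() *UserStore {
	return &UserStore{users: make(map[string]string)}
}

// Add registers a user in memory.
func (s *UserStore) Add(name, email string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.users[name] = email
}

// Count reports how many users are currently stored.
func (s *UserStore) Count() int {
	s.mu.Lock()
	defer s.mu.Unlock()
	return len(s.users)
}

// Reset drops all state; called from AfterScenario alongside the
// schema drop and JWT secret reset.
func (s *UserStore) Reset() {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.users = make(map[string]string)
}
```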
### Alternative Approach: Background Explicit State Setup
**Considered but rejected:** Adding explicit "Given no user X exists" steps or heavy Background sections.
**Pros:** More readable, explicit about state
**Cons:**
- Error-prone (must remember for every entity)
- Verbose (many Given steps)
- Doesn't scale with many entities
- Still has race conditions with concurrent scenarios
**Verdict:** Automated cleanup (schema drop + memory reset) is more reliable than manual Background setup.
### Implementation Plan
**Phase 1: Foundation (✅ Complete)**
- Add scenario-aware schema management to test server
- Implement schema creation/drop in BeforeScenario/AfterScenario hooks
- Handle `search_path` configuration for each scenario's database connection
**Phase 2: In-Memory State Reset (🟡 TODO)**
- Add `ResetUsers()` method to clear in-memory user store
- Add `ResetCache()` method for auth/rateLimiting caches
- Call these in AfterScenario alongside JWT secret reset
- **Cache key strategy**: prefix every cache operation with `{schema_hash}_`, matching the schema naming convention
**Phase 3: Connection Pooling**
- Configure connection pool to respect per-scenario `search_path`
- Each scenario gets isolated connections
**Phase 4: Validation**
- Run full test suite to verify complete isolation
- Fix any hardcoded `public` schema references
### Schema Naming Convention
```
Schema name: test_{sha256(feature:scenario)[:8]}
Cache key prefix: {sha256(feature:scenario)[:8]}_
```
Example:
- Feature: `auth`, Scenario: `Successful user authentication`
- Hash: `sha256("auth:Successful user authentication")[:8]` = `a3f7b2c1`
- Schema: `test_a3f7b2c1`
- Cache key: `a3f7b2c1_user:newuser` instead of just `user:newuser`
Benefits:
- Unique per scenario
- Consistent across test runs (same scenario = same hash)
- Short (8 chars) - efficient for cache keys
- Identifiable for debugging
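A sketch of this derivation in Go, assuming `[:8]` means the first 8 hex characters of the digest (matching the 8-character examples above, whose concrete hash values are illustrative); the helper name `schemaName` is hypothetical:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
)

// schemaName implements the convention test_{sha256(feature:scenario)[:8]}:
// hash the "feature:scenario" string, hex-encode, keep 8 characters.
func schemaName(feature, scenario string) string {
	sum := sha256.Sum256([]byte(feature + ":" + scenario))
	return "test_" + hex.EncodeToString(sum[:])[:8]
}
```

Because the hash is derived only from the feature and scenario names, the same scenario maps to the same schema on every run, which keeps failures reproducible.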
## Pros and Cons Summary
| Aspect | Schema-per-Scenario | Current (Clear Tables) | Transaction Rollback |
|--------|---------------------|----------------------|-------------------|
| Isolation | ✅ Strong | ⚠️ Medium | ❌ Weak |
| Works with Background | ✅ Yes | ⚠️ Partial | ❌ No |
| Concurrency safe | ✅ Yes | ❌ No | ❌ No |
| Works with TX | ✅ Yes | ✅ Yes | ❌ No |
| Speed | ✅ Fast | ⚠️ Slow | ✅ Fast |
| DB privileges | ⚠️ Needs CREATE | ✅ None | ✅ None |
| Complexity | ⚠️ Medium | ✅ Low | ✅ Low |
## Links
* [ADR 0008: BDD Testing](adr/0008-bdd-testing.md) - Original BDD adoption decision
* [ADR 0024: BDD Test Organization and Isolation](adr/0024-bdd-test-organization-and-isolation.md) - Feature isolation strategy
* [Godog Documentation](https://github.com/cucumber/godog) - BDD framework specifics
* [PostgreSQL Schemas](https://www.postgresql.org/docs/current/ddl-schemas.html) - Schema management