🧪 test: add JWT secret rotation BDD scenarios and step implementations (#12)
All checks were successful
CI/CD Pipeline / Build Docker Cache (push) Successful in 9s
CI/CD Pipeline / CI Pipeline (push) Successful in 4m15s
CI/CD Pipeline / Trigger Docker Push (push) Has been skipped

 merge: implement JWT secret rotation with BDD scenario isolation

- Implement JWT secret rotation mechanism (closes #8)
- Add per-scenario state isolation for BDD tests (closes #14)
- Validate password reset workflow via BDD tests (closes #7)
- Fix port conflicts in test validation
- Add state tracer for debugging test execution
- Document BDD isolation strategies in ADR 0025
- Fix PostgreSQL configuration environment variables

Generated by Mistral Vibe.
Co-Authored-By: Mistral Vibe <vibe@mistral.ai>
Co-authored-by: Gabriel Radureau <arcodange@gmail.com>
Co-committed-by: Gabriel Radureau <arcodange@gmail.com>
This commit was merged in pull request #12.
Committed by arcodange on 2026-04-11 17:56:45 +02:00
parent 5de703468f
commit 5eec64e5e8
66 changed files with 10025 additions and 701 deletions

features/BDD_TAGS.md (new file)

@@ -0,0 +1,346 @@
# BDD Test Tags Documentation
This document describes the tagging system used in the dance-lessons-coach BDD tests for selective test execution.
## Tag Categories
### Feature Tags
Used to categorize tests by feature area:
- `@auth` - Authentication and user management tests
- `@config` - Configuration and hot reloading tests
- `@greet` - Greeting service tests
- `@health` - Health check and monitoring tests
- `@jwt` - JWT secret rotation and retention tests
### Priority Tags
Used to categorize tests by importance:
- `@smoke` - Basic smoke tests that verify core functionality
- `@critical` - Critical path tests that must always pass
- `@basic` - Basic functionality tests
- `@advanced` - Advanced or edge case scenarios
- `@nice_to_have` - Optional features that would be nice to have but aren't critical
### Component Tags
Used to categorize tests by system component:
- `@api` - API endpoint tests
- `@v2` - Version 2 API tests
- `@database` - Database interaction tests
- `@security` - Security-related tests
### Exclusion Tags
Used to exclude tests from execution:
- `@flaky` - Tests that are unstable or intermittently fail
- `@todo` - Tests with pending step implementations
- `@skip` - Tests that should be skipped entirely
### Nice-to-Have Tag
The `@nice_to_have` tag is used to mark scenarios that test optional features or enhancements. These are features that would be beneficial to have but aren't critical for the core functionality of the system.
**Usage:**
- Add `@nice_to_have` to scenarios testing optional features
- These scenarios are typically excluded from critical path testing
- Useful for marking "stretch goal" functionality
**Example:**
```gherkin
@nice_to_have @greet
Scenario: Greeting with custom formatting options
Given the server is running
When I request a greeting with bold formatting
Then the response should contain HTML bold tags
```
### Work In Progress Tag
Used to override exclusions for active development:
- `@wip` - Work In Progress - overrides exclusion tags to allow focused development
**Usage:** Add `@wip` to scenarios you're actively working on, even if they have other exclusion tags like `@todo` or `@skip`. The `@wip` tag takes precedence and allows the scenario to run.
**Example:**
```gherkin
@todo @wip
Scenario: JWT authentication with multiple secrets
Given the server is running with multiple JWT secrets
When I authenticate with valid credentials
Then I should receive a valid JWT token
```
### Command-Line Tag Override
You can override the default tag filtering by setting the `GODOG_TAGS` environment variable when running tests.
**Usage:**
```bash
# Run only @wip scenarios
GODOG_TAGS="@wip" go test ./features/jwt/...
# Run smoke tests only
GODOG_TAGS="@smoke" go test ./features/...
# Run specific combination
GODOG_TAGS="@jwt && ~@todo" go test ./features/...
# Combine with other environment variables
DLC_DATABASE_HOST=localhost GODOG_TAGS="@wip" go test ./features/jwt/...
```
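**Implementation sketch (illustrative):** a rough sketch of how the override could be wired into godog. The `resolveTags` and `newOptions` helpers are assumptions, not the actual `testsetup` code, but the fallback string matches the default filter documented under "Default Behavior" below.
```go
package testsetup

import (
    "os"

    "github.com/cucumber/godog"
)

// resolveTags returns the tag expression handed to godog: the GODOG_TAGS
// environment variable when set, otherwise the default exclusion filter.
func resolveTags() string {
    if tags := os.Getenv("GODOG_TAGS"); tags != "" {
        return tags
    }
    return "~@flaky && ~@todo && ~@skip"
}

// newOptions builds godog options with the resolved tag filter applied.
func newOptions(format string, paths []string) *godog.Options {
    return &godog.Options{
        Format: format,
        Paths:  paths,
        Tags:   resolveTags(),
    }
}
```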
### Test Randomization Control
You can control test execution order using the `GODOG_RANDOM_SEED` environment variable.
**Usage:**
```bash
# Use random test order (default)
GODOG_RANDOM_SEED="" go test ./features/
# Use fixed seed for reproducible test runs
GODOG_RANDOM_SEED=17925 go test ./features/
# Combine with tag filtering
GODOG_RANDOM_SEED=17925 GODOG_TAGS="@wip" go test ./features/
# Debug specific test failures by reproducing exact execution order
GODOG_RANDOM_SEED=17925 DLC_DATABASE_HOST=localhost go test ./features/jwt/
```
**Benefits:**
- **Reproducibility**: Same seed produces same test order
- **Debugging**: Easily reproduce failed test runs
- **CI/CD**: Set fixed seeds for consistent test execution
- **Backward compatible**: Defaults to random order when not specified
**Example from test output:**
```
30 scenarios (11 passed, 19 failed)
147 steps (104 passed, 19 failed, 24 skipped)
4.474215346s
Randomized with seed: 17925
```
To reproduce this exact test run:
```bash
GODOG_RANDOM_SEED=17925 go test ./features/
```
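**Implementation sketch (illustrative):** one way the seed could map onto godog's `Randomize` option. `seedFromEnv` is a hypothetical helper, not the project's exact code; godog treats `-1` as "pick a random seed" and a positive value as a fixed seed.
```go
package testsetup

import (
    "os"
    "strconv"
)

// seedFromEnv maps GODOG_RANDOM_SEED onto godog's Options.Randomize field:
// unset or empty -> -1 (godog picks a seed and prints it in the summary),
// a numeric value -> that fixed seed for reproducible scenario ordering.
func seedFromEnv() int64 {
    raw := os.Getenv("GODOG_RANDOM_SEED")
    if raw == "" {
        return -1
    }
    seed, err := strconv.ParseInt(raw, 10, 64)
    if err != nil {
        return -1 // unparseable input falls back to a random seed
    }
    return seed
}
```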
### Random Port Selection (Default Behavior)
By default, BDD tests use **random ports** (10000-19999) to prevent port conflicts during parallel execution. This ensures tests can run reliably in CI/CD pipelines and when executed multiple times.
**Benefits:**
- ✅ No port conflicts in parallel test execution
- ✅ Safe for repeated test runs
- ✅ Better for CI/CD environments
**Disable random ports (not recommended):**
```bash
FIXED_TEST_PORT=true go test ./features/...
```
**Force specific port (debugging only):**
```bash
# Create a test config file with fixed port
echo "server:
  port: 9191" > test-config.yaml
FEATURE=debug FIXED_TEST_PORT=true go test ./features/...
```
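**Implementation sketch (illustrative):** the port-picking logic could look roughly like this (the helper name is an assumption, not the actual `testsetup` code). Binding briefly before returning guards against handing out a port that is already in use.
```go
package testsetup

import (
    "fmt"
    "math/rand"
    "net"
)

// randomFreePort picks a port in the 10000-19999 range and verifies it is
// actually free by binding to it briefly before handing it to the test server.
func randomFreePort() (int, error) {
    for attempts := 0; attempts < 50; attempts++ {
        port := 10000 + rand.Intn(10000)
        ln, err := net.Listen("tcp", fmt.Sprintf("127.0.0.1:%d", port))
        if err != nil {
            continue // port already taken, try another one
        }
        ln.Close()
        return port, nil
    }
    return 0, fmt.Errorf("no free port found in range 10000-19999")
}
```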
### Test Validation Process
To ensure test suite stability, follow this validation process:
**Validation Command:**
```bash
# Clean cache and run all tests 20 times
echo "🧪 Validating test suite stability..."
for i in {1..20}; do
  echo "Run $i/20..."
  go clean -testcache
  if ! go test ./... > /dev/null 2>&1; then
    echo "❌ Test run $i failed"
    go test ./... -v
    exit 1
  fi
done
echo "✅ All 20 test runs passed successfully!"
```
**Failure Handling:**
- If any test fails during validation, mark it as `@wip` and investigate
- Use `@flaky` tag for intermittently failing tests
- Document the issue in the test scenario comments
**Success Criteria:**
- ✅ 100% pass rate across 20 consecutive runs
- ✅ No undefined/pending steps
- ✅ No race conditions or port conflicts
- ✅ Consistent execution time
**CI/CD Integration:**
```yaml
- name: Validate Test Suite
  run: |
    echo "🧪 Running 20 validation runs..."
    for i in {1..20}; do
      echo "Run $i/20"
      go clean -testcache
      go test ./... || exit 1
    done
    echo "✅ Test suite validated successfully"
```
### Stop On Failure Control
You can control whether tests stop on first failure using the `GODOG_STOP_ON_FAILURE` environment variable.
**Usage:**
```bash
# Stop on first failure (strict mode)
GODOG_STOP_ON_FAILURE="true" go test ./features/jwt/...
# Continue after failures (lenient mode)
GODOG_STOP_ON_FAILURE="false" go test ./features/jwt/...
# Combine with tag filtering
GODOG_TAGS="@wip" GODOG_STOP_ON_FAILURE="true" go test ./features/jwt/...
```
**Default Behavior:**
- If `GODOG_TAGS` is not set, the test uses the default tag filter: `~@flaky && ~@todo && ~@skip`
- If `GODOG_STOP_ON_FAILURE` is not set, each feature uses its default:
  - `jwt`, `greet`, `auth`, `health`: `true` (stop on failure)
  - `config` and the all-features suite: `false` (continue after failures)
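**Implementation sketch (illustrative):** a sketch of how the flag could resolve into godog's `StopOnFailure` option; the `stopOnFailure` helper and its signature are assumptions, not the actual `testsetup` implementation.
```go
package testsetup

import "os"

// stopOnFailure resolves GODOG_STOP_ON_FAILURE, falling back to the
// per-feature default listed above when the variable is unset.
func stopOnFailure(featureDefault bool) bool {
    switch os.Getenv("GODOG_STOP_ON_FAILURE") {
    case "true":
        return true
    case "false":
        return false
    default:
        return featureDefault // true for jwt/greet/auth/health, false for config and all-features runs
    }
}
```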
## Usage Examples
### Running Smoke Tests
```bash
# Run all smoke tests
godog --tags=@smoke features/
# Run smoke tests for specific feature
godog --tags=@smoke features/auth/
```
### Running Critical Tests
```bash
# Run all critical tests
godog --tags=@critical features/
# Run critical health tests (both tags required)
godog --tags="@critical && @health" features/
```
### Running Feature-Specific Tests
```bash
# Run all auth tests
godog --tags=@auth features/
# Run v2 API tests
godog --tags=@v2 features/
```
### Combining Tags
```bash
# Run scenarios tagged @smoke, @auth, or @health (comma combines tags with OR)
godog --tags=@smoke,@auth,@health features/
# Run critical API tests (use && to require both tags)
godog --tags="@critical && @api" features/
```
## Tagging Conventions
1. **Feature tags** should be applied at the feature level
2. **Priority tags** should be applied at the scenario level
3. **Component tags** should be applied at the scenario level
4. **Multiple tags** can be applied to a single scenario
### Example Feature File
```gherkin
@health @smoke
Feature: Health Endpoint
The health endpoint should indicate server status
@basic @critical
Scenario: Health check returns healthy status
Given the server is running
When I request the health endpoint
Then the response should be "{\"status\":\"healthy\"}"
@advanced @api
Scenario: Health check with authentication
Given the server is running with auth enabled
When I request the health endpoint with valid token
Then the response should be "{\"status\":\"healthy\"}"
```
## Test Execution Scripts
### Feature-Specific Testing
```bash
# Test specific feature
./scripts/test-feature.sh greet
# Test with specific tags
./scripts/test-by-tag.sh @smoke greet
```
### Tag-Based Testing
```bash
# Run smoke tests for all features
./scripts/test-by-tag.sh @smoke
# Run critical auth tests
./scripts/test-by-tag.sh @critical auth
```
## CI/CD Integration
### Smoke Test Pipeline
```yaml
- name: Run Smoke Tests
  run: godog --tags=@smoke features/
```
### Critical Path Testing
```yaml
- name: Run Critical Tests
  run: godog --tags=@critical features/
```
### Feature-Specific Testing
```yaml
- name: Test Auth Feature
  run: ./scripts/test-feature.sh auth
```
## Best Practices
1. **Tag consistently** - Apply tags consistently across similar scenarios
2. **Prioritize tests** - Use priority tags to identify critical tests
3. **Document tags** - Keep this documentation updated with new tags
4. **Review tags** - Regularly review tag usage to ensure relevance
5. **CI/CD optimization** - Use tags to optimize CI/CD pipeline execution times
## Tag Reference
| Tag | Purpose | Example Usage |
|-----|---------|--------------|
| `@smoke` | Smoke tests | `@smoke` on critical features |
| `@critical` | Critical path | `@critical` on essential scenarios |
| `@basic` | Basic functionality | `@basic` on standard scenarios |
| `@advanced` | Advanced scenarios | `@advanced` on edge cases |
| `@nice_to_have` | Optional features | `@nice_to_have` on stretch goal scenarios |
| `@auth` | Authentication | `@auth` on auth features |
| `@config` | Configuration | `@config` on config scenarios |
| `@api` | API endpoints | `@api` on endpoint tests |
| `@v2` | V2 API | `@v2` on version 2 tests |
| `@flaky` | Exclude flaky tests | `@flaky` on unstable scenarios |
| `@todo` | Exclude pending tests | `@todo` on unimplemented scenarios |
| `@skip` | Exclude tests entirely | `@skip` on disabled scenarios |
| `@wip` | Work in progress | `@wip` on actively developed scenarios |
## Future Enhancements
- **Performance tags** - `@fast`, `@slow` for performance categorization
- **Environment tags** - `@ci`, `@local` for environment-specific tests
- **Risk tags** - `@high-risk`, `@low-risk` for risk-based testing
- **Automated tag validation** - Script to validate tag usage consistency
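**Sketch (illustrative):** the last item above could start as small as the following standalone tool (hypothetical, not part of this change). It walks `features/`, collects every `@tag`, and reports anything not documented in this file.
```go
package main

import (
    "bufio"
    "fmt"
    "os"
    "path/filepath"
    "strings"
)

// known lists every tag documented above; anything else gets reported.
var known = map[string]bool{
    "@auth": true, "@config": true, "@greet": true, "@health": true, "@jwt": true,
    "@smoke": true, "@critical": true, "@basic": true, "@advanced": true, "@nice_to_have": true,
    "@api": true, "@v2": true, "@database": true, "@security": true,
    "@flaky": true, "@todo": true, "@skip": true, "@wip": true,
}

func main() {
    _ = filepath.WalkDir("features", func(path string, d os.DirEntry, err error) error {
        if err != nil || d.IsDir() || !strings.HasSuffix(path, ".feature") {
            return err
        }
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()
        sc := bufio.NewScanner(f)
        for line := 1; sc.Scan(); line++ {
            for _, field := range strings.Fields(sc.Text()) {
                if strings.HasPrefix(field, "@") && !known[field] {
                    fmt.Printf("%s:%d: unknown tag %s\n", path, line, field)
                }
            }
        }
        return nil
    })
}
```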


@@ -0,0 +1,16 @@
package auth

import (
    "testing"

    "dance-lessons-coach/pkg/bdd/testsetup"
)

func TestAuthBDD(t *testing.T) {
    config := testsetup.NewFeatureConfig("auth", "progress", false)
    suite := testsetup.CreateTestSuite(t, config, "dance-lessons-coach BDD Tests - Auth Feature")
    if suite.Run() != 0 {
        t.Fatal("non-zero status returned, failed to run auth BDD tests")
    }
}


@@ -3,22 +3,29 @@ package features
 import (
     "testing"

-    "dance-lessons-coach/pkg/bdd"
-    "github.com/cucumber/godog"
+    "dance-lessons-coach/pkg/bdd/testsetup"
 )

 func TestBDD(t *testing.T) {
-    suite := godog.TestSuite{
-        Name:                 "dance-lessons-coach BDD Tests",
-        TestSuiteInitializer: bdd.InitializeTestSuite,
-        ScenarioInitializer:  bdd.InitializeScenario,
-        Options: &godog.Options{
-            Format:   "progress",
-            Paths:    []string{"."},
-            TestingT: t,
-        },
-    }
+    // Get feature name from environment variable or default to all features
+    feature := testsetup.GetFeatureFromEnv()
+    var suiteName string
+    var paths []string
+    if feature == "" {
+        // Run all features
+        suiteName = "dance-lessons-coach BDD Tests - All Features"
+        paths = testsetup.GetAllFeaturePaths()
+    } else {
+        // Run specific feature
+        suiteName = "dance-lessons-coach BDD Tests - " + feature + " Feature"
+        paths = []string{feature}
+    }
+    config := testsetup.NewMultiFeatureConfig(paths, "progress", false)
+    suite := testsetup.CreateMultiFeatureTestSuite(t, config, suiteName)
     if suite.Run() != 0 {
         t.Fatal("non-zero status returned, failed to run BDD tests")
     }


@@ -0,0 +1,83 @@
# features/config_hot_reloading.feature
Feature: Config Hot Reloading
The system should support selective hot reloading of configuration changes
@flaky
Scenario: Hot reloading logging level changes
Given the server is running with config file monitoring enabled
When I update the logging level to "debug" in the config file
Then the logging level should be updated without restart
And debug logs should appear in the output
@flaky
Scenario: Hot reloading feature flags
Given the server is running with config file monitoring enabled
And the v2 API is disabled
When I enable the v2 API in the config file
Then the v2 API should become available without restart
And v2 API requests should succeed
@flaky
Scenario: Hot reloading telemetry sampling settings
Given the server is running with config file monitoring enabled
And telemetry is enabled
When I update the sampler type to "parentbased_traceidratio" in the config file
And I set the sampler ratio to "0.5" in the config file
Then the telemetry sampling should be updated without restart
And the new sampling settings should be applied
@flaky
Scenario: Hot reloading JWT TTL
Given the server is running with config file monitoring enabled
And JWT TTL is set to 1 hour
When I update the JWT TTL to 2 hours in the config file
Then the JWT TTL should be updated without restart
And new JWT tokens should have the updated expiration
@flaky
Scenario: Attempting to hot reload non-reloadable settings should be ignored
Given the server is running with config file monitoring enabled
When I update the server port to 9090 in the config file
Then the server port should remain unchanged
And the server should continue running on the original port
And a warning should be logged about ignored configuration change
@flaky
Scenario: Invalid configuration changes should be handled gracefully
Given the server is running with config file monitoring enabled
When I update the logging level to "invalid_level" in the config file
Then the logging level should remain unchanged
And an error should be logged about invalid configuration
And the server should continue running normally
@flaky
Scenario: Config file monitoring should handle file deletion gracefully
Given the server is running with config file monitoring enabled
When I delete the config file
Then the server should continue running with last known good configuration
And a warning should be logged about missing config file
@flaky
Scenario: Config file monitoring should handle file recreation
Given the server is running with config file monitoring enabled
And I have deleted the config file
When I recreate the config file with valid configuration
Then the server should reload the configuration
And the new configuration should be applied
@flaky
Scenario: Multiple rapid configuration changes should be handled
Given the server is running with config file monitoring enabled
When I rapidly update the logging level multiple times
Then all changes should be processed in order
And the final configuration should be applied
And no configuration changes should be lost
@flaky
Scenario: Configuration changes should be audited
Given the server is running with config file monitoring enabled
And audit logging is enabled
When I update the logging level to "info" in the config file
Then an audit log entry should be created
And the audit entry should contain the previous and new values
And the audit entry should contain the timestamp of the change
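
The scenarios above exercise selective hot reloading. For orientation only, a minimal fsnotify-based watcher is sketched below; the `applyReloadable` callback is an assumption rather than the project's actual config API, and it is expected to apply only runtime-safe fields (log level, feature flags, JWT TTL, sampling) while ignoring non-reloadable ones such as the server port.
```go
package config

import (
    "log"

    "github.com/fsnotify/fsnotify"
)

// watchConfig re-applies the file on every write event; reload errors are
// logged and the last known good configuration stays in effect, matching
// the scenarios above.
func watchConfig(path string, applyReloadable func(path string) error) error {
    watcher, err := fsnotify.NewWatcher()
    if err != nil {
        return err
    }
    go func() {
        defer watcher.Close()
        for {
            select {
            case ev, ok := <-watcher.Events:
                if !ok {
                    return
                }
                if ev.Op&fsnotify.Write != 0 {
                    if err := applyReloadable(path); err != nil {
                        log.Printf("config reload ignored: %v", err)
                    }
                }
            case werr, ok := <-watcher.Errors:
                if !ok {
                    return
                }
                log.Printf("config watch error: %v", werr)
            }
        }
    }()
    return watcher.Add(path)
}
```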


@@ -0,0 +1,16 @@
package config

import (
    "testing"

    "dance-lessons-coach/pkg/bdd/testsetup"
)

func TestConfigBDD(t *testing.T) {
    config := testsetup.NewFeatureConfig("config", "progress", false)
    suite := testsetup.CreateTestSuite(t, config, "dance-lessons-coach BDD Tests - Config Feature")
    if suite.Run() != 0 {
        t.Fatal("non-zero status returned, failed to run config BDD tests")
    }
}


@@ -1,17 +1,21 @@
# features/greet.feature
@greet @smoke
Feature: Greet Service
The greet service should return appropriate greetings
@basic
Scenario: Default greeting
Given the server is running
When I request the default greeting
Then the response should be "{\"message\":\"Hello world!\"}"
@basic
Scenario: Personalized greeting
Given the server is running
When I request a greeting for "John"
Then the response should be "{\"message\":\"Hello John!\"}"
@v2 @api
Scenario: v2 greeting with JSON POST request
Given the server is running with v2 enabled
When I send a POST request to v2 greet with name "John"


@@ -0,0 +1,30 @@
package greet

import (
    "os"
    "testing"

    "dance-lessons-coach/pkg/bdd/testsetup"
)

func TestGreetBDD(t *testing.T) {
    // Test suite with v2 disabled - run non-v2 scenarios only
    t.Run("v1", func(t *testing.T) {
        os.Setenv("GODOG_TAGS", "~@v2 && ~@skip")
        config := testsetup.NewFeatureConfig("greet", "progress", false)
        suite := testsetup.CreateTestSuite(t, config, "dance-lessons-coach BDD Tests - Greet Feature v1")
        if suite.Run() != 0 {
            t.Fatal("non-zero status returned, failed to run greet BDD tests with v2 disabled")
        }
    })

    // Test suite with v2 enabled - run v2 scenarios only
    t.Run("v2", func(t *testing.T) {
        os.Setenv("GODOG_TAGS", "@v2 && ~@skip")
        config := testsetup.NewFeatureConfig("greet", "progress", false)
        suite := testsetup.CreateTestSuite(t, config, "dance-lessons-coach BDD Tests - Greet Feature v2")
        if suite.Run() != 0 {
            t.Fatal("non-zero status returned, failed to run greet BDD tests with v2 enabled")
        }
    })
}


@@ -1,7 +1,9 @@
# features/health.feature
@health @smoke @critical
Feature: Health Endpoint
The health endpoint should indicate server status
@basic @critical
Scenario: Health check returns healthy status
Given the server is running
When I request the health endpoint


@@ -0,0 +1,16 @@
package health

import (
    "testing"

    "dance-lessons-coach/pkg/bdd/testsetup"
)

func TestHealthBDD(t *testing.T) {
    config := testsetup.NewFeatureConfig("health", "progress", false)
    suite := testsetup.CreateTestSuite(t, config, "dance-lessons-coach BDD Tests - Health Feature")
    if suite.Run() != 0 {
        t.Fatal("non-zero status returned, failed to run health BDD tests")
    }
}


@@ -0,0 +1,181 @@
# features/jwt_secret_retention.feature
Feature: JWT Secret Retention Policy
As a system administrator
I want automatic cleanup of expired JWT secrets
So that we can maintain security while ensuring system performance
Background:
Given the server is running with JWT secret retention configured
And the default JWT TTL is 24 hours
And the retention factor is 2.0
And the maximum retention is 72 hours
Scenario: Automatic cleanup of expired secrets
Given a primary JWT secret exists
And I add a secondary JWT secret with 1 hour expiration
When I wait for the retention period to elapse
Then the expired secondary secret should be automatically removed
And the primary secret should remain active
And I should see cleanup event in logs
Scenario: Secret retention based on TTL factor
Given the JWT TTL is set to 2 hours
And the retention factor is 3.0
When I add a new JWT secret
Then the secret should expire after 6 hours
And the retention period should be 6 hours
Scenario: Maximum retention period enforcement
Given the JWT TTL is set to 72 hours
And the retention factor is 3.0
And the maximum retention is 72 hours
When I add a new JWT secret
Then the retention period should be capped at 72 hours
And not exceed the maximum retention limit
Scenario: Cleanup preserves primary secret
Given a primary JWT secret exists
And the primary secret is older than retention period
When the cleanup job runs
Then the primary secret should not be removed
And the primary secret should remain active
@todo
Scenario: Multiple secrets with different ages
Given I have 3 JWT secrets of different ages
And secret A is 1 hour old (within retention)
And secret B is 50 hours old (expired)
And secret C is the primary secret
When the cleanup job runs
Then secret A should be retained
And secret B should be removed
And secret C should be retained as primary
@todo
Scenario: Cleanup frequency configuration
Given the cleanup interval is set to 30 minutes
When I add an expired JWT secret
Then it should be removed within 30 minutes
And I should see cleanup events every 30 minutes
@todo
Scenario: Token validation with expired secret
Given a user "retentionuser" exists with password "testpass123"
And I authenticate with username "retentionuser" and password "testpass123"
And I receive a valid JWT token signed with current secret
When I wait for the secret to expire
And I try to validate the expired token
Then the token validation should fail
And I should receive "invalid_token" error
@todo
Scenario: Graceful rotation during retention period
Given a user "gracefuluser" exists with password "testpass123"
And I authenticate with username "gracefuluser" and password "testpass123"
And I receive a valid JWT token signed with primary secret
When I add a new secondary secret and rotate to it
And I authenticate again with username "gracefuluser" and password "testpass123"
Then I should receive a new token signed with secondary secret
And the old token should still be valid during retention period
And both tokens should work until retention period expires
Scenario: Configuration validation
Given I set retention factor to 0.5
When I try to start the server
Then I should receive configuration validation error
And the error should mention "retention_factor must be >= 1.0"
@todo @nice_to_have
Scenario: Metrics for secret retention
Given I have enabled Prometheus metrics
When the cleanup job removes expired secrets
Then I should see "jwt_secrets_expired_total" metric increment
And I should see "jwt_secrets_active_count" metric decrease
And I should see "jwt_secret_retention_duration_seconds" histogram update
@todo @nice_to_have
Scenario: Log masking for security
Given I add a new JWT secret "super-secret-key-123456"
When the cleanup job runs
Then the logs should show masked secret "supe****123456"
And not expose the full secret in logs
@todo
Scenario: Cleanup with high volume of secrets
Given I have 1000 JWT secrets
And 300 of them are expired
When the cleanup job runs
Then it should complete within 100 milliseconds
And remove all 300 expired secrets
And not impact server performance
@todo
Scenario: Disabled cleanup via configuration
Given I set cleanup interval to 8760 hours
When I add expired JWT secrets
Then they should not be automatically removed
And manual cleanup should still be possible
@todo
Scenario: Retention period calculation edge cases
Given the JWT TTL is 1 hour
And the retention factor is 1.0
When I add a new JWT secret
Then the retention period should be 1 hour
And the secret should expire after 1 hour
@todo
Scenario: Secret validation with retention policy
Given I try to add an invalid JWT secret
When the secret is less than 16 characters
Then I should receive validation error
And the error should mention "must be at least 16 characters"
@todo
Scenario: Cleanup job error handling
Given the cleanup job encounters an error
When it tries to remove a secret
Then it should log the error
And continue with remaining secrets
And not crash the cleanup process
@todo
Scenario: Configuration reload without restart
Given the server is running with default retention settings
When I update the retention factor via configuration
Then the new settings should take effect immediately
And existing secrets should be reevaluated
And cleanup should use new retention periods
@todo @nice_to_have
Scenario: Audit trail for secret operations
Given I enable audit logging
When I add a new JWT secret
Then I should see audit log entry with event type "secret_added"
And when the secret is removed by cleanup
Then I should see audit log entry with event type "secret_removed"
@todo
Scenario: Retention policy with token refresh
Given a user "refreshuser" exists with password "testpass123"
And I authenticate and receive token A
When I refresh my token during retention period
Then I should receive new token B
And token A should still be valid until retention expires
And both tokens should work concurrently
@todo
Scenario: Emergency secret rotation
Given a security incident requires immediate rotation
When I rotate to a new primary secret
Then old tokens should be invalidated immediately
And new tokens should use the emergency secret
And cleanup should remove compromised secrets
@todo @nice_to_have
Scenario: Monitoring and alerting
Given I have monitoring configured
When the cleanup job fails repeatedly
Then I should receive alert notification
And the alert should include error details
And suggest remediation steps
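
The retention scenarios above reduce to `retention = min(TTL × factor, max_retention)`. A minimal sketch of that calculation, with hypothetical names rather than the project's actual implementation:
```go
package jwt

import "time"

// retentionPeriod computes how long a superseded secret stays valid:
// the JWT TTL scaled by the retention factor, capped at the configured
// maximum. TTL=2h, factor=3.0 -> 6h; TTL=72h, factor=3.0, cap=72h -> 72h.
func retentionPeriod(ttl time.Duration, factor float64, max time.Duration) time.Duration {
    retention := time.Duration(float64(ttl) * factor)
    if retention > max {
        return max
    }
    return retention
}
```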


@@ -0,0 +1,54 @@
# features/jwt_secret_rotation.feature
Feature: JWT Secret Rotation
As a system administrator
I want to rotate JWT secrets without disrupting users
So that we can maintain security while ensuring continuous service
Scenario: Authentication with multiple valid JWT secrets
Given the server is running with multiple JWT secrets
And a user "multiuser" exists with password "testpass123"
When I authenticate with username "multiuser" and password "testpass123"
Then the authentication should be successful
And I should receive a valid JWT token signed with the primary secret
Scenario: Token validation with multiple valid secrets
Given the server is running with multiple JWT secrets
And a user "tokenuser" exists with password "testpass123"
When I authenticate with username "tokenuser" and password "testpass123"
Then the authentication should be successful
And I should receive a valid JWT token
When I validate a JWT token signed with the secondary secret
Then the token should be valid
And it should contain the correct user ID
Scenario: Secret rotation - adding new secret while keeping old one valid
Given the server is running with primary JWT secret
And a user "rotateuser" exists with password "testpass123"
When I authenticate with username "rotateuser" and password "testpass123"
Then the authentication should be successful
And I should receive a valid JWT token signed with the primary secret
When I add a new secondary JWT secret to the server
And I authenticate with username "rotateuser" and password "testpass123" again
Then the authentication should be successful
And I should receive a valid JWT token signed with the new secondary secret
When I validate the old JWT token signed with primary secret
Then the token should still be valid
Scenario: Token rejection after secret expiration
Given the server is running with primary and expired secondary JWT secrets
When I use a JWT token signed with the expired secondary secret for authentication
Then the authentication should fail
And the response should contain error "invalid_token"
Scenario: Graceful secret rotation with user continuity
Given the server is running with primary JWT secret
And a user "gracefuluser" exists with password "testpass123"
When I authenticate with username "gracefuluser" and password "testpass123"
Then the authentication should be successful
And I should receive a valid JWT token signed with the primary secret
When I add a new secondary JWT secret and rotate to it
And I use the old JWT token signed with primary secret
Then the token should still be valid during retention period
When I authenticate with username "gracefuluser" and password "testpass123" after rotation
Then the authentication should be successful
And I should receive a valid JWT token signed with the new secondary secret
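
The rotation scenarios hinge on validating a token against the primary secret plus any secondaries still inside their retention window. A minimal sketch using `github.com/golang-jwt/jwt/v5` (the function name and secret-store shape are assumptions, not the service's actual implementation):
```go
package jwt

import (
    "errors"

    jwtlib "github.com/golang-jwt/jwt/v5"
)

// validateWithRotation tries the primary secret first, then every retained
// secondary, so tokens issued before a rotation keep working until their
// signing secret drops out of the retention window.
func validateWithRotation(tokenString string, primary []byte, retained [][]byte) (*jwtlib.Token, error) {
    secrets := append([][]byte{primary}, retained...)
    var lastErr error
    for _, secret := range secrets {
        key := secret
        tok, err := jwtlib.Parse(tokenString, func(t *jwtlib.Token) (interface{}, error) {
            if _, ok := t.Method.(*jwtlib.SigningMethodHMAC); !ok {
                return nil, errors.New("unexpected signing method") // HMAC-only in this sketch
            }
            return key, nil
        })
        if err == nil && tok.Valid {
            return tok, nil
        }
        lastErr = err
    }
    return nil, lastErr // surfaced to the client as the "invalid_token" error above
}
```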

features/jwt/jwt_test.go (new file)

@@ -0,0 +1,16 @@
package jwt

import (
    "testing"

    "dance-lessons-coach/pkg/bdd/testsetup"
)

func TestJWTBDD(t *testing.T) {
    config := testsetup.NewFeatureConfig("jwt", "pretty", false)
    suite := testsetup.CreateTestSuite(t, config, "dance-lessons-coach BDD Tests - JWT Feature")
    if suite.Run() != 0 {
        t.Fatal("non-zero status returned, failed to run jwt BDD tests")
    }
}