dance-lessons-coach/adr/0008-bdd-testing.md
Gabriel Radureau 73a3af1552 📝 docs: audit and correct all ADR statuses and content
Full pass over all 25 ADRs to align documentation with actual
implementation state. Changes by ADR:

README index: completely rewritten — previous table mapped numbers to
wrong titles from 0010 onward.

0008 (BDD Testing): added note that flat features/ structure and godog
CLI invocation are superseded by ADR-0024; framework decision stands.

0009 (Hybrid Testing): renamed from "Combine BDD and Swagger-based
testing" to "BDD Testing with OpenAPI Documentation"; clarified that
the SDK-testing layer was never built and has no open issue.

0013 (OpenAPI/Swagger): removed leftover merge conflict artifact
(=======) and duplicated 60-line block.

0015 (Cobra CLI): fixed status contradiction — body said "Implemented"
while footer said "Proposed". Now Accepted.

0018 (User Management): status Proposed → Accepted; system is fully
implemented (JWT, bcrypt, GORM repos all present).

0019 (PostgreSQL): status Proposed → Accepted (Partial); added warning
that sqlite_repository.go and gorm/driver/sqlite still present contrary
to ADR intent.

0021 (JWT Retention): fixed wrong cross-reference (previously cited
ADR-0009 "Hybrid Testing" as source of JWT multi-secret support); fixed
title number from "10" to "21"; clarified that base JWT is implemented
but the retention cleanup job is not.

0022 (Rate Limiting/Cache): added warning block linking to open Gitea
issue #13; changed all 20 falsely checked implementation checkboxes to
unchecked.

0023 (Config Hot Reloading): added note that BDD scenarios exist for
this feature but the feature itself is not yet implemented.

0024 (BDD Organization): status Proposed → Accepted; modular domain
structure is fully built.

0025 (BDD Scenario Isolation): status Proposed → Accepted (Partial);
Phase 1 done, Phase 2 blocked on ADR-0022.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-12 23:26:09 +02:00


# Adopt BDD with Godog for behavioral testing
* Status: Accepted
* Deciders: Gabriel Radureau, AI Agent
* Date: 2026-04-05
> **⚠️ Structure superseded by ADR-0024.** The framework decision (Godog, in-process test server) remains valid. However, the flat `features/` layout and single `steps.go` file described here were replaced by a modular per-domain structure. See ADR-0024 for the current organisation: `features/{auth,greet,health,jwt,config}/` with domain-specific step files and per-domain `*_test.go` runners. The `cd features && godog` execution pattern is also outdated — each domain now uses `go test`.
## Context and Problem Statement
We needed to add behavioral testing to dance-lessons-coach that provides:
- User-centric test scenarios
- Living documentation
- Integration testing capabilities
- Clear communication between technical and non-technical stakeholders
- Complementary testing to unit tests
## Decision Drivers
* Need for higher-level testing than unit tests
* Desire for living documentation that's always up-to-date
* Requirement for testing through public interfaces
* Need for clear behavioral specifications
* Desire for good test organization and readability
## Considered Options
* Godog (Cucumber for Go) - BDD framework for Go
* Ginkgo - BDD-style testing framework
* Standard Go testing - Extended for integration tests
* Custom BDD framework - Build our own
## Decision Outcome
Chosen option: "Godog" because it provides proper BDD support with Gherkin syntax, good Go integration, living documentation capabilities, and follows standard Cucumber patterns.
## Pros and Cons of the Options
### Godog
* Good, because proper BDD with Gherkin syntax
* Good, because living documentation
* Good, because good Go integration
* Good, because follows Cucumber standards
* Good, because clear separation of concerns
* Bad, because slightly more complex setup
* Bad, because slower execution than unit tests
### Ginkgo
* Good, because good BDD-style testing
* Good, because fast execution
* Good, because good Go integration
* Bad, because not proper Gherkin/BDD
* Bad, because less clear for non-technical stakeholders
### Standard Go testing
* Good, because no external dependencies
* Good, because familiar to Go developers
* Bad, because no BDD capabilities
* Bad, because no living documentation
* Bad, because less organized for behavioral tests
### Custom BDD framework
* Good, because tailored to our needs
* Good, because no external dependencies
* Bad, because time-consuming to develop
* Bad, because need to maintain ourselves
* Bad, because likely less feature-rich
## Implementation Structure
```
features/
├── greet.feature          # Gherkin feature files
├── health.feature
└── readiness.feature
pkg/bdd/
├── steps/                 # Step definitions
│   ├── greet_steps.go     # Implementation of steps
│   ├── health_steps.go
│   └── readiness_steps.go
├── testserver/            # Test infrastructure
│   ├── server.go          # In-process test server harness
│   └── client.go          # HTTP client for testing
└── suite.go               # Test suite initialization
```
## Testing Approach Evolution
### Initial Approach (Process-based)
We initially planned to test against an external server process started with `go run`, but this proved unreliable for automated testing due to:
- Process management complexity
- Port conflicts in parallel execution
- CI/CD environment challenges
- Process cleanup issues
### Current Approach (Hybrid In-Process)
Adopted a hybrid approach that maintains black box testing principles while improving reliability:
```go
// pkg/bdd/testserver/server.go
func (s *Server) Start() error {
	// Create real server instance from pkg/server
	cfg := createTestConfig(s.port)
	realServer := server.NewServer(cfg, context.Background())

	// Start HTTP server in same process
	s.httpServer = &http.Server{
		Addr:    fmt.Sprintf(":%d", s.port),
		Handler: realServer.Router(),
	}
	go func() {
		if err := s.httpServer.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Error().Err(err).Msg("Test server failed")
		}
	}()
	return s.waitForServerReady()
}
```
## Black Box Testing Principles Maintained
Despite using an in-process server, the approach maintains core black box testing principles:
* **External Interface Testing**: all tests interact through the HTTP API only
* **No Implementation Knowledge**: tests don't access internal server components
* **Real Server Code**: uses the actual server implementation from `pkg/server`
* **Production Configuration**: tests run with realistic server configuration
* **Isolation**: each test suite gets a fresh server instance
## What We Test vs What We Don't
### ✅ Covered by BDD Tests
- HTTP API endpoints and responses
- Request/response handling
- Business logic through public interface
- Error handling and status codes
- Readiness/liveness behavior
- JSON serialization/deserialization
### 🚫 Not Covered by BDD Tests (Covered Elsewhere)
- Actual process startup/shutdown (covered by `scripts/test-server.sh`)
- Main function execution (covered by integration tests)
- External process management (covered by server control scripts)
- Operating system signals (covered by manual testing)
## Example Feature File
```gherkin
# features/greet.feature
Feature: Greet Service
  The greet service should return appropriate greetings

  Scenario: Default greeting
    Given the server is running
    When I request the default greeting
    Then the response should be "Hello world!"

  Scenario: Personalized greeting
    Given the server is running
    When I request a greeting for "John"
    Then the response should be "Hello John!"
```
## Example Step Implementation
```go
// pkg/bdd/steps/steps.go
func InitializeAllSteps(ctx *godog.ScenarioContext, client *testserver.Client) {
	sc := NewStepContext(client)
	ctx.Step(`^the server is running$`, sc.theServerIsRunning)
	ctx.Step(`^I request the default greeting$`, sc.iRequestTheDefaultGreeting)
	ctx.Step(`^I request a greeting for "([^"]*)"$`, sc.iRequestAGreetingFor)
	ctx.Step(`^I request the health endpoint$`, sc.iRequestTheHealthEndpoint)
	ctx.Step(`^the response should be "{\"([^"]*)\":\"([^"]*)\"}"$`, sc.theResponseShouldBe)
}

// StepContext holds the test client
type StepContext struct {
	client *testserver.Client
}

func (sc *StepContext) theServerIsRunning() error {
	// Actually verify the server is running by checking the readiness endpoint
	return sc.client.Request("GET", "/api/ready", nil)
}

func (sc *StepContext) iRequestTheDefaultGreeting() error {
	return sc.client.Request("GET", "/api/v1/greet/", nil)
}

func (sc *StepContext) theResponseShouldBe(arg1, arg2 string) error {
	// Handle JSON escaping from feature files
	cleanArg1 := strings.Trim(arg1, `"\`)
	cleanArg2 := strings.Trim(arg2, `"\`)
	expected := fmt.Sprintf(`{"%s":"%s"}`, cleanArg1, cleanArg2)
	return sc.client.ExpectResponseBody(expected)
}
```
## Black Box Testing Approach
The BDD implementation follows black box testing principles:
* **External perspective**: Tests interact only through public HTTP API
* **No implementation knowledge**: Tests don't know about internal components
* **Behavior focus**: Tests verify what the system does, not how it does it
* **Interface testing**: Tests verify the contract between system and users
## Testing Strategy
### Test Types
1. **Direct HTTP tests**: Test raw API behavior
2. **SDK client tests**: Test generated client integration (future)
### Test Execution
```bash
# Run BDD tests
cd features
godog
# Run with specific format
godog -f progress
# Run specific feature
godog features/greet.feature
```
## Links
* [Godog GitHub](https://github.com/cucumber/godog)
* [Godog Documentation](https://github.com/cucumber/godog#readme)
* [Cucumber Documentation](https://cucumber.io/docs/gherkin/)
* [BDD Introduction](https://dannorth.net/introducing-bdd/)
## Integration with CI/CD
```yaml
# Example GitHub Actions step
- name: Run BDD tests
  run: |
    cd features
    godog -f progress
```
## Performance Considerations
* BDD tests are slower than unit tests (expected)
* Each scenario runs with fresh server instance for isolation
* Tests can be run in parallel where appropriate
* Focus on critical paths rather than exhaustive testing