dance-lessons-coach/adr/0008-bdd-testing.md
Gabriel Radureau a24b4fdb3b 📝 docs(adr): homogenize 23 ADRs + rewrite README (Tâche 7 migration) (#18)
## Summary

Homogenize all 23 ADRs to a single canonical header format, and rewrite `adr/README.md` to match the actual state of the corpus.

This is **Tâche 7** of the ARCODANGE Phase 1 migration (Claude Code → Mistral Vibe). Independent from PR #17 (Tâche 6 — restructure AGENTS.md) — both can merge in any order. No code changes; only documentation.

## Changes

### 1. Homogenize 21 ADR headers (commit `db09d0a`)

The audit (Tâche 6 Phase A, Mistral intent-router agent, 2026-05-02) identified **3 inconsistent header formats**:

- **F1** — list bullets (`* Status:` / `* Date:` / `* Deciders:`): 11 ADRs (0001-0008, 0011, 0014, 0023)
- **F2** — bold fields (`**Status:**` / `**Date:**` / `**Authors:**`): 9 ADRs (0009, 0010, 0012, 0013, 0015, 0016, 0017, 0018, 0019)
- **F3** — dedicated section (`## Status\n**Value** `): 5 ADRs (0020, 0021, 0022, 0024, 0025)

On top of that, mixed metadata field names (Authors / Deciders / Decision Date / Implementation Date / Implementation Status / Last Updated) and decorative emojis on status values made the corpus hard to scan or template against.

**Canonical format adopted** (see `adr/README.md` for the full template):

```markdown
# NN. Title

**Status:** <Proposed | Accepted | Implemented | Partially Implemented | Approved | Rejected | Deferred | Deprecated | Superseded by ADR-NNNN>
**Date:** YYYY-MM-DD
**Authors:** Name(s)

[optional **Field:** ... lines]

## Context...
```

**Transformations applied** (via the `/tmp/homogenize-adrs.py` script; 23 files scanned, 21 modified — 0010 and 0012 were already conformant):

- F1 list bullets → bold fields (see the before/after example below)
- F2 cleanup: `**Deciders:**` → `**Authors:**`, strip status emojis
- F3 sections: `## Status\n**Value** ` → `**Status:** Value` (single line)
- Strip decorative emojis from `**Status:**` and `**Implementation Status:**`
- Convert `* Last Updated:` / `* Implementation Status:` / `* Decision Drivers:` / `* Decision Date:` to bold fields
- Date typo fix: `2024-04-XX` → `2026-04-XX` for ADRs 0018 and 0019 (off by two years in the originals)
- Normalize multiple blank lines after the header (max 1)
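
For illustration, here is a hypothetical F1 header before and after the transformation (the field values below are made up, not taken from any actual ADR):

```markdown
<!-- Before (F1) -->
* Status: ✅ Accepted
* Date: 2026-01-15
* Deciders: Gabriel Radureau

<!-- After (canonical) -->
**Status:** Accepted
**Date:** 2026-01-15
**Authors:** Gabriel Radureau
```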

**ADR body content is preserved unchanged.** Only the headers were transformed.

### 2. Rewrite `adr/README.md` (commit `d64ab02`)

The previous README had multiple inconsistencies:

- The index table listed wrong titles for ADRs 0010-0021; it read like an aspirational forecast that never matched reality (e.g. "0011 = Trunk-Based Development", while the real 0011 is absent and Trunk-Based Development is actually 0017)
- It listed entries for ADRs 0011 (validation library) and 0014 (gRPC), but **these files do not exist** in the repo
- 0024 (BDD Test Organization) was missing from the detail list
- Template still showed the obsolete F1 format (`* Status:`)
- Decorative emojis on every status entry

The rewrite:

- Index table **regenerated from actual file contents** (title from H1, status from `**Status:**` line) — emoji-free, accurate
- Notes that 0011 / 0014 are not currently in use (reserved)
- Updated template block matches the canonical format
- Status Legend extended with `Approved`, `Partially Implemented`, `Deferred`
- Added note that 0026 is the next free number for new ADRs

## Test plan

- [x] All 23 ADRs follow `**Status:**` / `**Date:**` / `**Authors:**` (verified via grep)
- [x] No more occurrences of `* Status:` (F1) or `## Status` (F3) in any ADR header
- [x] No more emojis on `**Status:**` lines
- [x] `adr/README.md` index links resolve to existing files (no more 0011 / 0014 dead links)
- [x] Pre-commit hooks pass (`go mod tidy`, `go fmt`, `swag fmt`)

## Migration context

Part of Phase 1 of the ARCODANGE migration from Claude Code to Mistral Vibe. Tâche 7 of the curriculum.

Independent from PR #17 (which restructures `AGENTS.md`). The two PRs touch disjoint files — no merge conflict expected when both are merged.

🤖 Generated with [Claude Code](https://claude.com/claude-code) (Opus 4.7, 1M context). Mistral Vibe (intent-router agent / mistral-medium-3.5) did the original audit identifying the 3 formats during Tâche 6 Phase A.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Co-Authored-By: Mistral Vibe (devstral-2 / mistral-medium-3.5)
Reviewed-on: #18
Co-authored-by: Gabriel Radureau <arcodange@gmail.com>
Co-committed-by: Gabriel Radureau <arcodange@gmail.com>


# Adopt BDD with Godog for behavioral testing

**Status:** Accepted
**Date:** 2026-04-05
**Authors:** Gabriel Radureau, AI Agent

## Context and Problem Statement

We needed to add behavioral testing to dance-lessons-coach that provides:

- User-centric test scenarios
- Living documentation
- Integration testing capabilities
- Clear communication between technical and non-technical stakeholders
- Complementary testing to unit tests

## Decision Drivers

- Need for higher-level testing than unit tests
- Desire for living documentation that's always up-to-date
- Requirement for testing through public interfaces
- Need for clear behavioral specifications
- Desire for good test organization and readability

## Considered Options

- **Godog (Cucumber for Go)** - BDD framework for Go
- **Ginkgo** - BDD-style testing framework
- **Standard Go testing** - Extended for integration tests
- **Custom BDD framework** - Build our own

## Decision Outcome

Chosen option: "Godog" because it provides proper BDD support with Gherkin syntax, good Go integration, living documentation capabilities, and follows standard Cucumber patterns.

## Pros and Cons of the Options

### Godog

- Good, because proper BDD with Gherkin syntax
- Good, because living documentation
- Good, because good Go integration
- Good, because follows Cucumber standards
- Good, because clear separation of concerns
- Bad, because slightly more complex setup
- Bad, because slower execution than unit tests

### Ginkgo

- Good, because good BDD-style testing
- Good, because fast execution
- Good, because good Go integration
- Bad, because not proper Gherkin/BDD
- Bad, because less clear for non-technical stakeholders

### Standard Go testing

- Good, because no external dependencies
- Good, because familiar to Go developers
- Bad, because no BDD capabilities
- Bad, because no living documentation
- Bad, because less organized for behavioral tests

### Custom BDD framework

- Good, because tailored to our needs
- Good, because no external dependencies
- Bad, because time-consuming to develop
- Bad, because need to maintain ourselves
- Bad, because likely less feature-rich

## Implementation Structure

```text
features/
├── greet.feature          # Gherkin feature files
├── health.feature
└── readiness.feature

pkg/bdd/
├── steps/                 # Step definitions
│   ├── greet_steps.go     # Implementation of steps
│   ├── health_steps.go
│   └── readiness_steps.go
│
├── testserver/            # Test infrastructure
│   ├── server.go          # In-process test server harness
│   └── client.go          # HTTP client for testing
│
└── suite.go               # Test suite initialization
```
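
The actual `suite.go` is not reproduced in this ADR; the sketch below shows one plausible way it could wire the in-process test server and the step definitions (shown further down) into a `godog.TestSuite`. The `testserver.New`, `NewClient`, and `Stop` helpers, the module import path, and the port value are assumptions for illustration only.

```go
// pkg/bdd/suite.go (illustrative sketch, not the real file)
package bdd

import (
	"testing"

	"github.com/cucumber/godog"

	"dance-lessons-coach/pkg/bdd/steps"      // assumed module path
	"dance-lessons-coach/pkg/bdd/testserver" // assumed module path
)

// TestFeatures runs every feature file against an in-process test server.
func TestFeatures(t *testing.T) {
	srv := testserver.New(8081) // hypothetical constructor and port
	if err := srv.Start(); err != nil {
		t.Fatalf("starting test server: %v", err)
	}
	defer srv.Stop() // hypothetical teardown

	client := testserver.NewClient("http://localhost:8081") // hypothetical constructor

	suite := godog.TestSuite{
		Name: "dance-lessons-coach",
		ScenarioInitializer: func(ctx *godog.ScenarioContext) {
			steps.InitializeAllSteps(ctx, client)
		},
		Options: &godog.Options{
			Format:   "pretty",
			Paths:    []string{"../../features"},
			TestingT: t, // report scenarios as subtests of go test
		},
	}
	if suite.Run() != 0 {
		t.Fatal("BDD suite reported failures")
	}
}
```

With `TestingT` set, the same scenarios can also be driven by a plain `go test ./pkg/bdd/...`, in addition to the `godog` CLI shown later.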

## Testing Approach Evolution

### Initial Approach (Process-based)

We initially planned to test against an external server process started with `go run`, but this proved unreliable for automated testing due to:

- Process management complexity
- Port conflicts in parallel execution
- CI/CD environment challenges
- Process cleanup issues

### Current Approach (Hybrid In-Process)

Adopted a hybrid approach that maintains black box testing principles while improving reliability:

```go
// pkg/bdd/testserver/server.go
func (s *Server) Start() error {
    // Create real server instance from pkg/server
    cfg := createTestConfig(s.port)
    realServer := server.NewServer(cfg, context.Background())

    // Start HTTP server in same process
    s.httpServer = &http.Server{
        Addr:    fmt.Sprintf(":%d", s.port),
        Handler: realServer.Router(),
    }

    go func() {
        if err := s.httpServer.ListenAndServe(); err != nil && err != http.ErrServerClosed {
            log.Error().Err(err).Msg("Test server failed")
        }
    }()

    return s.waitForServerReady()
}
```
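
`waitForServerReady` is referenced above but not shown. A plausible sketch, assuming it polls the readiness endpoint used elsewhere in this ADR (`/api/ready`) until the server answers or a timeout expires; the timeout and polling interval are illustrative values:

```go
// Illustrative sketch of waitForServerReady; the real implementation may differ.
func (s *Server) waitForServerReady() error {
    url := fmt.Sprintf("http://localhost:%d/api/ready", s.port)
    deadline := time.Now().Add(5 * time.Second)

    for time.Now().Before(deadline) {
        resp, err := http.Get(url)
        if err == nil {
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                return nil // server is accepting requests
            }
        }
        time.Sleep(50 * time.Millisecond)
    }
    return fmt.Errorf("test server on port %d not ready before timeout", s.port)
}
```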

### Black Box Testing Principles Maintained

Despite using an in-process server, the approach maintains core black box testing principles:

- **External Interface Testing:** all tests interact through the HTTP API only
- **No Implementation Knowledge:** tests don't access internal server components
- **Real Server Code:** uses the actual server implementation from `pkg/server`
- **Production Configuration:** tests run with a realistic server configuration
- **Isolation:** each test suite gets a fresh server instance

## What We Test vs What We Don't

### Covered by BDD Tests

- HTTP API endpoints and responses
- Request/response handling
- Business logic through public interface
- Error handling and status codes
- Readiness/liveness behavior
- JSON serialization/deserialization

### Not Covered by BDD Tests (Covered Elsewhere)

- Actual process startup/shutdown (covered by `scripts/test-server.sh`)
- Main function execution (covered by integration tests)
- External process management (covered by server control scripts)
- Operating system signals (covered by manual testing)

## Example Feature File

```gherkin
# features/greet.feature
Feature: Greet Service
  The greet service should return appropriate greetings

  Scenario: Default greeting
    Given the server is running
    When I request the default greeting
    Then the response should be "Hello world!"

  Scenario: Personalized greeting
    Given the server is running
    When I request a greeting for "John"
    Then the response should be "Hello John!"
```

## Example Step Implementation

```go
// pkg/bdd/steps/steps.go
func InitializeAllSteps(ctx *godog.ScenarioContext, client *testserver.Client) {
    sc := NewStepContext(client)

    ctx.Step(`^the server is running$`, sc.theServerIsRunning)
    ctx.Step(`^I request the default greeting$`, sc.iRequestTheDefaultGreeting)
    ctx.Step(`^I request a greeting for "([^"]*)"$`, sc.iRequestAGreetingFor)
    ctx.Step(`^I request the health endpoint$`, sc.iRequestTheHealthEndpoint)
    ctx.Step(`^the response should be "{\"([^"]*)\":\"([^"]*)\"}"$`, sc.theResponseShouldBe)
}

// StepContext struct holds the test client
type StepContext struct {
    client *testserver.Client
}

func (sc *StepContext) theServerIsRunning() error {
    // Actually verify the server is running by checking the readiness endpoint
    return sc.client.Request("GET", "/api/ready", nil)
}

func (sc *StepContext) iRequestTheDefaultGreeting() error {
    return sc.client.Request("GET", "/api/v1/greet/", nil)
}

func (sc *StepContext) theResponseShouldBe(arg1, arg2 string) error {
    // Handle JSON escaping from feature files
    cleanArg1 := strings.Trim(arg1, `"\`)
    cleanArg2 := strings.Trim(arg2, `"\`)
    expected := fmt.Sprintf(`{"%s":"%s"}`, cleanArg1, cleanArg2)
    return sc.client.ExpectResponseBody(expected)
}
```
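
The `testserver.Client` helpers used above (`Request`, `ExpectResponseBody`) are not reproduced in this ADR. Below is a minimal sketch of what such a client could look like; the struct fields and the `NewClient` constructor are assumptions, only the two method names and signatures are taken from the step code above.

```go
// pkg/bdd/testserver/client.go (illustrative sketch, not the real file)
package testserver

import (
    "fmt"
    "io"
    "net/http"
    "strings"
)

// Client wraps an HTTP client and remembers the last response so that
// "Then" steps can assert on it.
type Client struct {
    baseURL  string
    http     *http.Client
    lastBody string
    lastCode int
}

// NewClient returns a client pointed at the in-process test server.
func NewClient(baseURL string) *Client {
    return &Client{baseURL: baseURL, http: &http.Client{}}
}

// Request performs an HTTP call against the test server and stores the result.
func (c *Client) Request(method, path string, body io.Reader) error {
    req, err := http.NewRequest(method, c.baseURL+path, body)
    if err != nil {
        return err
    }
    resp, err := c.http.Do(req)
    if err != nil {
        return err
    }
    defer resp.Body.Close()

    data, err := io.ReadAll(resp.Body)
    if err != nil {
        return err
    }
    c.lastBody = strings.TrimSpace(string(data))
    c.lastCode = resp.StatusCode
    return nil
}

// ExpectResponseBody compares the last response body to the expected string.
func (c *Client) ExpectResponseBody(expected string) error {
    if c.lastBody != expected {
        return fmt.Errorf("expected body %q, got %q", expected, c.lastBody)
    }
    return nil
}
```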

## Black Box Testing Approach

The BDD implementation follows black box testing principles:

- **External perspective**: Tests interact only through public HTTP API
- **No implementation knowledge**: Tests don't know about internal components
- **Behavior focus**: Tests verify what the system does, not how it does it
- **Interface testing**: Tests verify the contract between system and users

## Testing Strategy

### Test Types

1. **Direct HTTP tests**: Test raw API behavior
2. **SDK client tests**: Test generated client integration (future)

### Test Execution

```bash
# Run BDD tests
cd features
godog

# Run with specific format
godog -f progress

# Run specific feature
godog features/greet.feature
```

### Integration with CI/CD

```yaml
# Example GitHub Actions step
- name: Run BDD tests
  run: |
    cd features
    godog -f progress
```

## Performance Considerations

- BDD tests are slower than unit tests (expected)
- Each scenario runs with a fresh server instance for isolation (see the hook sketch below)
- Tests can be run in parallel where appropriate
- Focus on critical paths rather than exhaustive testing
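
The fresh-server-per-scenario point above could be wired with godog's scenario hooks. The sketch below is hypothetical: the `testserver.New` constructor and `Stop` method follow the same assumptions as the earlier sketches, and the real project may instead reuse one server per suite.

```go
// Illustrative sketch: fresh in-process server per scenario via godog hooks.
func InitializeScenario(ctx *godog.ScenarioContext) {
    var srv *testserver.Server

    ctx.Before(func(c context.Context, sc *godog.Scenario) (context.Context, error) {
        srv = testserver.New(0) // hypothetical constructor; 0 could mean "pick a free port"
        return c, srv.Start()
    })

    ctx.After(func(c context.Context, sc *godog.Scenario, err error) (context.Context, error) {
        return c, srv.Stop() // hypothetical teardown
    })

    // Step registrations (InitializeAllSteps) would follow here.
}
```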