dance-lessons-coach/adr/0009-hybrid-testing-approach.md
Gabriel Radureau db09d0ace1 📝 docs(adr): homogenize all 23 ADR headers to canonical format
The 2026-05-02 audit (Task 6, Phase A) identified 3 inconsistent
formats across the ADR corpus:
- F1 list bullets: `* Status:` / `* Date:` / `* Deciders:` (11 ADRs)
- F2 bold fields: `**Status:**` / `**Date:**` / `**Authors:**` (9 ADRs)
- F3 dedicated section: `## Status\n**Value**` (5 ADRs)

Mixed metadata names (Authors / Deciders / Decision Date / Implementation
Date / Implementation Status / Last Updated) and decorative emojis on
status values made the corpus hard to scan or template against.

Canonical format adopted (see adr/README.md for the full template):
    # NN. Title

    **Status:** <Proposed|Accepted|Implemented|Partially Implemented|
                  Approved|Rejected|Deferred|Deprecated|Superseded by ADR-NNNN>
    **Date:** YYYY-MM-DD
    **Authors:** Name(s)
    [optional **Field:** ... lines]

    ## Context...

Transformations applied (via /tmp/homogenize-adrs.py):
- F1 list bullets → bold fields
- F2 cleanup: `**Deciders:**` → `**Authors:**`, strip status emojis
- F3 sections: `## Status\n**Value**` → `**Status:** Value`
- Strip decorative emojis from `**Status:**` and `**Implementation Status:**`
- Convert any `* Implementation Status:` / `* Last Updated:` /
  `* Decision Drivers:` / `* Decision Date:` to bold equivalents
- Date typo fix: `2024-04-XX` → `2026-04-XX` for ADRs 0018, 0019
  (already noted in PR #17, but re-applied here since this branch
  starts from origin/main, pre-PR #17)
- Normalize multiple blank lines after header (max 1)

21 of 23 ADRs modified; 0010 and 0012 were already compliant.
0011 and 0014 do not exist in the repo (see the README index update).

Body content of each ADR is preserved unchanged.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-03 00:27:42 +02:00


# 9. Combine BDD and Swagger-based testing

**Status:** Partially Implemented (BDD + Documentation only)
**Date:** 2026-04-05
**Authors:** Gabriel Radureau, AI Agent
**Last Updated:** 2026-04-05
**Implementation Status:** BDD testing and OpenAPI documentation completed, SDK generation deferred

## Context and Problem Statement
We need to establish a comprehensive testing strategy for dance-lessons-coach that provides:
- Behavioral verification through BDD
- API documentation through Swagger/OpenAPI
- Client SDK validation
- Clear separation of concerns
- Maintainable test suite
## Decision Drivers
* Need for comprehensive API testing
* Desire for living documentation
* Requirement for client SDK validation
* Need for clear test organization
* Desire for maintainable test suite
## Considered Options
* BDD only - Use Godog for all testing
* Swagger only - Use OpenAPI for testing
* Hybrid approach - Combine BDD and Swagger testing
* Custom solution - Build our own testing framework
## Decision Outcome
Chosen option: "Hybrid approach" because it provides the best combination of behavioral verification, API documentation, client validation, and maintainable test organization.
## Implementation Status
**Status:** Partially Implemented (BDD + Documentation only)
### What We Actually Have
1. **BDD Testing with Direct HTTP Client** (see the step-definition sketch after this list)
- Godog framework integration
- Direct HTTP testing of all endpoints
- Comprehensive feature coverage
- Clear, readable scenarios
- 7 scenarios, 21 steps, 100% passing
2. **OpenAPI/Swagger Documentation**
- swaggo/swag integration
- Interactive Swagger UI at `/swagger/`
- OpenAPI 2.0 specification
- Hierarchical tagging system
- Embedded documentation for single-binary deployment
3. **Swagger-based Testing** (Not implemented)
- No SDK generation from OpenAPI spec
- No SDK-based BDD tests
- No client validation through generated SDKs
- No `api/gen/` directory with generated clients
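To ground item 1, here is a minimal sketch of a direct-HTTP step definition of the kind `pkg/bdd/steps/steps.go` would contain. The state struct, step phrasings, and base URL are illustrative assumptions rather than the repo's actual code; only godog's `ScenarioContext.Step` registration API is taken as given:
```go
package steps

import (
	"fmt"
	"io"
	"net/http"

	"github.com/cucumber/godog"
)

// apiFeature carries per-scenario state between steps.
// Field names and the base URL are assumptions, not the repo's actual code.
type apiFeature struct {
	baseURL string
	resp    *http.Response
	body    string
}

// iSendAGETRequestTo performs a raw HTTP GET against the running server.
func (a *apiFeature) iSendAGETRequestTo(path string) error {
	resp, err := http.Get(a.baseURL + path)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	b, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	a.resp, a.body = resp, string(b)
	return nil
}

// theResponseCodeShouldBe asserts on the last response's status code.
func (a *apiFeature) theResponseCodeShouldBe(code int) error {
	if a.resp.StatusCode != code {
		return fmt.Errorf("expected status %d, got %d", code, a.resp.StatusCode)
	}
	return nil
}

// InitializeScenario registers the steps with godog.
func InitializeScenario(ctx *godog.ScenarioContext) {
	api := &apiFeature{baseURL: "http://localhost:8080"}
	ctx.Step(`^I send a GET request to "([^"]*)"$`, api.iSendAGETRequestTo)
	ctx.Step(`^the response code should be (\d+)$`, api.theResponseCodeShouldBe)
}
```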
### Why We Don't Need Full Hybrid Testing (Yet)
1. **Current Scale**: Small API with limited endpoints (health, ready, version, greet)
2. **Team Size**: Small team can effectively maintain direct HTTP tests
3. **Complexity**: SDK generation adds unnecessary infrastructure complexity
4. **Maintenance**: Direct HTTP tests are simpler to write and maintain
5. **Coverage**: Current BDD tests provide comprehensive coverage of all functionality
6. **No External Consumers**: No current need for official SDKs or client libraries
7. **Manual Testing Sufficient**: Team can manually test client integration patterns
### Current Testing Architecture
```
features/
├── greet.feature        # Direct HTTP testing ✅
├── health.feature       # Direct HTTP testing ✅
└── readiness.feature    # Direct HTTP testing ✅

pkg/bdd/
├── steps/               # Step definitions ✅
│   └── steps.go         # Direct HTTP client steps ✅
└── testserver/          # Test infrastructure ✅
    ├── client.go        # HTTP client ✅
    └── server.go        # Test server ✅

pkg/server/docs/         # OpenAPI documentation ✅
├── swagger.json         # Generated spec ✅
├── swagger.yaml         # Generated spec ✅
└── docs.go              # Embedded docs ✅
```
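The `testserver` package above most plausibly wraps Go's `net/http/httptest`. The sketch below shows one possible shape; every name beyond the standard library is an assumption, and the real `client.go`/`server.go` may differ:
```go
// Sketch of pkg/bdd/testserver; the actual implementation may differ.
package testserver

import (
	"net/http"
	"net/http/httptest"
)

// Server wraps httptest.Server so each scenario can get a fresh instance.
type Server struct {
	*httptest.Server
}

// New starts an in-process test server around the application's handler.
func New(handler http.Handler) *Server {
	return &Server{httptest.NewServer(handler)}
}

// Get issues a GET against the test server, resolving relative paths.
func (s *Server) Get(path string) (*http.Response, error) {
	return s.Server.Client().Get(s.URL + path)
}
```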
### Missing Components for Full Hybrid Approach
```
api/                     # Not implemented ❌
├── openapi.yaml         # Manual spec (not generated) ❌
└── gen/                 # Generated code ❌
    └── go/              # Go SDK client ❌

features/
└── greet_sdk.feature    # SDK-based testing ❌

pkg/bdd/
├── steps/
│   └── sdk_steps.go     # SDK client steps ❌
└── testserver/
    └── sdk_client.go    # SDK client wrapper ❌
```
## Pros and Cons of the Options
### Hybrid approach
* Good, because it combines the strengths of both approaches
* Good, because BDD provides behavioral verification
* Good, because Swagger provides API documentation
* Good, because SDK testing provides client validation
* Good, because it gives a clear separation of concerns
* Bad, because the setup is more complex
* Bad, because it requires maintaining two test suites
### BDD only
* Good, because it keeps the testing approach consistent
* Good, because it is well suited to behavioral verification
* Bad, because it produces no API documentation
* Bad, because it provides no SDK validation
### Swagger only
* Good, because it yields good API documentation
* Good, because it enables SDK validation
* Bad, because it is poor for behavioral testing
* Bad, because it is less readable for non-technical stakeholders
### Custom solution
* Good, because it would be tailored to our needs
* Good, because it would have no external dependencies
* Bad, because it would be time-consuming to develop
* Bad, because we would need to maintain it ourselves
## Implementation Strategy
### Phase 1: BDD Implementation (Current) ✅ COMPLETED
```
features/
├── greet.feature        # Direct HTTP testing ✅
├── health.feature       # Direct HTTP testing ✅
└── readiness.feature    # Direct HTTP testing ✅

pkg/bdd/
├── steps/               # Step definitions ✅
│   └── steps.go         # Direct HTTP client steps ✅
└── testserver/          # Test infrastructure ✅
    ├── client.go        # HTTP client ✅
    └── server.go        # Test server ✅
```
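Phase 1's scenarios are normally wired into `go test` through godog's `TestSuite`, so the BDD suite runs alongside unit tests. A minimal runner sketch follows; the module import path and the relative `features` path are assumptions about this repo's layout:
```go
package bdd_test

import (
	"testing"

	"github.com/cucumber/godog"

	// Module path is an assumption about this repo's go.mod.
	"dance-lessons-coach/pkg/bdd/steps"
)

// TestFeatures runs every .feature file through godog and reports via the
// standard *testing.T, so `go test ./...` also exercises the BDD suite.
func TestFeatures(t *testing.T) {
	suite := godog.TestSuite{
		ScenarioInitializer: steps.InitializeScenario,
		Options: &godog.Options{
			Format:   "pretty",
			Paths:    []string{"../../features"},
			TestingT: t,
		},
	}
	if suite.Run() != 0 {
		t.Fatal("non-zero status: feature tests failed")
	}
}
```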
### Phase 2: Swagger Integration (Current) ✅ COMPLETED
```
pkg/server/docs/         # OpenAPI documentation ✅
├── swagger.json         # Generated spec ✅
├── swagger.yaml         # Generated spec ✅
└── docs.go              # Embedded docs ✅

pkg/server/              # Server integration ✅
├── server.go            # Swagger UI routes ✅
└── main.go              # Swagger annotations ✅
```
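Phase 2's spec is derived by swaggo/swag from comment annotations on the handlers. The annotation keywords below are swag's own; the handler name, route, and response type are assumptions used only for illustration:
```go
package server

import (
	"encoding/json"
	"net/http"
)

// GreetResponse is the JSON body referenced by the @Success annotation.
type GreetResponse struct {
	Message string `json:"message" example:"Hello world!"`
}

// Greet godoc
// @Summary      Greet a user
// @Description  Returns "Hello world!" or a personalized greeting.
// @Tags         greet
// @Produce      json
// @Param        name  query     string  false  "Name to greet"
// @Success      200   {object}  GreetResponse
// @Router       /api/greet [get]
func Greet(w http.ResponseWriter, r *http.Request) {
	name := r.URL.Query().Get("name")
	if name == "" {
		name = "world"
	}
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(GreetResponse{Message: "Hello " + name + "!"})
}
```
The interactive UI at `/swagger/` is then typically mounted in server.go with `github.com/swaggo/http-swagger`'s `WrapHandler`.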
### Phase 3: SDK Generation (Future - Not Currently Needed) ❌ DEFERRED
```
api/                     # Future consideration ❌
├── openapi.yaml         # Manual spec (if needed) ❌
└── gen/                 # Generated code ❌
    └── go/              # Go SDK client ❌

features/
└── greet_sdk.feature    # SDK-based testing ❌

pkg/bdd/
├── steps/
│   └── sdk_steps.go     # SDK client steps ❌
└── testserver/
    └── sdk_client.go    # SDK client wrapper ❌
```
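Should Phase 3 ever be picked up, the `api/gen/` tree above would typically be produced by oapi-codegen. One way to wire it up is a `go:generate` directive; the flags are oapi-codegen's own, while the package name, output paths, and running the tool via `go run` from go.mod are assumptions:
```go
// Package api would hold the hand-written OpenAPI spec plus generated code.
// Running `go generate ./api` would emit a typed Go client under gen/go/,
// which the deferred sdk_steps.go could then drive from BDD scenarios.
//
//go:generate go run github.com/oapi-codegen/oapi-codegen/v2/cmd/oapi-codegen -generate types,client -package sdk -o gen/go/client.gen.go openapi.yaml
package api
```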
## Current Testing Benefits
### 1. Direct HTTP Tests ✅ (Our Current Approach)
- Verify raw API behavior ✅
- Test edge cases and error handling ✅
- Black box testing of actual endpoints ✅
- No dependency on generated code ✅
- Simple to write and maintain ✅
- Fast execution ✅
- Clear failure messages ✅
### 2. SDK-Based Tests ❌ (Not Implemented)
- Would validate generated client works correctly ❌
- Would test client integration patterns ❌
- Would catch issues in SDK generation ❌
- Would provide examples for SDK users ❌
- Would add complexity to test suite ❌
- Would require maintenance of generated code ❌
## Example SDK-Based Feature
```gherkin
# features/greet_sdk.feature
Feature: Greet Service SDK
  The generated SDK should work correctly with the service

  Scenario: SDK default greeting
    Given the server is running
    And I have a configured SDK client
    When I call Greet with no name
    Then the response should be "Hello world!"

  Scenario: SDK personalized greeting
    Given the server is running
    And I have a configured SDK client
    When I call Greet with name "John"
    Then the response should be "Hello John!"

  Scenario: SDK error handling
    Given the server is running
    And I have a configured SDK client
    When I call Greet with invalid parameters
    Then I should receive an appropriate error
```
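Behind such a feature file, the step definitions would call the generated client instead of raw HTTP. Everything in the sketch below is hypothetical: the `sdk` package, `ClientWithResponses`, `GreetParams`, and `GreetWithResponse` follow oapi-codegen's usual naming conventions, but no such code is generated in this repo:
```go
package steps

import (
	"context"
	"fmt"

	sdk "dance-lessons-coach/api/gen/go" // hypothetical generated package
)

// sdkFeature mirrors the direct-HTTP state struct, but drives the
// generated client. All names here are illustrative.
type sdkFeature struct {
	client *sdk.ClientWithResponses
	last   string
}

func (s *sdkFeature) iCallGreetWithName(name string) error {
	// oapi-codegen clients expose one <Operation>WithResponse method per
	// operation; optional query params become pointer fields on a Params struct.
	resp, err := s.client.GreetWithResponse(context.Background(), &sdk.GreetParams{Name: &name})
	if err != nil {
		return err
	}
	if resp.JSON200 == nil {
		return fmt.Errorf("unexpected status: %s", resp.Status())
	}
	s.last = resp.JSON200.Message
	return nil
}
```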
## Implementation Order
1. **Implement BDD with direct HTTP client** (COMPLETED)
2. **Add Swagger/OpenAPI documentation** (COMPLETED)
3. **Generate SDK clients from Swagger spec** (DEFERRED - not currently needed)
4. **Add SDK-based BDD tests** (DEFERRED - not currently needed)
## Test Organization
```bash
features/
├── greet.feature        # Direct HTTP tests
├── greet_sdk.feature    # SDK client tests (deferred)
├── health.feature       # Direct HTTP tests
├── health_sdk.feature   # SDK client tests (deferred)
└── readiness.feature    # Direct HTTP tests
```
## Links
* [OpenAPI Specification](https://swagger.io/specification/)
* [Swagger Codegen](https://github.com/swagger-api/swagger-codegen)
* [Godog GitHub](https://github.com/cucumber/godog)
* [Testing Pyramid](https://martinfowler.com/articles/practical-test-pyramid.html)
## Future Enhancements
### If We Need SDK Generation Later
* Add oapi-codegen for SDK generation
* Generate Go, TypeScript, Python clients
* Add SDK-based BDD tests
* Implement automated SDK generation in CI/CD
* Add SDK validation to workflow
### Current Focus (More Valuable)
* Add performance testing to BDD suite
* Integrate contract testing
* Add API version compatibility testing
* Improve test coverage for edge cases
* Add more realistic test scenarios
## Monitoring and Maintenance
### Current Approach
* ✅ Regular review of test coverage
* ✅ Update tests when API changes
* ✅ Keep OpenAPI spec in sync with implementation (see the sketch after this list)
* ✅ Monitor test execution in CI/CD
* ✅ Review BDD scenarios for realism
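One low-ceremony way to keep the spec in sync is to regenerate it from the annotations on demand. A `go:generate` sketch using swag's CLI follows; the `-g`/`-o` flags are swag's own, while the paths assume the pkg/server layout shown earlier:
```go
// Regenerates swagger.json, swagger.yaml, and docs.go from the handler
// annotations whenever `go generate ./pkg/server` is run.
//
//go:generate go run github.com/swaggo/swag/cmd/swag init -g main.go -o docs
package server
```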
### If We Add SDK Generation Later
* Monitor SDK generation for breaking changes
* Validate generated SDKs work correctly
* Update SDK-based tests when API changes
* Maintain compatibility between SDK versions
* Document SDK usage patterns
## Conclusion
### What We Actually Have (Current Implementation)
* **BDD Testing**: Comprehensive behavioral testing with Godog
* **OpenAPI Documentation**: Interactive Swagger UI with swaggo/swag
* **Direct HTTP Testing**: 7 scenarios, 21 steps, 100% passing
* **Production Ready**: Fully tested and operational
### What We Don't Have (Deferred)
* **SDK Generation**: No generated clients from OpenAPI spec
* **Hybrid Testing**: No SDK-based BDD tests
* **Client Validation**: No automated client validation
* **oapi-codegen**: Using swaggo instead
### Why This is the Right Approach
1. **Pragmatic**: Solves immediate needs without over-engineering
2. **Maintainable**: Simple infrastructure, easy to understand
3. **Effective**: Covers all functionality with direct HTTP testing
4. **Scalable**: Can add SDK generation later if needed
5. **Team-Appropriate**: Matches current team size and expertise
### Future Considerations
If we need SDK generation in the future:
- Add oapi-codegen alongside swaggo
- Generate Go, TypeScript, Python clients
- Add SDK-based BDD tests
- Implement true hybrid testing approach
**Current Status:** Partially Implemented (BDD + Documentation)
**BDD Tests:** 7 scenarios, 21 steps, all passing
**Health Check:** http://localhost:8080/api/health
**OpenAPI Docs:** http://localhost:8080/swagger/
**OpenAPI Spec:** http://localhost:8080/swagger/doc.json
**Proposed by:** Arcodange Team
**Implemented:** 2026-04-05
**Last Updated:** 2026-04-05
**Status:** Production Ready for Current Needs