Combine BDD and Swagger-based testing
- Status: ✅ Partially Implemented (BDD + Documentation only)
- Deciders: Gabriel Radureau, AI Agent
- Date: 2026-04-05
- Last Updated: 2026-04-05
- Implementation Status: BDD testing and OpenAPI documentation completed, SDK generation deferred
Context and Problem Statement
We need to establish a comprehensive testing strategy for DanceLessonsCoach that provides:
- Behavioral verification through BDD
- API documentation through Swagger/OpenAPI
- Client SDK validation
- Clear separation of concerns
- Maintainable test suite
Decision Drivers
- Need for comprehensive API testing
- Desire for living documentation
- Requirement for client SDK validation
- Need for clear test organization
- Desire for maintainable test suite
Considered Options
- BDD only - Use Godog for all testing
- Swagger only - Use OpenAPI for testing
- Hybrid approach - Combine BDD and Swagger testing
- Custom solution - Build our own testing framework
Decision Outcome
Chosen option: "Hybrid approach" because it provides the best combination of behavioral verification, API documentation, client validation, and maintainable test organization.
Implementation Status
Status: ✅ Partially Implemented (BDD + Documentation only)
What We Actually Have
- ✅ BDD Testing with Direct HTTP Client
  - Godog framework integration
  - Direct HTTP testing of all endpoints
  - Comprehensive feature coverage
  - Clear, readable scenarios
  - 7 scenarios, 21 steps, 100% passing
- ✅ OpenAPI/Swagger Documentation
  - swaggo/swag integration
  - Interactive Swagger UI at /swagger/
  - OpenAPI 2.0 specification
  - Hierarchical tagging system
  - Embedded documentation for single-binary deployment
- ❌ Swagger-based Testing (Not implemented)
  - No SDK generation from OpenAPI spec
  - No SDK-based BDD tests
  - No client validation through generated SDKs
  - No api/gen/ directory with generated clients
Why We Don't Need Full Hybrid Testing (Yet)
- Current Scale: Small API with limited endpoints (health, ready, version, greet)
- Team Size: Small team can effectively maintain direct HTTP tests
- Complexity: SDK generation adds unnecessary infrastructure complexity
- Maintenance: Direct HTTP tests are simpler to write and maintain
- Coverage: Current BDD tests provide comprehensive coverage of all functionality
- No External Consumers: No current need for official SDKs or client libraries
- Manual Testing Sufficient: Team can manually test client integration patterns
Current Testing Architecture
features/
├── greet.feature # Direct HTTP testing ✅
├── health.feature # Direct HTTP testing ✅
└── readiness.feature # Direct HTTP testing ✅
pkg/bdd/
├── steps/ # Step definitions ✅
│ └── steps.go # Direct HTTP client steps ✅
└── testserver/ # Test infrastructure ✅
├── client.go # HTTP client ✅
└── server.go # Test server ✅
pkg/server/docs/ # OpenAPI documentation ✅
├── swagger.json # Generated spec ✅
├── swagger.yaml # Generated spec ✅
└── docs.go # Embedded docs ✅
Missing Components for Full Hybrid Approach
api/ # Not implemented ❌
├── openapi.yaml # Manual spec (not generated) ❌
└── gen/ # Generated code ❌
└── go/ # Go SDK client ❌
features/
└── greet_sdk.feature # SDK-based testing ❌
pkg/bdd/
├── steps/
│ └── sdk_steps.go # SDK client steps ❌
└── testserver/
└── sdk_client.go # SDK client wrapper ❌
Pros and Cons of the Options
Hybrid approach
- Good, because combines strengths of both approaches
- Good, because BDD for behavioral verification
- Good, because Swagger for API documentation
- Good, because SDK testing for client validation
- Good, because clear separation of concerns
- Bad, because more complex setup
- Bad, because requires maintaining two test suites
BDD only
- Good, because consistent testing approach
- Good, because good for behavioral verification
- Bad, because no API documentation
- Bad, because no SDK validation
Swagger only
- Good, because good API documentation
- Good, because SDK validation
- Bad, because poor for behavioral testing
- Bad, because less readable for non-technical stakeholders
Custom solution
- Good, because tailored to our needs
- Good, because no external dependencies
- Bad, because time-consuming to develop
- Bad, because need to maintain ourselves
Implementation Strategy
Phase 1: BDD Implementation (Current) ✅ COMPLETED
features/
├── greet.feature # Direct HTTP testing ✅
├── health.feature # Direct HTTP testing ✅
└── readiness.feature # Direct HTTP testing ✅
pkg/bdd/
├── steps/ # Step definitions ✅
│ └── steps.go # Direct HTTP client steps ✅
└── testserver/ # Test infrastructure ✅
├── client.go # HTTP client ✅
└── server.go # Test server ✅
Phase 2: Swagger Integration (Current) ✅ COMPLETED
pkg/server/docs/ # OpenAPI documentation ✅
├── swagger.json # Generated spec ✅
├── swagger.yaml # Generated spec ✅
└── docs.go # Embedded docs ✅
pkg/server/ # Server integration ✅
├── server.go # Swagger UI routes ✅
└── main.go # Swagger annotations ✅
Phase 3: SDK Generation (Future - Not Currently Needed) ❌ DEFERRED
api/ # Future consideration ❌
├── openapi.yaml # Manual spec (if needed) ❌
└── gen/ # Generated code ❌
└── go/ # Go SDK client ❌
features/
└── greet_sdk.feature # SDK-based testing ❌
pkg/bdd/
├── steps/
│ └── sdk_steps.go # SDK client steps ❌
└── testserver/
└── sdk_client.go # SDK client wrapper ❌
Current Testing Benefits
1. Direct HTTP Tests ✅ (Our Current Approach)
- Verify raw API behavior ✅
- Test edge cases and error handling ✅
- Black box testing of actual endpoints ✅
- No dependency on generated code ✅
- Simple to write and maintain ✅
- Fast execution ✅
- Clear failure messages ✅
2. SDK-Based Tests ❌ (Not Implemented)
- Would validate generated client works correctly ❌
- Would test client integration patterns ❌
- Would catch issues in SDK generation ❌
- Would provide examples for SDK users ❌
- Would add complexity to test suite ❌
- Would require maintenance of generated code ❌
Example SDK-Based Feature
# features/greet_sdk.feature
Feature: Greet Service SDK
  The generated SDK should work correctly with the service

  Scenario: SDK default greeting
    Given the server is running
    And I have a configured SDK client
    When I call Greet with no name
    Then the response should be "Hello world!"

  Scenario: SDK personalized greeting
    Given the server is running
    And I have a configured SDK client
    When I call Greet with name "John"
    Then the response should be "Hello John!"

  Scenario: SDK error handling
    Given the server is running
    And I have a configured SDK client
    When I call Greet with invalid parameters
    Then I should receive an appropriate error
Implementation Order
- ✅ Implement BDD with direct HTTP client (COMPLETED)
- ✅ Add Swagger/OpenAPI documentation (COMPLETED)
- ❌ Generate SDK clients from Swagger spec (DEFERRED - not currently needed)
- ❌ Add SDK-based BDD tests (DEFERRED - not currently needed)
Test Organization
features/
├── greet.feature # Direct HTTP tests ✅
├── greet_sdk.feature # SDK client tests ❌ (future)
├── health.feature # Direct HTTP tests ✅
├── health_sdk.feature # SDK client tests ❌ (future)
└── readiness.feature # Direct HTTP tests ✅
Future Enhancements
If We Need SDK Generation Later
- Add oapi-codegen for SDK generation
- Generate Go, TypeScript, Python clients
- Add SDK-based BDD tests
- Implement automated SDK generation in CI/CD
- Add SDK validation to workflow
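If that migration ever happens, the generation step could be wired in with a `go:generate` directive. This is only a sketch of the deferred plan: the oapi-codegen invocation below follows that tool's documented CLI, but the api/openapi.yaml and api/gen/go layout is the future structure described in this ADR, not existing code.

```go
// Package api would hold the hand-written OpenAPI spec and its generated
// SDK clients if SDK generation is adopted later. Running `go generate ./api`
// would then regenerate the Go client from api/openapi.yaml.
//
// Assumed layout (from this ADR's deferred Phase 3, not yet real):
//   api/openapi.yaml        - source spec
//   api/gen/go/client.gen.go - generated Go SDK

//go:generate go run github.com/oapi-codegen/oapi-codegen/v2/cmd/oapi-codegen -generate types,client -package client -o gen/go/client.gen.go openapi.yaml
package api
```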
Current Focus (More Valuable)
- Add performance testing to the BDD suite
- Integrate contract testing
- Add API version compatibility testing
- Improve test coverage for edge cases
- Add more realistic test scenarios
Monitoring and Maintenance
Current Approach
- ✅ Regular review of test coverage
- ✅ Update tests when API changes
- ✅ Keep OpenAPI spec in sync with implementation
- ✅ Monitor test execution in CI/CD
- ✅ Review BDD scenarios for realism
If We Add SDK Generation Later
- Monitor SDK generation for breaking changes
- Validate generated SDKs work correctly
- Update SDK-based tests when API changes
- Maintain compatibility between SDK versions
- Document SDK usage patterns
Conclusion
What We Actually Have (Current Implementation)
- ✅ BDD Testing: Comprehensive behavioral testing with Godog
- ✅ OpenAPI Documentation: Interactive Swagger UI with swaggo/swag
- ✅ Direct HTTP Testing: 7 scenarios, 21 steps, 100% passing
- ✅ Production Ready: Fully tested and operational
What We Don't Have (Deferred)
- ❌ SDK Generation: No generated clients from OpenAPI spec
- ❌ Hybrid Testing: No SDK-based BDD tests
- ❌ Client Validation: No automated client validation
- ❌ oapi-codegen: Using swaggo/swag instead
Why This is the Right Approach
- Pragmatic: Solves immediate needs without over-engineering
- Maintainable: Simple infrastructure, easy to understand
- Effective: Covers all functionality with direct HTTP testing
- Scalable: Can add SDK generation later if needed
- Team-Appropriate: Matches current team size and expertise
Future Considerations
If we need SDK generation in the future:
- Add oapi-codegen alongside swaggo
- Generate Go, TypeScript, Python clients
- Add SDK-based BDD tests
- Implement true hybrid testing approach
- Current Status: ✅ Partially Implemented (BDD + Documentation)
- BDD Tests: all passing (7 scenarios, 21 steps)
- Health Endpoint: http://localhost:8080/api/health
- OpenAPI Docs: http://localhost:8080/swagger/
- OpenAPI Spec: http://localhost:8080/swagger/doc.json
- Proposed by: Arcodange Team
- Implemented: 2026-04-05
- Last Updated: 2026-04-05
- Status: Production Ready for Current Needs