and preserve complete architectural context for AI/developer reference.

## Changes

### Documentation Consolidation 🗂️

- Simplified README.md by ~100 lines (25% reduction)
- Removed redundant sections (project structure, configuration, API docs)
- Added strategic cross-references between README.md and AGENTS.md
- README.md now focused on user onboarding and basic usage
- AGENTS.md maintained as the complete technical reference

### Architecture Decision Records ✅

- Added comprehensive ADR directory with 9 decision records:
  * 0001-go-1.26.1-standard.md
  * 0002-chi-router.md
  * 0003-zerolog-logging.md (enhanced with Zap analysis)
  * 0004-interface-based-design.md
  * 0005-graceful-shutdown.md
  * 0006-configuration-management.md
  * 0007-opentelemetry-integration.md
  * 0008-bdd-testing.md
  * 0009-hybrid-testing-approach.md
- Added adr/README.md with guidelines and a template
- Enhanced the Zerolog ADR with detailed performance benchmarking vs Zap

### Content Organization 📝

- README.md: user-focused guide with quick start and basic examples
- AGENTS.md: developer/AI-focused complete technical reference
- ADR directory: architectural decision history and rationale

## Impact

- ✅ Better user onboarding experience
- ✅ Preserved complete technical context for AI agents
- ✅ Reduced maintenance burden through consolidation
- ✅ Improved discoverability of advanced documentation
- ✅ Established an ADR process for future decisions

## Related

- Resolves documentation redundancy issues
- Prepares for BDD implementation with clear context
- Supports future Swagger integration decisions
- Maintains project history for new contributors

Generated by Mistral Vibe.
Co-Authored-By: Mistral Vibe <vibe@mistral.ai>
# Combine BDD and Swagger-based testing

* Status: Proposed
* Deciders: Gabriel Radureau, AI Agent
* Date: 2026-04-05
## Context and Problem Statement

We need to establish a comprehensive testing strategy for DanceLessonsCoach that provides:

- Behavioral verification through BDD
- API documentation through Swagger/OpenAPI
- Client SDK validation
- Clear separation of concerns
- A maintainable test suite
## Decision Drivers

* Need for comprehensive API testing
* Desire for living documentation
* Requirement for client SDK validation
* Need for clear test organization
* Desire for a maintainable test suite
## Considered Options

* BDD only - use Godog for all testing
* Swagger only - use OpenAPI for testing
* Hybrid approach - combine BDD and Swagger testing
* Custom solution - build our own testing framework
## Decision Outcome

Chosen option: "Hybrid approach", because it provides the best combination of behavioral verification, API documentation, client validation, and maintainable test organization.
## Pros and Cons of the Options

### Hybrid approach

* Good, because it combines the strengths of both approaches
* Good, because BDD provides behavioral verification
* Good, because Swagger provides API documentation
* Good, because SDK testing validates clients
* Good, because of the clear separation of concerns
* Bad, because of the more complex setup
* Bad, because it requires maintaining two test suites
### BDD only

* Good, because of the consistent testing approach
* Good, because it is well suited to behavioral verification
* Bad, because it produces no API documentation
* Bad, because it provides no SDK validation
### Swagger only

* Good, because of the strong API documentation
* Good, because of the SDK validation
* Bad, because it is poorly suited to behavioral testing
* Bad, because it is less readable for non-technical stakeholders
### Custom solution

* Good, because it would be tailored to our needs
* Good, because it would have no external dependencies
* Bad, because it is time-consuming to develop
* Bad, because we would need to maintain it ourselves
## Implementation Strategy

### Phase 1: BDD Implementation (Current)

```
features/
├── greet.feature        # Direct HTTP testing
├── health.feature
└── readiness.feature

pkg/bdd/
├── steps/               # Step definitions
│   └── http_steps.go    # Direct HTTP client steps
└── testserver/          # Test infrastructure
```
### Phase 2: Swagger Integration (Future)

```
api/
├── openapi.yaml         # OpenAPI specification
└── gen/                 # Generated code
    └── go/              # Go SDK client

features/
└── greet_sdk.feature    # SDK-based testing (added)

pkg/bdd/
├── steps/
│   └── sdk_steps.go     # SDK client steps (added)
└── testserver/
    └── sdk_client.go    # SDK client wrapper (added)
```
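Phase 2 hinges on `api/openapi.yaml`. A minimal sketch of what that spec might contain for the greet endpoint is shown below; the path, parameter, and response shape are assumptions for illustration, not the project's actual contract.

```yaml
# api/openapi.yaml (sketch; path and schemas are assumptions)
openapi: 3.0.3
info:
  title: DanceLessonsCoach API
  version: 0.1.0
paths:
  /greet:
    get:
      summary: Return a greeting
      parameters:
        - name: name
          in: query
          required: false
          schema:
            type: string
      responses:
        "200":
          description: Greeting message
          content:
            text/plain:
              schema:
                type: string
```

Once a spec like this exists, the Go SDK under `api/gen/go/` can be generated from it and exercised by the SDK-based features.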
## Hybrid Testing Benefits

### 1. Direct HTTP Tests

- Verify raw API behavior
- Test edge cases and error handling
- Black-box testing of the actual endpoints
- No dependency on generated code

### 2. SDK-Based Tests

- Validate that the generated client works correctly
- Test client integration patterns
- Catch issues in SDK generation
- Provide examples for SDK users
## Example SDK-Based Feature

```gherkin
# features/greet_sdk.feature
Feature: Greet Service SDK
  The generated SDK should work correctly with the service

  Scenario: SDK default greeting
    Given the server is running
    And I have a configured SDK client
    When I call Greet with no name
    Then the response should be "Hello world!"

  Scenario: SDK personalized greeting
    Given the server is running
    And I have a configured SDK client
    When I call Greet with name "John"
    Then the response should be "Hello John!"

  Scenario: SDK error handling
    Given the server is running
    And I have a configured SDK client
    When I call Greet with invalid parameters
    Then I should receive an appropriate error
```
## Implementation Order

1. **Implement BDD with direct HTTP client** (current focus)
2. **Add Swagger/OpenAPI documentation** (next step)
3. **Generate SDK clients from the Swagger spec**
4. **Add SDK-based BDD tests** (final step)
## Test Organization

```
features/
├── greet.feature        # Direct HTTP tests
├── greet_sdk.feature    # SDK client tests
├── health.feature       # Direct HTTP tests
├── health_sdk.feature   # SDK client tests
└── readiness.feature    # Direct HTTP tests
```
## Links

* [OpenAPI Specification](https://swagger.io/specification/)
* [Swagger Codegen](https://github.com/swagger-api/swagger-codegen)
* [Godog GitHub](https://github.com/cucumber/godog)
* [Testing Pyramid](https://martinfowler.com/articles/practical-test-pyramid.html)
## Future Enhancements

* Add performance testing to the BDD suite
* Integrate contract testing
* Add API version compatibility testing
* Implement automated SDK generation in CI/CD
## Monitoring and Maintenance

* Regularly review test coverage
* Update tests when the API changes
* Keep the Swagger spec in sync with the implementation
* Monitor SDK generation for breaking changes