Periodic cleanup goroutine, started alongside the worker when DATABASE_URL
is set. Three concerns:
- DELETE rows with status='done' older than QUEUE_DONE_RETENTION (default
168h / 7 days). Rows for past successes have no value beyond a short
debugging window.
- UPDATE rows stuck in status='running' for more than QUEUE_STUCK_TIMEOUT
(default 30m) back to 'pending' so a worker can retry. Handles the
case of a pod crashing mid-job (without this, jobs stay orphaned forever).
- 'dead' rows are NEVER auto-purged (volume negligible, kept for forensics).
Configurable via env:
- QUEUE_DONE_RETENTION (default 168h)
- QUEUE_STUCK_TIMEOUT (default 30m)
- QUEUE_JANITOR_INTERVAL (default 1h)
The janitor runs once immediately at startup (recovers anything orphaned
by the previous pod before opening for new traffic), then ticks on the
interval.
Queue interface gains PurgeDone + RecoverStuck — both use Postgres'
make_interval(secs) for safe parameterization.
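Both sweeps reduce to a single statement each; a sketch assuming an updated_at bookkeeping column, with the cutoff passed as whole seconds so no SQL is built by string concatenation:

```sql
-- PurgeDone: $1 = retention in seconds
-- (updated_at is an assumed column name).
DELETE FROM gateway_jobs
 WHERE status = 'done'
   AND updated_at < now() - make_interval(secs => $1);

-- RecoverStuck: $1 = stuck timeout in seconds.
UPDATE gateway_jobs
   SET status = 'pending'
 WHERE status = 'running'
   AND updated_at < now() - make_interval(secs => $1);
```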
4 new unit tests via the fakeQueue mock (47 total, race-detector clean).
Adds the async dispatch infrastructure:
- Postgres pool + embedded migration (CREATE TABLE/INDEX IF NOT EXISTS
gateway_jobs). Auto-applied at boot. lib/pq driver (matches webapp
convention).
- queue.go: Enqueue (idempotent on UNIQUE(bot_slug, update_id) — handles
Telegram redelivery), Pop with FOR UPDATE SKIP LOCKED, MarkDone,
MarkFailed with exponential backoff (30s → 2m → 10m → 1h → dead at 5).
- worker.go: goroutine that drains the queue, dispatches via the same
Handler interface as sync, schedules retries on failure, notifies the
user once when a job goes to dead.
- BotConfig gains `async: bool`. Registry refuses bots with async=true
if DATABASE_URL is unset (queue=nil).
- Server: when bot.Async, the webhook ack is immediate; the update
payload is enqueued for the worker.
When DATABASE_URL is unset (current default), queue/worker stay disabled
and only sync handlers (echo, http, auth) work — no breaking change to
the running cluster.
Refs ~/.claude/plans/pour-les-notifications-on-inherited-seal.md § Phase 2.