# 27. Ollama Tier 1 onboarding via meta-trainer-bootstrap
**Date:** 2026-05-05
**Status:** Proposed
**Authors:** Gabriel Radureau, AI Agent (Claude Opus 4.7 Tier 3 inspector)
## Context and Problem Statement
The autonomous trainer day on 2026-05-05 validated that Mistral Vibe (cloud) can drive a complete PR lifecycle on this project: ICM workspace → phase-planner → implementation → verifier audit → PR open (cf. PR #54, Q-041 in `~/.vibe/memory/reference/mistral-quirks.md`). Two limitations remain:
1. **Vendor risk** — every autonomous run draws down the Mistral cloud subscription quota. If the quota is exhausted mid-month or the API is unavailable, autonomous capability is lost.
2. **Sovereignty story** — ARCODANGE's stated direction (cf. `migration-claude-vers-mistral-phase-1.md`) is to reduce dependence on a single foreign vendor. The hardware exists locally (M4, 128 GB); the missing link is wiring a local model into the same Tier 1 executor role Mistral plays today.
The user-flagged candidate models (cf. `~/.vibe/memory/reference/ollama-candidate-models.md`):
* `nemotron-3-super`
* `gemma4:31b`
Both are large enough to plausibly handle the agentic coding role and small enough to fit in 128 GB RAM with headroom for tools. Neither has been tested under the ARCODANGE methodology (canary suite, ICM workspace traversal, verifier-skill discipline).
The methodology to onboard a new Tier 1 already exists: the `meta-trainer-bootstrap` skill at `~/.vibe/skills/meta-trainer-bootstrap/`. It runs a 10-canary suite (C-001..C-010), copies and adapts the skill library to the new model's harness tool names, stands up a `<model>-quirks.md` baseline, and produces a Tier 3 audit report. It has so far been validated only on Mistral itself (we are currently running the methodology Mistral-on-Mistral, which is unusual; the canary suite was originally written for a different model).
## Decision Drivers
* **Quota insurance** — a working local Tier 1 means autonomous capability survives a Mistral outage or exhaustion of the subscription quota
* **Sovereignty** — local execution removes the single-vendor dependency for the autonomous workflow
* **Methodology validation** — `meta-trainer-bootstrap` has never been run on a fresh model in production, only smoke-tested; this is its first real test
* **Cost** — Ollama is local-only (no per-call price). The cost is the bootstrap effort plus ongoing M4 power consumption.
* **Model maturity** — both candidates are recent; their agentic coding ability is an empirical question, not an established fact
## Considered Options
### Option 1: Bootstrap `nemotron-3-super` first, then `gemma4:31b`
Run the canary suite on each, document quirks separately, decide based on canary pass rate and cost-per-task.
* Good — comparative data, makes the choice empirical
* Good — discovers any meta-trainer-bootstrap bugs early on the first attempt
* Bad — doubles the bootstrap effort (~4-8 hours per model)
* Bad — requires holding both models on disk (large)
### Option 2: Bootstrap one model only, picked on prior reputation
Pick one (e.g. `nemotron-3-super` per the user's explicit ordering in `ollama-candidate-models.md`) and commit. Skip the comparison.
* Good — half the effort, ships faster
* Bad — no fallback if the chosen model is unsuitable
* Bad — anchors the methodology to one model's quirks before we know they generalise
### Option 3: Defer until Mistral autonomous shows real strain
Do nothing yet. Wait for quota pressure or a Mistral outage to force the issue. Reactive instead of proactive.
* Good — zero effort now
* Bad — when the trigger fires, we are unprepared and the bootstrap is rushed
* Bad — postpones validation of `meta-trainer-bootstrap` indefinitely
### Option 4: Skip Ollama, evaluate a different vendor (Anthropic, OpenAI)
Bring in a second cloud model as Tier 1 instead of going local.
* Good — likely higher quality than 31B local
* Bad — replaces single-vendor dependence with two-vendor dependence; doesn't solve sovereignty
* Bad — we already have Claude as Tier 3 inspector via Anthropic; mixing roles complicates the methodology
## Decision Outcome
Chosen option: **Option 2 — Bootstrap `nemotron-3-super` first**, deferring `gemma4:31b` to a follow-up ADR if `nemotron-3-super` underperforms or shows unfixable quirks.
Rationale:
- Quota pressure is real but not immediate (~3.5% of the monthly quota was spent on the heavy autonomous trainer day of 2026-05-05); we have time but should not procrastinate
- Comparative testing (Option 1) is technically right but pragmatically slow for an unproven methodology
- The user's explicit ordering signals their prior on which model to try first; respect it
- If the canary suite fails substantially on `nemotron-3-super`, we pivot to `gemma4:31b` with the lessons (and per-model quirks file) from the first attempt — net learning either way
## Implementation Plan
1. **Pre-flight** — verify `ollama` is installed, the model is pulled (`ollama pull nemotron-3-super`), and the M4 has enough free RAM (model size + ~16 GB headroom for tools). A shell sketch of steps 1-2 follows this list.
2. **Run the `meta-trainer-bootstrap` skill** — with `TARGET_MODEL_ID=nemotron-3-super`, `TARGET_HARNESS=ollama run nemotron-3-super`, `TARGET_PROJECT_ROOT=<a fresh clone or worktree>`. Budget: 5 EUR-equivalent of Mistral Tier 2 orchestration cost plus 2-4 hours of trainer attention.
3. **Canary suite** — run C-001..C-010; record each result in `~/.vibe/memory/reference/nemotron-3-super-quirks.md` as `Q-101..Q-110` (the `Q-001..Q-099` range is reserved for the legacy Mistral baseline).
4. **Skill library adaptation** — for each ARCODANGE skill that currently relies on Mistral-specific tool names (`read_file`, `write_file`, etc.), adapt to whatever tool names the Ollama harness exposes. Document the deltas.
5. **Smoke test** — run a single small task end-to-end on a low-risk project. Use the ICM workspace pattern. Verify worktree isolation (Q-038 fix) still applies.
6. **Tier 3 report** — produce `bootstrap-report.md` for Claude inspector review. Include canary pass rate, key quirks, KPI baseline numbers, open friction points.
7. **Decision gate** — based on the report, either (a) promote `nemotron-3-super` to production Tier 1 and update `~/.vibe/config.toml` accordingly, (b) try `gemma4:31b` as a follow-up, or (c) escalate to Tier 3 for a strategic pivot.
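As a concrete starting point for steps 1 and 2, here is a minimal shell sketch. The `ollama` commands and the `sysctl` RAM check are standard; the final invocation is hypothetical, because the actual entry point of the `meta-trainer-bootstrap` skill is defined in its SKILL.md, not in this ADR; only the three `TARGET_*` variables come from the plan above.

```bash
#!/usr/bin/env bash
# Pre-flight sketch for the nemotron-3-super bootstrap (steps 1-2 above).
set -euo pipefail

MODEL="nemotron-3-super"

# Step 1a: is the Ollama CLI installed?
command -v ollama >/dev/null || { echo "ollama is not installed"; exit 1; }

# Step 1b: is the model pulled? Pull it if not.
ollama list | grep -q "$MODEL" || ollama pull "$MODEL"

# Step 1c: sanity-check physical RAM on the M4 (free-RAM accounting on macOS is
# fuzzier; this only confirms the 128 GB assumption behind the ~16 GB headroom target).
total_gib=$(( $(sysctl -n hw.memsize) / 1024 / 1024 / 1024 ))
echo "Physical RAM: ${total_gib} GiB"

# Step 2: launch the bootstrap skill. The command name below is hypothetical;
# use whatever entry point ~/.vibe/skills/meta-trainer-bootstrap/SKILL.md defines.
export TARGET_MODEL_ID="$MODEL"
export TARGET_HARNESS="ollama run $MODEL"
export TARGET_PROJECT_ROOT="$HOME/src/dance-lessons-coach-bootstrap"   # placeholder path
vibe run meta-trainer-bootstrap
```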
## Pros and Cons of the Options
### Option 1 (Bootstrap both)
* Good — comparative data
* Good — early bug detection on the methodology
* Bad — double effort
* Bad — no clear way to choose without significant additional time investment for the second model
### Option 2 (Chosen — `nemotron-3-super` first)
* Good — concrete forward motion
* Good — methodology gets its first real test
* Good — `meta-trainer-bootstrap` skill validated end-to-end (currently only smoke-tested)
* Bad — risk of picking the wrong model and wasting the bootstrap effort
* Mitigation: per-model quirks files mean the second attempt is cheaper (skill adaptations transfer)
### Option 3 (Defer)
* Good — zero effort
* Bad — reactive, increases risk under outage scenarios
### Option 4 (Different vendor)
* Good — likely higher quality
* Bad — does not solve sovereignty
* Bad — methodology already has Claude as Tier 3; another Anthropic-family model in Tier 1 conflates roles
## Consequences
* `meta-trainer-bootstrap` skill is exercised end-to-end for the first time. Discoveries during this run will likely produce Q-042+ entries in `mistral-quirks.md` and a separate `nemotron-3-super-quirks.md`.
* `~/.vibe/config.toml` may need a new model alias (e.g. `local-nemotron`) configured for testing without affecting the production `mistral-vibe-cli-latest` default; a hedged sketch follows this list.
* If successful, the next ADR (0028 or higher) will document the production switch (or split, e.g. routine tasks → local, complex tasks → cloud).
* Quota usage from this bootstrap: Tier 2 Mistral orchestration only; Tier 1 Ollama runs are free at the API level.
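To illustrate the alias point above, a hedged sketch of what the test-only entry might look like. The section and key names are assumptions, since the actual `config.toml` schema is not documented in this ADR.

```bash
# Hypothetical: append a test-only alias to ~/.vibe/config.toml.
# Section and key names are assumptions; align them with the real schema before use.
cat >> ~/.vibe/config.toml <<'EOF'

[models.local-nemotron]
provider = "ollama"
model    = "nemotron-3-super"
# The production default (mistral-vibe-cli-latest) stays untouched.
EOF
```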
## Links
* Three-tier methodology: `~/.vibe/skills/meta-trainer-bootstrap/references/three-tier-tutor.md`
* Candidate models reference: `~/.vibe/memory/reference/ollama-candidate-models.md`
* `meta-trainer-bootstrap` skill: `~/.vibe/skills/meta-trainer-bootstrap/SKILL.md`
* Canary suite: `~/.vibe/skills/meta-trainer-bootstrap/canaries/INDEX.md`
* Q-041 (autonomy story validated on Mistral): `~/.vibe/memory/reference/mistral-quirks.md`
* Related ADRs: [ADR-0007](0007-opentelemetry-integration.md) (historical cloud/sovereignty considerations); [ADR-0023](0023-config-hot-reloading.md) (hot-reload may need different patterns under Ollama)