Make harness verification portable

2026-04-25 16:37:13 +08:00
parent 2c5e9af02a
commit 2dc2815620
2 changed files with 21 additions and 7 deletions


@@ -48,12 +48,23 @@ Improve AITuner convergence for the `dash0` internal vLLM + Qwen3.5-27B 0-8k cha
 ## Remote Experiment Log
-Pending. Next steps:
-1. Commit and push the harness implementation.
-2. Pull on `dash0` in `/home/admin/cpfs/wjh/aituner/aituner`.
-3. Start a real harness-guided Qwen3.5-27B 0-8k chat tuning run from `configs/examples/dash0_qwen27b_tight_slo_run4_0_8k.json`.
-4. Compare the first few iterations against the prior 12-iteration behavior:
+### 2026-04-25 16:30-16:45 CST
+- Pushed commit `2c5e9af` to `origin/main` and pulled it on `dash0`.
+- Remote prompt check command:
+  - `PYTHONPATH=src python3 -m aituner.cli study prompt --study-root /tmp/aituner-harness-prompt-check/dash0-qwen27b-tight-slo-10min-run4-chat-0-8k --store-root /tmp/aituner-harness-prompt-check --prompt-name harness-check`
+- Harness profile for `chat_w20260311_1000`, after applying the 0-8k filter:
+  - L: p50 1992, p95 7628, p99 8102, tail ratio 3.83, regime `moderate_tail_prefill_sensitive`.
+  - C: repeated token ratio estimate 0.191, repeated block ratio 0.189, multi-turn ratio 0.160, regime `low_prefix_reuse`.
+  - A: request rate 29.52 req/s, p95 1s QPS 40, burst ratio 1.36, regime `smooth`.
+- Active harnesses: `tensor-parallel-size` and `max-num-batched-tokens`, which matches a TTFT/prefill-sensitive 0-8k chat workload.
+- Remote `compileall` passed.
+- Remote `unittest discover` initially exposed two pre-existing path-sensitive tests that hardcoded `/home/gahow/phd/aituner`; fixed them to derive `REPO_ROOT` from the test file path.
+Remaining next steps:
+1. Start a real harness-guided Qwen3.5-27B 0-8k chat tuning run from `configs/examples/dash0_qwen27b_tight_slo_run4_0_8k.json`.
+2. Compare the first few iterations against the prior 12-iteration behavior:
 - best request rate per GPU should improve or reach the known good region in fewer trials;
 - proposals should follow the active bottleneck harness;
 - if the incumbent has converged, the LLM should emit `should_stop=true` instead of proposing a weak exploratory config.
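The path-sensitive test fix mentioned in the log (replacing a hardcoded `/home/gahow/phd/aituner` with a `REPO_ROOT` derived from the test file's own path) can be sketched as follows. This is a minimal sketch, not the repository's actual test code; the `tests/` directory name and the one-level-deep layout are assumptions.

```python
from pathlib import Path

# Derive the repository root from this file's location instead of a
# hardcoded absolute path, so the tests pass on any checkout
# (e.g. /home/admin/cpfs/wjh/aituner/aituner on dash0).
# Assumption: this file sits one level below the repo root, e.g.
# <repo>/tests/test_paths.py, so parents[1] is the repo root.
REPO_ROOT = Path(__file__).resolve().parents[1]
SRC_DIR = REPO_ROOT / "src"
```

Deriving the root this way keeps `unittest discover` portable between the local machine and the remote host, which is what the commit title refers to.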