Clarify base-relative validation patches

2026-04-30 06:52:09 +08:00
parent 46e9040613
commit f59919e21c
3 changed files with 12 additions and 6 deletions


@@ -49,7 +49,8 @@ The active run is now seeded from the real run5 baseline and continues from `tri
- `proposal-0002`: legal adjacent decode topology move from `TP4/DP2/EP8` to `TP2/DP4/EP8`; no EP-size search and no testcase threshold.
- `trial-0002`: completed, 0.3767 request/s, 0.0471 request/s/GPU, pass rate 0.9779.
- `trial-0003`: completed with no feasible point for `TP1/DP8/EP8`.
- `proposal-0004`: generated a plausible same-topology `max-num-seqs=160` follow-up, but the raw JSON used an object for `observation`; schema validation rejected it and the tuning CLI exited before materializing `trial-0004`.
- `trial-0004`: completed with no feasible point for `max-num-seqs=160`.
+- Important caveat: `trial-0004` did not actually validate `TP2/DP4/EP8 + max-num-seqs=160`. AITuner applies `config_patch` relative to the study base config, and the proposal only patched `max-num-seqs`. The actual launch therefore used the base topology `TP4/DP2/EP8 + max-num-seqs=160`, so this is not evidence that same-topology refinement around `trial-0002` is exhausted.
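The base-relative patch semantics behind this caveat can be sketched as a shallow dict merge. This is a minimal illustration, not AITuner's actual API: the `apply_patch` helper name and the base `max-num-seqs` value of 256 are assumptions; only the topology values and the patched `max-num-seqs=160` come from the run log above.

```python
# Hypothetical sketch of base-relative config_patch application (helper name
# and the base max-num-seqs value are assumptions, not AITuner's real API).
# The patch is merged onto the study *base* config, not onto the incumbent
# trial's config, so any field omitted from the patch reverts to its base value.
def apply_patch(base_config: dict, config_patch: dict) -> dict:
    merged = dict(base_config)
    merged.update(config_patch)
    return merged

base = {"TP": 4, "DP": 2, "EP": 8, "max-num-seqs": 256}

# proposal-0004 patched only max-num-seqs, so the launch kept TP4/DP2/EP8:
launched = apply_patch(base, {"max-num-seqs": 160})
assert launched == {"TP": 4, "DP": 2, "EP": 8, "max-num-seqs": 160}

# Validating the trial-0002 incumbent would require repeating its topology fields:
intended = apply_patch(base, {"TP": 2, "DP": 4, "max-num-seqs": 160})
assert intended["TP"] == 2 and intended["DP"] == 4
```

This is why the negative result from `trial-0004` only speaks to the base topology: the incumbent's `TP2/DP4` fields were never in the merged launch config.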
The `trial-0002` proposal matches the first useful topology direction from the earlier before-harness run, but the new harness-controlled run measured substantially better throughput for that topology.
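The schema-rejection failure mode above (an object where a string `observation` was required, crashing the CLI before trial materialization) suggests a schema-repair retry. The sketch below is hypothetical: the validator rules, function names, and re-prompt format are all assumptions; only the `observation`-must-be-a-string failure mode comes from the run log.

```python
# Hypothetical sketch of a schema-repair retry for LLM proposal JSON.
# Field names and the re-prompt format are assumptions, not AITuner's API.
import json

def validate_proposal(raw: str) -> list:
    """Return a list of validation errors; empty list means the proposal is valid."""
    try:
        obj = json.loads(raw)
    except json.JSONDecodeError as e:
        return [f"invalid JSON: {e}"]
    errors = []
    # the failure seen in proposal-0004: an object where a string is required
    if not isinstance(obj.get("observation"), str):
        errors.append("`observation` must be a string, not an object")
    if not isinstance(obj.get("config_patch"), dict):
        errors.append("`config_patch` must be an object")
    return errors

def propose_with_repair(ask_llm, max_retries: int = 1):
    """Re-prompt once with the validation error instead of exiting the loop."""
    prompt = "propose next trial"
    for _ in range(max_retries + 1):
        raw = ask_llm(prompt)
        errors = validate_proposal(raw)
        if not errors:
            return json.loads(raw)
        prompt = ("your last proposal failed schema validation: "
                  + "; ".join(errors) + "; emit corrected JSON only")
    return None  # give up cleanly; materialize no trial rather than crash the CLI
```

Under this scheme the original `proposal-0004` rejection would have triggered one corrective re-prompt instead of terminating the tuning CLI.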
@@ -60,18 +61,18 @@ Fig-18-style raw throughput table:
| Run | Iter 1 | Iter 2 | Iter 3 | Iter 4 | Iter 5 | Iter 6 | Iter 7 | Iter 8 | Iter 9 | Iter 10 | Iter 11 | Iter 12 |
| --- | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: |
| before harness request/s | 0.1267 | 0.2450 | infeasible | launch fail | infeasible | infeasible | infeasible | infeasible | 0.2817 | infeasible | infeasible | infeasible |
-| harness request/s | 0.1267 | 0.3767 | infeasible | not run | not run | not run | not run | not run | not run | not run | not run | not run |
+| harness request/s | 0.1267 | 0.3767 | infeasible | infeasible | not run | not run | not run | not run | not run | not run | not run | not run |
Per-GPU throughput table:
| Run | Iter 1 | Iter 2 | Iter 3 | Iter 4 | Iter 5 | Iter 6 | Iter 7 | Iter 8 | Iter 9 | Iter 10 | Iter 11 | Iter 12 |
| --- | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: |
| before harness req/s/GPU | 0.0158 | 0.0306 | infeasible | launch fail | infeasible | infeasible | infeasible | infeasible | 0.0352 | infeasible | infeasible | infeasible |
-| harness req/s/GPU | 0.0158 | 0.0471 | infeasible | not run | not run | not run | not run | not run | not run | not run | not run | not run |
+| harness req/s/GPU | 0.0158 | 0.0471 | infeasible | infeasible | not run | not run | not run | not run | not run | not run | not run | not run |
Decision: the harness accelerated convergence on qwen235b decode-only, but this is not a proof of global optimality after one proposal. The before-harness run first reached its best observed throughput at iter 9 with 0.2817 request/s. The harness run exceeded that value at iter 2 with 0.3767 request/s, a 1.34x improvement over the before-harness 12-iter best and a 2.97x improvement over the baseline config.
-The harness did not stop cleanly after finding the strong incumbent. It spent one additional trial on `TP1/DP8/EP8`, which found no feasible point, and then the next LLM proposal failed schema validation before trial materialization. So the performance convergence goal is met, but the tuning loop should be hardened so a strong incumbent causes deterministic stop or a schema-repair retry rather than relying only on prompt instructions.
+The harness did not stop cleanly after finding the strong incumbent. It spent one additional trial on `TP1/DP8/EP8`, which found no feasible point. The next proposal intended same-topology runtime validation, but omitted the incumbent topology fields, so the materialized trial validated the base topology instead. So the performance convergence goal is met, but local optimality has not been fully proven yet.
Important interpretation: `trial-0002` should be called the current best observed config, not "proven best". The harness got there quickly because the decode-only harness biases the first proposal toward the most relevant adjacent topology redistribution, `TP4/DP2/EP8 -> TP2/DP4/EP8`, instead of spending trials on prefill-oriented runtime knobs. Later iterations are still needed to validate local optimality by testing nearby topologies and same-topology runtime knobs.
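As a sanity check, the 1.34x and 2.97x figures quoted in the decision above follow directly from the table values:

```python
# Verify the speedup arithmetic quoted above, using values from the tables.
baseline = 0.1267       # iter-1 request/s, shared by both runs
before_best = 0.2817    # before-harness best, first reached at iter 9
harness_best = 0.3767   # harness run incumbent, reached at iter 2

assert round(harness_best / before_best, 2) == 1.34  # vs before-harness 12-iter best
assert round(harness_best / baseline, 2) == 2.97     # vs baseline config
```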
@@ -84,9 +85,9 @@ Follow-up implementation after this result:
After the implementation fix, the previously rejected `proposal-0004` was resumed as a validation trial:
-- `trial-0004`: same topology validation with `max-num-seqs=160`.
+- `trial-0004`: intended same-topology validation with `max-num-seqs=160`, but actually ran on base topology because the proposal omitted `TP2/DP4/EP8`.
- Remote tmux: `aituner_qwen235b_decode_harness_validate_20260428`.
-- Status as of 2026-04-28 13:20 UTC on dash0: running; no result has been written yet.
+- Result: completed with no feasible point. This is useful negative evidence for the base topology plus `max-num-seqs=160`, but not for the `trial-0002` incumbent topology.
## Follow-up Fix