- Add sim.input_length_{min,max} (+ CLI overrides) that drop requests
outside the bucket after trace load, enabling per-bucket ablation
(e.g. 0-40k) without rewriting the trace file. Applied uniformly in
both the `run`/`ablate` driver path and the `oracle` analysis (see the
filter sketch after this list).
- Add cache_score_strong router (alpha=1, beta=1) to isolate how much
of cache_affinity's win is reproducible by just retuning beta in the
existing cache_score framework (no rendezvous, no meta-store bonus);
a scoring sketch follows this list.
- Add --auto-instances to ablate: sweeps --auto-candidates ascending
with --auto-probe-router and picks the smallest cluster size whose
TTFT mean <= --auto-target-ttft-mean. Per-candidate calibration
results are persisted under runs/<output_dir>/auto_instances/ so the
pick is auditable; the chosen N is then used for the whole ablation
(selection loop sketched after this list).
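The input-length filter amounts to a single pass over the loaded trace. A minimal sketch, assuming the trace loads into a list of request dicts with an `input_length` field; the function name and record shape are illustrative, not the simulator's actual API:

```python
def filter_requests_by_input_length(requests, min_len=None, max_len=None):
    """Drop requests whose prompt token count falls outside [min_len, max_len]."""
    kept = []
    for req in requests:
        n = req["input_length"]  # prompt tokens as recorded in the trace
        if min_len is not None and n < min_len:
            continue
        if max_len is not None and n > max_len:
            continue
        kept.append(req)
    return kept

# Per-bucket ablation for the 0-40k bucket mentioned above:
# requests = filter_requests_by_input_length(requests, min_len=0, max_len=40_000)
```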
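For cache_score_strong, the intent is to keep the existing cache_score shape and only pin the weights. A sketch of what that scoring could look like; the attribute names (`prefix_cache_hit_fraction`, `queued_tokens`, `capacity_tokens`) are assumptions, not the framework's real fields:

```python
def cache_score(instance, request, alpha=1.0, beta=1.0):
    # alpha weighs the prefix-cache hit term, beta weighs the load penalty.
    # cache_score_strong fixes alpha = beta = 1; there is no rendezvous
    # hashing and no meta-store bonus term.
    hit_fraction = instance.prefix_cache_hit_fraction(request)  # in [0, 1]
    load = instance.queued_tokens / max(instance.capacity_tokens, 1)
    return alpha * hit_fraction - beta * load

def pick_instance(instances, request):
    # Route to the highest-scoring instance.
    return max(instances, key=lambda inst: cache_score(inst, request))
```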
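The --auto-instances selection loop reduces to "smallest N that meets the TTFT target". A sketch under assumed names; `calibrate` stands in for whatever the ablate driver does to run the probe router at a given cluster size, and the persistence to runs/<output_dir>/auto_instances/ is elided:

```python
def pick_cluster_size(candidates, target_ttft_mean, calibrate):
    results = {}
    for n in sorted(candidates):            # sweep --auto-candidates ascending
        stats = calibrate(num_instances=n)  # one probe-router calibration run
        results[n] = stats                  # would be persisted for auditability
        if stats["ttft_mean"] <= target_ttft_mean:
            return n, results               # smallest N meeting the target
    # Assumed fallback (not stated above): use the largest candidate if none qualify.
    return max(candidates), results
```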
Add a `compute_dtype` field to ModelConfig ("bf16", "fp8", "fp4") that
controls two things:
- GPU FLOPS tier: auto-selects from the preset BF16/FP8/FP4 TFLOPS
- Weight bytes: uses 2.0/1.0/0.5 bytes per param for the memory-bound check
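The weight-bytes side is a direct lookup. A minimal sketch restating the values above; `BYTES_PER_PARAM` and `weight_bytes` are illustrative names, not the actual ModelConfig API:

```python
BYTES_PER_PARAM = {"bf16": 2.0, "fp8": 1.0, "fp4": 0.5}

def weight_bytes(num_params, compute_dtype):
    """Weight footprint (bytes) used by the memory-bound check."""
    return num_params * BYTES_PER_PARAM[compute_dtype]
```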
Hardware presets now include per-GPU FP8 and FP4 dense FLOPS for all
GPUs that support them (H100/H800/H20: FP8, B200/B300: FP8+FP4).
Config resolution auto-selects the right FLOPS when compute_dtype is
set and the user hasn't explicitly overridden gpu_flops.
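The resolution rule is "explicit override wins, otherwise take the preset tier matching compute_dtype". A sketch assuming presets are dicts with per-tier TFLOPS keys; the key names and the BF16 fallback for GPUs without an FP8/FP4 entry are assumptions:

```python
def resolve_gpu_flops(preset, compute_dtype, user_gpu_flops=None):
    if user_gpu_flops is not None:        # explicit override always wins
        return user_gpu_flops
    tier_key = f"{compute_dtype}_tflops"  # e.g. "fp8_tflops", "fp4_tflops"
    return preset.get(tier_key) or preset["bf16_tflops"]
```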
GLM-5-NVFP4 on 8xB300 now correctly uses 13.5 PFLOPS/GPU FP4 (6x
faster prefill) and 0.5 bytes/param weights (halved memory footprint).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add NVIDIA B300 (Blackwell Ultra) to hardware presets: same compute die as
B200 (2.25 PFLOPS BF16 dense) but with HBM3e 12-Hi stacks (288 GB,
12 TB/s, i.e. 50% more capacity and bandwidth than B200).
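For reference, a B300 preset entry under an assumed schema; the key names are illustrative, the BF16/HBM figures are the ones quoted above, the FP4 figure comes from the GLM-5-NVFP4 change, and the FP8 figure is an assumption (2x BF16 dense), not stated here:

```python
HARDWARE_PRESETS = {
    "nvidia-b300": {
        "bf16_tflops": 2_250,        # 2.25 PFLOPS dense BF16, same die as B200
        "fp8_tflops": 4_500,         # assumed 2x BF16 dense (not stated above)
        "fp4_tflops": 13_500,        # 13.5 PFLOPS dense FP4
        "hbm_gb": 288,               # HBM3e 12-Hi stacks
        "hbm_bandwidth_tbps": 12.0,  # 50% more capacity/bandwidth than B200
    },
}
```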
Add nvidia/GLM-5-NVFP4 HuggingFace config.json and a matching simulation
config for 8xB300: FP4 weights (~372 GB) leave ~1.9 TB for KV cache,
yielding 82k blocks per instance (3.8x more than the BF16-on-B200 setup).
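The KV budget follows from simple arithmetic; only the HBM total, the weight footprint, and the block count come from this change. The implied bytes-per-block is derived here and depends on the real config's block size and KV dtype:

```python
hbm_gb = 288 * 8                          # 8xB300 instance: 2304 GB of HBM
weights_gb = 372                          # ~FP4 GLM-5 weights across the instance
kv_budget_gb = hbm_gb - weights_gb        # 1932 GB, i.e. ~1.9 TB left for KV cache

num_blocks = 82_000                       # per-instance block count quoted above
gb_per_block = kv_budget_gb / num_blocks  # ~0.024 GB (~24 MB) per KV block, implied
```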
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>