Add NVIDIA B300 (Blackwell Ultra) to hardware presets: same BF16 dense
compute as B200 (2.25 PFLOPS) but with HBM3e 12-Hi stacks (288 GB,
12 TB/s, i.e. 50% more capacity and bandwidth than B200).
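A minimal sketch of what the preset pair looks like; the dict layout and
field names below are illustrative, not the simulator's actual schema. The
B200 figures (192 GB, 8 TB/s) are back-derived from the stated 50% uplift.

```python
# Hypothetical preset layout; only the B300 numbers and the 1.5x ratios
# come from the commit message, everything else is illustrative.
GPU_PRESETS = {
    "B200": {
        "bf16_dense_tflops": 2250,   # 2.25 PFLOPS BF16 dense
        "hbm_capacity_gb": 192,      # implied by "50% more" on B300
        "hbm_bandwidth_tbps": 8.0,   # implied by "50% more" on B300
    },
    "B300": {
        "bf16_dense_tflops": 2250,   # same BF16 dense compute as B200
        "hbm_capacity_gb": 288,      # HBM3e 12-Hi stacks
        "hbm_bandwidth_tbps": 12.0,
    },
}

# Sanity-check the claimed 50% capacity and bandwidth uplift.
b200, b300 = GPU_PRESETS["B200"], GPU_PRESETS["B300"]
assert b300["hbm_capacity_gb"] / b200["hbm_capacity_gb"] == 1.5
assert b300["hbm_bandwidth_tbps"] / b200["hbm_bandwidth_tbps"] == 1.5
```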
Add nvidia/GLM-5-NVFP4 HuggingFace config.json and a matching simulation
config for 8xB300: FP4 weights (~372 GB) leave ~1.9 TB for KV cache,
yielding 82k blocks per instance (3.8x more than the BF16-on-B200 setup).
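The memory budget above can be reproduced with back-of-envelope arithmetic;
the variable names are illustrative, and the KV block size (which the
simulator derives from the model architecture) is deliberately left out:

```python
# Back-of-envelope KV-cache budget for the 8xB300 NVFP4 config above.
num_gpus = 8
hbm_per_gpu_gb = 288      # B300 HBM3e capacity
weights_fp4_gb = 372      # ~FP4 weight footprint across the instance

total_gb = num_gpus * hbm_per_gpu_gb       # 8 * 288 = 2304 GB
kv_budget_gb = total_gb - weights_fp4_gb   # 2304 - 372 = 1932 GB ~ 1.9 TB

print(total_gb, kv_budget_gb)  # 2304 1932
```

Dividing the ~1.9 TB budget by the per-block KV footprint is what yields the
82k-block figure; that last step depends on the model's block size and is not
reproduced here.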
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Cover all 11 routing policies (including the new prefix_affinity,
cache_load, cache_score, estimated_ttft, and least_tokens), HuggingFace
config.json auto-parsing, GPU hardware presets, the architecture-aware
compute model (MoE/MLA/DSA/GQA), router parameter tuning, bundled model
and simulation config files, and the available traces.
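As an illustration of one documented policy, here is a minimal sketch of a
plausible least_tokens router: pick the instance with the fewest in-flight
tokens. The function name, signature, and exact semantics are hypothetical,
not the simulator's implementation.

```python
# Hypothetical least_tokens routing sketch; the real policy's inputs and
# tie-breaking rules may differ.
def route_least_tokens(in_flight_tokens: dict[str, int]) -> str:
    """Return the id of the instance with the fewest in-flight tokens."""
    return min(in_flight_tokens, key=in_flight_tokens.get)

print(route_least_tokens({"inst-0": 1200, "inst-1": 300, "inst-2": 900}))
# inst-1
```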
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>