4 Commits

SHA1 Message Date
996511f300 feat: new router and benchmark setup 2026-04-16 14:23:53 +08:00
eaf574cd4e fix: kvcache evict workflow 2026-04-14 15:46:36 +08:00
663ca9c5b9 Support compute_dtype for FP4/FP8 tensor core FLOPS selection
Add a `compute_dtype` field to ModelConfig ("bf16", "fp8", or "fp4")
which controls two things:
- GPU FLOPS tier: auto-selects from the preset FP4/FP8/BF16 TFLOPS
- Weight bytes: uses 0.5/1.0/2.0 bytes per param for the memory-bound check

Hardware presets now include per-GPU FP8 and FP4 dense FLOPS for all
GPUs that support them (H100/H800/H20: FP8, B200/B300: FP8+FP4).
Config resolution auto-selects the right FLOPS when compute_dtype is
set and the user hasn't explicitly overridden gpu_flops.

GLM-5-NVFP4 on 8xB300 now correctly uses 13.5 PFLOPS/GPU FP4 (6x
faster prefill) and 0.5 bytes/param weights (halved memory footprint).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-14 11:54:10 +08:00
84696604e8 Add B300 GPU preset and GLM-5-NVFP4 on 8xB300 config
Add NVIDIA B300 (Blackwell Ultra) to the hardware presets: same compute
silicon as the B200 (2.25 PFLOPS BF16 dense) but with 12-Hi HBM3e stacks
(288 GB, 12 TB/s, i.e. 50% more capacity and bandwidth than B200).

Add nvidia/GLM-5-NVFP4 HuggingFace config.json and a matching simulation
config for 8xB300: FP4 weights (~372 GB) leave ~1.9 TB for KV cache,
yielding 82k blocks per instance (3.8x more than the BF16-on-B200 setup).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-14 11:37:20 +08:00
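The memory budget in the commit above can be checked with back-of-envelope arithmetic: 8 GPUs x 288 GB minus ~372 GB of FP4 weights leaves ~1.9 TB for KV cache. The block size and per-token KV bytes are not stated in the commit, so the last line only derives what block size the quoted ~82k figure would imply:

```python
# Back-of-envelope check of the 8xB300 KV-cache budget from the commit.
num_gpus = 8
hbm_per_gpu_gb = 288   # B300 HBM3e capacity (from the commit)
weight_gb = 372        # ~GLM-5-NVFP4 FP4 weight footprint (from the commit)

kv_budget_gb = num_gpus * hbm_per_gpu_gb - weight_gb
print(f"KV budget: {kv_budget_gb} GB")  # 1932 GB, i.e. ~1.9 TB

# The commit doesn't give the KV block size; ~82k blocks over this budget
# would imply roughly this many MB per block:
approx_block_mb = kv_budget_gb * 1024 / 82_000
print(f"implied block size: ~{approx_block_mb:.1f} MB")
```

This is a sanity check of the quoted numbers, not the simulator's actual block-accounting logic.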