Files
kvcache-simulator/configs/glm5-nvfp4-8xb300.yaml
Gahow Wang 84696604e8 Add B300 GPU preset and GLM-5-NVFP4 on 8xB300 config
Add NVIDIA B300 (Blackwell Ultra) to the hardware presets: same compute die
as the B200 (2.25 PFLOPS dense BF16) but with 12-Hi HBM3e stacks (288 GB,
12 TB/s), i.e. 50% more capacity and bandwidth than the B200.

Add nvidia/GLM-5-NVFP4 HuggingFace config.json and a matching simulation
config for 8xB300: FP4 weights (~372 GB) leave ~1.9 TB for KV cache,
yielding 82k blocks per instance (3.8x more than the BF16-on-B200 setup).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-14 11:37:20 +08:00
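A quick sanity check of the "50% more capacity and bandwidth" claim. The B200 baseline (192 GB, 8 TB/s) is implied by the stated B300 figures and the 1.5x ratios, not spelled out in the commit itself:

```python
# B300 (Blackwell Ultra) vs. B200 HBM figures from the commit message.
b300_hbm_gb, b300_bw_tbs = 288, 12
b200_hbm_gb, b200_bw_tbs = 192, 8  # implied baseline

assert b300_hbm_gb / b200_hbm_gb == 1.5  # 50% more capacity
assert b300_bw_tbs / b200_bw_tbs == 1.5  # 50% more bandwidth
```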


# GLM-5-NVFP4 (nvidia/GLM-5-NVFP4) on 8 x B300 (Blackwell Ultra, 288GB each).
# Architecture auto-loaded from HuggingFace config.json.
#
# FP4 weights: ~744B params * 0.5 bytes = ~372 GB across 8 GPUs.
# Total HBM: 8 * 288 GB = 2304 GB. KV budget: ~1900 GB after weights.
model:
  config_json: ../models/GLM-5-NVFP4/config.json
  name: glm-5-nvfp4
  dtype_bytes: 1            # FP8 KV cache
  block_size_tokens: 512
hardware:
  type: 8xb300
  hbm_bytes: 1900.0e9       # KV budget after FP4 weights (~372 GB)
cluster:
  num_instances: 32
meta_store:
  ttl_seconds: 120.0
router:
  mode: prefix_affinity
  prefix_k: 8
  load_alpha: 1.0
sim:
  trace_path: bailian-traces/glm_coder_blksz_512_040915-040917.jsonl
  max_requests: null
  output_dir: runs/glm5_nvfp4_8xb300
  sample_interval_s: 1.0
  seed: 42
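A back-of-envelope check of the memory arithmetic in the config comments. The per-token KV footprint at the end is derived from the commit's "82k blocks" figure combined with the 512-token block size; it is not stated anywhere in the config:

```python
# Reproduce the KV-budget arithmetic from the comments above.
total_hbm = 8 * 288e9            # 2304 GB across the 8xB300 node
fp4_weights = 744e9 * 0.5        # ~372 GB at 0.5 bytes/param (FP4)
kv_budget = total_hbm - fp4_weights   # ~1932 GB; rounded down to 1900.0e9

# Implied per-token KV footprint, derived from "82k blocks per instance"
# with block_size_tokens = 512 (an inference, not a config value):
kv_per_token = 1.9e12 / (82_000 * 512)   # ~45 KB/token at FP8
```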