Add `compute_dtype` field to ModelConfig ("bf16", "fp8", "fp4") which
controls two things:
- GPU FLOPS tier: auto-selects from preset FP4/FP8/BF16 TFLOPS
- Weight bytes: uses 0.5/1.0/2.0 bytes per param for the memory-bound check (see the sketch below)
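A minimal sketch of what the field and the byte mapping could look like, assuming a dataclass-style ModelConfig (field names and defaults beyond `compute_dtype` are assumptions, not the repo's actual layout):

```python
from dataclasses import dataclass
from typing import Optional

# Bytes per parameter implied by each compute dtype; feeds the
# memory-bound check on weight loading.
WEIGHT_BYTES_PER_PARAM = {"fp4": 0.5, "fp8": 1.0, "bf16": 2.0}

@dataclass
class ModelConfig:
    name: str
    compute_dtype: str = "bf16"        # one of "bf16", "fp8", "fp4"
    dtype_bytes: float = 2             # KV-cache element size, set independently of weights
    gpu_flops: Optional[float] = None  # explicit override; None -> resolve from the preset

    @property
    def weight_bytes_per_param(self) -> float:
        return WEIGHT_BYTES_PER_PARAM[self.compute_dtype]
```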
Hardware presets now include per-GPU FP8 and FP4 dense FLOPS for all
GPUs that support them (H100/H800/H20: FP8, B200/B300: FP8+FP4).
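Roughly what the preset table looks like after this change; only the B300 FP4 figure (13.5 PFLOPS, quoted below) is stated here, so the remaining entries are left as placeholders rather than invented:

```python
# Dense (non-sparsity) tensor-core FLOPS per GPU, keyed by compute dtype.
# Hopper parts get an "fp8" entry; Blackwell parts get "fp8" and "fp4".
GPU_PRESETS = {
    "h100": {"bf16": ..., "fp8": ...},                  # no FP4 tensor cores
    "h800": {"bf16": ..., "fp8": ...},
    "h20":  {"bf16": ..., "fp8": ...},
    "b200": {"bf16": ..., "fp8": ..., "fp4": ...},
    "b300": {"bf16": ..., "fp8": ..., "fp4": 13.5e15},  # 13.5 PFLOPS FP4 dense
}
```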
Config resolution auto-selects the right FLOPS when compute_dtype is
set and the user hasn't explicitly overridden gpu_flops.
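A sketch of that resolution rule, reusing the GPU_PRESETS table from the sketch above (the function name and the "8xb300" → "b300" parsing are assumptions):

```python
from typing import Optional

def resolve_gpu_flops(hw_type: str, compute_dtype: str,
                      gpu_flops: Optional[float]) -> float:
    """An explicit gpu_flops in the config always wins; otherwise the preset
    entry matching compute_dtype is used."""
    if gpu_flops is not None:
        return gpu_flops
    preset = GPU_PRESETS[hw_type.split("x", 1)[-1]]  # "8xb300" -> "b300"
    if compute_dtype not in preset:
        raise ValueError(f"{hw_type} has no {compute_dtype} tensor-core preset")
    return preset[compute_dtype]

# resolve_gpu_flops("8xb300", "fp4", None) -> 13.5e15
```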
GLM-5-NVFP4 on 8xB300 now correctly uses 13.5 PFLOPS/GPU FP4 (6x
faster prefill) and 0.5 bytes/param weights (halved memory footprint).
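A quick back-of-the-envelope check on both parentheticals; the BF16 rate is implied by the 6x ratio rather than stated, and "halved" assumes the previous default was the 1.0-byte FP8 setting:

```python
fp4_flops  = 13.5e15            # B300 FP4 dense per GPU, from this change
bf16_flops = fp4_flops / 6      # ~2.25 PFLOPS implied by "6x faster prefill"

params = 744e9                  # GLM-5-NVFP4 param count, from the config comments
fp4_weights_gb = params * 0.5 / 1e9   # ~372 GB
fp8_weights_gb = params * 1.0 / 1e9   # ~744 GB, i.e. FP4 halves the weight footprint
print(bf16_flops, fp4_weights_gb, fp8_weights_gb)
```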
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The updated GLM-5-NVFP4 / 8xB300 config (YAML):
# GLM-5-NVFP4 (nvidia/GLM-5-NVFP4) on 8 x B300 (Blackwell Ultra, 288GB each).
# Architecture auto-loaded from HuggingFace config.json.
#
# FP4 weights: ~744B params * 0.5 bytes = ~372 GB across 8 GPUs.
# Total HBM: 8 * 288 GB = 2304 GB. KV budget: ~1900 GB after weights.

model:
  config_json: ../models/GLM-5-NVFP4/config.json
  name: glm-5-nvfp4
  compute_dtype: fp4      # FP4 weights → selects FP4 tensor core FLOPS
  dtype_bytes: 1          # FP8 KV cache
  block_size_tokens: 512

hardware:
  type: 8xb300
  hbm_bytes: 1900.0e9     # KV budget after FP4 weights (~372 GB)

cluster:
  num_instances: 32
  meta_store:
    ttl_seconds: 120.0
  router:
    mode: prefix_affinity
    prefix_k: 8
    load_alpha: 1.0

sim:
  trace_path: bailian-traces/glm_coder_blksz_512_040915-040917.jsonl
  max_requests: null
  output_dir: runs/glm5_nvfp4_8xb300
  sample_interval_s: 1.0
  seed: 42
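The `hbm_bytes` figure follows from the header comments; a quick check of the KV budget:

```python
total_hbm_gb = 8 * 288             # 2304 GB across the 8 x B300 node
weights_gb   = 744e9 * 0.5 / 1e9   # ~372 GB of FP4 weights
kv_budget_gb = total_hbm_gb - weights_gb
print(kv_budget_gb)                # ~1932 GB; the config rounds this down to 1900.0e9
```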