kvcache-simulator/configs/glm5-8xb200-hf.yaml
Gahow Wang ec73a95e05 KVCache simulator for LLM serving cluster routing research
Discrete-event simulator for evaluating KV cache-aware routing policies
in prefill-disaggregated LLM serving clusters. Models a two-tier KV cache
hierarchy (L0 GPU HBM + L1 CPU DRAM) with RDMA/PCIe link contention,
architecture-derived roofline compute (MoE, MLA, DSA), and a cluster-wide
meta-store for prefix-aware routing decisions.

Includes 11 routing policies (random, round_robin, least_loaded,
least_tokens, ttl_aware, precise, min_pd, cache_load, cache_score,
estimated_ttft, prefix_affinity), HuggingFace config.json auto-parsing,
built-in GPU hardware presets (H100/H800/H20/A100/B200), and ablation
tooling for systematic policy comparison across real Alibaba serving traces.
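The cache-versus-load trade-off behind policies such as `cache_score` (and the `load_alpha` knob in the config below) can be sketched as follows. This is an illustrative sketch only: the `Instance` fields, the scoring formula, and the function name are assumptions for this example, not the simulator's actual implementation.

```python
# Sketch of a cache_score-style routing policy: prefer instances holding a
# long cached prefix of the request, penalized by their current load.
# All names and the exact formula are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Instance:
    name: str
    cached_prefix_tokens: int   # tokens of the request prefix already resident in KV cache
    queued_tokens: int          # tokens waiting in this instance's queue

def cache_score_route(instances, request_tokens, load_alpha=1.0):
    """Pick the instance maximizing (prefix hit fraction - alpha * relative load).

    A higher load_alpha weights load balance more heavily relative to
    prefix-cache reuse, mirroring the role of `load_alpha` in the YAML config.
    """
    def score(inst):
        hit_frac = min(inst.cached_prefix_tokens, request_tokens) / request_tokens
        load = inst.queued_tokens / max(request_tokens, 1)
        return hit_frac - load_alpha * load
    return max(instances, key=score)

insts = [
    Instance("a", cached_prefix_tokens=0,    queued_tokens=0),
    Instance("b", cached_prefix_tokens=4096, queued_tokens=2048),
]
best = cache_score_route(insts, request_tokens=4096, load_alpha=1.0)
# "b" wins: full prefix hit (1.0) minus load penalty (0.5) beats "a"'s 0.0
```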

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-14 01:16:02 +08:00

# GLM-5 using HuggingFace config.json + hardware preset.
#
# This config demonstrates the simplified format:
# model.config_json — loads architecture from HF config.json
# hardware.type — loads GPU specs from built-in preset
#
# Only deployment-specific fields need to be set explicitly.
# Any field from config_json or the preset can be overridden in YAML.
model:
  # Auto-detect architecture: MoE, MLA, DSA, head dims, etc.
  config_json: ../models/GLM-5/config.json
  name: glm-5              # override HF model_type
  dtype_bytes: 1           # 1 byte/elem, i.e. FP8 KV cache (not in HF config.json)
  block_size_tokens: 512   # matches bailian-traces blksz_512

hardware:
  type: 8xb200             # 8 x B200 SXM (192 GB each)
  # Override preset values for this specific deployment:
  hbm_bytes: 500.0e9       # KV budget after FP8 weights + activations
  dram_bytes: 1.5e12       # ~1.5 TB usable CPU DRAM per node
  max_batch_slots: 256

cluster:
  num_instances: 32
  meta_store:
    ttl_seconds: 300.0
  router:
    mode: min_pd
    precise_probe_latency_us: 50.0
    precise_probe_topk: 4
    load_alpha: 1.0
    prefix_k: 8

sim:
  trace_path: bailian-traces/glm_coder_blksz_512_040915-040917.jsonl
  max_requests: null
  output_dir: runs/glm5_8xb200_hf
  sample_interval_s: 1.0
  seed: 42
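The override semantics described in the header comments ("any field from config_json or the preset can be overridden in YAML") can be sketched as a simple recursive merge. The preset values and function name below are illustrative assumptions, not the simulator's real API or actual B200 preset numbers.

```python
# Sketch of preset/config_json override semantics: base values come from the
# HF config.json and the built-in hardware preset; any field set in the YAML
# wins. The merge helper and preset contents are illustrative assumptions.
def merge_config(base: dict, overrides: dict) -> dict:
    """Recursively merge `overrides` on top of `base` (overrides win)."""
    out = dict(base)
    for key, val in overrides.items():
        if isinstance(val, dict) and isinstance(out.get(key), dict):
            out[key] = merge_config(out[key], val)
        else:
            out[key] = val
    return out

# Hypothetical built-in 8xb200 preset defaults...
preset = {"hbm_bytes": 1.536e12, "dram_bytes": 2.0e12, "max_batch_slots": 128}
# ...overridden by the hardware: section of this YAML:
yaml_overrides = {"hbm_bytes": 500.0e9, "dram_bytes": 1.5e12, "max_batch_slots": 256}
hardware = merge_config(preset, yaml_overrides)
# every explicitly set YAML field replaces its preset counterpart
```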