KVCache simulator for LLM serving cluster routing research
Discrete-event simulator for evaluating KV-cache-aware routing policies in prefill-disaggregated LLM serving clusters. Models a two-tier KV cache hierarchy (L0 GPU HBM + L1 CPU DRAM) with RDMA/PCIe link contention, architecture-derived roofline compute (MoE, MLA, DSA), and a cluster-wide meta-store for prefix-aware routing decisions.

Includes 11 routing policies (random, round_robin, least_loaded, least_tokens, ttl_aware, precise, min_pd, cache_load, cache_score, estimated_ttft, prefix_affinity), HuggingFace config.json auto-parsing, built-in GPU hardware presets (H100/H800/H20/A100/B200), and ablation tooling for systematic policy comparison across real Alibaba serving traces.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
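The preset file below specifies the model architecture inline; per the commit message's HuggingFace config.json auto-parsing, a larger model could instead point at a downloaded config. A minimal sketch of that variant follows, assuming a config_path key under model: (the key name, the model chosen, and the exact parsing behavior are assumptions, not confirmed by this commit):

# Hypothetical config.json-driven variant (key names are assumptions):
model:
  name: qwen2.5-coder-32b                              # hypothetical model
  config_path: Qwen2.5-Coder-32B-Instruct/config.json  # auto-parsed architecture
  dtype_bytes: 2
  block_size_tokens: 16
hardware:
  type: h100    # another built-in preset from the commit message's list

Under that assumption, everything the inline block spells out (num_layers, heads, head_dim, intermediate_size) would come from config.json, leaving only serving-side knobs in the YAML.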
configs/qwen2.5-coder-7b-preset.yaml (new file, 36 lines)
@@ -0,0 +1,36 @@
# Qwen2.5-Coder-7B using hardware preset.
#
# Model architecture is specified inline (no config.json needed for simple
# models). Hardware uses preset "h800" with a single override for hbm_bytes.

model:
  name: qwen2.5-coder-7b
  num_layers: 28
  hidden_size: 3584
  num_attention_heads: 28
  num_kv_heads: 4
  head_dim: 128
  intermediate_size: 18944
  dtype_bytes: 2
  block_size_tokens: 16

hardware:
  type: h800           # single H800 SXM (80GB)
  hbm_bytes: 60.0e9    # KV budget after 7B model weights

cluster:
  num_instances: 16
  meta_store:
    ttl_seconds: 60.0
  router:
    mode: ttl_aware
    precise_probe_latency_us: 50.0
    precise_probe_topk: 4
    load_alpha: 1.0

sim:
  trace_path: qwen-bailian-usagetraces-anon/qwen_coder_blksz_16.jsonl
  max_requests: null
  output_dir: runs/qwen7b_preset
  sample_interval_s: 1.0
  seed: 42
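As a sanity check on the hbm_bytes override, the preset's numbers are internally consistent; the arithmetic below is a back-of-envelope added for illustration, not part of the commit:

# Back-of-envelope for the hbm_bytes override (added illustration):
#   per-token KV = 2 (K and V) * num_layers 28 * num_kv_heads 4
#                * head_dim 128 * dtype_bytes 2 = 57,344 B ~= 57.3 KB
#   60.0e9 B / 57,344 B per token ~= 1.05M cached tokens
#                                 ~= 65K blocks at block_size_tokens: 16
#   weights: ~7e9 params * 2 B ~= 14 GB of the H800's 80 GB HBM,
#   so reserving 60 GB for KV leaves ~6 GB for activations and overhead

So each of the 16 instances can hold on the order of a million tokens of prefix cache in L0, presumably before evicting to the L1 CPU DRAM tier described in the commit message.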