Commit Graph

8 Commits

SHA1 Message Date
a3f386c858 feat: update ttft modeling and add cache affinity 2026-04-15 19:08:10 +08:00
ff316c6873 fix: cache calculation 2026-04-15 17:31:39 +08:00
365ceac3be chore: update ablation and clean configs 2026-04-15 14:48:59 +08:00
eaf574cd4e fix: kvcache eviction workflow 2026-04-14 15:46:36 +08:00
663ca9c5b9 Support compute_dtype for FP4/FP8 tensor core FLOPS selection
Add `compute_dtype` field to ModelConfig ("bf16", "fp8", "fp4") which
controls two things:
- GPU FLOPS tier: auto-selects from preset FP4/FP8/BF16 TFLOPS
- Weight bytes: uses 0.5/1.0/2.0 bytes per param (FP4/FP8/BF16) for the memory-bound check

Hardware presets now include per-GPU FP8 and FP4 dense FLOPS for all
GPUs that support them (H100/H800/H20: FP8, B200/B300: FP8+FP4).
Config resolution auto-selects the right FLOPS when compute_dtype is
set and the user hasn't explicitly overridden gpu_flops.

GLM-5-NVFP4 on 8xB300 now correctly uses 13.5 PFLOPS/GPU FP4 (6x
faster prefill than BF16) and 0.5 bytes/param weights (a 4x smaller
weight footprint than BF16's 2.0 bytes/param).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-14 11:54:10 +08:00
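
To make the resolution order concrete, here is a minimal sketch assuming a `ModelConfig` dataclass and a preset table. Field names (`compute_dtype`, `gpu_flops`) follow the commit text; the structure, the `GPU_PRESETS` table, and `resolve_gpu_flops` are illustrative assumptions, not the repo's code. Only the B300 BF16/FP4 numbers quoted in these commits are filled in.

```python
# Sketch of compute_dtype resolution; structure is assumed, not the repo's.
from dataclasses import dataclass
from typing import Optional

# Bytes per parameter for the memory-bound check (values from the commit).
WEIGHT_BYTES = {"fp4": 0.5, "fp8": 1.0, "bf16": 2.0}

# Per-GPU dense TFLOPS by dtype; B300 BF16/FP4 come from these commit
# messages, other entries are omitted here rather than guessed.
GPU_PRESETS = {
    "B300": {"bf16": 2250.0, "fp4": 13500.0},
}

@dataclass
class ModelConfig:
    gpu: str
    compute_dtype: str = "bf16"        # "bf16", "fp8", or "fp4"
    gpu_flops: Optional[float] = None  # explicit user override, in TFLOPS

def resolve_gpu_flops(cfg: ModelConfig) -> float:
    """Auto-select the FLOPS tier unless the user overrode gpu_flops."""
    if cfg.gpu_flops is not None:
        return cfg.gpu_flops
    tiers = GPU_PRESETS[cfg.gpu]
    # Fall back to BF16 if the GPU has no native tier for this dtype.
    return tiers.get(cfg.compute_dtype, tiers["bf16"])

cfg = ModelConfig(gpu="B300", compute_dtype="fp4")
assert resolve_gpu_flops(cfg) == 13500.0       # 13.5 PFLOPS/GPU
assert WEIGHT_BYTES[cfg.compute_dtype] == 0.5  # bytes per param
```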
84696604e8 Add B300 GPU preset and GLM-5-NVFP4 on 8xB300 config
Add NVIDIA B300 (Blackwell Ultra) to hardware presets: the same compute
die as the B200 (2.25 PFLOPS BF16 dense) but with HBM3e 12-Hi stacks
(288 GB, 12 TB/s; 50% more capacity and bandwidth than B200).

Add nvidia/GLM-5-NVFP4 HuggingFace config.json and a matching simulation
config for 8xB300: FP4 weights (~372 GB) leave ~1.9 TB for KV cache,
yielding 82k blocks per instance (3.8x more than the BF16-on-B200 setup).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-14 11:37:20 +08:00
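
The memory arithmetic in this commit checks out back-of-envelope. The sketch below uses only figures quoted above (288 GB per B300, ~372 GB of FP4 weights); the per-block KV size is model-specific and not stated in the commit, so the 82k block count is left as an implication rather than computed.

```python
# Back-of-envelope check using only numbers quoted in the commit message.
NUM_GPUS = 8
HBM_GB = 288      # B300: HBM3e 12-Hi stacks
WEIGHTS_GB = 372  # GLM-5 weights at FP4 (~0.5 bytes/param)

total_gb = NUM_GPUS * HBM_GB          # 2304 GB of HBM per instance
kv_budget_gb = total_gb - WEIGHTS_GB  # 1932 GB, i.e. ~1.9 TB for KV cache
print(f"KV budget: {kv_budget_gb} GB")
# The quoted 82k blocks would imply roughly 24 MB of KV per block
# (1932 GB / 82000), which depends on layers, heads, and block size.
```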
8d41123418 Update README with full feature documentation
Cover all 11 routing policies (including new prefix_affinity, cache_load,
cache_score, estimated_ttft, least_tokens), HuggingFace config.json
auto-parsing, GPU hardware presets, architecture-aware compute model
(MoE/MLA/DSA/GQA), router parameter tuning, bundled model configs and
simulation config files, and available traces.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-14 01:21:28 +08:00
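
As a purely hypothetical illustration of the router parameter tuning this README documents, a simulation config might look like the sketch below. The policy names come from the commit messages; every key name is an assumption, not the repo's actual schema.

```python
# Hypothetical config sketch; key names are assumptions for illustration.
sim_config = {
    "router": {
        "policy": "estimated_ttft",  # any of the 11 documented policies
        "ttft_weight": 0.7,          # assumed tuning knob
        "cache_weight": 0.3,         # assumed tuning knob
    },
    "hardware": {"gpu_preset": "H100", "gpus_per_instance": 8},
    "model": "nvidia/GLM-5-NVFP4",   # parsed from HuggingFace config.json
}
```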
ec73a95e05 KVCache simulator for LLM serving cluster routing research
Discrete-event simulator for evaluating KV cache-aware routing policies
in prefill-disaggregated LLM serving clusters. Models a two-tier KV cache
hierarchy (L0 GPU HBM + L1 CPU DRAM) with RDMA/PCIe link contention,
architecture-derived roofline compute (MoE, MLA, DSA), and a cluster-wide
meta-store for prefix-aware routing decisions.

Includes 11 routing policies (random, round_robin, least_loaded,
least_tokens, ttl_aware, precise, min_pd, cache_load, cache_score,
estimated_ttft, prefix_affinity), HuggingFace config.json auto-parsing,
built-in GPU hardware presets (H100/H800/H20/A100/B200), and ablation
tooling for systematic policy comparison across real Alibaba serving traces.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-14 01:16:02 +08:00
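
To make the prefix-aware routing concrete, here is a minimal sketch, under assumed data structures, of how a cluster-wide meta-store could drive a `prefix_affinity` decision: hash the request's token blocks, pick the instance caching the longest prefix, break ties by load. Nothing here is the repo's actual implementation.

```python
# Minimal sketch of prefix_affinity routing with a cluster-wide meta-store.
# All structures (BLOCK, meta_store, loads) are illustrative assumptions.
import hashlib

BLOCK = 64  # tokens per KV block (assumed)

def block_hashes(tokens):
    """Chained hashes of token blocks, so each hash covers the full prefix."""
    h, hashes = hashlib.sha256(), []
    for i in range(0, len(tokens) - len(tokens) % BLOCK, BLOCK):
        h.update(str(tokens[i:i + BLOCK]).encode())
        hashes.append(h.copy().hexdigest())
    return hashes

def route(tokens, meta_store, loads):
    """Pick the instance caching the longest prefix; break ties by load.

    meta_store: block hash -> set of instances holding that block
    loads:      instance   -> queued tokens (lower is better)
    """
    hashes = block_hashes(tokens)

    def cached_prefix(inst):
        n = 0
        for h in hashes:
            if inst not in meta_store.get(h, ()):
                break
            n += 1
        return n

    return max(loads, key=lambda inst: (cached_prefix(inst), -loads[inst]))

# Example: instance "p1" caches the first two blocks of this request.
toks = list(range(200))
meta = {h: {"p1"} for h in block_hashes(toks)[:2]}
print(route(toks, meta, {"p0": 10, "p1": 50}))  # -> "p1"
```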