Commit Graph

22 Commits

SHA1 Message Date
b5a6fb964c feat: wire bucket identities through driver outputs 2026-04-17 17:52:49 +08:00
3a84c15068 fix: harden bucket routing review follow-up 2026-04-17 15:15:18 +08:00
fa381b5db3 feat: add bucketed service and strict global routing 2026-04-17 15:03:10 +08:00
96019082cc fix: complete global router config and recoverable cluster init 2026-04-17 14:50:47 +08:00
008fe2fe5d fix: reject bucketed configs in cluster constructor 2026-04-17 14:44:21 +08:00
7de38fa998 fix: guard legacy runtime paths for bucketed configs 2026-04-17 14:35:09 +08:00
d8a0796506 fix: close bucketed cluster config model gaps 2026-04-17 14:21:34 +08:00
a723d7a811 feat: model explicit bucketed cluster config 2026-04-17 14:16:56 +08:00
bb280c8ba0 chore: ignore local worktrees 2026-04-17 13:37:25 +08:00
92d593d59b docs: add bucket-aware routing design 2026-04-17 13:26:51 +08:00
82b3e2985f chore 2026-04-17 10:56:30 +08:00
67eef78244 chore: git ignore 2026-04-16 14:30:29 +08:00
996511f300 feat: new router and benchmark setup 2026-04-16 14:23:53 +08:00
c86d931d8f feat(ablate): input-length bucketing + auto-instance sizing
- Add sim.input_length_{min,max} (+ CLI overrides) that drop requests
  outside the bucket after trace load, enabling per-bucket ablation
  (e.g. 0-40k) without rewriting the trace file. Applied uniformly in
  both `run`/`ablate` driver path and `oracle` analysis.

- Add cache_score_strong router (alpha=1, beta=1) to isolate how much
  of cache_affinity's win is reproducible by just retuning beta in the
  existing cache_score framework (no rendezvous, no meta-store bonus).

- Add --auto-instances to ablate: sweeps --auto-candidates ascending
  with --auto-probe-router and picks the smallest cluster size whose
  TTFT mean <= --auto-target-ttft-mean. Per-candidate calibration
  results are persisted under runs/<output_dir>/auto_instances/ so the
  pick is auditable; the chosen N is then used for the whole ablation.
2026-04-15 19:42:28 +08:00
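The --auto-instances selection described in that commit can be sketched roughly as follows. This is a minimal illustration, not the simulator's actual code: the function and parameter names (`pick_instance_count`, `probe_ttft_mean`) are hypothetical stand-ins for the `--auto-candidates` sweep and the `--auto-probe-router` calibration runs.

```python
# Hypothetical sketch of the auto-instance sizing logic: sweep candidate
# cluster sizes ascending and keep the smallest one whose calibration-run
# TTFT mean meets the target. All names here are illustrative.

def pick_instance_count(candidates, target_ttft_mean, probe_ttft_mean):
    """Return (chosen_n, per-candidate results). probe_ttft_mean(n) stands
    in for one calibration run with the probe router at cluster size n."""
    results = {}
    for n in sorted(candidates):            # sweep ascending
        results[n] = probe_ttft_mean(n)     # persisted for auditability
        if results[n] <= target_ttft_mean:
            return n, results               # smallest passing N wins
    return max(candidates), results         # nothing passed: take the largest

# Toy probe where TTFT shrinks as instances are added.
chosen, log = pick_instance_count([2, 4, 8], 1.5, lambda n: 8.0 / n)
# chosen == 8: 8/2=4.0 and 8/4=2.0 miss the 1.5 target, 8/8=1.0 passes
```

Because the sweep stops at the first passing candidate, per-candidate results up to and including the chosen N are available to persist, matching the commit's note that the pick is auditable.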
a3f386c858 feat: update ttft modeling and add cache affinity 2026-04-15 19:08:10 +08:00
ff316c6873 fix: cache calculation 2026-04-15 17:31:39 +08:00
365ceac3be chore: update ablation and clean configs 2026-04-15 14:48:59 +08:00
eaf574cd4e fix: kvcache evict workflow 2026-04-14 15:46:36 +08:00
663ca9c5b9 Support compute_dtype for FP4/FP8 tensor core FLOPS selection
Add `compute_dtype` field to ModelConfig ("bf16", "fp8", "fp4") which
controls two things:
- GPU FLOPS tier: auto-selects from preset FP4/FP8/BF16 TFLOPS
- Weight bytes: uses 0.5/1.0/2.0 bytes per param for memory-bound check

Hardware presets now include per-GPU FP8 and FP4 dense FLOPS for all
GPUs that support them (H100/H800/H20: FP8, B200/B300: FP8+FP4).
Config resolution auto-selects the right FLOPS when compute_dtype is
set and the user hasn't explicitly overridden gpu_flops.

GLM-5-NVFP4 on 8xB300 now correctly uses 13.5 PFLOPS/GPU FP4 (6x
faster prefill) and 0.5 bytes/param weights (halved memory footprint).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-14 11:54:10 +08:00
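The compute_dtype resolution that commit describes amounts to two lookup tables plus an override check. A minimal sketch, assuming hypothetical names (`BYTES_PER_PARAM`, `GPU_TFLOPS`, `resolve_flops`); the only numbers used are the ones stated in the commit text (0.5/1.0/2.0 bytes per param, 2.25 PFLOPS BF16 and 13.5 PFLOPS FP4 for B300):

```python
# Illustrative sketch of compute_dtype resolution; names and table layout
# are assumptions, values come from the commit messages above.

BYTES_PER_PARAM = {"bf16": 2.0, "fp8": 1.0, "fp4": 0.5}

# Per-GPU dense TFLOPS presets (B300 entries only, from the commit text).
GPU_TFLOPS = {"B300": {"bf16": 2250, "fp4": 13500}}

def resolve_flops(gpu, compute_dtype, user_gpu_flops=None):
    """Auto-select the FLOPS tier for compute_dtype unless the user has
    explicitly overridden gpu_flops."""
    if user_gpu_flops is not None:      # explicit override always wins
        return user_gpu_flops
    return GPU_TFLOPS[gpu][compute_dtype]

assert resolve_flops("B300", "fp4") == 13500          # 6x the BF16 tier
assert resolve_flops("B300", "fp4", user_gpu_flops=9000) == 9000
```

The 6x prefill speedup quoted above falls out directly: 13500 / 2250 = 6.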
84696604e8 Add B300 GPU preset and GLM-5-NVFP4 on 8xB300 config
Add NVIDIA B300 (Blackwell Ultra) to hardware presets: same Blackwell compute die as
B200 (2.25 PFLOPS BF16 dense) but with HBM3e 12-Hi stacks (288 GB,
12 TB/s — 50% more capacity and bandwidth than B200).

Add nvidia/GLM-5-NVFP4 HuggingFace config.json and a matching simulation
config for 8xB300: FP4 weights (~372 GB) leave ~1.9 TB for KV cache,
yielding 82k blocks per instance (3.8x more than the BF16-on-B200 setup).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-14 11:37:20 +08:00
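The KV-cache headroom figure in that commit checks out with back-of-the-envelope arithmetic using only the numbers stated there (8 GPUs, 288 GB HBM each, ~372 GB of FP4 weights):

```python
# Sanity check of the "~1.9 TB for KV cache" figure quoted above.
total_hbm_gb = 8 * 288               # 2304 GB across the 8xB300 instance
weights_gb = 372                     # ~FP4 model weights, per the commit
kv_headroom_gb = total_hbm_gb - weights_gb
# 1932 GB, i.e. roughly the ~1.9 TB left for KV cache
```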
8d41123418 Update README with full feature documentation
Cover all 11 routing policies (including new prefix_affinity, cache_load,
cache_score, estimated_ttft, least_tokens), HuggingFace config.json
auto-parsing, GPU hardware presets, architecture-aware compute model
(MoE/MLA/DSA/GQA), router parameter tuning, bundled model configs and
config files, and available traces.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-14 01:21:28 +08:00
ec73a95e05 KVCache simulator for LLM serving cluster routing research
Discrete-event simulator for evaluating KV cache-aware routing policies
in prefill-disaggregated LLM serving clusters. Models a two-tier KV cache
hierarchy (L0 GPU HBM + L1 CPU DRAM) with RDMA/PCIe link contention,
architecture-derived roofline compute (MoE, MLA, DSA), and a cluster-wide
meta-store for prefix-aware routing decisions.

Includes 11 routing policies (random, round_robin, least_loaded,
least_tokens, ttl_aware, precise, min_pd, cache_load, cache_score,
estimated_ttft, prefix_affinity), HuggingFace config.json auto-parsing,
built-in GPU hardware presets (H100/H800/H20/A100/B200), and ablation
tooling for systematic policy comparison across real Alibaba serving traces.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-14 01:16:02 +08:00