KVCache simulator for LLM serving cluster routing research
Discrete-event simulator for evaluating KV-cache-aware routing policies in prefill-disaggregated LLM serving clusters. Models a two-tier KV cache hierarchy (L0 GPU HBM + L1 CPU DRAM) with RDMA/PCIe link contention, architecture-derived roofline compute (MoE, MLA, DSA), and a cluster-wide meta-store for prefix-aware routing decisions.

Includes 11 routing policies (random, round_robin, least_loaded, least_tokens, ttl_aware, precise, min_pd, cache_load, cache_score, estimated_ttft, prefix_affinity), HuggingFace config.json auto-parsing, built-in GPU hardware presets (H100/H800/H20/A100/B200), and ablation tooling for systematic policy comparison on real Alibaba serving traces.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
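To make the prefix_affinity idea concrete, here is a minimal Rust sketch of a cluster-wide meta-store that routes a request to the node caching the longest contiguous prefix of its KV blocks. The `MetaStore`, `insert`, and `route` names and the hash-per-block representation are illustrative assumptions, not the simulator's actual API.

```rust
use std::collections::HashMap;

/// Sketch of a cluster-wide meta-store (names are hypothetical): maps the
/// hash of each cached prefix block to the node id that holds it in KV cache.
struct MetaStore {
    blocks: HashMap<u64, usize>,
}

impl MetaStore {
    fn new() -> Self {
        Self { blocks: HashMap::new() }
    }

    /// Record that `node` holds the KV block identified by `block_hash`.
    fn insert(&mut self, block_hash: u64, node: usize) {
        self.blocks.insert(block_hash, node);
    }

    /// prefix_affinity routing (sketch): walk the request's block hashes in
    /// order, stop at the first uncached block, and pick the node that holds
    /// the most blocks of that contiguous prefix. Node 0 is an arbitrary
    /// fallback on a full cache miss.
    fn route(&self, request_blocks: &[u64]) -> usize {
        let mut per_node: HashMap<usize, usize> = HashMap::new();
        let mut best = (0usize, 0usize); // (matched block count, node id)
        for h in request_blocks {
            match self.blocks.get(h) {
                Some(&node) => {
                    let c = per_node.entry(node).or_insert(0);
                    *c += 1;
                    if *c > best.0 {
                        best = (*c, node);
                    }
                }
                None => break, // contiguous prefix ends at first uncached block
            }
        }
        best.1
    }
}

fn main() {
    let mut ms = MetaStore::new();
    ms.insert(1, 2);
    ms.insert(2, 2);
    ms.insert(3, 5);
    // Blocks 1 and 2 are cached on node 2, so the longest-prefix match wins.
    assert_eq!(ms.route(&[1, 2, 9]), 2);
    // Full miss: fall back to node 0.
    assert_eq!(ms.route(&[42]), 0);
    println!("ok");
}
```

A real policy would combine this affinity score with node load (as the cache_score and estimated_ttft policies in the list suggest) rather than following affinity alone.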
 src/cluster/mod.rs | 6 ++++++ (new file)
@@ -0,0 +1,6 @@
+pub mod meta_store;
+#[allow(clippy::module_inception)]
+pub mod cluster;
+
+pub use cluster::Cluster;
+pub use meta_store::MetaStore;