Objectives

- Auto LLM inference config tuner

Key Results

- [7/10] Build the first version of the auto-tuner system
- [7/10] Survey the current state of parallelism config optimization
- [4/10] Understand the possibilities and challenges of automatically arranging the LLM inference compute graph
- [0/10] Trace the vLLM compute graph and data flow
- [3/10] Implement a minimal Rust inference framework
- [1/10] Define the IR for automatic optimization
- [5/10] Profile different parallelism setups with real traces and analyze their differences
- [0/10] Meta-analysis of the theoretical maximum improvement with a heterogeneous setup [offtrack]

Last Week

- [KR1] Updated the workload generator to build from real workloads, giving a more precise spec abstraction. [c969f366](https://ipads.se.sjtu.edu.cn:1312/wangjh/auto-tuner/-/commit/c969f366b05cad03447e1d7bdd9f30785dd792e4)~[7407149d](https://ipads.se.sjtu.edu.cn:1312/wangjh/auto-tuner/-/commit/7407149d1052d3d610fd1fb3e51ce60068ba4981)
- [KR1] Benchmarked and compared generated workloads against raw workloads. Found that when input/output lengths are generated rather than taken from the raw trace, performance varies significantly (see the sketch below).

Next Week

- Find the root cause of the workload performance variation.
- Summarize the insights gathered so far for the auto-tuning path.
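
To make the generated-vs-raw comparison concrete, below is a minimal sketch of the two workload modes being compared. This is not the repository's actual generator; the names (`RequestSpec`, `load_trace`, the JSON-lines trace schema with `arrival_s` / `input_len` / `output_len`) are hypothetical, and the resampling strategy is only an assumption about how generated lengths might differ from the raw trace.

```python
import json
import random
from dataclasses import dataclass
from typing import List


@dataclass
class RequestSpec:
    arrival_s: float   # request arrival time, seconds since trace start
    input_len: int     # prompt length in tokens
    output_len: int    # generation length in tokens


def load_trace(path: str) -> List[RequestSpec]:
    """Load a real trace; each JSON line is assumed to carry
    arrival_s / input_len / output_len fields (hypothetical schema)."""
    specs = []
    with open(path) as f:
        for line in f:
            rec = json.loads(line)
            specs.append(RequestSpec(rec["arrival_s"],
                                     rec["input_len"],
                                     rec["output_len"]))
    return specs


def generate_replay(trace: List[RequestSpec]) -> List[RequestSpec]:
    """Raw mode: replay input/output lengths exactly as recorded."""
    return list(trace)


def generate_sampled(trace: List[RequestSpec], seed: int = 0) -> List[RequestSpec]:
    """Generated mode: keep arrival times but resample input/output
    lengths independently from the trace's empirical distributions.
    This preserves the length marginals while breaking the per-request
    correlation between input and output length, one plausible source
    of the observed performance variation."""
    rng = random.Random(seed)
    in_lens = [r.input_len for r in trace]
    out_lens = [r.output_len for r in trace]
    return [
        RequestSpec(r.arrival_s, rng.choice(in_lens), rng.choice(out_lens))
        for r in trace
    ]
```

Benchmarking both modes against the same serving setup would help localize the variance: if the replay mode matches the raw workload but the sampled mode does not, the discrepancy likely comes from the lost input/output length correlation rather than from the length distributions themselves.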