Objectives

- Auto LLM inference config tuner

Key Results

- [5/10] Build the first version of the auto-tuner system
- [5/10] Survey the current state of parallelism config optimization
- [4/10] Understand the possibilities and challenges of automatically arranging the LLM inference compute graph
- [0/10] Trace the vLLM compute graph and data flow
- [3/10] Implement a minimal Rust inference framework
- [1/10] Define the IR for automatic optimization
- [5/10] Profile different parallelism setups with real traces and analyze their differences
- [0/10] Meta-analysis of the theoretical maximum improvement with a heterogeneous setup [offtrack]

Last Week

- [KR1] Struggled to prepare the running environment for Qwen3-Max-fp4; tried to fix or bypass many dependency and code problems.
- [misc] Prepared the first version of the review agent w/ Yingyi. [0b288d64](https://ipads.se.sjtu.edu.cn:1312/shadowpc/deep-review/-/commit/0b288d643301edcb19be6baf394710ce35a2dd74) ~ [57093ff4](https://ipads.se.sjtu.edu.cn:1312/shadowpc/deep-review/-/commit/57093ff4a5782dbfa6e40456b9c0825df5576f8b)

Next Week

- Think about the core insight behind our system's target.
- Continue implementing the tuner part of our system.