Objectives
- Auto LLM inference config tuner

Key Results
- [4/10] Build the first version of the auto-tuner system
- [5/10] Assess the current state of parallelism config optimization
- [4/10] Understand the possibilities and challenges of automatically arranging LLM inference compute graphs
- [0/10] Trace the vLLM compute graph and data flow
- [3/10] Implement a minimal Rust inference framework
- [1/10] Define the IR for automatic optimization
- [5/10] Profile different parallelism setups with real traces and analyze their differences
- [0/10] Meta-analysis of the theoretical maximum improvement with a heterogeneous setup [offtrack]

Last Week
- [KR1] Wrote code for a basic config generator and a benchmark to check performance (see the sketch at the end of this report).
- [KR1] Explored approaches for tuning the config toward better performance.

Next Week
- Benchmark the baseline and some hand-tuned configs to demonstrate the necessity of config tuning.
- Continue designing the auto-tuning approach.
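
As context for the KR1 items above, here is a minimal sketch of what a parallelism config generator plus benchmark loop could look like. It is not the actual tuner code: the constants (`NUM_GPUS`, `NUM_ATTENTION_HEADS`), the `fake_run` stub, and the specific config keys are all hypothetical, with `run_inference` standing in for a real engine launch (e.g. vLLM) that generates a fixed token budget.

```python
import itertools
import time

# Hypothetical deployment description; replace with the real hardware/model.
NUM_GPUS = 8
NUM_ATTENTION_HEADS = 32  # e.g. a Llama-7B-class model


def generate_configs(num_gpus: int, num_heads: int):
    """Enumerate (tensor_parallel, pipeline_parallel) pairs that satisfy
    the basic divisibility constraints most inference engines impose."""
    for tp, pp in itertools.product([1, 2, 4, 8], repeat=2):
        if tp * pp > num_gpus:
            continue  # not enough devices for this combination
        if num_heads % tp != 0:
            continue  # attention heads must split evenly across TP ranks
        yield {"tensor_parallel_size": tp, "pipeline_parallel_size": pp}


def benchmark(config: dict, run_inference) -> float:
    """Time one fixed workload under `config` and return tokens/s.
    `run_inference(config)` is a placeholder for launching the engine
    with the config and returning the number of tokens generated."""
    start = time.perf_counter()
    tokens_generated = run_inference(config)
    elapsed = time.perf_counter() - start
    return tokens_generated / elapsed


def fake_run(config: dict) -> int:
    """Stub workload so the script runs standalone; swap in a real
    engine launch + generation call."""
    time.sleep(0.01)   # stand-in for real generation latency
    return 10_000      # pretend we generated 10k tokens


if __name__ == "__main__":
    results = {
        tuple(cfg.values()): benchmark(cfg, fake_run)
        for cfg in generate_configs(NUM_GPUS, NUM_ATTENTION_HEADS)
    }
    best = max(results, key=results.get)
    print("best (tp, pp):", best, "tokens/s:", results[best])
```

Exhaustive enumeration is viable here because the valid single-node config space is tiny after the divisibility constraints; a real tuner would also need memory-capacity checks and extra dimensions such as max batch size before search strategy matters.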