Objectives
- Automatic config optimization for distributed LLM inference

Key Results
- [4/10] Understand the possibilities and challenges of automatically arranging the LLM inference compute graph
- [0/10] Trace the vLLM compute graph and data flow
- [3/10] Implement a minimal Rust inference framework
- [1/10] Define the IR for automatic optimization (a rough sketch follows this report)
- [5/10] Profile different parallelism setups with real traces and analyze their differences
- [0/10] Meta-analysis of the theoretical maximum improvement with a heterogeneous setup (see the bound sketched below) [offtrack]

Last Week
- [KR1] Learned how vLLM implements DBO (dual batch overlap). Checked the feasibility of automatically applying an execution flow from a generated config.
- [misc] Wrote a paper commentary for SOSP.

Next Week
- Summarize the optimizations in Qwen.
- Profile the model's different stages (modules) and analyze their overlap status.
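
Since the IR KR is still at 1/10, here is a minimal sketch of one possible shape for it, in Rust to match the minimal inference framework: a compute graph of coarse-grained modules annotated with parallelism choices, plus a toy cost model that a search pass could score candidate configs against. All type and field names are hypothetical, not taken from vLLM or any existing codebase, and the cost model ignores everything except compute and ingress bytes.

```rust
#![allow(dead_code)]

/// Hypothetical parallelism annotations a rewrite pass could toggle per node.
#[derive(Debug, Clone, Copy)]
enum Parallelism {
    Replicated,                   // data parallel: full copy per device
    TensorSharded { ways: u32 },  // tensor parallel across `ways` devices
    PipelineStage { stage: u32 }, // assigned to one pipeline stage
}

/// One coarse-grained module (e.g., an attention or MLP block).
#[derive(Debug)]
struct Node {
    name: String,
    flops: f64,    // estimated compute cost of this module
    bytes_in: f64, // activation bytes received from predecessors
    parallelism: Parallelism,
}

#[derive(Debug)]
struct Graph {
    nodes: Vec<Node>,
    edges: Vec<(usize, usize)>, // data-flow edges by node index
}

impl Graph {
    /// Toy cost model: per-node compute time plus communication time,
    /// assuming a fixed device throughput and interconnect bandwidth
    /// and no overlap between the two. A real model would be calibrated
    /// against profiled traces.
    fn estimated_latency(&self, flops_per_sec: f64, bandwidth: f64) -> f64 {
        self.nodes
            .iter()
            .map(|n| {
                let compute = match n.parallelism {
                    Parallelism::TensorSharded { ways } => {
                        n.flops / flops_per_sec / ways as f64
                    }
                    _ => n.flops / flops_per_sec,
                };
                compute + n.bytes_in / bandwidth
            })
            .sum()
    }
}

fn main() {
    let graph = Graph {
        nodes: vec![
            Node {
                name: "attention".into(),
                flops: 4.0e12,
                bytes_in: 1.0e8,
                parallelism: Parallelism::TensorSharded { ways: 4 },
            },
            Node {
                name: "mlp".into(),
                flops: 8.0e12,
                bytes_in: 2.0e8,
                parallelism: Parallelism::PipelineStage { stage: 1 },
            },
        ],
        edges: vec![(0, 1)],
    };
    // 100 TFLOP/s per device, 50 GB/s interconnect: made-up numbers.
    println!(
        "estimated latency: {:.4} s",
        graph.estimated_latency(1.0e14, 5.0e10)
    );
}
```

An automatic optimizer would then enumerate or search over the `Parallelism` annotations and keep the graph minimizing `estimated_latency`; the open question for the KR is what granularity of node and what cost-model fidelity the search actually needs.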
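For the heterogeneous-setup meta-analysis, one back-of-envelope upper bound, assuming perfectly divisible work \(W\), device speeds \(s_1, \dots, s_n\), and zero communication cost (so this is only a starting point, not the analysis itself):

```latex
% Naive uniform split: each device gets W/n, so the slowest device dominates.
% Balanced split: device i gets work proportional to s_i.
\[
T_{\text{uniform}} = \frac{W}{n \,\min_i s_i}, \qquad
T_{\text{balanced}} = \frac{W}{\sum_i s_i}, \qquad
\frac{T_{\text{uniform}}}{T_{\text{balanced}}} = \frac{\sum_i s_i}{n \,\min_i s_i}.
\]
```

For example, four devices at relative speeds (1, 1, 1, 4) give a bound of 7/4, so heterogeneity-aware partitioning alone cannot improve on a uniform split by more than 1.75x under these assumptions; communication and memory constraints would only lower the achievable gain.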