Objectives
- Automatic distributed LLM inference configuration optimization

Key Results
- [3/10] Implement a minimal Rust inference framework
- [1/10] Define the IR for automatic optimization (see the sketch at the end of this report)
- [0/10] Trace the vLLM compute graph and data flow
- [2/10] Understand the possibilities and challenges of automatically arranging the LLM inference compute graph
- [5/10] Profile different parallelism setups with real traces and analyze their differences
- [0/10] Meta-analysis of the theoretical maximum improvement with a heterogeneous setup [offtrack]

Last Week
- [KR2] Rethought the project target and the definition of the IR for automatic distribution optimization.
- [KR2] Learned some category theory for IR abstraction.
- [KR2] Surveyed TVM and MLC LLM to learn about their IR abstractions.

Next Week
- Profile the compute and communication time of kernels to show the micro-batch bubble under different models and input lengths (see the bubble-ratio sketch below).
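As a strawman for KR2, here is a minimal sketch of what a distribution-optimization IR could look like in Rust. Every type and variant name (`Sharding`, `Op`, `Node`, `Graph`) is an assumption for illustration, not the project's actual design; the point is that a parallelism config reduces to an assignment of shardings over graph nodes, which an optimizer can search over.

```rust
// Hypothetical IR sketch for automatic distribution optimization.
// All names and variants are assumptions, not the project's real IR.

/// How a tensor dimension is laid out across devices.
#[derive(Debug, Clone)]
enum Sharding {
    Replicated,           // full copy on every device
    Split { dim: usize }, // partitioned along `dim`
}

/// A node in the inference compute graph: compute or collective.
#[derive(Debug, Clone)]
enum Op {
    MatMul { m: usize, k: usize, n: usize },
    AllReduce { bytes: usize },
}

/// An op plus the sharding chosen for its output; `inputs` are
/// indices of producer nodes in topological order.
#[derive(Debug, Clone)]
struct Node {
    op: Op,
    output_sharding: Sharding,
    inputs: Vec<usize>,
}

/// A parallelism "config" is just the sharding assignment over
/// this node list; an optimizer searches over those assignments.
struct Graph {
    nodes: Vec<Node>,
}

fn main() {
    // Tiny two-node graph: a column-split matmul followed by an
    // all-reduce that restores a replicated activation.
    let g = Graph {
        nodes: vec![
            Node {
                op: Op::MatMul { m: 1, k: 4096, n: 4096 },
                output_sharding: Sharding::Split { dim: 1 },
                inputs: vec![],
            },
            Node {
                op: Op::AllReduce { bytes: 4096 * 4 },
                output_sharding: Sharding::Replicated,
                inputs: vec![0],
            },
        ],
    };
    println!("graph has {} nodes", g.nodes.len());
}
```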
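For next week's profiling plan, one useful baseline is the classic GPipe/Megatron-style estimate: with p equal-duration pipeline stages and m micro-batches, the bubble occupies (p - 1) / (m + p - 1) of the total step time. A small sketch for sanity-checking measured traces against this bound; the function name and the example numbers are placeholders, not measurements.

```rust
// Hypothetical sanity check for measured pipeline traces against
// the classic bubble estimate. Numbers below are placeholders.

/// Fraction of a pipeline step spent idle with p equal-duration
/// stages and m micro-batches: (p - 1) / (m + p - 1).
fn bubble_ratio(stages: usize, micro_batches: usize) -> f64 {
    let p = stages as f64;
    let m = micro_batches as f64;
    (p - 1.0) / (m + p - 1.0)
}

fn main() {
    // More micro-batches shrink the bubble for a fixed stage count.
    for &m in &[1, 4, 16, 64] {
        println!("p = 4, m = {:>2}: bubble = {:>5.1}%", m, 100.0 * bubble_ratio(4, m));
    }
}
```

Comparing measured kernel compute and communication times per micro-batch against this idealized ratio should show how much of the observed bubble comes from pipeline structure versus unoverlapped communication.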