Objectives
- Automatic distributed LLM inference configuration optimization

Key Results
- [3/10] Implement a minimal Rust inference framework
- [0/10] Trace the vLLM compute graph and data flow
- [6/10] Profile vLLM to obtain its compute graph
- [2/10] Understand the possibilities and challenges of automatically arranging the LLM inference compute graph
- [5/10] Profile different parallelism setups with real traces and analyze their differences
- [0/10] Meta-analysis of the theoretical maximum improvement with a heterogeneous setup [offtrack]

Last Week
- [KR1] Learned and implemented a simple LLM inference pipeline in candle (a structural sketch follows below).
- [KR1] Debugged the float-precision problem in candle, trying to isolate the root cause: the kernel library or Rust float precision (a precision-check sketch follows below).

Next Week
- Think through the structure of the inference framework.
- Continue the Rust implementation.
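For KR1 and next week's framework structuring, a minimal sketch of one possible shape for the decode loop in candle: a `Model` trait plus a greedy `generate` loop written against it, so the same loop can later be reused across backends or parallelism setups. The trait, the `Dummy` model, and all names here are hypothetical placeholders, not the current code.

```rust
use candle_core::{DType, Device, IndexOp, Result, Tensor};

/// One forward step: token ids in, next-token logits out.
/// `pos` marks where the sequence starts, for a future KV cache.
trait Model {
    fn forward(&mut self, tokens: &Tensor, pos: usize) -> Result<Tensor>;
}

/// Greedy decode loop, generic over `Model` so parallelism
/// concerns stay inside the model implementation.
fn generate<M: Model>(
    model: &mut M,
    mut tokens: Vec<u32>,
    steps: usize,
    dev: &Device,
) -> Result<Vec<u32>> {
    for pos in 0..steps {
        // (1, seq) batch of the tokens produced so far.
        let input = Tensor::new(tokens.as_slice(), dev)?.unsqueeze(0)?;
        // Assumed convention: forward returns (1, seq, vocab) logits.
        let logits = model.forward(&input, pos)?;
        let last = logits.i((0, logits.dim(1)? - 1))?; // (vocab,)
        let next = last.argmax(0)?.to_scalar::<u32>()?;
        tokens.push(next);
    }
    Ok(tokens)
}

/// Toy model emitting uniform logits, only to exercise the loop.
struct Dummy {
    vocab: usize,
}

impl Model for Dummy {
    fn forward(&mut self, tokens: &Tensor, _pos: usize) -> Result<Tensor> {
        let (b, s) = tokens.dims2()?;
        Tensor::zeros((b, s, self.vocab), DType::F32, tokens.device())
    }
}

fn main() -> Result<()> {
    let dev = Device::Cpu;
    let out = generate(&mut Dummy { vocab: 16 }, vec![1u32, 2, 3], 4, &dev)?;
    println!("{out:?}");
    Ok(())
}
```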
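For the float-precision debugging, a sketch of the kind of check that can separate kernel-library drift from ordinary f32 rounding: run the same matmul through the f32 kernel and through an f64 reference, then compare. The shapes and the informal tolerance are assumptions for illustration, not the actual debugging code.

```rust
use candle_core::{DType, Device, Result, Tensor};

fn main() -> Result<()> {
    let dev = Device::Cpu;

    // Hypothetical layer shapes; real ones would come from the model config.
    let x32 = Tensor::randn(0f32, 1.0, (64, 512), &dev)?;
    let w32 = Tensor::randn(0f32, 1.0, (512, 512), &dev)?;

    // Same inputs promoted to f64 as a higher-precision reference.
    let x64 = x32.to_dtype(DType::F64)?;
    let w64 = w32.to_dtype(DType::F64)?;

    // f32 kernel result, promoted to f64 only for the comparison.
    let y32 = x32.matmul(&w32)?.to_dtype(DType::F64)?;
    let y64 = x64.matmul(&w64)?;

    // Max absolute deviation. A value orders of magnitude above
    // epsilon-level f32 rounding would point at the kernel library
    // rather than at Rust float precision.
    let diff = y32.sub(&y64)?.abs()?.flatten_all()?.max(0)?;
    println!("max |f32 - f64| = {:e}", diff.to_scalar::<f64>()?);
    Ok(())
}
```

Repeating this check per op (matmul, softmax, layer norm) and per device would narrow down which kernel introduces the drift.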