Objectives
- Heterogeneous parallelism in the cluster
- EP design for inference performance [untracked]

Key Results
- [6/10] Profile vLLM to get the compute graph
- [2/10] Understand the possibilities/challenges of automatically rearranging the LLM inference compute graph
- [5/10] Profile different parallelism setups with real traces and analyze their differences
- [0/10] Meta-analysis of the theoretical maximum improvement with a heterogeneous setup
- [0/10] Fully understand how EP (static/dynamic) influences performance
- [4/10] Analyze correlations between MoE layers [suspended]

Last Week
- [KR2] Learned about Triton (vLLM has many kernels implemented in Triton); ran a demo compiling a Python Triton kernel to PTX, then loading and calling it from Rust.
- [KR2] Tried a demo running vLLM's flash-attention kernel from Rust.

Next Week
- Find a way to capture the full compute flow and data flow in vLLM, then replay it in Rust.