Objectives
- Automatic configuration optimization for distributed LLM inference
Key Results
- [5/10] Survey the current state of parallelism config optimization
- [4/10] Understand the feasibility of, and challenges in, automatically arranging the LLM inference compute graph
- [0/10] Trace the vLLM compute graph and data flow
- [3/10] Implement a minimal Rust inference framework
- [1/10] Define the IR for automatic optimization (see the sketch after this list)
- [5/10] Profile different parallelism setups with real traces and analyze their differences
- [0/10] Meta-analysis of the theoretical maximum improvement with a heterogeneous setup [offtrack]
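As a concrete starting point for the Rust framework and IR key results, here is a minimal sketch of what an operator-level IR annotated with a parallelism config could look like. Every type name, field, and model dimension below is a hypothetical placeholder, not a committed design; per-op costs and the actual mapping onto GPUs would come from the profiling work.

```rust
// Hypothetical sketch of an operator-level IR annotated with a parallelism
// config. All names, fields, and dimensions are illustrative placeholders.

#[derive(Debug, Clone, Copy)]
struct ParallelismConfig {
    tensor_parallel: usize,   // TP degree (intra-op sharding)
    pipeline_parallel: usize, // PP degree (inter-op stages)
    data_parallel: usize,     // DP degree (replicated graphs)
}

#[derive(Debug, Clone)]
enum Op {
    Embedding { vocab: usize, hidden: usize },
    Attention { heads: usize, hidden: usize },
    MoE { experts: usize, hidden: usize, ffn: usize },
    LmHead { vocab: usize, hidden: usize },
}

#[derive(Debug, Clone)]
struct Node {
    op: Op,
    inputs: Vec<usize>, // indices of producer nodes
}

#[derive(Debug, Clone)]
struct Graph {
    nodes: Vec<Node>,
    config: ParallelismConfig,
}

impl Graph {
    /// Total number of GPUs implied by the config.
    fn world_size(&self) -> usize {
        self.config.tensor_parallel * self.config.pipeline_parallel * self.config.data_parallel
    }

    /// Example static constraint: attention heads must split evenly across TP ranks.
    fn tp_valid(&self) -> bool {
        self.nodes.iter().all(|n| match n.op {
            Op::Attention { heads, .. } => heads % self.config.tensor_parallel == 0,
            _ => true,
        })
    }
}

fn main() {
    // A single decoder-layer-like chain with made-up dimensions.
    let g = Graph {
        nodes: vec![
            Node { op: Op::Embedding { vocab: 152_064, hidden: 4096 }, inputs: vec![] },
            Node { op: Op::Attention { heads: 32, hidden: 4096 }, inputs: vec![0] },
            Node { op: Op::MoE { experts: 64, hidden: 4096, ffn: 11_008 }, inputs: vec![1] },
            Node { op: Op::LmHead { vocab: 152_064, hidden: 4096 }, inputs: vec![2] },
        ],
        config: ParallelismConfig { tensor_parallel: 2, pipeline_parallel: 2, data_parallel: 1 },
    };
    println!("nodes = {}, world size = {}, tp valid = {}", g.nodes.len(), g.world_size(), g.tp_valid());
}
```

The `tp_valid` check is only there to show where static constraints (head divisibility, per-rank memory) would attach to the IR before any config search runs.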
Last Week
- [KR1] Summarized the optimizations for Qwen models: fused_moe kernels, attention optimization, and data-copy reduction.
- [KR1] Surveyed the workflow for parallelism config search at Ali.
- [misc] Finished 3 homework assignments for courses.
Next Week
- Explore whether configs can be searched automatically with an AI-driven approach like AlphaEvolve (toy sketch below).
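To make that idea concrete, here is a toy sketch of a mutate-and-select loop over parallelism configs, assuming an 8-GPU cluster. The analytic cost function is only a stand-in for replaying real traces from the profiling KR, and an actual AlphaEvolve-style system would have an LLM propose the mutations instead of a PRNG; every name and constant here is illustrative.

```rust
// Toy sketch of a mutate-and-select search over parallelism configs.
// Everything here is hypothetical: the cost model stands in for replaying
// real traces, and an AlphaEvolve-style system would have an LLM propose
// the mutations instead of a PRNG.

#[derive(Debug, Clone, Copy)]
struct Config { tp: usize, pp: usize, dp: usize }

/// Placeholder analytic cost (lower is better): ideal compute speedup plus
/// rough penalties for tensor-parallel all-reduces and pipeline bubbles.
/// Memory capacity and interconnect topology are deliberately ignored.
fn cost(c: Config, gpus: usize) -> f64 {
    if c.tp * c.pp * c.dp != gpus {
        return f64::INFINITY; // not a valid mapping onto the cluster
    }
    let compute = 1.0 / (c.tp * c.dp) as f64;
    let tp_comm = 0.05 * (c.tp as f64 - 1.0);
    let pp_bubble = 0.10 * (c.pp as f64 - 1.0) / c.pp as f64;
    compute + tp_comm + pp_bubble
}

/// Tiny xorshift PRNG so the sketch needs no external crates.
fn xorshift(state: &mut u64) -> u64 {
    *state ^= *state << 13;
    *state ^= *state >> 7;
    *state ^= *state << 17;
    *state
}

/// Rebalancing move: halve one parallelism degree and double another, so the
/// product tp * pp * dp always stays equal to the GPU count.
fn mutate(c: Config, state: &mut u64) -> Config {
    let mut d = [c.tp, c.pp, c.dp];
    let from = (xorshift(state) % 3) as usize;
    let to = (xorshift(state) % 3) as usize;
    if from != to && d[from] % 2 == 0 {
        d[from] /= 2;
        d[to] *= 2;
    }
    Config { tp: d[0], pp: d[1], dp: d[2] }
}

fn main() {
    let gpus = 8;
    let mut state = 0x1234_5678_9abc_def0_u64;
    let mut best = Config { tp: 8, pp: 1, dp: 1 };
    for _ in 0..1_000 {
        let candidate = mutate(best, &mut state);
        if cost(candidate, gpus) < cost(best, gpus) {
            best = candidate;
        }
    }
    println!("best config: {:?}, cost = {:.3}", best, cost(best, gpus));
}
```

The rebalancing move keeps every candidate a valid mapping onto the cluster; the open question for next week is whether an LLM-guided proposer can beat this kind of blind local search once the cost signal comes from real profiles instead of a formula.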