Objectives
- Auto LLM inference config tuner
Key Results
- [7/10] Build the first version of the auto-tuner system (a sketch of the core search loop follows this list)
- [7/10] Survey the current state of parallelism config optimization
- [4/10] Understand the feasibility of, and challenges in, automatically arranging the LLM inference compute graph
- [0/10] Trace the vLLM compute graph and data flow
- [3/10] Implement a minimal Rust inference framework
- [1/10] Define the IR for automatic optimization
- [5/10] Profile different parallelism setups with real traces and analyze their differences
- [0/10] Meta-analysis of the theoretical maximum improvement with a heterogeneous setup [offtrack]
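
An illustrative sketch of the tuner's core search loop, under assumed interfaces (the report does not show the real system's API; `candidate_configs`, `tune`, and the `benchmark` callback are hypothetical names). It enumerates tensor-/pipeline-parallel configurations that exactly use the GPU budget and keeps the one with the best measured throughput:

```python
# Hypothetical sketch of an auto-tuner search loop, not the actual system.
from itertools import product


def candidate_configs(num_gpus: int):
    """Yield (tensor_parallel, pipeline_parallel) splits that use all GPUs."""
    for tp, pp in product(range(1, num_gpus + 1), repeat=2):
        if tp * pp == num_gpus:
            yield {"tensor_parallel": tp, "pipeline_parallel": pp}


def tune(num_gpus: int, benchmark) -> dict:
    """Pick the config with the highest measured throughput.

    `benchmark(config) -> tokens/s` is assumed to run the target workload
    under the given parallelism config and report throughput.
    """
    best_cfg, best_tput = None, float("-inf")
    for cfg in candidate_configs(num_gpus):
        tput = benchmark(cfg)
        if tput > best_tput:
            best_cfg, best_tput = cfg, tput
    return best_cfg
```

Exhaustive enumeration is only shown for clarity; for 8 GPUs it tries just (1,8), (2,4), (4,2), (8,1), but a real tuner would likely prune or model the space rather than benchmark every point.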
Last Week
- [KR1] Updated the workload generator to build on real workloads, giving a more precise spec abstraction. c969f366~7407149d
- [KR1] Benchmarked generated workloads against raw workloads. Found that when input/output lengths are generated rather than replayed, performance varies significantly across runs (see the sketch after this list).
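
A minimal sketch of the replay-vs-generate distinction behind that finding, with hypothetical names (`Request`, `replay_lengths`, `generate_lengths` are illustrative, not the generator's real code). Resampling lengths independently preserves the marginal distributions but drops the per-request pairing of input and output lengths, one plausible source of the observed variance:

```python
# Illustrative model of the workload spec abstraction, not the real generator.
import random
from dataclasses import dataclass


@dataclass
class Request:
    arrival_s: float   # arrival time offset in seconds
    input_len: int     # prompt length in tokens
    output_len: int    # generation length in tokens


def replay_lengths(trace: list[Request]) -> list[Request]:
    """Raw workload: keep the recorded per-request lengths verbatim."""
    return list(trace)


def generate_lengths(trace: list[Request], seed: int = 0) -> list[Request]:
    """Generated workload: resample lengths from the trace's empirical
    distribution. Marginals match the trace, but input/output correlation
    and ordering are lost, which can shift batching behavior run to run."""
    rng = random.Random(seed)
    inputs = [r.input_len for r in trace]
    outputs = [r.output_len for r in trace]
    return [
        Request(r.arrival_s, rng.choice(inputs), rng.choice(outputs))
        for r in trace
    ]
```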
Next Week
- Find the root cause of the workload performance variation.
- Summarize the insights gathered so far for the auto-tuning path.