Objectives
- Auto LLM inference config tuner
Key Results
- [7/10] Build the first version of the auto-tuner system
- [7/10] Survey the current state of parallelism config optimization
- [4/10] Understand the feasibility of and challenges in automatically arranging the LLM inference compute graph
- [0/10] Trace vLLM compute graph and data flow
- [3/10] Implement a minimal Rust inference framework
- [1/10] Define the IR for automatic optimization
- [5/10] Profile different parallelism setups with real traces and analyze their differences
- [0/10] Meta-analysis of the theoretical maximum improvement with a heterogeneous setup [offtrack]
Last Week
- [KR1] Update the workload generator to sample from real workloads: besides different timestamps, it now also supports input_length, output_length, and KVCache hit ratio taken from real workloads. Then benchmark whether an abstract spec can replay similar performance (see the sketch after this list). b0bcfa63~fb1f0848
- [KR1] Check the root cause of the performance gap across similar workloads; the difference mainly comes from differing inference load.
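As a rough illustration of the abstract workload spec above, the sketch below shows one possible shape for a replayable per-request spec and a replay loop. The field names (input_length, output_length, kv_cache_hit_ratio) and the replay helper are assumptions for illustration, not the project's actual interface.

```python
import time
from dataclasses import dataclass


@dataclass
class RequestSpec:
    """One request sampled from a real workload trace (hypothetical schema)."""
    timestamp: float           # arrival offset in seconds from trace start
    input_length: int          # prompt length in tokens
    output_length: int         # generation length in tokens
    kv_cache_hit_ratio: float  # fraction of the prompt expected to hit the prefix cache


def replay(specs: list[RequestSpec], send_request) -> None:
    """Replay an abstract spec against an inference endpoint.

    `send_request` is any callable that issues one request for a spec;
    sleeping between requests reproduces the original arrival pattern.
    """
    start = time.monotonic()
    for spec in sorted(specs, key=lambda s: s.timestamp):
        delay = spec.timestamp - (time.monotonic() - start)
        if delay > 0:
            time.sleep(delay)
        send_request(spec)
```

Replaying such a spec against the same server config and comparing latency/throughput curves with the original trace is one way to judge whether the abstraction is precise enough.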
Next Week
- Update the workload abstraction spec so that replayed performance matches the real workload more precisely.