Objectives

- Auto LLM inference config tuner

Key Results

- [7/10] Build the first version of the auto-tuner system
- [7/10] Survey the current state of parallelism config optimization
- [4/10] Understand the feasibility and challenges of automatically rearranging the LLM inference compute graph
- [0/10] Trace the vLLM compute graph and data flow
- [3/10] Implement a minimal Rust inference framework
- [1/10] Define the IR for automatic optimization
- [5/10] Profile different parallelism setups with real traces and analyze their differences
- [0/10] Meta-analysis of the theoretical maximum improvement with a heterogeneous setup [offtrack]
|
Last Week

- [KR1] Extended the workload generator to be driven by real workloads: it now replays not only request timestamps but also input_length, output_length, and the KV-cache hit ratio taken from real traces. Then benchmarked whether an abstract workload spec can replay similar performance (see the sketch after this list). [b0bcfa63](https://ipads.se.sjtu.edu.cn:1312/wangjh/auto-tuner/-/commit/b0bcfa6326f69755aaaf859d89ad2def2409cd48)~[fb1f0848](https://ipads.se.sjtu.edu.cn:1312/wangjh/auto-tuner/-/commit/fb1f084815342d6b8379f3b191ed152a3c1cda67)
- [KR1] Investigated the root cause of the performance gap across similar workloads; the difference comes mainly from differing inference load.
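Roughly, such an abstract workload spec can be thought of as one record per request plus a replay loop. The sketch below is a minimal illustration of that idea, not the actual generator code: the names (`RequestSpec`, `replay`, `synthesize_prompt`) and the shared-prefix trick for approximating the KV-cache hit ratio are assumptions for illustration only.

```python
import random
import time
from dataclasses import dataclass


@dataclass
class RequestSpec:
    """One request in the abstract workload spec (fields mirror the
    dimensions replayed from real traces)."""
    arrival_s: float      # arrival time, seconds from trace start
    input_length: int     # prompt tokens
    output_length: int    # tokens to generate
    kv_hit_ratio: float   # fraction of prompt tokens expected to hit the KV cache


def synthesize_prompt(spec: RequestSpec, shared_prefix: list[int],
                      vocab: int = 32000) -> list[int]:
    # Approximate the target KV-cache hit ratio by reusing a shared token
    # prefix for the "hit" portion and drawing fresh tokens for the rest.
    n_hit = int(spec.input_length * spec.kv_hit_ratio)
    tail = [random.randrange(vocab) for _ in range(spec.input_length - n_hit)]
    return shared_prefix[:n_hit] + tail


def replay(specs: list[RequestSpec], send_fn) -> None:
    """Replay the spec against an inference endpoint.

    `send_fn(spec)` is whatever issues one request (e.g. an HTTP call to a
    vLLM server); sleeping between requests reproduces the trace's timing.
    """
    start = time.monotonic()
    for spec in sorted(specs, key=lambda s: s.arrival_s):
        delay = spec.arrival_s - (time.monotonic() - start)
        if delay > 0:
            time.sleep(delay)
        send_fn(spec)
```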
Next Week

- Refine the workload abstraction spec so that replayed performance matches the real trace more closely.