## Objectives
- Auto LLM inference config tuner

## Key Results

- [8/10] Build the agentic tuner system
- [2/10] Draft the paper outline
- [10/10] Build the first version of the auto-tuner system
- [2/10] Develop workload grouping methods
- [8/10] Survey the current state of parallelism config optimization
- [4/10] Understand the possibilities and challenges of automatically arranging the LLM inference compute graph
- [1/10] Define the IR for automatic optimization
- [5/10] Profile different parallelism setups with real traces and analyze their differences
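To give a feel for the parallelism search space these key results target, here is a minimal sketch of enumerating candidate (TP, PP, DP) layouts and pruning illegal ones up front. The GPU budget, layer count, and the two constraints are illustrative assumptions for this example, not the tuner's actual rules.

```python
from itertools import product

# Illustrative assumptions, not the real tuner's settings.
NUM_GPUS = 8     # fixed GPU budget per deployment
NUM_LAYERS = 32  # model layers; PP must split them evenly

def candidate_configs(num_gpus=NUM_GPUS, num_layers=NUM_LAYERS):
    """Enumerate (TP, PP, DP) degrees and prune illegal combinations."""
    configs = []
    for tp, pp, dp in product([1, 2, 4, 8], repeat=3):
        if tp * pp * dp != num_gpus:
            continue  # must use exactly the GPU budget
        if num_layers % pp != 0:
            continue  # pipeline stages must split the layers evenly
        configs.append({"tp": tp, "pp": pp, "dp": dp})
    return configs
```

Even for 8 GPUs this yields 10 legal layouts; filtering out illegal combinations before search is what keeps an agentic tuner from wasting trials on configurations that cannot launch.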

## Last Week

- [KR1] Update the agentic AITuner to support DP vs. replicas and early-stop error handling, and fix problems with illegal constraints in the large search space. [6c0940e7](https://ipads.se.sjtu.edu.cn:1312/wangjh/auto-tuner/-/commit/6c0940e7b0a234265290398fe0a7ca7b7f3d4178) ~ [0cbc1727](https://ipads.se.sjtu.edu.cn:1312/wangjh/auto-tuner/-/commit/0cbc1727c06589ea9b021b223883d0fd114fd4c7)
- [KR2] Prepare a draft of the paper outline, summarizing the current story and next steps.
- [misc] Prepare a [paper template](https://ipads.se.sjtu.edu.cn:1312/wangjh/paper-ai-tuner).
- [misc] Open-source our new trace and trace replayer at https://github.com/alibaba-edu/qwen-bailian-usagetraces-anon.
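The core of a trace replayer is firing each recorded request when its (possibly scaled) arrival time elapses. A minimal sketch of that idea, assuming a simple record schema — the field names `arrival_ts` and `prompt_len` are illustrative, not necessarily the released dataset's schema:

```python
import time

def replay(trace, send_fn, speedup=1.0):
    """Send each request when its scaled recorded arrival time elapses.

    Assumed schema: each record has 'arrival_ts' in seconds; timestamps
    are relative to the first request. speedup > 1 compresses the trace.
    """
    start = time.monotonic()
    t0 = trace[0]["arrival_ts"]
    for req in trace:
        target = (req["arrival_ts"] - t0) / speedup
        delay = target - (time.monotonic() - start)
        if delay > 0:
            time.sleep(delay)
        send_fn(req)

# Toy usage: collect "sent" requests in order instead of hitting a server.
sent = []
trace = [{"arrival_ts": 0.00, "prompt_len": 128},
         {"arrival_ts": 0.05, "prompt_len": 256}]
replay(trace, sent.append, speedup=10.0)
```

In a real run, `send_fn` would issue the request to the serving endpoint (ideally asynchronously, so a slow response does not delay later arrivals).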

## Next Week

- Compare against the Ali production environment's configs.