## Objectives
- Auto LLM inference config tuner
## Key Results
- [4/10] Build the agentic tuner system
- [10/10] Build the first version of the auto-tuner system
- [2/10] Workload grouping methods
- [8/10] Survey the current state of parallelism config optimization
- [4/10] Understand the possibilities and challenges of automatically arranging the LLM inference compute graph
- [1/10] Define the IR for automatic optimization (a hypothetical sketch follows this list)
- [5/10] Profile different parallelism setups with real traces and analyze their differences
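
Since the IR design is still at an early stage and the report does not describe it, the following is a purely hypothetical sketch of what such an IR could look like, expressed as plain Python dataclasses. Every field name here is an assumption, not the actual design.

```python
# Hypothetical sketch only: the report does not specify the IR's design.
# All field names below are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class ParallelismSpec:
    tensor_parallel: int = 1    # intra-layer sharding degree
    pipeline_parallel: int = 1  # number of pipeline stages
    data_parallel: int = 1      # replicated engine count

@dataclass(frozen=True)
class InferenceConfig:
    model: str
    parallelism: ParallelismSpec
    max_batch_size: int
    max_seq_len: int

# Example: a config the tuner could emit and a backend could consume.
cfg = InferenceConfig(
    model="llama-3-8b",  # illustrative model name
    parallelism=ParallelismSpec(tensor_parallel=2),
    max_batch_size=32,
    max_seq_len=4096,
)
print(cfg)
```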
## Last Week
- [KR1] Refactor the first version of the auto-tuner system to make it more agentic. [4e3b15b6](https://ipads.se.sjtu.edu.cn:1312/wangjh/auto-tuner/-/commit/4e3b15b60819fb61d04148302be68bb66e9dda7b) ~ [095c1edd](https://ipads.se.sjtu.edu.cn:1312/wangjh/auto-tuner/-/commit/095c1edda49bfd8dad70bed20e81564c29ae3e8a)
- Support a tool library for our tuner system to call (see the first sketch after this list)
- Speed up the tuning time
- Support early stopping for bad configs (see the second sketch after this list)
- Support using the LLM to predict the performance trend and reflect on results (see the third sketch after this list)
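
Below is a minimal sketch of what a callable tool library could look like, assuming a registry that maps tool names and descriptions to Python functions the LLM agent can invoke. The tool names and signatures (`run_benchmark`, `call_tool`) are illustrative assumptions, not the auto-tuner's actual API.

```python
# Minimal sketch of a tool library for an LLM agent; all names here
# are illustrative assumptions, not the auto-tuner's actual API.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Tool:
    name: str
    description: str  # surfaced to the LLM so it can choose a tool
    fn: Callable[..., Any]

REGISTRY: dict[str, Tool] = {}

def register(name: str, description: str):
    """Decorator that adds a function to the tool library."""
    def wrap(fn: Callable[..., Any]) -> Callable[..., Any]:
        REGISTRY[name] = Tool(name, description, fn)
        return fn
    return wrap

@register("run_benchmark",
          "Benchmark one inference config; returns throughput in tokens/s.")
def run_benchmark(tensor_parallel: int, max_batch_size: int) -> float:
    # Placeholder: a real tool would launch the server and measure.
    return 0.0

def call_tool(name: str, **kwargs: Any) -> Any:
    """Dispatch a tool call requested by the LLM, validating the name first."""
    if name not in REGISTRY:
        raise KeyError(f"unknown tool: {name}")
    return REGISTRY[name].fn(**kwargs)

# The agent loop would parse the LLM's tool-call request and dispatch it:
print(call_tool("run_benchmark", tensor_parallel=2, max_batch_size=32))
```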
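Early stopping can be implemented in several ways; one common pattern, sketched here under assumed thresholds rather than the tuner's actual values, is to benchmark a config in short stages and abort once its interim throughput falls well below the best config seen so far.

```python
# Sketch of early stopping for bad configs: benchmark in short stages and
# abort once the interim estimate is clearly below the incumbent. The
# cutoff and stage count are illustrative, not the tuner's actual values.
from typing import Callable, Optional

def evaluate_with_early_stop(
    measure_stage: Callable[[int], float],  # throughput (tokens/s) of stage i
    best_so_far: float,
    n_stages: int = 5,
    cutoff: float = 0.7,
) -> Optional[float]:
    """Return mean throughput, or None if the trial was stopped early."""
    samples: list[float] = []
    for i in range(n_stages):
        samples.append(measure_stage(i))
        running_mean = sum(samples) / len(samples)
        # Require at least two stages before judging, then stop if the
        # config is far below the best seen so far.
        if i >= 1 and running_mean < cutoff * best_so_far:
            return None  # bad config: don't spend the full budget on it
    return sum(samples) / len(samples)

# Usage with a dummy measurement that is clearly worse than the incumbent:
result = evaluate_with_early_stop(lambda i: 300.0, best_so_far=800.0)
print(result)  # None -> stopped early
```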
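For the prediction/reflection step, one plausible shape is to serialize the observation history into a prompt and ask the model to extrapolate the trend and critique its last choice before proposing the next config. The prompt wording and the `ask_llm` stub below are assumptions; the report does not show the actual prompts.

```python
# Illustrative sketch of the prediction/reflection step. The prompt wording
# and the ask_llm helper are assumptions, not the tuner's actual interface.
import json

def build_reflection_prompt(history: list[dict]) -> str:
    lines = [json.dumps(h) for h in history]
    return (
        "You are tuning LLM inference configs.\n"
        "Observations so far (config -> tokens/s):\n" + "\n".join(lines) + "\n"
        "1. Predict how throughput will trend if batch size keeps growing.\n"
        "2. Reflect: was the last config a good choice? Why or why not?\n"
        "3. Propose the next config as JSON."
    )

def ask_llm(prompt: str) -> str:
    raise NotImplementedError  # call whichever LLM endpoint the tuner uses

# Example history with made-up numbers, just to show the prompt shape:
history = [
    {"tensor_parallel": 2, "max_batch_size": 16, "tokens_per_s": 410.0},
    {"tensor_parallel": 2, "max_batch_size": 32, "tokens_per_s": 690.0},
]
print(build_reflection_prompt(history))
```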
## Next Week
- Summarize the advantages of the agentic tuner system and continue to optimize it.