Objectives
- Build an automatic LLM inference configuration tuner
Key Results
- [6/10] Build the agentic tuner system
- [10/10] Build the first version of the auto tuner system
- [2/10] Develop workload grouping methods
- [8/10] Survey the current state of parallelism config optimization
- [4/10] Understand the feasibility and challenges of automatically arranging the LLM inference compute graph
- [1/10] Define the IR for automatic optimization
- [5/10] Profile different parallelism setups with real traces and analyze their differences
Last Week
- [KR1] Updated the agentic AITuner to support the new trace benchmark, new vLLM flags, and an objective score (a sketch of such a score follows this list). 0a012bdd ~ 788da3d8
- [KR1] Surveyed related work: some works build an agent for LLM training / storage systems / ..., and one work uses Bayesian optimization (BO) for LLM inference config tuning (a BO search-loop sketch also follows).
- [misc] Prepared an open-source version of the new traces (thinking and coder) and updated the README.
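As a rough illustration of the objective score mentioned above, the sketch below collapses trace-benchmark metrics into a single scalar. The metric names, SLO thresholds, and penalty form are assumptions for illustration only, not the actual AITuner implementation.

```python
from dataclasses import dataclass

@dataclass
class BenchmarkResult:
    # Metrics assumed to come out of the trace benchmark; names are hypothetical.
    throughput_tok_s: float  # generated tokens per second
    p99_ttft_ms: float       # 99th-percentile time-to-first-token
    p99_tpot_ms: float       # 99th-percentile time-per-output-token

def objective_score(r: BenchmarkResult,
                    ttft_slo_ms: float = 500.0,
                    tpot_slo_ms: float = 50.0) -> float:
    """Higher is better: reward throughput, penalize SLO violations."""
    ttft_penalty = max(0.0, r.p99_ttft_ms / ttft_slo_ms - 1.0)
    tpot_penalty = max(0.0, r.p99_tpot_ms / tpot_slo_ms - 1.0)
    return r.throughput_tok_s / (1.0 + ttft_penalty + tpot_penalty)

# Example: 4200 tok/s, TTFT within SLO, TPOT 20% over SLO -> score is discounted.
print(objective_score(BenchmarkResult(4200.0, 420.0, 60.0)))
```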
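And a minimal sketch of what a BO baseline over vLLM flags could look like, using scikit-optimize. The flag ranges and the simulated benchmark function are assumptions (a real run would launch vLLM with these flags and replay the trace benchmark), so this only illustrates the search loop, not the surveyed work's method.

```python
from skopt import gp_minimize
from skopt.space import Integer, Real

# Search space over two vLLM engine flags; the ranges are assumptions.
space = [
    Integer(64, 1024, name="max_num_seqs"),           # --max-num-seqs
    Real(0.70, 0.95, name="gpu_memory_utilization"),  # --gpu-memory-utilization
]

def run_trace_benchmark(max_num_seqs: int, gpu_mem_util: float) -> float:
    """Stand-in for the real benchmark: in practice this would launch vLLM
    with the given flags, replay the trace, and return objective_score(...).
    Here it returns a synthetic score so the sketch runs end to end."""
    return 1000.0 - abs(max_num_seqs - 256) * 0.5 - abs(gpu_mem_util - 0.90) * 200.0

def negative_objective(params):
    max_num_seqs, gpu_mem_util = params
    # gp_minimize minimizes, so negate the higher-is-better score.
    return -run_trace_benchmark(int(max_num_seqs), float(gpu_mem_util))

result = gp_minimize(negative_objective, space, n_calls=20, random_state=0)
print("best flags:", result.x, "best score:", -result.fun)
```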
Next Week
- Optimize the agentic AITuner.
- Test SCOOT as one of the baselines.