obsidian/phd/weekly-report/260125.md
Objectives

  • Auto LLM inference config tuner

Key Results

  • [8/10] Build the agentic tuner system (see the sketch after this list)
  • [2/10] Paper outline
  • [10/10] Build the first version of the auto-tuner system
  • [2/10] Workload grouping methods
  • [8/10] Review the current state of parallelism config optimization
  • [4/10] Understand the possibilities/challenges of automatically arranging the LLM inference compute graph
  • [1/10] Define the IR for automatic optimization
  • [5/10] Profile different parallelism setups with real traces and analyze their differences
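
A minimal sketch of the config-search loop at the core of the auto tuner, assuming a small grid of candidate parallelism/batch settings scored by replaying a trace. The candidate space, the `profile_config` stub, and the throughput metric are illustrative assumptions, not the actual system.

```python
# Minimal sketch of an LLM inference config tuner: exhaustively search a
# candidate grid and keep the configuration with the best measured throughput.
# `profile_config` is a hypothetical placeholder for launching the engine and
# replaying a real trace.
import itertools
import random

CANDIDATES = {
    "tensor_parallel": [1, 2, 4, 8],
    "max_batch_size": [8, 16, 32, 64],
    "kv_cache_dtype": ["fp16", "fp8"],
}

def profile_config(config: dict) -> float:
    """Hypothetical profiler: would run the engine with `config` on a real
    trace; here it only returns a synthetic tokens/s number."""
    random.seed(hash(frozenset(config.items())) & 0xFFFF)
    return random.uniform(100.0, 2000.0)  # placeholder throughput

def tune() -> tuple[dict, float]:
    """Grid search over the candidate space, keeping the best-scoring config."""
    best_config, best_score = None, float("-inf")
    keys = list(CANDIDATES)
    for values in itertools.product(*(CANDIDATES[k] for k in keys)):
        config = dict(zip(keys, values))
        score = profile_config(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

if __name__ == "__main__":
    config, score = tune()
    print(f"best config: {config}, throughput: {score:.1f} tok/s")
```

The agentic version would replace the exhaustive grid with an LLM-driven proposer that picks the next candidate from profiling feedback; the loop structure stays the same.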

Last Week

Next Week

  • Compare against the configs used in Ali's production environment.