R1-2410725
Discussion
Updated Summary of Evaluation Results for AI/ML CSI Compression
From: Qualcomm
Summary
This document from Qualcomm presents comprehensive evaluation results for AI/ML-based CSI compression over the NR air interface. It contains 16 main observations across different test cases, covering SGCS (squared generalized cosine similarity) performance, FTP traffic, full-buffer scenarios, CSI feedback reduction, and localized models at varying complexity levels.
Position
As moderator, Qualcomm advocates comprehensive standardization of AI/ML-based CSI compression across multiple scenarios and configurations. It pushes for detailed performance characterization across payload sizes, traffic patterns, and model complexities, while ensuring multi-vendor interoperability through separate-training approaches, and it systematically documents performance gains to establish technical baselines for standardization.
Key proposals
- Observation 1 (Sec 2): AI/ML CSI compression Case 1 shows SGCS gains of 4.9-29.94% over the non-AI/ML benchmark across different payload sizes
- Observation 9 (Sec 2): Case 2 temporal domain AI/ML compression achieves median SGCS gains of 10.05% (medium payload) and 5.9% (large payload) for Layer 1 with 9-14 sources
- Observation 3 (Sec 2): For FTP traffic, Case 1 shows a 26% mean user-perceived throughput (UPT) gain at high resource utilization (RU ≥ 70%) with Max Rank 2
- Observation 17 (Sec 2): Case 2 FTP evaluation demonstrates mean UPT gains of 3-15% for Max Rank 1 and 6-29% for Max Rank 2 at high RU
- Observation 7 (Sec 2): AI/ML compression achieves a 92% CSI feedback reduction relative to the non-AI/ML benchmark for Case 1
- Observation on SGCS Case 3 (Sec 2): A mixed 80% indoor / 20% outdoor scenario shows 1.37-28% SGCS gains for Layer 1 at small payload
- Observation on full buffer Case 2 (Sec 2): Performance gains of 1-30% for Max Rank 2, with a median gain of 16.6% at small overhead
- Observation on multi-vendor training (Sec 2): Type 3 NW-first separate training shows a -0.86% to 0.1% performance difference vs. the joint-training baseline
- Observation on localized models, Option 1 (Sec 2): Localized models achieve 4.5-25% SGCS gains over the benchmark and a 1-11% improvement over global models
- Observation on localized models, Option 2 (Sec 2): The alternative localized approach shows a -2.65% to 6% performance range relative to global models
- Performance vs complexity trade-off (Sec 2): Combined complexity analysis across CSI generation and reconstruction parts for Cases 0, 2, and 3
- Case 4 SGCS performance (Sec 2): Mixed-scenario evaluation shows a 7.2% Layer 1 gain and up to a 104.1% Layer 4 gain at small payload
- Case 5 evaluation (Sec 2): Performance gains of 10.22-10.9% vs. the non-AI/ML benchmark and 1.7-6.3% vs. Case 0 compression
- CSI feedback reduction Case 2 (Sec 2): Comprehensive reduction analysis showing 67-92% reduction for different ranks and traffic scenarios
- FTP performance of localized models (Sec 2): Option 2 localized models show 0-1.2% performance gains, with complexity trade-offs
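Most of the observations above report SGCS, which measures how well a reconstructed precoder eigenvector matches the target, per layer and averaged over subbands. A minimal sketch of the metric, assuming per-subband complex eigenvectors as NumPy arrays (the function name and shapes are illustrative, not from the tdoc):

```python
import numpy as np

def sgcs(v: np.ndarray, v_hat: np.ndarray) -> float:
    """Squared generalized cosine similarity between a target eigenvector v
    and its reconstruction v_hat for one subband and one layer.

    Equals 1.0 for a perfect reconstruction (up to a complex scalar),
    and 0.0 when the vectors are orthogonal.
    """
    # np.vdot conjugates its first argument, giving the Hermitian inner product.
    num = np.abs(np.vdot(v, v_hat)) ** 2
    den = (np.linalg.norm(v) ** 2) * (np.linalg.norm(v_hat) ** 2)
    return float(num / den)
```

In the evaluations summarized here, such a per-subband value would be averaged over subbands, drops, and UEs before computing the percentage gain over the benchmark codebook.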