R1-2410425
discussion
Discussion and evaluation results on AI/ML for CSI Compression
Summary
This document from the Indian Institute of Technology Madras (IITM) presents detailed implementation approaches for the inter-vendor collaboration options for AI/ML-based CSI compression, specifically evaluating parameter exchange (Option 3a-1) against dataset exchange (Option 4-1). The document contains 11 observations covering inter-vendor compatibility implementations and simulation results.
Position
IITM advocates FOR detailed implementation of both Option 3a-1 (parameter exchange) and Option 4-1 (dataset exchange) with specific multi-step training procedures, supporting dataset sharing at lower payload sizes while showing that both methods are viable at higher payloads. They push FOR comprehensive evaluation across different channel conditions to demonstrate robustness, particularly highlighting the superiority of dataset exchange at lower CSI payload sizes.
Key proposals
- Observation 1 (Inter-vendor compatibility): Implement Option 3a-1 Alternative 1, in which the NW performs joint training and shares the encoder model/parameters together with the Target CSI with the UE; the UE trains a decoder and freezes it, then jointly trains its own encoder against the frozen decoder using its own dataset (a training sketch follows this list)
- Observation 2 (Inter-vendor compatibility): Implement Option 3a-1 Alternative 2, in which the UE directly trains its encoder using the CSI Feedback generated from the Target CSI and the transferred parameters (see the Alternative 2 sketch after this list)
- Observation 3 (Inter-vendor compatibility): Implement Option 4-1 Alternative 1, in which the NW shares a dataset containing Target CSI and CSI Feedback; the UE trains and freezes a decoder, then jointly trains its own encoder against the frozen decoder using its own dataset (see the Option 4-1 sketch after this list)
- Observation 4 (Inter-vendor compatibility): Implement Option 4-1 Alternative 2, in which the UE directly trains its encoder using the Target CSI and CSI Feedback shared by the NW (see the Alternative 2 sketch after this list)
- Observation 5 (Simulation results): For the same backbone but different model structures, sharing the dataset (Option 4-1) outperforms sharing the model/parameters (Option 3a-1) at lower CSI payload sizes
- Observation 6 (Simulation results): At higher CSI payload sizes, sharing the dataset (Option 4-1) and sharing the model/parameters (Option 3a-1) show similar performance
- Observation 8 (Simulation results): When the training and test datasets both use CDL-C with 30 ns delay spread (CDLC-30), performance is high (an evaluation-metric sketch follows the training sketches below)
- Observation 9 (Simulation results): Training on CDLC-30 and testing on CDLC-30 with a different drop maintains high performance
- Observation 10 (Simulation results): Training on CDLC-30 and testing on CDLC-300, i.e., with a different delay spread (300 ns), yields the lowest performance among the tested scenarios
- Observation 11 (Simulation results): Training on CDLC-30 and testing on CDLA-30, i.e., with a different channel model (CDL-A), shows relatively high performance
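The multi-step procedure of Observation 1 (Option 3a-1 Alternative 1) can be made concrete with a minimal PyTorch sketch. Everything below is illustrative, not from the contribution: the MLP architecture, the 256-dim CSI / 64-dim feedback sizes, the MSE loss, the optimizer settings, and the random stand-in tensors are all assumptions; only the ordering of the training steps follows the described procedure.

```python
import torch
import torch.nn as nn

# Hypothetical sizes: 256-dim flattened Target CSI, 64-dim CSI feedback.
CSI_DIM, LATENT_DIM = 256, 64

def mlp(in_dim, out_dim, hidden=512):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))

# Step 0 (NW side, not shown): the NW jointly trains an encoder/decoder
# pair, then transfers the encoder parameters and the Target CSI to the UE.
nw_encoder = mlp(CSI_DIM, LATENT_DIM)      # stands in for the received encoder
target_csi = torch.randn(10_000, CSI_DIM)  # stands in for the shared Target CSI

# Step 1 (UE side): train a UE-side decoder against the frozen NW encoder,
# reconstructing the Target CSI from the CSI feedback it produces.
nw_encoder.requires_grad_(False)
ue_decoder = mlp(LATENT_DIM, CSI_DIM)
opt = torch.optim.Adam(ue_decoder.parameters(), lr=1e-3)
for _ in range(100):  # epochs/mini-batching elided for brevity
    feedback = nw_encoder(target_csi)
    loss = nn.functional.mse_loss(ue_decoder(feedback), target_csi)
    opt.zero_grad(); loss.backward(); opt.step()

# Step 2 (UE side): freeze the decoder, then jointly train the UE's own
# encoder against it using the UE's own dataset.
ue_decoder.requires_grad_(False)
ue_encoder = mlp(CSI_DIM, LATENT_DIM)
ue_data = torch.randn(10_000, CSI_DIM)     # the UE's own dataset (stand-in)
opt = torch.optim.Adam(ue_encoder.parameters(), lr=1e-3)
for _ in range(100):
    loss = nn.functional.mse_loss(ue_decoder(ue_encoder(ue_data)), ue_data)
    opt.zero_grad(); loss.backward(); opt.step()
```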
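For Observation 3 (Option 4-1 Alternative 1), only the first UE-side step differs: the decoder is trained on the shared (Target CSI, CSI Feedback) pairs, and no model or parameters cross the vendor boundary. The same caveats apply; sizes, architecture, and data are stand-ins.

```python
import torch
import torch.nn as nn

CSI_DIM, LATENT_DIM = 256, 64  # same hypothetical sizes as above

# Step 0: the NW shares a dataset of (Target CSI, CSI Feedback) pairs;
# random tensors stand in for the shared dataset here.
shared_target_csi = torch.randn(10_000, CSI_DIM)
shared_feedback = torch.randn(10_000, LATENT_DIM)

# Step 1 (UE side): train a decoder to map the shared CSI Feedback back
# to the shared Target CSI.
ue_decoder = nn.Sequential(nn.Linear(LATENT_DIM, 512), nn.ReLU(),
                           nn.Linear(512, CSI_DIM))
opt = torch.optim.Adam(ue_decoder.parameters(), lr=1e-3)
for _ in range(100):
    loss = nn.functional.mse_loss(ue_decoder(shared_feedback), shared_target_csi)
    opt.zero_grad(); loss.backward(); opt.step()

# Step 2 (UE side): as in the previous sketch -- freeze ue_decoder and
# jointly train the UE's own encoder against it on the UE's own dataset.
```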
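Observations 2 and 4 (the Alternative 2 variants) both reduce, on the UE side, to supervised regression from Target CSI to CSI Feedback; under Option 3a-1 the labels are generated by running the transferred parameters on the Target CSI, while under Option 4-1 they are read directly from the shared dataset. The sketch below assumes that reading; the architecture, loss, and stand-in data are again hypothetical.

```python
import torch
import torch.nn as nn

CSI_DIM, LATENT_DIM = 256, 64  # same hypothetical sizes as above

# Stand-in training pairs: under Option 3a-1 Alt 2 the feedback labels
# would come from the transferred parameters applied to the Target CSI;
# under Option 4-1 Alt 2 they come directly from the shared dataset.
target_csi = torch.randn(10_000, CSI_DIM)
feedback_labels = torch.randn(10_000, LATENT_DIM)

# Direct encoder training: regress Target CSI onto CSI Feedback.
ue_encoder = nn.Sequential(nn.Linear(CSI_DIM, 512), nn.ReLU(),
                           nn.Linear(512, LATENT_DIM))
opt = torch.optim.Adam(ue_encoder.parameters(), lr=1e-3)
for _ in range(100):
    loss = nn.functional.mse_loss(ue_encoder(target_csi), feedback_labels)
    opt.zero_grad(); loss.backward(); opt.step()
```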
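The summary does not state which metric underlies the "high"/"lowest" performance claims in Observations 8 through 11. A common choice in 3GPP RAN1 evaluations of CSI compression is the squared generalized cosine similarity (SGCS) between true and reconstructed eigenvectors; the sketch below assumes that metric.

```python
import torch

def sgcs(v_true: torch.Tensor, v_hat: torch.Tensor) -> torch.Tensor:
    # Squared generalized cosine similarity, averaged over the batch.
    # v_true, v_hat: (batch, dim) eigenvectors, complex or real.
    num = (v_true.conj() * v_hat).sum(dim=-1).abs() ** 2
    den = (v_true.abs() ** 2).sum(dim=-1) * (v_hat.abs() ** 2).sum(dim=-1)
    return (num / den).mean()

# Sanity checks: a perfect reconstruction scores 1, and the metric is
# invariant to a common phase rotation of the reconstructed vector.
v = torch.randn(4, 32, dtype=torch.complex64)
print(sgcs(v, v).item())       # ~1.0
print(sgcs(v, 1j * v).item())  # ~1.0
```

Scoring a model trained on CDLC-30 against test sets drawn from a new CDLC-30 drop, from CDLC-300, and from CDLA-30 with such a metric would reproduce the cross-scenario comparison behind Observations 8 through 11.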