R1-2409988
Discussion
AI/ML for CSI Compression
From Nokia
Summary
Nokia's contribution analyzes AI/ML for CSI compression, focusing on inter-vendor training collaboration across three directions (A, B, C), and proposes advancing Direction A while deprioritizing Directions B and C. The document presents 6 proposals and 6 observations covering dataset transfer optimization, phase normalization impacts, and recurrent quantization benefits.
Position
Nokia strongly advocates for Direction A (parameter/dataset sharing for UE-side offline engineering) while pushing against Directions B and C. They argue that Direction B creates unresolvable proprietary-information disclosure issues and implementation complexity, while Direction C fails to deliver its main benefit of avoiding inter-vendor collaboration, because performance degradation forces remedial measures that reintroduce it. Nokia positions offline transfer mechanisms as superior to over-the-air transfers and emphasizes the critical role of phase normalization in dataset-mismatch scenarios.
Key proposals and observations
- Proposal 1 (Sec 2.2): For further study of Option 4-1, prioritize offline transfer of the dataset from the NW-side to the UE-side
- Proposal 2 (Sec 2.2): For further study of Option 3a-1, prioritize offline transfer of the encoder and/or dataset from the NW-side to the UE-side
- Proposal 3 (Sec 2.2): Further study the impact of phase normalization on dataset mismatch and mitigation methods
- Proposal 4 (Sec 2.2): While the UE-side can supplement the target CSI in Option 3a-1, target CSI should also be provided by the NW-side
- Proposal 5 (Sec 2.3): Deprioritize Direction B due to issues with matching the downloaded model to various UE implementations and concerns over the disclosure of proprietary information
- Proposal 6 (Sec 2.4): Deprioritize Direction C if its main benefit, a low or no level of required inter-vendor training collaboration effort, cannot be achieved due to potential performance degradation and the corresponding remedial measures
- Observation 1 (Sec 2.2): In Option 4-1, dataset transfer over-the-air to the UE is redundant if offline training occurs at the UE-side OTT server
- Observation 3 (Sec 2.2): For the dataset distribution mismatch conditions studied, only phase normalization significantly affected model performance
- Observation 4 (Sec 2.2): Combined dataset training mitigates the performance loss due to dataset distribution mismatch between the UE and gNB
- Observation 5 (Sec 2.5): An explicitly designed recurrent quantizer / inverse quantizer shows significant performance advantages over the conventional quantization scheme when used with the SF encoder / decoder
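To illustrate why phase normalization matters for dataset mismatch (Proposal 3, Observation 3): CSI eigenvectors are only defined up to a global phase, so NW-side and UE-side datasets that use different phase conventions present the same channel to a model as two different distributions. A minimal sketch, assuming the common convention of rotating each vector so a reference element has zero phase (the contribution does not specify Nokia's exact normalization; `phase_normalize` and `ref_index` are illustrative names):

```python
import numpy as np

def phase_normalize(v, ref_index=0):
    """Rotate an eigenvector so its reference element has zero phase.

    v and v * exp(j*theta) represent the same precoder; normalizing
    removes this ambiguity so both datasets map to one convention.
    """
    phase = np.angle(v[ref_index])
    return v * np.exp(-1j * phase)

rng = np.random.default_rng(0)

# One eigenvector, then the "same" vector under a different phase convention
v = rng.standard_normal(32) + 1j * rng.standard_normal(32)
v /= np.linalg.norm(v)
v_rotated = v * np.exp(1j * rng.uniform(0, 2 * np.pi))

# Without normalization the samples differ; with it they coincide
a = phase_normalize(v)
b = phase_normalize(v_rotated)
print(np.allclose(a, b))  # True
```

Without such a step, an encoder trained on one convention sees out-of-distribution inputs from the other, which is consistent with Observation 3's finding that phase normalization was the only studied mismatch condition with a significant performance impact.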
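Observation 4's mitigation, combined dataset training, amounts to training on the union of the mismatched datasets so the model sees both distributions. A minimal sketch under that reading (the contribution does not detail the mixing procedure; `combined_training_set` and the equal-weight concatenation are assumptions):

```python
import numpy as np

def combined_training_set(nw_dataset, ue_dataset, rng=None):
    """Merge NW-side and UE-side CSI samples into one training set.

    Shuffling after concatenation ensures each training batch mixes
    samples from both distributions.
    """
    if rng is None:
        rng = np.random.default_rng()
    combined = np.concatenate([nw_dataset, ue_dataset], axis=0)
    rng.shuffle(combined)  # in-place shuffle along the sample axis
    return combined

nw = np.zeros((100, 32))  # placeholder NW-side samples
ue = np.ones((50, 32))    # placeholder UE-side samples
train = combined_training_set(nw, ue)
print(train.shape)  # (150, 32)
```

The trade-off is that combined training requires one side's data to reach the other, which is why Proposals 1 and 2 tie the mitigation back to prioritizing offline dataset transfer from the NW-side to the UE-side.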