R1-2500203: Discussion on AI/ML-based CSI prediction

From CATT
Status: not treated
WI: NR_AIML_air
Agenda: 9.1.3
Release: Rel-19
Source: 3gpp.org
CATT's prior position on 9.1.3 at RAN1#119
Advocates relaxing consistency requirements for network-side antenna configurations, such as tilt angles and TXRU mapping, citing simulations showing that these parameters have negligible impact on prediction performance. Pushes for performance-monitoring-based handling of other consistency factors, such as interference distributions, which cannot be captured through associated IDs. Opposes overly restrictive consistency requirements that would unnecessarily complicate CSI prediction implementations.

Summary

CATT presents simulation results demonstrating that antenna tilt angles and TXRU mappings have negligible impact on UE-side CSI prediction performance, concluding that strict consistency between training and inference for these parameters is unnecessary. The document proposes introducing a new processing unit type for AI/ML inference, distinguishing AI/ML reports from legacy ones, and deprioritizing Type 2 performance monitoring due to overhead and accuracy concerns, while supporting SGCS (squared generalized cosine similarity) as the metric for Type 1 and Type 3 monitoring.

Position

CATT concludes that consistency between training and inference regarding antenna downtilt and TXRU mapping is not required, citing simulation results showing negligible SGCS degradation. They propose introducing a new processing unit type, or an enhanced CPU (CSI processing unit), for AI/ML-based CSI processing, counted separately from the legacy CPU pool but shared among CSI-related AI/ML functionalities. CATT supports distinguishing AI/ML CSI reports from legacy reports via a new report quantity or identifier. For performance monitoring, they prefer SGCS as the metric for Type 1 and Type 3 monitoring and explicitly deprioritize Type 2 monitoring due to its high overhead and quantization errors. They propose reusing the CSI framework for monitoring configuration, allowing either reuse of inference resources or dedicated monitoring resources.
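As a rough illustration of the SGCS metric referenced above, the sketch below computes the per-subband squared generalized cosine similarity between ground-truth and predicted channel eigenvectors and averages it. The function name, array shapes, and averaging convention are illustrative assumptions, not taken from the contribution; 3GPP evaluations may define the averaging differently.

```python
import numpy as np

def sgcs(v_true, v_pred):
    """Squared generalized cosine similarity, averaged over subbands.

    v_true, v_pred: complex arrays of shape (num_subbands, num_ports),
    holding ground-truth and predicted channel eigenvectors.
    (Shapes and averaging are illustrative assumptions.)
    """
    # |v_true^H v_pred|^2 per subband
    num = np.abs(np.sum(np.conj(v_true) * v_pred, axis=-1)) ** 2
    # ||v_true||^2 * ||v_pred||^2 per subband
    den = (np.linalg.norm(v_true, axis=-1) ** 2
           * np.linalg.norm(v_pred, axis=-1) ** 2)
    return float(np.mean(num / den))

rng = np.random.default_rng(0)
v = rng.standard_normal((13, 32)) + 1j * rng.standard_normal((13, 32))
print(sgcs(v, v))  # identical vectors give a value of (approximately) 1.0
```

A value near 1 indicates the predicted eigenvectors are well aligned with the ground truth, which is what a Type 1 or Type 3 monitoring procedure would check against a threshold.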
