R1-2409787
Discussion
Discussion on AI/ML beam management
From Apple
Summary
Apple presents a comprehensive framework for AI/ML-based beam management in Rel-19, focusing on model lifecycle management, beam-reporting overhead control, and consistency between training and inference. The document contains 13 proposals and 4 observations addressing UE capabilities, data collection mechanisms, UCI feedback design, and CPU occupancy rules.
Position
Apple proposes separating UE capabilities for training data collection from those for inference/monitoring data collection, since a UE capable of inference may not support training data collection. To ensure consistency between training and inference, they require the associated ID in assistance information to be PLMN-unique, assigned and managed by the core network or O&M, and embedded in the reference signal configuration. They propose leveraging the MDT framework for NW-side model training data collection and supporting L1 beam reporting for performance monitoring. For overhead control, they propose two-part beam reporting that uses a bitmap to omit the RSRPs of weak beams and reports differential RSRPs for the un-omitted beams, with the strongest beam index indicated across measurement occasions for BM Case-2. They argue that the effective time for beam reporting should be referenced to the CSI measurement source rather than the beam report time, to accommodate the fixed time gaps assumed by AI/ML models. Finally, they present two options for CPU occupancy rules: sharing processing resources with conventional CSI, or reporting a separate UE capability for AI/ML BM.
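The two-part overhead-reduction scheme described above (strongest beam reported with its absolute RSRP, weak beams omitted via a bitmap, remaining beams reported as differential RSRPs) can be sketched as follows. This is an illustrative reconstruction, not code from the contribution or the 3GPP spec: the omission threshold, the 1 dB differential step, and the field names are assumptions.

```python
def build_beam_report(rsrp_dbm, omit_below_diff_db=30.0):
    """Sketch of a two-part beam report for BM Case-1 (illustrative only).

    rsrp_dbm: measured RSRP values (dBm), one per beam in set A.
    omit_below_diff_db: assumed threshold; beams more than this many dB
    below the strongest beam are omitted from the report.
    """
    strongest_idx = max(range(len(rsrp_dbm)), key=lambda i: rsrp_dbm[i])
    strongest = rsrp_dbm[strongest_idx]

    # Bitmap: 1 = beam reported (un-omitted), 0 = omitted as too weak.
    bitmap = [1 if strongest - r <= omit_below_diff_db else 0
              for r in rsrp_dbm]

    # Differential RSRPs relative to the strongest beam, for un-omitted
    # beams other than the strongest one (1 dB quantization step assumed).
    diffs = [round(strongest - r) for i, r in enumerate(rsrp_dbm)
             if bitmap[i] and i != strongest_idx]

    return {
        "strongest_beam_index": strongest_idx,
        "strongest_rsrp_dbm": strongest,
        "bitmap": bitmap,
        "differential_rsrp_db": diffs,
        "num_unomitted": sum(bitmap),
    }
```

The overhead saving comes from the bitmap: an omitted beam costs one bit instead of a full differential RSRP field, and the count of un-omitted beams lets the decoder size the variable-length part of the UCI payload.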
Key proposals
- Proposal 3-1 (Sec 3): NW-side and UE-side model UE capability feature groups should include separate capabilities for data collection for training and data collection for inference/monitoring.
- Proposal 3-2 (Sec 3.1): The MDT framework can be leveraged for training data collection for NW-side models.
- Proposal 3-3 (Sec 3.1): L1 beam reporting for performance monitoring for NW-side models is supported.
- Proposal 3-4 (Sec 3.2): Data collection for UE-side model training can be initiated/triggered by configuration from NW or requested from UE and configured by NW at its discretion.
- Proposal 4-1 (Sec 4.4): To control feedback overhead, beam reporting for BM Case-1 consists of the strongest beam index, the strongest beam's RSRP, a bitmap of un-omitted beams, differential RSRPs, and an indication of the number of un-omitted beams.
- Proposal 4-2 (Sec 4.4): For the BM Case-2 NW-side model, set B beam reporting includes the strongest beam index across measurement occasions, a bitmap of un-omitted beams, the strongest beam's RSRP, differential RSRPs, and the count of un-omitted beams.
- Proposal 4-3 (Sec 4.4): For the UE-side model in BM Case-2, beam reporting consists of an indication of a subset of top beams across time instances, plus a bitmap indicating, for each future time instance, which beams of that subset are selected (bitmap size equal to the subset cardinality times the number of future time instances).
- Proposal 5-1 (Sec 5): For AI/ML beam management, the effective time for beam reporting is referenced to the CSI measurement source instead of the beam report time.
- Proposal 5-2 (Sec 5): The timeRestrictionForChannelMeasurements IE, or a new IE, can be set to a numerical value so that the NW and UE share the same understanding of Tx/Rx beam usage.
- Proposal 6-1 (Sec 6): The associated ID in assistance information must be PLMN-unique, with the core network or O&M involved in assigning/managing it.
- Proposal 6-2 (Sec 6): The associated ID/assistance information, if assigned by higher layers, is embedded as part of the reference signal configuration.
- Proposal 6-5 (Sec 6): The set A RS configuration with an associated ID used during inference reporting and performance monitoring should have a QCL Type-D relationship with the set A RS configuration carrying the same associated ID during training.
- Proposal 7 (Sec 7): CPU occupancy rules for AI/ML BM inference support two options: reusing the same rule as conventional CSI/BM reporting, or reporting a separate UE capability for AI/ML BM.