R1-2500556
discussion
Discussion on AIML positioning
From TCL
Summary
TCL presents 11 proposals and 2 observations regarding specification support for AI/ML-based positioning in NR, focusing on performance monitoring, training data collection, and consistency between training and inference. The document argues for reducing signaling overhead in label-based monitoring by preferring Option A-2 and performing label-free monitoring at the inference entity, while proposing AI-specific reference signal configurations to ensure model consistency.
Position
TCL prefers Option A-2 for label-based model performance monitoring in Case 1 to minimize overhead, arguing that the target UE can autonomously derive ground-truth labels from position calculation assistance data. TCL proposes that monitoring outcomes be signaled only upon performance deterioration and recommends that label-free monitoring metrics be calculated at the model inference entity. For training data collection in Case 3b, TCL proposes down-selecting between UE-side validation of data association via implementation and LMF-side validation using immobility duration information. To ensure consistency between training and inference, TCL proposes introducing AI-specific reference signal configurations for PRS and SRS, allowing the UE to distinguish AI-based from non-AI-based measurements. Regarding sensitive location data (info #7), TCL prefers Alternative 1, in which geographical coordinates are provided implicitly via an associated ID, and proposes down-selecting whether this ID maps to a set of TRP coordinates or to a single TRP coordinate.
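The event-triggered reporting behavior described above (a monitoring outcome in [0, 1], signaled to the LMF only on deterioration) can be sketched as follows. This is a minimal illustration, not text from the contribution: the error-to-score mapping, the threshold values, and the function names are all hypothetical.

```python
def monitoring_outcome(positioning_error: float, error_budget: float) -> float:
    """Map a label-based monitoring metric (e.g., horizontal positioning
    error in metres) to a performance score in [0, 1].
    Hypothetical mapping chosen for illustration only."""
    return max(0.0, min(1.0, 1.0 - positioning_error / error_budget))

def maybe_report(outcome: float, deterioration_threshold: float = 0.5):
    """Per the event-triggered scheme, signal the outcome to the LMF only
    when performance has deteriorated; return None when no report is sent,
    saving signaling overhead."""
    if outcome < deterioration_threshold:
        return {"report": True, "outcome": outcome}
    return None
```

Under this sketch, `maybe_report(monitoring_outcome(3.0, 2.0))` produces a report (score clamped to 0.0), while an error within budget produces no uplink signaling at all, which is the overhead saving Proposal 3 targets.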
Key proposals
- Proposal 1 (Performance monitoring): Prefers Option A-2 for label-based model monitoring metric calculation in Case 1 to reduce data transfer overhead, as the UE can autonomously obtain measurement and label data from position calculation assistance data.
- Proposal 2 (Performance monitoring): Proposes that the monitoring outcome sent from the target UE to LMF can be a Boolean value or a value between 0 and 1 to indicate model performance levels.
- Proposal 3 (Performance monitoring): Suggests that the monitoring outcome can be signaled by the UE only when model performance deteriorates to save signaling overhead.
- Proposal 4 (Performance monitoring): Recommends performing monitoring metric calculation at the model inference entity for label-free model monitoring to reduce signaling overhead.
- Proposal 5 (Performance monitoring): States that the LMF is responsible for functionality management decision making, such as activation, deactivation, and fallback.
- Proposal 6 (Performance monitoring): Proposes further discussion on whether monitoring decision making is performed at the LMF or at the entity performing monitoring metric calculation (UE or NG-RAN) for Case 1 and Case 3a.
- Proposal 7 (Training data collection): Proposes discussing assistance information for training data association in Case 3b, where models are trained at the LMF and Part B is generated by the UE.
- Proposal 8 (Training data collection): Proposes down-selecting the method for training data association: either UE validates association by implementation or LMF validates using immobility duration information.
- Proposal 9 (Consistency between training and inference): Proposes introducing AI-specific or AI-model-specific reference signal configurations (PRS and SRS) indicated to the UE, enabling the UE to apply the corresponding specified model or to distinguish between transmission methods.
- Proposal 10 (Consistency between training and inference): Prefers Alternative 1 for the indication of info #7 (geographical coordinates of TRPs) from legacy UE-based DL-TDOA, where info #7 is provided implicitly via an associated ID.
- Proposal 11 (Consistency between training and inference): Proposes down-selecting the format of the associated ID: either one ID corresponds to a set of geographical coordinates of TRPs served by the gNB, or one ID corresponds to the coordinate of one TRP.
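The LMF-side branch of Proposal 8 can be illustrated with a small sketch: the LMF treats a measurement (Part A) and a UE-provided label (Part B) as validly associated only if both timestamps fall inside the same UE-reported immobility window. The window semantics, parameter names, and timestamp representation are assumptions for illustration; the contribution leaves the exact mechanism for down-selection.

```python
def lmf_validate_association(meas_time: float, label_time: float,
                             immobility_start: float,
                             immobility_duration: float) -> bool:
    """LMF-side validation of training data association (Case 3b).
    The UE position was static throughout the immobility window, so any
    measurement and label captured inside the same window can be paired."""
    window_end = immobility_start + immobility_duration
    return (immobility_start <= meas_time <= window_end
            and immobility_start <= label_time <= window_end)
```

For example, a measurement at t=1.0 s pairs with a label at t=2.0 s inside a 5 s window starting at t=0, but not with a label reported after the window ends, since the UE may have moved in between.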