R1-2508366
discussion
AI/ML in 6GR interface
From Kyocera
Summary
Kyocera provides 6 observations and 24 proposals across 3 main sections, totaling 30 numbered items. The contribution focuses on down-selecting the 6GR AI/ML study to at most 4 new use cases, prioritizing one-sided models, and defining efficiency metrics for UE-side AI/ML solutions.
Position
Kyocera proposes down-selecting the 6GR AI/ML study to at most 4 new use cases and giving high priority to one-sided AI/ML model use cases, arguing that two-sided models compound deployment challenges and risk redundancy with the Release 20 NR AI/ML studies. The company prioritizes studying DM-RS overhead reduction with neural receivers, low overhead CSI-RS with AI/ML, low overhead SRS, and inter-cell beam management. It makes a technical case for studying DM-RS overhead reduction early so that the results can directly shape the design of DM-RS configurations and signalling for PDSCH/PUSCH, and proposes that proponents of low overhead CSI-RS with AI/ML address practical label collection conditions, including hybrid strategies that combine high-SNR full-port CSI-RS feedback, filtering/averaging, and confidence-weighted training. Kyocera further proposes that UE-side AI/ML solutions prioritize compact architectures and that proponents report model efficiency metrics (parameter count, memory footprint, FLOPs per inference, inference time, energy per inference, and link-level or system-level throughput) to substantiate claimed performance gains without disclosing proprietary details.
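The confidence-weighted training idea mentioned above can be illustrated with a minimal sketch. This is not Kyocera's method: the SNR-based sigmoid weighting rule, the 10 dB threshold, and all variable names are assumptions chosen for illustration; the only point carried over from the contribution is that labels collected under poor conditions should contribute less to the training objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dataset: CSI labels collected at varying measurement SNR.
n = 8
snr_db = rng.uniform(-5.0, 25.0, size=n)             # per-sample SNR
labels = rng.normal(size=(n, 4))                     # collected CSI labels
preds = labels + rng.normal(scale=0.1, size=(n, 4))  # model outputs

# Assumed confidence rule: sigmoid of SNR around a 10 dB threshold, so
# high-SNR full-port measurements dominate the training objective.
conf = 1.0 / (1.0 + np.exp(-(snr_db - 10.0) / 3.0))

# Confidence-weighted MSE: low-quality labels contribute less.
per_sample_mse = np.mean((preds - labels) ** 2, axis=1)
weighted_loss = np.sum(conf * per_sample_mse) / np.sum(conf)

print(float(weighted_loss))
```

In a real training loop this weighted loss would replace the plain MSE objective; the same weighting extends naturally to filtered/averaged labels by deriving `conf` from the post-filtering error variance instead of the raw SNR.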
Key proposals
- Proposal 1 (Sec 2 intro): RAN1 should consider the following aspects during the 6GR study: Down select the 6GR study to up to 4 new use cases. From the AI/ML model type perspective, one-sided AI/ML model use cases should be high priority.
- Proposal 4 (Sec 2.2): Proponents for low overhead CSI-RS with AI/ML should consider practical conditions for collecting the AI/ML model labels.
- Observation 3 (Sec 2.1): Studying DM-RS overhead reduction at the beginning of Rel-20 may directly impact the design of DM-RS configurations and signalling for PDSCH/PUSCH.
- Proposal 5 (Sec 2.4): UE-side AI/ML solutions should prioritize compact architectures that preserve task performance while minimizing latency and energy.
- Proposal 6 (Sec 2.4): Proponents of UE-side AI/ML use cases should provide model efficiency metrics without disclosing proprietary details, including core complexity and runtime metrics to substantiate the claimed performance gains under realistic UE conditions.
- Proposal 7 (Sec 2.4): Suggested metrics for model complexity and energy consumption calculation:
  - Proposal 8 (Sec 2.4): Model size: parameter count and memory footprint.
  - Proposal 9 (Sec 2.4): Computational cost: FLOPs per inference for the target input configuration.
  - Proposal 10 (Sec 2.4): Runtime behavior: measured inference time on representative UE hardware and software stacks.
  - Proposal 11 (Sec 2.4): Energy efficiency: energy per inference and, where useful, energy per FLOP; report the measurement setup and conditions.
  - Proposal 12 (Sec 2.4): System impact: link-level or system-level throughput under the proposed deployment profile to quantify cost-benefit relative to baseline and lightweight AI/ML methods.
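The static metrics in Proposals 8-9 (parameter count, memory footprint, FLOPs per inference) can be derived directly from a model's architecture. The sketch below does this for a hypothetical small fully-connected model; the layer sizes, the fp16 storage assumption, and the `mlp_metrics` helper are illustrative choices, not values from the contribution. Runtime and energy metrics (Proposals 10-11) would instead require measurement on target hardware.

```python
def mlp_metrics(layer_sizes, bytes_per_param=2):
    """Static complexity metrics for a fully-connected model.

    layer_sizes: e.g. [64, 128, 128, 32] (input, hidden..., output).
    bytes_per_param=2 assumes fp16 weight storage.
    """
    params = 0
    flops = 0
    for fan_in, fan_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        params += fan_in * fan_out + fan_out  # weights + biases
        flops += 2 * fan_in * fan_out         # one multiply-accumulate = 2 FLOPs
    return {
        "param_count": params,
        "memory_bytes": params * bytes_per_param,
        "flops_per_inference": flops,
    }

m = mlp_metrics([64, 128, 128, 32])
print(m)  # 28960 params, 57920 bytes, 57344 FLOPs for these assumed sizes
```

Reporting these three numbers alongside measured inference time and energy per inference would satisfy the disclosure Kyocera asks for without revealing the model's weights or internal design.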