
RAN4 #108 Meeting FS_NR_AIML_air Contribution Summary

R18 NR air interface AI/ML, two-sided model

Meeting Agenda

Meeting Agenda for #108 FS_NR_AIML_air

8.21 Study on Artificial Intelligence (AI)/Machine Learning (ML) for NR air interface

8.21.1 General and work plan

8.21.2 Specific issues related to use case for AI/ML

8.21.3 Interoperability and testability aspect

8.21.4 Moderator summary and conclusions

Meeting schedule for #108 FS_NR_AIML_air

RAN4 Main session, 11:00-13:00, August 23

RAN4 Main Ad hoc session, 18:20-19:20, August 25

General and work plan sub-topics summary

  1. For Requirements for data collection

    •Option 1: RAN4 to study requirements for data collection (e.g. accuracy) especially for training data

    •Option 2: RAN4 to study requirements for data collection depending on outcome of other groups.

    •Option 3: RAN4 should not study requirements for data collection (in particular for training)

  2. For Handling of generalization - robustness: should RAN4 requirements/tests ensure that performance is maintained across different scenarios (i.e., that the AI/ML model maintains its performance level for inputs "unseen" in training)?

  3. For Handling of generalization - dynamically changing environment: should RAN4 study requirements/tests for this case?

  4. For AI/ML model complexity,

    •Option 1: KPIs related to model computation complexity should be considered (actual KPIs can be further discussed: FLOPs, # of parameters, etc.); a parameter/FLOP counting sketch follows this list.

    •Option 2: Model complexity should be set as a side condition for requirements.

  5. For Requirements for LCM,

    •Option 1: Wait for progress in other working groups before further discussing any LCM related topics

    •Option 2: Study multi-sample / multi-user involved performance evaluation.

    •Option 3: Study requirements definition for dynamically changing scenarios (accuracy and latency of monitoring)

    •Option 4: No need to study anything else

  6. What are RAN4's testing goals?

    •Option 1: The testing goal is to verify whether a specific AI/ML model can be operated in a proper way.

    •Option 2: The testing goal is to verify whether the performance gain of AI/ML model can be achieved for a static scenario/configuration.

  7. Is there a need to study a framework to enable post-deployment tests for model updates and/or drift validation (and possibly other use cases)?

  8. Should overhead be considered when formulating performance requirements and comparing with legacy performance?

  9. For Encoder/decoder terminology for the two-sided model, is there a need for a reference encoder/decoder definition?
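To make the complexity KPIs mentioned in item 4 above more concrete, the following is a minimal sketch (plain Python, with purely illustrative layer sizes and names, none of which come from the contributions) of how parameter count and FLOPs could be tallied for a simple fully connected model.

```python
# Minimal sketch: counting parameters and FLOPs for a hypothetical
# fully connected AI/ML model (all layer sizes are illustrative only).

def dense_layer_stats(in_dim: int, out_dim: int, bias: bool = True):
    """Return (parameters, FLOPs) for one dense layer.

    Parameters: in_dim * out_dim weights plus an optional bias of size out_dim.
    FLOPs counted as 2 * in_dim * out_dim (one multiply and one add per weight).
    """
    params = in_dim * out_dim + (out_dim if bias else 0)
    flops = 2 * in_dim * out_dim
    return params, flops

def model_stats(layer_dims):
    """Accumulate stats over consecutive dense layers, e.g. [256, 128, 64]."""
    total_params = total_flops = 0
    for in_dim, out_dim in zip(layer_dims[:-1], layer_dims[1:]):
        p, f = dense_layer_stats(in_dim, out_dim)
        total_params += p
        total_flops += f
    return total_params, total_flops

if __name__ == "__main__":
    # Hypothetical encoder: 256-dimensional input compressed through two layers.
    params, flops = model_stats([256, 128, 64])
    print(f"parameters: {params}, FLOPs per inference: {flops}")
```

Whether such raw counts become KPIs in their own right (Option 1) or are only fixed as a side condition for requirements (Option 2) is exactly the open question in item 4.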

Specific issues related to use case for AI/ML sub-topics summary

  1. For Metrics/KPIs for CSI requirements/tests,

    •Option 1: Only use throughput (absolute or relative)

    •Option 2: Use throughput and other intermediate metrics/KPIs (SGCS, NMSE, etc.).

    •Option 3: Use throughput and overhead

  2. For Metrics/KPIs for Beam prediction requirements/tests,

    •Option 1: RSRP accuracy

    •Option 2: beam prediction accuracy: Top-1 (%), Top-K (%)

    •Option 3: the maximum RSRP among the top-K predicted beams is larger than the RSRP of the strongest beam minus x dB, where x > the relative measurement accuracy requirement

    •Option 4: overhead/latency reduction

    •Option 5: combinations of above options

  3. For Metrics/KPIs for positioning requirements/tests,

    •Option 1: direct positioning accuracy (ground truth vs. reported)

    •Option 2: RSTD/UE Rx-Tx accuracy

    •Option 3: CIR/PDP/RSRP accuracy

    •Option 4: LOS/NLOS

  4. Is there a need to develop requirements for model delivery/update/transfer?

  5. Study the feasibility of intermediate KPIs for CSI requirements or LCM:

    •How such metrics (SGCS, NMSE, etc.) can be accessed, and how to set a requirement on them or compare them to the ground truth; a computation sketch of these candidate metrics follows this list

    •How the ground truth can be established in a testing environment

  6. Is there a need to study the possibility of defining accuracy requirements for measurement data or labelled data?

  7. Is there a need to study a framework for beam prediction requirements on the network side?
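All of the candidate metrics listed in items 1-3 above are straightforward to compute once a ground truth is available in the test environment; the sketch below (numpy, with illustrative shapes and hypothetical variable names) shows one possible way to evaluate SGCS, NMSE, Top-K beam prediction accuracy, and a horizontal positioning error percentile. It is only an illustration, not a RAN4-defined procedure.

```python
# Illustrative computations of the candidate intermediate KPIs discussed above
# (hypothetical shapes and test data; not a RAN4-defined procedure).
import numpy as np

def sgcs(h_true: np.ndarray, h_rec: np.ndarray) -> float:
    """Squared generalized cosine similarity between the ground-truth and
    reconstructed channel eigenvectors (complex 1-D arrays)."""
    num = np.abs(np.vdot(h_true, h_rec)) ** 2
    den = (np.linalg.norm(h_true) ** 2) * (np.linalg.norm(h_rec) ** 2)
    return float(num / den)

def nmse_db(h_true: np.ndarray, h_rec: np.ndarray) -> float:
    """Normalized mean squared error in dB."""
    err = np.linalg.norm(h_true - h_rec) ** 2 / np.linalg.norm(h_true) ** 2
    return float(10 * np.log10(err))

def top_k_accuracy(true_best: np.ndarray, predicted_topk: np.ndarray) -> float:
    """Fraction of samples whose genie-aided best beam index appears among the
    Top-K predicted beam indices (shapes: [n_samples] and [n_samples, K])."""
    hits = [best in row for best, row in zip(true_best, predicted_topk)]
    return float(np.mean(hits))

def positioning_error_percentile(p_true, p_est, q: float = 90.0) -> float:
    """q-th percentile of the horizontal positioning error in metres
    (ground-truth vs. reported positions, shape [n_samples, 2])."""
    err = np.linalg.norm(np.asarray(p_true) - np.asarray(p_est), axis=1)
    return float(np.percentile(err, q))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    h = rng.standard_normal(32) + 1j * rng.standard_normal(32)
    h_hat = h + 0.1 * (rng.standard_normal(32) + 1j * rng.standard_normal(32))
    print(f"SGCS: {sgcs(h, h_hat):.3f}, NMSE: {nmse_db(h, h_hat):.1f} dB")
```

The open RAN4 questions are then how the ground-truth quantities (the true channel, the genie-aided best beam, the true position) can be established in a test environment and how a requirement on such metrics should be set.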

Interoperability and testability aspect sub-topics summary

  1. For the encoder/decoder for the 2-sided model, pros/cons are listed below.

    •Recommended WF: down-select Option 6; an illustrative sketch of an encoder under test paired with a fixed test decoder follows the option list below.

Option 1: The reference decoder is provided by the UE vendor of the encoder under test, so that the encoder and decoder are jointly designed and trained.
  •Pros: alleviates the impact of model mismatch.
  •Cons: the encoder may not work with decoders that have a structural mismatch or were not jointly trained.

Option 2: The reference decoder is provided by the vendor of the decoder (infra vendors), so that the encoder and decoder are jointly designed and trained.
  •Pros: can test with a real infra-vendor decoder; no additional RAN4 decoder is used in practice, and the test can reflect performance in the field.
  •Cons: different network vendors provide different models.

Option 3: The reference decoder(s) are fully specified and captured in the RAN4 spec to ensure identical implementation across equipment vendors, with no additional training procedure needed.
  •Pros: simpler testing procedure, since the TE can directly implement the decoder.
  •Cons: possibly lengthy RAN4 discussion to agree on one (or more) fully specified reference decoders.

Option 4: The reference decoder(s) are partially specified and captured in the RAN4 spec.
  •Pros: TBD
  •Cons: the unspecified part is left to TE vendor implementation, so TEs may have different reference decoders.

Option 6: The test decoder is specified and captured in RAN4 and is provided by the test environment vendor. The encoder and decoder can be jointly trained.
  •Pros: TBD
  •Cons: TBD
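To illustrate what an Option 6 style set-up could look like, the toy sketch below (PyTorch, with hypothetical dimensions, data, and training loop) pairs an encoder under test with a fixed decoder that is assumed to be specified in RAN4 and supplied by the TE vendor; only the encoder is adapted against the frozen decoder here, which is one illustrative reading rather than an agreed RAN4 test procedure.

```python
# Toy illustration of a two-sided CSI model tested against a fixed TE decoder
# (hypothetical model sizes, data, and training; not an agreed RAN4 procedure).
import torch
import torch.nn as nn

CSI_DIM = 256   # flattened CSI / eigenvector dimension (illustrative)
LATENT = 32     # size of the compressed CSI report (illustrative)

class EncoderUnderTest(nn.Module):
    """UE-side encoder: the implementation being verified."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(CSI_DIM, 128), nn.ReLU(),
                                 nn.Linear(128, LATENT), nn.Tanh())

    def forward(self, x):
        return self.net(x)

class TestDecoder(nn.Module):
    """Decoder assumed to be specified in RAN4 and provided by the TE vendor;
    from the encoder's point of view its weights are fixed."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LATENT, 128), nn.ReLU(),
                                 nn.Linear(128, CSI_DIM))

    def forward(self, z):
        return self.net(z)

encoder, decoder = EncoderUnderTest(), TestDecoder()
for p in decoder.parameters():        # the test decoder is not adapted
    p.requires_grad_(False)

optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)
csi = torch.randn(512, CSI_DIM)       # placeholder CSI samples

for _ in range(200):                  # adapt only the encoder to the fixed decoder
    reconstruction = decoder(encoder(csi))
    loss = nn.functional.mse_loss(reconstruction, csi)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"reconstruction MSE against the fixed test decoder: {loss.item():.4f}")
```

Most of the pros/cons in the table above follow from where such a decoder comes from and who controls its weights: a fully specified RAN4 decoder simplifies the TE but may mismatch field decoders, while a vendor-provided decoder reflects the field at the cost of fragmentation across vendors.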
  2. For further discussion of the test encoder/decoder, the pros, cons, TE implementation issues, high-level test procedure, RAN4 testing issues, and other relevant details will be gathered and discussed.

  3. For the reference block diagram for the 1-sided model:

    [Figure: reference block diagram for the 1-sided model]

  4. For the reference block diagram for the 2-sided model:

    [Figure: reference block diagram for the 2-sided model]

  5. For Interoperability aspects:

N/W-UE Collaboration Level-x
  •Model training: N/A (training in non-3GPP entities or offline training as baseline; model training performance is guaranteed by model inference performance)
  •Model monitoring and model selection/(de)activation/switching/fallback: N/A
  •Model inference: interoperability guaranteed by the use-case KPI

N/W-UE Collaboration Level-y
  •Model training: N/A (training in non-3GPP entities or offline training as baseline; model training performance is guaranteed by model inference performance)
  •Model monitoring and model selection/(de)activation/switching/fallback: interoperability guaranteed by model monitoring performance and model selection/(de)activation/switching/fallback performance
  •Model inference: interoperability guaranteed by the use-case KPI

N/W-UE Collaboration Level-z
  •Model training: N/A for one-sided model training (training in non-3GPP entities or offline training as baseline; model training performance is guaranteed by model inference performance)
  •Model monitoring and model selection/(de)activation/switching/fallback: interoperability guaranteed by model monitoring performance and model selection/(de)activation/switching/fallback performance
  •Model inference: interoperability guaranteed by the use-case KPI
  6. For Channel Models for testing,

    •Option 1: RAN4 should start discussing/developing CDL models

    •Option 2: TDL models are enough.

    •Option 3: Postpone this discussion for now

  7. Should RAN4 send an LS to RAN1 now to ask how gracefully an AI model degrades when scenarios change?
