
STM32CubeIDE Development (XXXI): Hands-On STM32 AI Development with Cube.AI, Part 1

Contents

1. Cube.AI Overview and CubeIDE Integration
   1.1 Introduction to Cube.AI
   1.2 Integrating and Installing Cube.AI in CubeIDE
   1.3 Hardware Platforms Supported by Cube.AI
   1.4 Benefits of Using Cube.AI

2. FP-AI-SENSING1
   2.1 Introduction to FP-AI-SENSING1
   2.2 Hardware Platforms Supported by the FP-AI-SENSING1 Package

3. Deploying FP-AI-SENSING1
   3.1 The B-L475E-IOT01A Development Board
   3.2 Downloading and Configuring the FP-AI-SENSING1 Package
   3.3 Flashing the Firmware
   3.4 Deploying the FP-AI-SENSING1 Example Project

4. Data Collection
   4.1 Downloading and Installing STBLESensor
   4.2 Configuring Data Logging with STBLESensor

5. Data Curation and Model Training
   5.1 Retrieving the Logged Data Files from the Board
   5.2 Training the Neural Network Model

6. Converting the Trained Model to a C Model with Cube.AI
   6.1 Creating a Cube.AI-Enabled STM32 Project
   6.2 Configuring the Neural Network in Cube.AI
   6.3 Model Analysis and PC-Side Validation
   6.4 Generating the C Neural Network Model and Source Code

7. Using the C Neural Network Model
   7.1 The C Model Source Files
   7.2 Implementing the UART Support
   7.3 Using the C Model API
   7.4 Building and Running the Test
   7.5 Final Remarks


1. Cube.AI Overview and CubeIDE Integration

1.1 Introduction to Cube.AI

Cube.AI is, more precisely, STM32Cube.AI: the X-CUBE-AI expansion package of ST's STM32Cube ecosystem, built to help developers bring artificial intelligence to STM32 products. Concretely, it converts models trained in any of several AI frameworks into a unified C-language model, which is then combined with STM32 hardware so the model can be deployed directly on front-end or edge devices and inference runs locally. Information and downloads are available on the ST website: X-CUBE-AI - AI expansion pack for STM32CubeMX - STMicroelectronics

Cube.AI plugs into ST development tools such as CubeIDE, CubeMX, and Keil. The overall workflow has three main stages: 1) data collection and curation, 2) model training and validation, and 3) C-model generation and deployment to the front end or edge, as shown below:

Cube.AI currently accepts trained models from deep-learning frameworks such as Keras and TensorFlow™ Lite, as well as from any framework that can export to the standard ONNX format (PyTorch™, Microsoft® Cognitive Toolkit, MATLAB®, and so on). The exported model is imported through the CubeMX graphical interface, configured, converted to a C model, and deployed on an STM32 chip.
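
As a minimal sketch of that export step for the ONNX route (the model here is a stand-in defined inline; all names are illustrative rather than taken from the FP-AI-SENSING1 project):

    import torch
    import torch.nn as nn

    # Stand-in network for illustration only; substitute your trained model.
    class TinyNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.fc = nn.Linear(72, 4)          # 24 samples x 3 axes -> 4 classes
        def forward(self, x):
            return self.fc(x.flatten(1))

    model = TinyNet().eval()
    dummy_input = torch.randn(1, 24, 3, 1)      # one input sample with an assumed shape

    # Export to ONNX; the resulting .onnx file can then be imported in CubeMX/X-CUBE-AI.
    torch.onnx.export(model, dummy_input, "har_model.onnx",
                      input_names=["input_0"], output_names=["output_0"],
                      opset_version=11)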

1.2 Integrating and Installing Cube.AI in CubeIDE

From the Help menu in CubeIDE, open the Embedded Software Packages Manager to reach the X-CUBE-AI installation page. Select the X-CUBE-AI version you need and install it, as shown below; after installation, the box in front of that version is marked green.

1.3 Hardware Platforms Supported by Cube.AI

Thanks to ST's continuous optimization and iteration of the X-CUBE-AI package, a neural network converted to a C model runs with a smaller compute footprint and nearly lossless accuracy, so it can be deployed on the vast majority of STM32 chips. The currently supported MCU and MPU families are shown below.

1.4 Benefits of Using Cube.AI

Moving neural network inference to the edge reduces latency, saves energy, improves cloud utilization, and protects privacy by minimizing the data exchanged over the internet. With X-CUBE-AI, edge deployment of neural networks becomes convenient, flexible, and inexpensive, making on-device intelligence a practical choice for many more products.

2. FP-AI-SENSING1

2.1 Introduction to FP-AI-SENSING1

FP-AI-SENSING1 is an STM32Cube.AI example function pack from ST. It connects an IoT node to a smartphone over BLE (Bluetooth Low Energy) and uses the STBLESensor app to configure the device and log data, so the data used to train the neural network is collected under conditions close to real use, which improves training quality and accuracy.

For more information on the FP-AI-SENSING1 package, see the ST website:

FP-AI-SENSING1 - STM32Cube function pack for ultra-low power IoT node with artificial intelligence (AI) application based on audio and motion sensing - STMicroelectronics

From the FP-AI-SENSING1 page, download the source package and its documentation.

2.2 Hardware Platforms Supported by the FP-AI-SENSING1 Package

ST provides hardware platforms on which the FP-AI-SENSING1 example runs, so developers can quickly work through the example and, with it, the Cube.AI development flow.

3. Deploying FP-AI-SENSING1

3.1 The B-L475E-IOT01A Development Board

This article uses ST's B-L475E-IOT01A development board. Open CubeMX, choose Start my project from ST Board, and search for B-L475E-IOT01A, as shown below; areas 1, 2, and 3 of the page offer the board's block diagrams, documentation and examples, and user manuals for download.

3.2 Downloading and Configuring the FP-AI-SENSING1 Package

After downloading the FP-AI-SENSING1 package, unzip the source archive en.fp-ai-sensing1.zip, go to the directory "STM32CubeFunctionPack_SENSING1_V4.0.3\Projects\B-L475E-IOT01A\Applications\SENSING1\STM32CubeIDE", and open CleanSENSING1.bat with a text editor (on Linux, use CleanSENSING1.sh instead).

CleanSENSING1.bat depends on another tool in the STM32Cube ecosystem, STM32CubeProgrammer, which lets developers read, write, and verify device memory, among other things.

STM32CubeProg - STM32CubeProgrammer software for programming STM32 products - STMicroelectronics

From the STM32CubeProgrammer download page, download the tool and its user manual:

Download and install STM32CubeProgrammer; this article installs it to D:\workForSoftware\STM32CubeProgrammer.

Edit CleanSENSING1.bat so that its path to the STM32CubeProgrammer dependency matches your installation:

3.3 Flashing the Firmware

Connect the B-L475E-IOT01A board to the PC with a Micro USB cable.

Once connected, the driver installs automatically. Open Device Manager to confirm the COM port number and configure the serial parameters.

Right-click CleanSENSING1.bat and run it as administrator; it installs the bootloader on the board and updates the firmware.

The script performs the following operations on the B-L475E-IOT01A board:

• Full flash erase
• Load the bootloader into the right flash region
• Load the (compiled) program into the right flash region
• Reset the board

3.4 Deploying the FP-AI-SENSING1 Example Project

From the same location, enter the B-L475E-IOT01A directory and open the .project file with CubeIDE to load the FP-AI-SENSING1 project.

With the project open, as shown below, the user-adjustable sources are in the User directory; see readme.txt for more about the project.

In main.c, find the Init_BlueNRG_Stack function, which sets the service name of the BLE (Bluetooth Low Energy) stack:

    static void Init_BlueNRG_Stack(void)
    {
      char BoardName[8];
      uint16_t service_handle, dev_name_char_handle, appearance_char_handle;
      int ret;

      for(int i=0; i<7; i++) {
        BoardName[i] = NodeName[i+1];
      }

The function uses the default BLE name, which is defined in SENSING1.h, for example IAI_403.

Now change the BLE name to AI_Test:

    static void Init_BlueNRG_Stack(void)
    {
      // char BoardName[8];
      char BoardName[8] = {'A','I','_','T','e','s','t'};
      uint16_t service_handle, dev_name_char_handle, appearance_char_handle;
      int ret;

      for(int i=0; i<7; i++) {
        // BoardName[i] = NodeName[i+1];
        NodeName[i+1] = BoardName[i];  // write the new name into NodeName instead of reading the default
      }

Configure the project's build output formats as follows:

Configure the run settings as follows:

Then build and download the program:

Open a serial terminal, connect to the corresponding COM port, and press the board's reset button (the black one). The serial log below shows the BLE module starting successfully:

4. Data Collection

4.1 Downloading and Installing STBLESensor

Make sure your phone supports Bluetooth Low Energy, then go to ST's BLE sensor app download page:

STBLESensor - BLE sensor application for Android and iOS - STMicroelectronics

and download the app for your phone:

4.2 Configuring Data Logging with STBLESensor

This article uses a Huawei phone running Android. After installing the app (version 4.14 at the time of writing), launch it and tap search; the AI_Test BLE service name shows up.

Select the AI_Test service to enter the app. On Android, choose Data Log (sensing1) from the top-left drop-down menu to reach the data-logging page, select Accelerometer (the three-axis accelerometer), and set its parameters to 1.0 Hz, 26 Hz, and 1.0X.

On the logging page, first create labels, for example Jogging, Walking, Stationary, and so on.

1) To start a logging session: enable the label first, then tap START LOGGING.

2) To stop a logging session: tap START LOGGING again to stop first, then disable the label.

Following these steps, this article recorded one Walking session and one Jogging session, which produces two .csv files.

5. Data Curation and Model Training

5.1 Retrieving the Logged Data Files from the Board

Disconnect the board's USB cable from the PC. On the back of the board, move the jumper from pins 1-2 to pins 5-6, then move the USB cable from the ST-LINK connector to the USB-OTG connector, as shown below (1 -> 2).

Power the board back up, hold down the user button (blue), press the reset button (black), release reset first and then the user button; this activates USB-OTG.

Once USB-OTG is active, the board appears on the PC as a USB drive containing the CSV files logged earlier.

Create a Log_data directory under "STM32CubeFunctionPack_SENSING1_V4.0.3\Utilities\AI_Ressources\Training Scripts\HAR" and copy the logged files into it:

Each CSV record has the following format: a timestamp, the activity label, and the three sensor values:
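
As a rough sketch of inspecting one of these logs with pandas (the column names below are assumptions for illustration; check the actual header row of your CSV):

    import pandas as pd

    # Assumed layout: timestamp, activity label, then the three accelerometer axes.
    cols = ["time", "activity", "acc_x", "acc_y", "acc_z"]
    df = pd.read_csv("Log_data/IoT01-MemsAnn_11_Jan_23_16h_57m_17s.csv",
                     names=cols, header=0)

    print(df["activity"].value_counts())               # samples per label
    print(df[["acc_x", "acc_y", "acc_z"]].describe())  # sanity-check the sensor ranges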

5.2 Training the Neural Network Model

"Training Scripts\HAR" is ST's human activity recognition (HAR) training project; by default it is implemented with a Keras front end on a TensorFlow back end. First install TensorFlow, Keras, and the other dependencies.

The installation used in this article:

    # Python 3.6 already installed
    pip3 install tensorflow==1.14 -i https://pypi.tuna.tsinghua.edu.cn/simple
    # ERROR: tensorboard 1.14.0 has requirement setuptools>=41.0.0, but you'll have setuptools 28.8.0 which is incompatible.
    python3 -m pip install --upgrade pip -i https://pypi.tuna.tsinghua.edu.cn/simple
    pip3 install keras==2.2.4 -i https://pypi.tuna.tsinghua.edu.cn/simple

The HAR project's readme.txt says to install the modules listed in requirements.txt with pip install -r requirements.txt; this article instead installed each module individually with "pip3 install <module>==<version> -i <mirror>". The list is below (note that argparse, os, logging, warnings, datetime, and mpl_toolkits are part of the Python standard library or ship with matplotlib and need no separate install):

    numpy==1.16.4
    argparse
    os
    logging
    warnings
    datetime
    pandas==0.25.1
    scipy==1.3.1
    matplotlib==3.1.1
    mpl_toolkits
    scikit-learn==0.21.3
    keras==2.2.4
    tensorflow==1.14.0
    tqdm==4.36.1
    keras-tqdm==2.0.1

After the installation, go to the datasets directory, open ReplaceWithWISDMDataset.txt, and use the URL it provides

to download the WISDM lab's dataset.

The downloaded files are shown below; copy them into the datasets directory, overwriting its contents.

Open RunMe.py to see how the run parameters are defined:

Run "python3 .\RunMe.py -h" to see what each parameter means. In particular, --dataset selects the WISDM dataset downloaded above for training, while --dataDir points to a dataset you collected yourself:

    PS D:\tools\arm_tool\STM32CubeIDE\STM32CubeFunctionPack_SENSING1_V4.0.3\Utilities\AI_Ressources\Training Scripts\HAR> python3 .\RunMe.py -h
    Using TensorFlow backend.
    usage: RunMe.py [-h] [--model MODEL] [--dataset DATASET] [--dataDir DATADIR]
                    [--seqLength SEQLENGTH] [--stepSize STEPSIZE] [-m MERGE]
                    [--preprocessing PREPROCESSING] [--trainSplit TRAINSPLIT]
                    [--validSplit VALIDSPLIT] [--epochs N] [--lr LR]
                    [--decay DECAY] [--batchSize N] [--verbose N]
                    [--nrSamplesPostValid NRSAMPLESPOSTVALID]

    Human Activity Recognition (HAR) in Keras with Tensorflow as backend on WISDM
    and WISDM + self logged datasets

    optional arguments:
      -h, --help            show this help message and exit
      --model MODEL         choose one of the two availavle choices, IGN or GMP,
                            (default = IGN)
      --dataset DATASET     choose a dataset to use out of two choices, WISDM or
                            AST, (default = WISDM)
      --dataDir DATADIR     path to new data collected using STM32 IoT board
                            recorded at 26Hz as sampling rate, (default = )
      --seqLength SEQLENGTH
                            input sequence lenght (default:24)
      --stepSize STEPSIZE   step size while creating segments (default:24, equal
                            to seqLen)
      -m MERGE, --merge MERGE
                            if to merge activities (default: True)
      --preprocessing PREPROCESSING
                            gravity rotation filter application (default = True)
      --trainSplit TRAINSPLIT
                            train and test split (default = 0.6 (60 precent for
                            train and 40 precent for test))
      --validSplit VALIDSPLIT
                            train and validation data split (default = 0.7 (70
                            percent for train and 30 precent for validation))
      --epochs N            number of total epochs to run (default: 20)
      --lr LR               initial learning rate
      --decay DECAY         decay in learning rate, (default = 1e-6)
      --batchSize N         mini-batch size (default: 64)
      --verbose N           verbosity of training and test functions in keras, 0,
                            1, or 2. Verbosity mode. 0 = silent, 1 = progress bar,
                            2 = one line per epoch (default: 1)
      --nrSamplesPostValid NRSAMPLESPOSTVALID
                            Number of samples to save from every class for post
                            training and CubeAI conversion validation. (default =
                            2)
    PS D:\tools\arm_tool\STM32CubeIDE\STM32CubeFunctionPack_SENSING1_V4.0.3\Utilities\AI_Ressources\Training Scripts\HAR>

Append the following lines at the end of RunMe.py:

    # Save the test set in a format Cube.AI accepts, for later validation
    testx_f = resultDirName + "testx.npy"
    testy_f = resultDirName + "testy.npy"
    np.save(testx_f, TestX)
    np.save(testy_f, TestY)
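
To double-check what those lines produced, the two files can be loaded back and their shapes compared with the values later reported by the Cube.AI validation log (a small sketch; the results path is the one from this run):

    import numpy as np

    x = np.load("results/2023_Jan_24_14_40_13/testx.npy")
    y = np.load("results/2023_Jan_24_14_40_13/testy.npy")
    print(x.shape)  # expected (12806, 24, 3, 1): windows of 24 samples x 3 axes
    print(y.shape)  # expected (12806, 4): one-hot labels for the 4 classes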

Open a terminal and run "python3 .\RunMe.py --dataDir=Log_data", adjusting the parameters to your needs; this article trains with the defaults first. The log below shows that this is a classification problem with the classes Jogging, Stationary, Stairs, and Walking, and a network built from a convolution layer, a pooling layer, two fully connected layers, a flatten layer, a dropout layer, and so on.

    PS D:\tools\arm_tool\STM32CubeIDE\STM32CubeFunctionPack_SENSING1_V4.0.3\Utilities\AI_Ressources\Training Scripts\HAR> python3 .\RunMe.py --dataDir=Log_data
    Using TensorFlow backend.
    Running HAR on WISDM dataset, with following variables
    merge = True
    modelName = IGN,
    segmentLength = 24
    stepSize = 24
    preprocessing = True
    trainTestSplit = 0.6
    trainValidationSplit = 0.7
    nEpochs = 20
    learningRate = 0.0005
    decay = 1e-06
    batchSize = 64
    verbosity = 1
    dataDir = Log_data
    nrSamplesPostValid = 2
    Segmenting Train data
    Segments built : 100%|███████████████████████████████████████████████████| 27456/27456 [00:28<00:00, 953.24 segments/s]
    Segmenting Test data
    Segments built : 100%|██████████████████████████████████████████████████| 18304/18304 [00:14<00:00, 1282.96 segments/s]
    Segmentation finished!
    preparing data file from all the files in directory Log_data
    parsing data from IoT01-MemsAnn_11_Jan_23_16h_57m_17s.csv
    parsing data from IoT01-MemsAnn_11_Jan_23_16h_57m_53s.csv
    Segmenting the AI logged Train data
    Segments built : 100%|████████████████████████████████████████████████████████| 25/25 [00:00<00:00, 3133.35 segments/s]
    Segmenting the AI logged Test data
    Segments built : 100%|████████████████████████████████████████████████████████| 17/17 [00:00<00:00, 2852.35 segments/s]
    Segmentation finished!
    _________________________________________________________________
    Layer (type)                 Output Shape              Param #
    =================================================================
    conv2d_1 (Conv2D)            (None, 9, 3, 24)          408
    _________________________________________________________________
    max_pooling2d_1 (MaxPooling2 (None, 3, 3, 24)          0
    _________________________________________________________________
    flatten_1 (Flatten)          (None, 216)               0
    _________________________________________________________________
    dense_1 (Dense)              (None, 12)                2604
    _________________________________________________________________
    dropout_1 (Dropout)          (None, 12)                0
    _________________________________________________________________
    dense_2 (Dense)              (None, 4)                 52
    =================================================================
    Total params: 3,064
    Trainable params: 3,064
    Non-trainable params: 0
    _________________________________________________________________
    Train on 19263 samples, validate on 8216 samples
    Epoch 1/20
    2023-01-24 14:41:03.484083: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
    19263/19263 [==============================] - 3s 167us/step - loss: 1.1442 - acc: 0.5430 - val_loss: 0.6674 - val_acc: 0.7372
    Epoch 2/20
    19263/19263 [==============================] - 1s 40us/step - loss: 0.7173 - acc: 0.7089 - val_loss: 0.5126 - val_acc: 0.7928
    Epoch 3/20
    19263/19263 [==============================] - 1s 40us/step - loss: 0.5954 - acc: 0.7522 - val_loss: 0.4470 - val_acc: 0.8051
    Epoch 4/20
    19263/19263 [==============================] - 1s 39us/step - loss: 0.5288 - acc: 0.7810 - val_loss: 0.4174 - val_acc: 0.8335
    Epoch 5/20
    19263/19263 [==============================] - 1s 36us/step - loss: 0.4925 - acc: 0.7994 - val_loss: 0.3897 - val_acc: 0.8477
    Epoch 6/20
    19263/19263 [==============================] - 1s 35us/step - loss: 0.4647 - acc: 0.8173 - val_loss: 0.3607 - val_acc: 0.8647
    Epoch 7/20
    19263/19263 [==============================] - 1s 37us/step - loss: 0.4404 - acc: 0.8301 - val_loss: 0.3493 - val_acc: 0.8777
    Epoch 8/20
    19263/19263 [==============================] - 1s 38us/step - loss: 0.4200 - acc: 0.8419 - val_loss: 0.3271 - val_acc: 0.8827
    Epoch 9/20
    19263/19263 [==============================] - 1s 38us/step - loss: 0.3992 - acc: 0.8537 - val_loss: 0.3163 - val_acc: 0.8890
    Epoch 10/20
    19263/19263 [==============================] - 1s 40us/step - loss: 0.3878 - acc: 0.8576 - val_loss: 0.3039 - val_acc: 0.8991
    Epoch 11/20
    19263/19263 [==============================] - 1s 40us/step - loss: 0.3799 - acc: 0.8667 - val_loss: 0.2983 - val_acc: 0.8985
    Epoch 12/20
    19263/19263 [==============================] - 1s 40us/step - loss: 0.3662 - acc: 0.8736 - val_loss: 0.2922 - val_acc: 0.9007
    Epoch 13/20
    19263/19263 [==============================] - 1s 36us/step - loss: 0.3613 - acc: 0.8760 - val_loss: 0.2837 - val_acc: 0.9051
    Epoch 14/20
    19263/19263 [==============================] - 1s 40us/step - loss: 0.3574 - acc: 0.8775 - val_loss: 0.2910 - val_acc: 0.8985
    Epoch 15/20
    19263/19263 [==============================] - 1s 39us/step - loss: 0.3513 - acc: 0.8796 - val_loss: 0.2814 - val_acc: 0.9080
    Epoch 16/20
    19263/19263 [==============================] - 1s 38us/step - loss: 0.3482 - acc: 0.8816 - val_loss: 0.2737 - val_acc: 0.9116
    Epoch 17/20
    19263/19263 [==============================] - 1s 35us/step - loss: 0.3362 - acc: 0.8875 - val_loss: 0.2742 - val_acc: 0.9114
    Epoch 18/20
    19263/19263 [==============================] - 1s 38us/step - loss: 0.3325 - acc: 0.8892 - val_loss: 0.2661 - val_acc: 0.9137
    Epoch 19/20
    19263/19263 [==============================] - 1s 40us/step - loss: 0.3257 - acc: 0.8927 - val_loss: 0.2621 - val_acc: 0.9161
    Epoch 20/20
    19263/19263 [==============================] - 1s 37us/step - loss: 0.3249 - acc: 0.8918 - val_loss: 0.2613 - val_acc: 0.9188
    12806/12806 [==============================] - 0s 25us/step
    Accuracy for each class is given below.
    Jogging    : 97.28 %
    Stationary : 98.77 %
    Stairs     : 66.33 %
    Walking    : 87.49 %
    PS D:\tools\arm_tool\STM32CubeIDE\STM32CubeFunctionPack_SENSING1_V4.0.3\Utilities\AI_Ressources\Training Scripts\HAR>

The trained model and related outputs land in the results directory; each training run gets its own time-stamped subdirectory. Since the model is trained with Keras, the output is an *.h5 file, for example har_IGN.h5:
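
Before handing har_IGN.h5 to Cube.AI, it can be sanity-checked in Python (a minimal sketch using the Keras version installed above; the path is the one from this run):

    from keras.models import load_model

    model = load_model("results/2023_Jan_24_14_40_13/har_IGN.h5")
    model.summary()            # should match the layer table printed during training
    print(model.input_shape)   # (None, 24, 3, 1)
    print(model.output_shape)  # (None, 4): one score per activity class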

6. Converting the Trained Model to a C Model with Cube.AI

6.1 Creating a Cube.AI-Enabled STM32 Project

Create a new STM32 project in CubeIDE, choosing board-based creation in CubeMX,

and name it B-L475E-IOT01A_cube.ai, as shown below.

Once created, double-click the .ioc file to open the CubeMX configuration view.

6.2 Configuring the Neural Network in Cube.AI

Enable the X-CUBE-AI package. Back on the main page, a Software Packs category appears with a STMicroelectronics.X-CUBE-AI entry; open it, tick items 2 and 3 marked in the figure below, and in item 5 select which UART the application and debug output will use.

Tip: on the X-CUBE-AI configuration page, hovering over an option pops up a help box, and the shortcut Ctrl+D opens the related X-CUBE-AI documentation,

which is quite thorough:

Alternatively, open the documentation straight from the Cube.AI installation directory, for example: D:\workForSoftware\STM32CubeMX\Repository\Packs\STMicroelectronics\X-CUBE-AI\7.3.0\Documentation

Also note that enabling X-CUBE-AI automatically enables the CRC peripheral, which it depends on.

6.3 Model Analysis and PC-Side Validation

Add a network (add network) as shown below. In item 3 you can rename the network model; in item 4, select the framework and the model file, for example "STM32CubeFunctionPack_SENSING1_V4.0.3\Utilities\AI_Ressources\Training Scripts\HAR\results\2023_Jan_24_14_40_13\har_IGN.h5"; in items 5 and 6, choose whether to validate the model with random data or with the generated validation data (the testx.npy and testy.npy files written by the lines appended to RunMe.py during training):

The settings button opens a page with more detailed information and further network settings, mostly aimed at model optimization; this article keeps the defaults for now.

Click the Analyze button to print information about the model and the compute resources (RAM, flash, etc.) required to deploy it:

    Analyzing model
    D:/workForSoftware/STM32CubeMX/Repository/Packs/STMicroelectronics/X-CUBE-AI/7.3.0/Utilities/windows/stm32ai analyze --name har_ign -m D:/tools/arm_tool/STM32CubeIDE/STM32CubeFunctionPack_SENSING1_V4.0.3/Utilities/AI_Ressources/Training Scripts/HAR/results/2023_Jan_11_17_50_03/har_IGN.h5 --type keras --compression none --verbosity 1 --workspace C:\Users\py_hp\AppData\Local\Temp\mxAI_workspace465785871649500151581099545474794 --output C:\Users\py_hp\.stm32cubemx\network_output --allocate-inputs --allocate-outputs
    Neural Network Tools for STM32AI v1.6.0 (STM.ai v7.3.0-RC5)

    Exec/report summary (analyze)
    ---------------------------------------------------------------------------------------------------------
    model file         :   D:\tools\arm_tool\STM32CubeIDE\STM32CubeFunctionPack_SENSING1_V4.0.3\Utilities\AI_Ressources\Training Scripts\HAR\results\2023_Jan_11_17_50_03\har_IGN.h5
    type               :   keras
    c_name             :   har_ign
    compression        :   none
    options            :   allocate-inputs, allocate-outputs
    optimization       :   balanced
    target/series      :   generic
    workspace dir      :   C:\Users\py_hp\AppData\Local\Temp\mxAI_workspace465785871649500151581099545474794
    output dir         :   C:\Users\py_hp\.stm32cubemx\network_output
    model_fmt          :   float
    model_name         :   har_IGN
    model_hash         :   ff0080dbe395a3d8fd3f63243d2326d5
    params #           :   3,064 items (11.97 KiB)
    ---------------------------------------------------------------------------------------------------------
    input 1/1          :   'input_0' (domain:activations/**default**)
                       :   72 items, 288 B, ai_float, float, (1,24,3,1)
    output 1/1         :   'dense_2' (domain:activations/**default**)
                       :   4 items, 16 B, ai_float, float, (1,1,1,4)
    macc               :   14,404
    weights (ro)       :   12,256 B (11.97 KiB) (1 segment)
    activations (rw)   :   2,016 B (1.97 KiB) (1 segment) *
    ram (total)        :   2,016 B (1.97 KiB) = 2,016 + 0 + 0
    ---------------------------------------------------------------------------------------------------------
    (*) 'input'/'output' buffers can be used from the activations buffer

    Model name - har_IGN ['input_0'] ['dense_2']
    ------------------------------------------------------------------------------------------------------
    id   layer (original)                 oshape                  param/size     macc     connected to
    ------------------------------------------------------------------------------------------------------
    0    input_0 (None)                   [b:None,h:24,w:3,c:1]
         conv2d_1_conv2d (Conv2D)         [b:None,h:9,w:3,c:24]   408/1,632      10,392   input_0
         conv2d_1 (Conv2D)                [b:None,h:9,w:3,c:24]                  648      conv2d_1_conv2d
    ------------------------------------------------------------------------------------------------------
    1    max_pooling2d_1 (MaxPooling2D)   [b:None,h:3,w:3,c:24]                  648      conv2d_1
    ------------------------------------------------------------------------------------------------------
    2    flatten_1 (Flatten)              [b:None,c:216]                                  max_pooling2d_1
    ------------------------------------------------------------------------------------------------------
    3    dense_1_dense (Dense)            [b:None,c:12]           2,604/10,416   2,604    flatten_1
    ------------------------------------------------------------------------------------------------------
    5    dense_2_dense (Dense)            [b:None,c:4]            52/208         52       dense_1_dense
         dense_2 (Dense)                  [b:None,c:4]                           60       dense_2_dense
    ------------------------------------------------------------------------------------------------------
    model/c-model: macc=14,404/14,404  weights=12,256/12,256  activations=--/2,016 io=--/0

    Number of operations per c-layer
    -----------------------------------------------------------------------------------
    c_id    m_id   name (type)                                   #op (type)
    -----------------------------------------------------------------------------------
    0       1      conv2d_1_conv2d (optimized_conv2d)            11,688 (smul_f32_f32)
    1       3      dense_1_dense (dense)                          2,604 (smul_f32_f32)
    2       5      dense_2_dense (dense)                             52 (smul_f32_f32)
    3       5      dense_2 (nl)                                      60 (op_f32_f32)
    -----------------------------------------------------------------------------------
    total                                                         14,404

    Number of operation types
    ---------------------------------------------
    smul_f32_f32              14,344       99.6%
    op_f32_f32                    60        0.4%

    Complexity report (model)
    ------------------------------------------------------------------------------------
    m_id   name              c_macc                    c_rom                     c_id
    ------------------------------------------------------------------------------------
    1      max_pooling2d_1   ||||||||||||||||  81.1%   |||               13.3%   [0]
    3      dense_1_dense     ||||              18.1%   ||||||||||||||||  85.0%   [1]
    5      dense_2_dense     |                  0.8%   |                  1.7%   [2, 3]
    ------------------------------------------------------------------------------------
    macc=14,404 weights=12,256 act=2,016 ram_io=0
    Creating txt report file C:\Users\py_hp\.stm32cubemx\network_output\har_ign_analyze_report.txt
    elapsed time (analyze): 7.692s
    Getting Flash and Ram size used by the library
    Model file:      har_IGN.h5
    Total Flash:     29880 B (29.18 KiB)
        Weights:     12256 B (11.97 KiB)
        Library:     17624 B (17.21 KiB)
    Total Ram:       4000 B (3.91 KiB)
        Activations: 2016 B (1.97 KiB)
        Library:     1984 B (1.94 KiB)
        Input:       288 B (included in Activations)
        Output:      16 B (included in Activations)
    Done
    Analyze complete on AI model

Click the validate on desktop button to validate the model on the PC. This mainly compares the original model against the generated C model, checking how compute resources and model accuracy differ before and after conversion; the validation data is the testx.npy and testy.npy pair we just specified.

    Starting AI validation on desktop with custom dataset : D:\tools\arm_tool\STM32CubeIDE\STM32CubeFunctionPack_SENSING1_V4.0.3\Utilities\AI_Ressources\Training Scripts\HAR\results\2023_Jan_24_14_40_13\testx.npy...
    D:/workForSoftware/STM32CubeMX/Repository/Packs/STMicroelectronics/X-CUBE-AI/7.3.0/Utilities/windows/stm32ai validate --name har_ign -m D:/tools/arm_tool/STM32CubeIDE/STM32CubeFunctionPack_SENSING1_V4.0.3/Utilities/AI_Ressources/Training Scripts/HAR/results/2023_Jan_11_17_50_03/har_IGN.h5 --type keras --compression none --verbosity 1 --workspace C:\Users\py_hp\AppData\Local\Temp\mxAI_workspace46601041973700012072836595678733048 --output C:\Users\py_hp\.stm32cubemx\network_output --allocate-inputs --allocate-outputs --valoutput D:/tools/arm_tool/STM32CubeIDE/STM32CubeFunctionPack_SENSING1_V4.0.3/Utilities/AI_Ressources/Training Scripts/HAR/results/2023_Jan_24_14_40_13/testy.npy --valinput D:/tools/arm_tool/STM32CubeIDE/STM32CubeFunctionPack_SENSING1_V4.0.3/Utilities/AI_Ressources/Training Scripts/HAR/results/2023_Jan_24_14_40_13/testx.npy
    Neural Network Tools for STM32AI v1.6.0 (STM.ai v7.3.0-RC5)
    Copying the AI runtime files to the user workspace: C:\Users\py_hp\AppData\Local\Temp\mxAI_workspace46601041973700012072836595678733048\inspector_har_ign\workspace

    Exec/report summary (validate)
    ---------------------------------------------------------------------------------------------------------
    model file         :   D:\tools\arm_tool\STM32CubeIDE\STM32CubeFunctionPack_SENSING1_V4.0.3\Utilities\AI_Ressources\Training Scripts\HAR\results\2023_Jan_11_17_50_03\har_IGN.h5
    type               :   keras
    c_name             :   har_ign
    compression        :   none
    options            :   allocate-inputs, allocate-outputs
    optimization       :   balanced
    target/series      :   generic
    workspace dir      :   C:\Users\py_hp\AppData\Local\Temp\mxAI_workspace46601041973700012072836595678733048
    output dir         :   C:\Users\py_hp\.stm32cubemx\network_output
    vinput files       :   D:\tools\arm_tool\STM32CubeIDE\STM32CubeFunctionPack_SENSING1_V4.0.3\Utilities\AI_Ressources\Training Scripts\HAR\results\2023_Jan_24_14_40_13\testx.npy
    voutput files      :   D:\tools\arm_tool\STM32CubeIDE\STM32CubeFunctionPack_SENSING1_V4.0.3\Utilities\AI_Ressources\Training Scripts\HAR\results\2023_Jan_24_14_40_13\testy.npy
    model_fmt          :   float
    model_name         :   har_IGN
    model_hash         :   ff0080dbe395a3d8fd3f63243d2326d5
    params #           :   3,064 items (11.97 KiB)
    ---------------------------------------------------------------------------------------------------------
    input 1/1          :   'input_0' (domain:activations/**default**)
                       :   72 items, 288 B, ai_float, float, (1,24,3,1)
    output 1/1         :   'dense_2' (domain:activations/**default**)
                       :   4 items, 16 B, ai_float, float, (1,1,1,4)
    macc               :   14,404
    weights (ro)       :   12,256 B (11.97 KiB) (1 segment)
    activations (rw)   :   2,016 B (1.97 KiB) (1 segment) *
    ram (total)        :   2,016 B (1.97 KiB) = 2,016 + 0 + 0
    ---------------------------------------------------------------------------------------------------------
    (*) 'input'/'output' buffers can be used from the activations buffer
    Setting validation data...
     loading file: D:\tools\arm_tool\STM32CubeIDE\STM32CubeFunctionPack_SENSING1_V4.0.3\Utilities\AI_Ressources\Training Scripts\HAR\results\2023_Jan_24_14_40_13\testx.npy
     - samples are reshaped: (12806, 24, 3, 1) -> (12806, 24, 3, 1)
     loading file: D:\tools\arm_tool\STM32CubeIDE\STM32CubeFunctionPack_SENSING1_V4.0.3\Utilities\AI_Ressources\Training Scripts\HAR\results\2023_Jan_24_14_40_13\testy.npy
     - samples are reshaped: (12806, 4) -> (12806, 1, 1, 4)
     I[1]: (12806, 24, 3, 1)/float32, min/max=[-26.319, 32.844], mean/std=[0.075, 5.034], input_0
     O[1]: (12806, 1, 1, 4)/float32, min/max=[0.000, 1.000], mean/std=[0.250, 0.433], dense_2
    Running the STM AI c-model (AI RUNNER)...(name=har_ign, mode=x86)
     X86 shared lib (C:\Users\py_hp\AppData\Local\Temp\mxAI_workspace46601041973700012072836595678733048\inspector_har_ign\workspace\lib\libai_har_ign.dll) ['har_ign']

     Summary "har_ign" - ['har_ign']
     --------------------------------------------------------------------------------
     inputs/outputs       : 1/1
     input_1              : (1,24,3,1), float32, 288 bytes, in activations buffer
     output_1             : (1,1,1,4), float32, 16 bytes, in activations buffer
     n_nodes              : 4
     compile_datetime     : Jan 25 2023 22:55:51 (Wed Jan 25 22:55:47 2023)
     activations          : 2016
     weights              : 12256
     macc                 : 14404
     --------------------------------------------------------------------------------
     runtime              : STM.AI 7.3.0 (Tools 7.3.0)
     capabilities         : ['IO_ONLY', 'PER_LAYER', 'PER_LAYER_WITH_DATA']
     device               : AMD64 Intel64 Family 6 Model 158 Stepping 9, GenuineIntel (Windows)
     --------------------------------------------------------------------------------
    STM.IO:   0%|          | 0/12806 [00:00<?, ?it/s]
    STM.IO:  11%|█         | 1424/12806 [00:00<00:00, 14136.31it/s]
    ... (per-sample progress output trimmed) ...
    STM.IO: 100%|█████████▉| 12781/12806 [00:11<00:00, 1118.51it/s]
     Results for 12806 inference(s) - average per inference
      device              : AMD64 Intel64 Family 6 Model 158 Stepping 9, GenuineIntel (Windows)
      duration            : 0.057ms
      c_nodes             : 4
     c_id  m_id  desc                output                        ms          %
     -------------------------------------------------------------------------------
     0     1     Conv2dPool (0x109)  (1,3,3,24)/float32/864B       0.049   86.5%
     1     3     Dense (0x104)       (1,1,1,12)/float32/48B        0.005    9.1%
     2     5     Dense (0x104)       (1,1,1,4)/float32/16B         0.001    1.8%
     3     5     NL (0x107)          (1,1,1,4)/float32/16B         0.001    2.5%
     -------------------------------------------------------------------------------
                                                                   0.057 ms
     NOTE: duration and exec time per layer is just an indication. They are dependent of the HOST-machine work-load.
    Running the Keras model...
    Saving validation data...
     output directory: C:\Users\py_hp\.stm32cubemx\network_output
     creating C:\Users\py_hp\.stm32cubemx\network_output\har_ign_val_io.npz
     m_outputs_1: (12806, 1, 1, 4)/float32, min/max=[0.000, 1.000], mean/std=[0.250, 0.376], dense_2
     c_outputs_1: (12806, 1, 1, 4)/float32, min/max=[0.000, 1.000], mean/std=[0.250, 0.376], dense_2
    Computing the metrics...
     Accuracy report #1 for the generated x86 C-model
     ----------------------------------------------------------------------------------------------------
     notes: - computed against the provided ground truth values
            - 12806 samples (4 items per sample)
      acc=86.72%, rmse=0.224433631, mae=0.096160948, l2r=0.496649474, nse=73.14%
      4 classes (12806 samples)
      ----------------------------
      C0      3678   .   62   41
      C1        .  1124  14    .
      C2       254  10  1806  662
      C3       66    .   592 4497
     Accuracy report #1 for the reference model
     ----------------------------------------------------------------------------------------------------
     notes: - computed against the provided ground truth values
            - 12806 samples (4 items per sample)
      acc=86.72%, rmse=0.224433631, mae=0.096160948, l2r=0.496649474, nse=73.14%
      4 classes (12806 samples)
      ----------------------------
      C0      3678   .   62   41
      C1        .  1124  14    .
      C2       254  10  1806  662
      C3       66    .   592 4497
     Cross accuracy report #1 (reference vs C-model)
     ----------------------------------------------------------------------------------------------------
     notes: - the output of the reference model is used as ground truth/reference value
            - 12806 samples (4 items per sample)
      acc=100.00%, rmse=0.000000063, mae=0.000000024, l2r=0.000000139, nse=100.00%
      4 classes (12806 samples)
      ----------------------------
      C0      3998   .    .    .
      C1        .  1134   .    .
      C2        .    .  2474   .
      C3        .    .    .  5200

     Evaluation report (summary)
     ----------------------------------------------------------------------------------------------------------------------------------------------------------
     Output              acc       rmse          mae           l2r           mean           std           nse           tensor
     ----------------------------------------------------------------------------------------------------------------------------------------------------------
     x86 c-model #1      86.72%    0.224433631   0.096160948   0.496649474   -0.000000000   0.224435821   0.731362987   dense_2, ai_float, (1,1,1,4), m_id=[5]
     original model #1   86.72%    0.224433631   0.096160948   0.496649474   -0.000000001   0.224435821   0.731362987   dense_2, ai_float, (1,1,1,4), m_id=[5]
     X-cross #1          100.00%   0.000000063   0.000000024   0.000000139   0.000000000    0.000000063   1.000000000   dense_2, ai_float, (1,1,1,4), m_id=[5]
     ----------------------------------------------------------------------------------------------------------------------------------------------------------

      rmse : Root Mean Squared Error
      mae  : Mean Absolute Error
      l2r  : L2 relative error
      nse  : Nash-Sutcliffe efficiency criteria
    Creating txt report file C:\Users\py_hp\.stm32cubemx\network_output\har_ign_validate_report.txt
    elapsed time (validate): 26.458s
    Validation
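
The headline error metrics can also be recomputed offline from the har_ign_val_io.npz file the validation step saves (a sketch; the array key names are an assumption based on the log labels m_outputs_1 and c_outputs_1 above):

    import numpy as np

    # File written by "validate on desktop" (see the log above); key names assumed.
    data = np.load(r"C:\Users\py_hp\.stm32cubemx\network_output\har_ign_val_io.npz")
    ref, cmod = data["m_outputs_1"], data["c_outputs_1"]

    rmse = np.sqrt(np.mean((ref - cmod) ** 2))  # root mean squared error
    mae  = np.mean(np.abs(ref - cmod))          # mean absolute error
    print(rmse, mae)  # reference vs C-model: both should be ~0, matching the X-cross row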

6.4 Generating the C Neural Network Model and Source Code

Switch the board back to the ST-LINK connection (remove the jumper from pins 5-6 and put it back on pins 1-2).

To keep the source-code walkthrough that follows simple, generate only the C neural network model, without the example application code (one drawback: once the newly built program is loaded on the board, the validation on target function can no longer be used), as shown below.

Configure the output project:

7. Using the C Neural Network Model

7.1 The C Model Source Files

Because the network was configured in CubeMX under the name har_ign, the following files are generated: har_ign.h/.c, har_ign_data.h/.c, har_ign_data_params.h/.c, and har_ign_config.h. These source files are the converted C neural network model. They expose a set of API functions which, by calling into the built-in features of the Cube.AI package, implement the network computation:

The file har_ign_generate_report.txt is a record of the C-model generation run.

7.2 Implementing the UART Support

Since this article did not generate the companion application code, the UART support has to be implemented by hand, so I ported my own UART code: under the project, create an ICore source directory with print and usart subdirectories, and add print.h/.c and usart.h/.c to them respectively.

print.h:

    #ifndef INC_RETARGET_H_
    #define INC_RETARGET_H_

    #include "stm32l4xx_hal.h"
    #include "stdio.h"      // for retargeting printf to the UART
    #include <sys/stat.h>

    void ResetPrintInit(UART_HandleTypeDef *huart);
    int _isatty(int fd);
    int _write(int fd, char* ptr, int len);
    int _close(int fd);
    int _lseek(int fd, int ptr, int dir);
    int _read(int fd, char* ptr, int len);
    int _fstat(int fd, struct stat* st);

    #endif /* INC_RETARGET_H_ */

print.c:

    #include <_ansi.h>
    #include <_syslist.h>
    #include <errno.h>
    #include <sys/time.h>
    #include <sys/times.h>
    #include <limits.h>
    #include <signal.h>
    #include <stdint.h>
    #include <stdio.h>
    #include "print.h"

    #if !defined(OS_USE_SEMIHOSTING)

    #define STDIN_FILENO  0
    #define STDOUT_FILENO 1
    #define STDERR_FILENO 2

    UART_HandleTypeDef *gHuart;

    void ResetPrintInit(UART_HandleTypeDef *huart) {
      gHuart = huart;
      /* Disable I/O buffering for STDOUT stream, so that
       * chars are sent out as soon as they are printed. */
      setvbuf(stdout, NULL, _IONBF, 0);
    }

    int _isatty(int fd) {
      if (fd >= STDIN_FILENO && fd <= STDERR_FILENO)
        return 1;
      errno = EBADF;
      return 0;
    }

    int _write(int fd, char* ptr, int len) {
      HAL_StatusTypeDef hstatus;
      if (fd == STDOUT_FILENO || fd == STDERR_FILENO) {
        hstatus = HAL_UART_Transmit(gHuart, (uint8_t *) ptr, len, HAL_MAX_DELAY);
        if (hstatus == HAL_OK)
          return len;
        else
          return EIO;
      }
      errno = EBADF;
      return -1;
    }

    int _close(int fd) {
      if (fd >= STDIN_FILENO && fd <= STDERR_FILENO)
        return 0;
      errno = EBADF;
      return -1;
    }

    int _lseek(int fd, int ptr, int dir) {
      (void) fd;
      (void) ptr;
      (void) dir;
      errno = EBADF;
      return -1;
    }

    int _read(int fd, char* ptr, int len) {
      HAL_StatusTypeDef hstatus;
      if (fd == STDIN_FILENO) {
        hstatus = HAL_UART_Receive(gHuart, (uint8_t *) ptr, 1, HAL_MAX_DELAY);
        if (hstatus == HAL_OK)
          return 1;
        else
          return EIO;
      }
      errno = EBADF;
      return -1;
    }

    int _fstat(int fd, struct stat* st) {
      if (fd >= STDIN_FILENO && fd <= STDERR_FILENO) {
        st->st_mode = S_IFCHR;
        return 0;
      }
      errno = EBADF;
      return 0;
    }

    #endif //#if !defined(OS_USE_SEMIHOSTING)

        usart.h

    #ifndef INC_USART_H_
    #define INC_USART_H_

    #include "stm32l4xx_hal.h"     // HAL library declarations
    #include <string.h>            // string handling
    #include "../print/print.h"    // printf retargeting to the UART

    extern UART_HandleTypeDef huart1;            // HAL handle of the logging UART (USART1)
    #define USART_REC_LEN 256                    // maximum number of received bytes
    extern uint8_t USART_RX_BUF[USART_REC_LEN];  // receive buffer, up to USART_REC_LEN bytes, last byte is the line terminator
    extern uint16_t USART_RX_STA;                // receive status flags
    extern uint8_t USART_NewData;                // one-byte buffer for the UART receive interrupt

    void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart);  // UART receive-complete callback

    #endif /* INC_USART_H_ */

usart.c:

    #include "usart.h"

    uint8_t USART_RX_BUF[USART_REC_LEN];  // receive buffer, up to USART_REC_LEN bytes, last byte is the line terminator

    /*
     * bit15: set (USART_RX_STA |= 0x8000) when a carriage return (0x0d) is received
     * bit14: overflow flag, set (USART_RX_STA |= 0x4000) when the data exceeds the buffer length
     * bit13: reserved
     * bit12: reserved
     * bit11~0: number of valid bytes received (0~4095)
     */
    uint16_t USART_RX_STA = 0;   // receive status flags
    uint8_t USART_NewData;       // one-byte buffer for the UART receive interrupt

    void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart)  // UART receive-complete callback
    {
      if(huart == &huart1)  // check the interrupt source (USART1: the USB virtual COM port)
      {
        if(USART_NewData == 0x0d){  // carriage return received
          USART_RX_STA |= 0x8000;   // mark the line as complete
        }else{
          if((USART_RX_STA & 0x0FFF) < USART_REC_LEN){
            USART_RX_BUF[USART_RX_STA & 0x0FFF] = USART_NewData;  // store the received byte
            USART_RX_STA++;         // increment the byte count
          }else{
            USART_RX_STA |= 0x4000; // data exceeds the buffer length, flag the overflow
          }
        }
        HAL_UART_Receive_IT(&huart1, (uint8_t *)&USART_NewData, 1);  // re-arm the receive interrupt
      }
    }

7.3 Using the C Model API

Setting the underlying mechanics aside for now, the code below shows how these API functions are called. In main.c, aiInit initializes the har_ign model and prints the model's metadata. In the main loop, a message received over the UART supplies a data-generation factor; acquire_and_process_data synthesizes the input data from that factor, aiRun is called with the generated input and the output buffer to execute the neural network, and post_process prints the output.

    /* USER CODE END Header */
    /* Includes ------------------------------------------------------------------*/
    #include "main.h"
    #include "crc.h"
    #include "dfsdm.h"
    #include "i2c.h"
    #include "quadspi.h"
    #include "spi.h"
    #include "usart.h"
    #include "usb_otg.h"
    #include "gpio.h"

    /* Private includes ----------------------------------------------------------*/
    /* USER CODE BEGIN Includes */
    #include "../../ICore/print/print.h"
    #include "../../ICore/usart/usart.h"
    #include "../../X-CUBE-AI/app/har_ign.h"
    #include "../../X-CUBE-AI/app/har_ign_data.h"
    /* USER CODE END Includes */

    /* Private typedef -----------------------------------------------------------*/
    /* USER CODE BEGIN PTD */
    /* USER CODE END PTD */

    /* Private define ------------------------------------------------------------*/
    /* USER CODE BEGIN PD */
    /* USER CODE END PD */

    /* Private macro -------------------------------------------------------------*/
    /* USER CODE BEGIN PM */
    /* USER CODE END PM */

    /* Private variables ---------------------------------------------------------*/
    /* USER CODE BEGIN PV */
    /* USER CODE END PV */

    /* Private function prototypes -----------------------------------------------*/
    void SystemClock_Config(void);
    /* USER CODE BEGIN PFP */
    /* USER CODE END PFP */

    /* Private user code ---------------------------------------------------------*/
    /* USER CODE BEGIN 0 */
    /* Global handle to reference the instantiated C-model */
    static ai_handle network = AI_HANDLE_NULL;

    /* Global c-array to handle the activations buffer */
    AI_ALIGNED(32)
    static ai_u8 activations[AI_HAR_IGN_DATA_ACTIVATIONS_SIZE];

    /* Array to store the data of the input tensor */
    AI_ALIGNED(32)
    static ai_float in_data[AI_HAR_IGN_IN_1_SIZE];
    /* or static ai_u8 in_data[AI_HAR_IGN_IN_1_SIZE_BYTES]; */

    /* c-array to store the data of the output tensor */
    AI_ALIGNED(32)
    static ai_float out_data[AI_HAR_IGN_OUT_1_SIZE];
    /* static ai_u8 out_data[AI_HAR_IGN_OUT_1_SIZE_BYTES]; */

    /* Arrays of pointers to manage the model's input/output tensors */
    static ai_buffer *ai_input;
    static ai_buffer *ai_output;
    static ai_buffer_format fmt_input;
    static ai_buffer_format fmt_output;

    /* Print the current contents of the input and output buffers */
    void buf_print(void)
    {
      printf("in_data:");
      for (int i = 0; i < AI_HAR_IGN_IN_1_SIZE; i++)
      {
        printf("%f ", ((ai_float*)in_data)[i]);
      }
      printf("\n");
      printf("out_data:");
      for (int i = 0; i < AI_HAR_IGN_OUT_1_SIZE; i++)
      {
        printf("%f ", ((ai_float*)out_data)[i]);
      }
      printf("\n");
    }

    void aiPrintBufInfo(const ai_buffer *buffer)
    {
      printf("(%lu, %lu, %lu, %lu)",
             AI_BUFFER_SHAPE_ELEM(buffer, AI_SHAPE_BATCH),
             AI_BUFFER_SHAPE_ELEM(buffer, AI_SHAPE_HEIGHT),
             AI_BUFFER_SHAPE_ELEM(buffer, AI_SHAPE_WIDTH),
             AI_BUFFER_SHAPE_ELEM(buffer, AI_SHAPE_CHANNEL));
      printf(" buffer_size:%d ", (int)AI_BUFFER_SIZE(buffer));
    }

    void aiPrintDataType(const ai_buffer_format fmt)
    {
      if (AI_BUFFER_FMT_GET_TYPE(fmt) == AI_BUFFER_FMT_TYPE_FLOAT)
        printf("float%d ", (int)AI_BUFFER_FMT_GET_BITS(fmt));
      else if (AI_BUFFER_FMT_GET_TYPE(fmt) == AI_BUFFER_FMT_TYPE_BOOL) {
        printf("bool%d ", (int)AI_BUFFER_FMT_GET_BITS(fmt));
      } else { /* integer type */
        printf("%s%d ", AI_BUFFER_FMT_GET_SIGN(fmt) ? "i" : "u",
               (int)AI_BUFFER_FMT_GET_BITS(fmt));
      }
    }

    void aiPrintDataInfo(const ai_buffer *buffer, const ai_buffer_format fmt)
    {
      if (buffer->data)
        printf(" @0x%X/%d \n",
               (int)buffer->data,
               (int)AI_BUFFER_BYTE_SIZE(AI_BUFFER_SIZE(buffer), fmt));
      else
        printf(" (User Domain)/%d \n",
               (int)AI_BUFFER_BYTE_SIZE(AI_BUFFER_SIZE(buffer), fmt));
    }

    /* Print the model metadata reported by the Cube.AI runtime */
    void aiPrintNetworkInfo(const ai_network_report report)
    {
      printf("Model name : %s\n", report.model_name);
      printf(" model signature  : %s\n", report.model_signature);
      printf(" model datetime   : %s\r\n", report.model_datetime);
      printf(" compile datetime : %s\r\n", report.compile_datetime);
      printf(" runtime version  : %d.%d.%d\r\n",
             report.runtime_version.major,
             report.runtime_version.minor,
             report.runtime_version.micro);
      if (report.tool_revision[0])
        printf(" Tool revision    : %s\r\n", (report.tool_revision[0]) ? report.tool_revision : "");
      printf(" tools version    : %d.%d.%d\r\n",
             report.tool_version.major,
             report.tool_version.minor,
             report.tool_version.micro);
      printf(" complexity       : %lu MACC\r\n", (unsigned long)report.n_macc);
      printf(" c-nodes          : %d\r\n", (int)report.n_nodes);
      printf(" map_activations  : %d\r\n", report.map_activations.size);
      for (int idx = 0; idx < report.map_activations.size; idx++) {
        const ai_buffer *buffer = &report.map_activations.buffer[idx];
        printf(" [%d] ", idx);
        aiPrintBufInfo(buffer);
        printf("\r\n");
      }
      printf(" map_weights      : %d\r\n", report.map_weights.size);
      for (int idx = 0; idx < report.map_weights.size; idx++) {
        const ai_buffer *buffer = &report.map_weights.buffer[idx];
        printf(" [%d] ", idx);
        aiPrintBufInfo(buffer);
        printf("\r\n");
      }
    }

    /*
     * Bootstrap
     */
    int aiInit(void) {
      ai_error err;

      /* Create and initialize the c-model */
      const ai_handle acts[] = { activations };
      err = ai_har_ign_create_and_init(&network, acts, NULL);
      if (err.type != AI_ERROR_NONE) {
        printf("ai_error_type:%d,ai_error_code:%d\r\n", err.type, err.code);
      };

      ai_network_report report;
      if (ai_har_ign_get_report(network, &report) != true) {
        printf("ai get report error\n");
        return -1;
      }
      aiPrintNetworkInfo(report);

      /* Retrieve pointers to the model's input/output tensors */
      ai_input = ai_har_ign_inputs_get(network, NULL);
      ai_output = ai_har_ign_outputs_get(network, NULL);

      fmt_input = AI_BUFFER_FORMAT(ai_input);
      fmt_output = AI_BUFFER_FORMAT(ai_output);
      printf(" n_inputs/n_outputs : %u/%u\r\n", report.n_inputs, report.n_outputs);
      printf("input  :");
      aiPrintBufInfo(ai_input);
      aiPrintDataType(fmt_input);
      aiPrintDataInfo(ai_input, fmt_input);

      printf("output :");
      aiPrintBufInfo(ai_output);
      aiPrintDataType(fmt_output);
      aiPrintDataInfo(ai_output, fmt_output);
      return 0;
    }

    /* Synthesize an input window from the factor received over the UART */
    int acquire_and_process_data(void *in_data, int factor)
    {
      printf("in_data:");
      for (int i = 0; i < AI_HAR_IGN_IN_1_SIZE; i++)
      {
        switch (i % 3) {  /* generate the x/y/z channels in turn */
        case 0:
          ((ai_float*)in_data)[i] = -175 + (ai_float)(i * factor * 1.2) / 10.0;
          break;
        case 1:
          ((ai_float*)in_data)[i] = 50 + (ai_float)(i * factor * 0.6) / 100.0;
          break;
        case 2:
          ((ai_float*)in_data)[i] = 975 - (ai_float)(i * factor * 1.8) / 100.0;
          break;
        default:
          break;
        }
        printf("%f ", ((ai_float*)in_data)[i]);
      }
      printf("\n");
      return 0;
    }

    /*
     * Run inference
     */
    int aiRun(const void *in_data, void *out_data) {
      ai_i32 n_batch;
      ai_error err;

      /* 1 - Update IO handlers with the data payload */
      ai_input[0].data = AI_HANDLE_PTR(in_data);
      ai_output[0].data = AI_HANDLE_PTR(out_data);

      /* 2 - Perform the inference */
      n_batch = ai_har_ign_run(network, &ai_input[0], &ai_output[0]);
      if (n_batch != 1) {
        err = ai_har_ign_get_error(network);
        printf("ai_error_type:%d,ai_error_code:%d\r\n", err.type, err.code);
      };
      return 0;
    }

    /* Print the class scores produced by the network */
    int post_process(void *out_data)
    {
      printf("out_data:");
      for (int i = 0; i < AI_HAR_IGN_OUT_1_SIZE; i++)
      {
        printf("%f ", ((ai_float*)out_data)[i]);
      }
      printf("\n");
      return 0;
    }
    /* USER CODE END 0 */

    /**
     * @brief The application entry point.
     * @retval int
     */
    int main(void)
    {
      /* USER CODE BEGIN 1 */
      /* USER CODE END 1 */

      /* MCU Configuration--------------------------------------------------------*/
      /* Reset of all peripherals, Initializes the Flash interface and the Systick. */
      HAL_Init();

      /* USER CODE BEGIN Init */
      /* USER CODE END Init */

      /* Configure the system clock */
      SystemClock_Config();

      /* USER CODE BEGIN SysInit */
      /* USER CODE END SysInit */

      /* Initialize all configured peripherals */
      MX_GPIO_Init();
      MX_DFSDM1_Init();
      MX_I2C2_Init();
      MX_QUADSPI_Init();
      MX_SPI3_Init();
      MX_USART1_UART_Init();
      MX_USART3_UART_Init();
      MX_USB_OTG_FS_PCD_Init();
      MX_CRC_Init();

      /* USER CODE BEGIN 2 */
      ResetPrintInit(&huart1);
      HAL_UART_Receive_IT(&huart1, (uint8_t *)&USART_NewData, 1);  // arm the UART receive interrupt
      USART_RX_STA = 0;
      aiInit();
      uint8_t factor = 1;
      buf_print();
      /* USER CODE END 2 */

      /* Infinite loop */
      /* USER CODE BEGIN WHILE */
      while (1)
      {
        if (USART_RX_STA & 0xC000) {  // a complete line or an overflow was received
          printf("uart1:%.*s\r\n", USART_RX_STA & 0x0FFF, USART_RX_BUF);
          if (strstr((const char*)USART_RX_BUF, (const char*)"test"))
          {
            factor = ((uint8_t)USART_RX_BUF[4] - 0x30);  // digit right after "test" is the factor
            printf("factor:%d\n", factor);
            acquire_and_process_data(in_data, factor);
            aiRun(in_data, out_data);
            post_process(out_data);
          }
          USART_RX_STA = 0;  // reset the receive state and wait for the next line
          HAL_Delay(100);    // wait
        }
        /* USER CODE END WHILE */
        /* USER CODE BEGIN 3 */
      }
      /* USER CODE END 3 */
    }

    // ... the rest of the generated code ...

7.4 Building and Running the Test

Configure the project's output file formats and set up the run configuration:

Build and download the program:

Open a serial assistant and watch the log output. Send a message such as test7 (7 is then used as the factor to generate the input data) and observe the output.

7.5 Final Remarks

At this point the STM32 embedded AI workflow with CubeIDE + Cube.AI + Keras runs end to end, but the results coming back over the serial log do not really make sense: during data collection only the three instantaneous sensor values were captured per reading, while the trained model expects an input window of 24 samples by default, so the two do not match. The official HAR training project therefore needs further analysis so that model training lines up with the collected data; see Part 2.
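
Numerically, the mismatch is easy to see: the C model's input tensor is (1, 24, 3, 1), i.e. a window of 24 consecutive accelerometer readings × 3 axes = 72 floats per inference, not a single 3-value reading. A rough sketch of slicing a logged CSV into such windows (same assumed column names as in section 5.1; the file name is a placeholder):

    import numpy as np
    import pandas as pd

    SEQ_LEN = 24  # window length the model expects (--seqLength default)

    df = pd.read_csv("Log_data/example.csv",
                     names=["time", "activity", "acc_x", "acc_y", "acc_z"], header=0)
    acc = df[["acc_x", "acc_y", "acc_z"]].to_numpy()

    # Non-overlapping windows shaped like the model input (N, 24, 3, 1)
    n = len(acc) // SEQ_LEN
    windows = acc[:n * SEQ_LEN].reshape(n, SEQ_LEN, 3, 1)
    print(windows.shape)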

That said, the official HAR training project is rather complex and not ideal for learning how Cube.AI is actually used, so later articles will set it aside and write a training project from scratch, generating the neural network model from self-collected data so that its inputs and outputs match, and evaluating it with data sampled from the sensors in real time; see Part 3.
