- 2024-05-30
-
Posted a thread:
#AI Challenge Camp Final Station# Deploying a Handwritten Digit Recognition Model
I. Image Flashing and Environment Setup
1. Flashing the image
https://wiki.luckfox.com/zh/Luckfox-Pico/Luckfox-Pico-prepare provides both an Ubuntu image and a buildroot image; I chose the buildroot image. Following the steps in "SPI NAND Flash Image Flashing | LUCKFOX WIKI" completes the flashing.
2. Setting up the development environment
Follow "SDK Environment Deployment (PC) | LUCKFOX WIKI" to set up the environment.
Start an interactive container named "luckfox", map the SDK directory on the host to /home inside the container, and launch a Bash shell (this only needs to be run once):
sudo docker run -it --name luckfox -v /home/jiang/luckfox-pico:/home luckfoxtech/luckfox_pico:1.0 /bin/bash
Open the Docker container named "luckfox":
sudo docker start -ai luckfox
Images can also be compiled in this environment, but that takes too long, so I skipped it; the example in the next section must be compiled in this environment.
II. Compiling the Example
The example comes from https://gitee.com/luyism/luckfox_rtsp_mnist — thanks to the author for open-sourcing it.
After entering the project directory, run the following commands to build it:
export LUCKFOX_SDK_PATH=/home
mkdir build
cd build
cmake ..
make && make install
The code in the main function's while loop of this example is as follows:
// get vpss frame
s32Ret = RK_MPI_VPSS_GetChnFrame(0, 0, &stVpssFrame, -1);
if (s32Ret == RK_SUCCESS)
{
    void *data = RK_MPI_MB_Handle2VirAddr(stVpssFrame.stVFrame.pMbBlk);
    // Wrap the frame buffer in a cv::Mat (no copy), then run model inference on it
    cv::Mat frame(height, width, CV_8UC3, data);
    // Locate the digit's bounding box in the image
    cv::Rect digit_rect = find_digit_contour(frame);
    if (digit_rect.area() > 0)
    {
        cv::Mat digit_region = frame(digit_rect);
        cv::Mat preprocessed = preprocess_digit_region(digit_region);
        // Run inference
        run_inference(&app_mnist_ctx, preprocessed);
        // Fetch the predicted digit and its probability from predictions_queue
        if (!predictions_queue.empty())
        {
            Prediction prediction = predictions_queue.back();
            cv::rectangle(frame, digit_rect, cv::Scalar(0, 255, 0), 2);
            // Draw the predicted digit on the frame (font scale 1, thickness 2)
            cv::putText(frame, std::to_string(prediction.digit), cv::Point(digit_rect.x, digit_rect.y - 10),
                        cv::FONT_HERSHEY_SIMPLEX, 1, cv::Scalar(255, 0, 0), 2);
            // Draw the prediction probability next to the digit
            cv::putText(frame, std::to_string(prediction.probability), cv::Point(digit_rect.x + 30, digit_rect.y - 10),
                        cv::FONT_HERSHEY_SIMPLEX, 0.7, cv::Scalar(230, 0, 0), 2);
            // Print the predicted digit and its probability
            // printf("****** Predicted digit: %d, Probability: %.2f ******\n", prediction.digit, prediction.probability);
            // Remove the prediction just consumed from predictions_queue
            predictions_queue.pop_back();
        }
    }
    sprintf(fps_text, "fps:%.2f", fps);
    cv::putText(frame, fps_text,
                cv::Point(40, 40),
                cv::FONT_HERSHEY_SIMPLEX, 1,
                cv::Scalar(0, 255, 0), 2);
    memcpy(data, frame.data, width * height * 3);
}
// send stream
// encode H264
RK_MPI_VENC_SendFrame(0, &stVpssFrame, -1);
// rtsp
s32Ret = RK_MPI_VENC_GetStream(0, &stFrame, -1);
if (s32Ret == RK_SUCCESS)
{
    if (g_rtsplive && g_rtsp_session)
    {
        // printf("len = %d PTS = %d \n", stFrame.pstPack->u32Len, stFrame.pstPack->u64PTS);
        void *pData = RK_MPI_MB_Handle2VirAddr(stFrame.pstPack->pMbBlk);
        rtsp_tx_video(g_rtsp_session, (uint8_t *)pData, stFrame.pstPack->u32Len,
                      stFrame.pstPack->u64PTS);
        rtsp_do_event(g_rtsplive);
    }
    RK_U64 nowUs = TEST_COMM_GetNowUs();
    fps = (float)1000000 / (float)(nowUs - stVpssFrame.stVFrame.u64PTS);
}
// release frame
s32Ret = RK_MPI_VPSS_ReleaseChnFrame(0, 0, &stVpssFrame);
if (s32Ret != RK_SUCCESS)
{
    RK_LOGE("RK_MPI_VPSS_ReleaseChnFrame fail %x", s32Ret);
}
s32Ret = RK_MPI_VENC_ReleaseStream(0, &stFrame);
if (s32Ret != RK_SUCCESS)
{
    RK_LOGE("RK_MPI_VENC_ReleaseStream fail %x", s32Ret);
}
After grabbing a frame, morphological operations locate the digit in the image, and the neural network then classifies the cropped digit, as shown in the sketch below.
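As a rough illustration of what find_digit_contour and preprocess_digit_region do, here is a minimal Python/OpenCV sketch (the example itself implements this in C++; the threshold and kernel choices below are my assumptions, not the project's exact code):

import cv2

def find_digit_contour(frame):
    # Binarize and close small gaps so the digit forms one connected blob
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    # The largest external contour is assumed to be the digit
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return (0, 0, 0, 0)
    return cv2.boundingRect(max(contours, key=cv2.contourArea))  # (x, y, w, h)

def preprocess_digit_region(region):
    # Convert the crop to the 28x28 grayscale format the MNIST model expects
    gray = cv2.cvtColor(region, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    return cv2.resize(binary, (28, 28))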
III. Results
Upload the generated `luckfox_rtsp_mnist_dir` folder to the board, enter it, and run:
./luckfox_rtsp_mnist model/best.rknn
The experimental results are in the attached result.mp4.
The model reliably recognizes 0, 2, 3, 4, and 5, but overall accuracy is not ideal: 7, 8, and 9 are all misclassified as 3, probably because quantization used only a single image, whose class was 3.
Quantizing with more images should give better results; one way to build a larger calibration set is sketched below.
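A minimal sketch of building a multi-image calibration set from MNIST (the directory layout and file names here are my assumptions; rknn-toolkit2 only requires that dataset.txt list one image path per line):

import os
import torchvision

# Export a few images of each digit as PNGs and list them in dataset.txt
dataset = torchvision.datasets.MNIST(root="./data0", train=True, download=True)
per_class = {c: 0 for c in range(10)}
os.makedirs("calib", exist_ok=True)
with open("dataset.txt", "w") as f:
    for img, label in dataset:
        if per_class[label] >= 10:  # e.g. 10 images per digit
            continue
        path = f"calib/{label}_{per_class[label]}.png"
        img.save(path)  # without a transform, MNIST yields PIL images
        f.write(path + "\n")
        per_class[label] += 1
        if all(v >= 10 for v in per_class.values()):
            break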
-
Replied to a thread:
#AI Challenge Camp Station 2# Converting an ONNX Model to RKNN
This is the image I used for quantization.
- 2024-05-10
-
Replied to a thread:
[AI Challenge Camp Station 2] Packaging the Algorithm Deployment into an SDK
1. ONNX is an open standard for representing deep learning models that lets models move between different frameworks. It defines a set of environment- and platform-independent formats that improve the interoperability of AI models. In practice, you can train a model in PyTorch or TensorFlow, export it to ONNX, and then convert it into the format supported by the target device. RKNN is Rockchip's model format for deploying and running neural networks on its NPUs; it provides efficient inference and makes AI applications on embedded devices more convenient and efficient.
2. Thread link: #AI Challenge Camp Station 2# Converting an ONNX Model to RKNN https://bbs.eeworld.com.cn/thread-1281445-1-1.html
-
Posted a thread:
#AI Challenge Camp Station 2# Converting an ONNX Model to RKNN
Last edited by jianghelong on 2024-5-10 16:30
1. Environment setup
(1) Install Anaconda (skip if already installed)
./Anaconda3-2023.07-2-Linux-x86_64.sh
Press Enter, scroll through the license, type yes and press Enter to start the installation; after it finishes, type yes again to initialize anaconda3.
(2) Create a virtual environment
# Restart the shell, or run source ~/.bashrc, to enter the anaconda environment
source ~/.bashrc
# Create an environment; here it is named toolkit2_1.6 and uses Python 3.8
conda create -n toolkit2_1.6 python=3.8
# Activate the environment
conda activate toolkit2_1.6
(3) Install the pinned libraries and the whl file
# Clone the toolkit2 source
git clone https://github.com/airockchip/rknn-toolkit2
# Configure the pip mirror
pip3 config set global.index-url https://mirror.baidu.com/pypi/simple
# Install the pinned dependencies (choose the file matching your Python version)
cd rknn-toolkit2
pip3 install -r packages/requirements_cp38-2.0.0b0.txt
# Install the whl file matching your Python and rknn_toolkit2 versions
pip3 install packages/rknn_toolkit2-2.0.0b0+9bab5682-cp38-cp38-linux_x86_64.whl
2. Conversion script
A single image from the MNIST dataset was used for quantization here (the dataset.txt format it is listed in is shown after the script).
from rknn.api import RKNN

if __name__ == '__main__':
    # Create the RKNN object
    rknn = RKNN(verbose=True)

    print('--> Config model')
    rknn.config(mean_values=[[0]], std_values=[[1]], target_platform='rv1106')
    print('done')

    # Load the ONNX model
    print('--> Loading model')
    ret = rknn.load_onnx(model="new.onnx")
    if ret != 0:
        print('Load model failed!')
        exit(ret)
    print('done')

    # Build (and quantize) the model
    print('--> Building model')
    # ret = rknn.build(do_quantization=QUANTIZE_ON, dataset=DATASET)
    ret = rknn.build(do_quantization=True, dataset="dataset.txt")
    if ret != 0:
        print('Build model failed!')
        exit(ret)
    print('done')

    # Export the RKNN model
    print('--> Export rknn model')
    ret = rknn.export_rknn("best.rknn")
    if ret != 0:
        print('Export rknn model failed!')
        exit(ret)
    print('done')

    rknn.release()
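The dataset argument above points to a plain-text file listing one calibration image path per line; with the single class-3 image used here (file name hypothetical), dataset.txt contains just one line:

./3.png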
The output log is as follows:
I rknn-toolkit2 version: 2.0.0b0+9bab5682
--> Config model
done
--> Loading model
I Loading : 100%|██████████████████████████████████████████████████| 7/7 [00:00<00:00, 29448.47it/s]
done
--> Building model
D base_optimize ...
D base_optimize done.
D
D fold_constant ...
D fold_constant done.
D
D correct_ops ...
D correct_ops done.
D
D fuse_ops ...
D fuse_ops results:
D replace_reshape_gemm_by_conv: remove node = ['/Reshape', '/linear_layer/Gemm'], add node = ['/linear_layer/Gemm_2conv', '/linear_layer/Gemm_2conv_reshape']
D fold_constant ...
D fold_constant done.
D fuse_ops done.
D
D sparse_weight ...
D sparse_weight done.
D
I GraphPreparing : 100%|████████████████████████████████████████████| 8/8 [00:00<00:00, 6011.18it/s]
I Quantizating : 100%|██████████████████████████████████████████████| 8/8 [00:00<00:00, 1291.55it/s]
D
D quant_optimizer ...
D quant_optimizer results:
D adjust_relu: ['/conv_layer2/conv_layer2.1/Relu', '/conv_layer1/conv_layer1.1/Relu']
D quant_optimizer done.
D
W build: The default input dtype of 'input.1' is changed from 'float32' to 'int8' in rknn model for performance!
Please take care of this change when deploy rknn model with Runtime API!
W build: The default output dtype of '21' is changed from 'float32' to 'int8' in rknn model for performance!
Please take care of this change when deploy rknn model with Runtime API!
I rknn building ...
I RKNN: [16:27:56.472] compress = 0, conv_eltwise_activation_fuse = 1, global_fuse = 1, multi-core-model-mode = 7, output_optimize = 1, layout_match = 1, enable_argb_group = 0
I RKNN: librknnc version: 2.0.0b0 (35a6907d79@2024-03-24T02:34:11)
D RKNN: [16:27:56.473] RKNN is invoked
D RKNN: [16:27:56.475] >>>>>> start: rknn::RKNNExtractCustomOpAttrs
D RKNN: [16:27:56.475] <<<<<<<< end: rknn::RKNNExtractCustomOpAttrs
D RKNN: [16:27:56.475] >>>>>> start: rknn::RKNNSetOpTargetPass
D RKNN: [16:27:56.475] <<<<<<<< end: rknn::RKNNSetOpTargetPass
D RKNN: [16:27:56.475] >>>>>> start: rknn::RKNNBindNorm
D RKNN: [16:27:56.475] <<<<<<<< end: rknn::RKNNBindNorm
D RKNN: [16:27:56.475] >>>>>> start: rknn::RKNNAddFirstConv
D RKNN: [16:27:56.475] <<<<<<<< end: rknn::RKNNAddFirstConv
D RKNN: [16:27:56.475] >>>>>> start: rknn::RKNNEliminateQATDataConvert
D RKNN: [16:27:56.475] <<<<<<<< end: rknn::RKNNEliminateQATDataConvert
D RKNN: [16:27:56.475] >>>>>> start: rknn::RKNNTileGroupConv
D RKNN: [16:27:56.476] <<<<<<<< end: rknn::RKNNTileGroupConv
D RKNN: [16:27:56.476] >>>>>> start: rknn::RKNNTileFcBatchFuse
D RKNN: [16:27:56.476] <<<<<<<< end: rknn::RKNNTileFcBatchFuse
D RKNN: [16:27:56.476] >>>>>> start: rknn::RKNNAddConvBias
D RKNN: [16:27:56.476] <<<<<<<< end: rknn::RKNNAddConvBias
D RKNN: [16:27:56.476] >>>>>> start: rknn::RKNNTileChannel
D RKNN: [16:27:56.476] <<<<<<<< end: rknn::RKNNTileChannel
D RKNN: [16:27:56.476] >>>>>> start: rknn::RKNNPerChannelPrep
D RKNN: [16:27:56.476] <<<<<<<< end: rknn::RKNNPerChannelPrep
D RKNN: [16:27:56.476] >>>>>> start: rknn::RKNNBnQuant
D RKNN: [16:27:56.476] <<<<<<<< end: rknn::RKNNBnQuant
D RKNN: [16:27:56.476] >>>>>> start: rknn::RKNNFuseOptimizerPass
D RKNN: [16:27:56.476] <<<<<<<< end: rknn::RKNNFuseOptimizerPass
D RKNN: [16:27:56.476] >>>>>> start: rknn::RKNNTurnAutoPad
D RKNN: [16:27:56.476] <<<<<<<< end: rknn::RKNNTurnAutoPad
D RKNN: [16:27:56.476] >>>>>> start: rknn::RKNNInitRNNConst
D RKNN: [16:27:56.476] <<<<<<<< end: rknn::RKNNInitRNNConst
D RKNN: [16:27:56.476] >>>>>> start: rknn::RKNNInitCastConst
D RKNN: [16:27:56.476] <<<<<<<< end: rknn::RKNNInitCastConst
D RKNN: [16:27:56.476] >>>>>> start: rknn::RKNNMultiSurfacePass
D RKNN: [16:27:56.476] <<<<<<<< end: rknn::RKNNMultiSurfacePass
D RKNN: [16:27:56.476] >>>>>> start: rknn::RKNNReplaceConstantTensorPass
D RKNN: [16:27:56.476] <<<<<<<< end: rknn::RKNNReplaceConstantTensorPass
D RKNN: [16:27:56.476] >>>>>> start: rknn::RKNNSubgraphManager
D RKNN: [16:27:56.476] <<<<<<<< end: rknn::RKNNSubgraphManager
D RKNN: [16:27:56.476] >>>>>> start: OpEmit
D RKNN: [16:27:56.476] <<<<<<<< end: OpEmit
D RKNN: [16:27:56.476] >>>>>> start: rknn::RKNNLayoutMatchPass
I RKNN: [16:27:56.476] AppointLayout: t->setNativeLayout(64), tname:[input.1]
I RKNN: [16:27:56.476] AppointLayout: t->setNativeLayout(64), tname:[/conv_layer1/conv_layer1.1/Relu_output_0]
I RKNN: [16:27:56.476] AppointLayout: t->setNativeLayout(64), tname:[/conv_layer1/conv_layer1.2/MaxPool_output_0]
I RKNN: [16:27:56.476] AppointLayout: t->setNativeLayout(64), tname:[/conv_layer2/conv_layer2.1/Relu_output_0]
I RKNN: [16:27:56.476] AppointLayout: t->setNativeLayout(64), tname:[/conv_layer2/conv_layer2.2/MaxPool_output_0]
I RKNN: [16:27:56.476] AppointLayout: t->setNativeLayout(64), tname:[/linear_layer/Gemm_2conv_output]
I RKNN: [16:27:56.476] AppointLayout: t->setNativeLayout(0), tname:[21]
D RKNN: [16:27:56.476] <<<<<<<< end: rknn::RKNNLayoutMatchPass
D RKNN: [16:27:56.476] >>>>>> start: rknn::RKNNAddSecondaryNode
D RKNN: [16:27:56.476] <<<<<<<< end: rknn::RKNNAddSecondaryNode
D RKNN: [16:27:56.476] >>>>>> start: OpEmit
D RKNN: [16:27:56.476] finish initComputeZoneMap
D RKNN: [16:27:56.476] <<<<<<<< end: OpEmit
D RKNN: [16:27:56.476] >>>>>> start: rknn::RKNNSubGraphMemoryPlanPass
D RKNN: [16:27:56.476] <<<<<<<< end: rknn::RKNNSubGraphMemoryPlanPass
D RKNN: [16:27:56.476] >>>>>> start: rknn::RKNNProfileAnalysisPass
D RKNN: [16:27:56.476] node: Reshape:/linear_layer/Gemm_2conv_reshape, Target: NPU
D RKNN: [16:27:56.476] <<<<<<<< end: rknn::RKNNProfileAnalysisPass
D RKNN: [16:27:56.477] >>>>>> start: rknn::RKNNOperatorIdGenPass
D RKNN: [16:27:56.477] <<<<<<<< end: rknn::RKNNOperatorIdGenPass
D RKNN: [16:27:56.477] >>>>>> start: rknn::RKNNWeightTransposePass
W RKNN: [16:27:56.477] Warning: Tensor /linear_layer/Gemm_2conv_reshape_shape need paramter qtype, type is set to float16 by default!
W RKNN: [16:27:56.477] Warning: Tensor /linear_layer/Gemm_2conv_reshape_shape need paramter qtype, type is set to float16 by default!
D RKNN: [16:27:56.477] <<<<<<<< end: rknn::RKNNWeightTransposePass
D RKNN: [16:27:56.477] >>>>>> start: rknn::RKNNCPUWeightTransposePass
D RKNN: [16:27:56.477] <<<<<<<< end: rknn::RKNNCPUWeightTransposePass
D RKNN: [16:27:56.477] >>>>>> start: rknn::RKNNModelBuildPass
D RKNN: [16:27:56.479] <<<<<<<< end: rknn::RKNNModelBuildPass
D RKNN: [16:27:56.479] >>>>>> start: rknn::RKNNModelRegCmdbuildPass
D RKNN: [16:27:56.479] ------------------------------------------------------------------------------------------------------------------------------------------------------------------------
D RKNN: [16:27:56.479] Network Layer Information Table
D RKNN: [16:27:56.479] ------------------------------------------------------------------------------------------------------------------------------------------------------------------------
D RKNN: [16:27:56.479] ID OpType DataType Target InputShape OutputShape Cycles(DDR/NPU/Total) RW(KB) FullName
D RKNN: [16:27:56.479] ------------------------------------------------------------------------------------------------------------------------------------------------------------------------
D RKNN: [16:27:56.479] 0 InputOperator INT8 CPU \ (1,1,28,28) 0/0/0 0 InputOperator:input.1
D RKNN: [16:27:56.479] 1 ConvRelu INT8 NPU (1,1,28,28),(16,1,3,3),(16) (1,16,28,28) 2279/7056/7056 1 Conv:/conv_layer1/conv_layer1.0/Conv
D RKNN: [16:27:56.479] 2 MaxPool INT8 NPU (1,16,28,28) (1,16,14,14) 2546/0/2546 12 MaxPool:/conv_layer1/conv_layer1.2/MaxPool
D RKNN: [16:27:56.479] 3 ConvRelu INT8 NPU (1,16,14,14),(32,16,3,3),(32) (1,32,14,14) 2318/3744/3744 7 Conv:/conv_layer2/conv_layer2.0/Conv
D RKNN: [16:27:56.479] 4 MaxPool INT8 NPU (1,32,14,14) (1,32,7,7) 1273/0/1273 6 MaxPool:/conv_layer2/conv_layer2.2/MaxPool
D RKNN: [16:27:56.479] 5 Conv INT8 NPU (1,32,7,7),(10,32,7,7),(10) (1,10,1,1) 2824/784/2824 16 Conv:/linear_layer/Gemm_2conv
D RKNN: [16:27:56.479] 6 Reshape INT8 NPU (1,10,1,1),(2) (1,10) 7/0/7 0 Reshape:/linear_layer/Gemm_2conv_reshape
D RKNN: [16:27:56.479] 7 OutputOperator INT8 CPU (1,10) \ 0/0/0 0 OutputOperator:21
D RKNN: [16:27:56.479] ------------------------------------------------------------------------------------------------------------------------------------------------------------------------
D RKNN: [16:27:56.479] <<<<<<<< end: rknn::RKNNModelRegCmdbuildPass
D RKNN: [16:27:56.479] >>>>>> start: rknn::RKNNFlatcModelBuildPass
D RKNN: [16:27:56.479] Export Mini RKNN model to /tmp/tmppq9867f1/check.rknn
D RKNN: [16:27:56.479] >>>>>> end: rknn::RKNNFlatcModelBuildPass
D RKNN: [16:27:56.480] >>>>>> start: rknn::RKNNMemStatisticsPass
D RKNN: [16:27:56.480] ----------------------------------------------------------------------------------------------------------------------------------------------------
D RKNN: [16:27:56.480] Feature Tensor Information Table
D RKNN: [16:27:56.480] ------------------------------------------------------------------------------------------------------------------+---------------------------------
D RKNN: [16:27:56.480] ID User Tensor DataType DataFormat OrigShape NativeShape | [Start End) Size
D RKNN: [16:27:56.480] ------------------------------------------------------------------------------------------------------------------+---------------------------------
D RKNN: [16:27:56.480] 1 ConvRelu input.1 INT8 NC1HWC2 (1,1,28,28) (1,1,28,28,1) | 0x00006640 0x000069c0 0x00000380
D RKNN: [16:27:56.480] 2 MaxPool /conv_layer1/conv_layer1.1/Relu_output_0 INT8 NC1HWC2 (1,16,28,28) (1,1,28,28,16) | 0x000069c0 0x00009ac0 0x00003100
D RKNN: [16:27:56.480] 3 ConvRelu /conv_layer1/conv_layer1.2/MaxPool_output_0 INT8 NC1HWC2 (1,16,14,14) (1,1,14,14,16) | 0x00009ac0 0x0000a700 0x00000c40
D RKNN: [16:27:56.480] 4 MaxPool /conv_layer2/conv_layer2.1/Relu_output_0 INT8 NC1HWC2 (1,32,14,14) (1,2,14,14,16) | 0x00006640 0x00007ec0 0x00001880
D RKNN: [16:27:56.480] 5 Conv /conv_layer2/conv_layer2.2/MaxPool_output_0 INT8 NC1HWC2 (1,32,7,7) (1,2,7,7,16) | 0x00007ec0 0x00008540 0x00000680
D RKNN: [16:27:56.480] 6 Reshape /linear_layer/Gemm_2conv_output INT8 NC1HWC2 (1,10,1,1) (1,1,1,1,16) | 0x00006640 0x00006650 0x00000010
D RKNN: [16:27:56.480] 7 OutputOperator 21 INT8 UNDEFINED (1,10) (1,10) | 0x000066c0 0x00006700 0x00000040
D RKNN: [16:27:56.480] ------------------------------------------------------------------------------------------------------------------+---------------------------------
D RKNN: [16:27:56.480] -------------------------------------------------------------------------------------------------------------
D RKNN: [16:27:56.480] Const Tensor Information Table
D RKNN: [16:27:56.480] ---------------------------------------------------------------------------+---------------------------------
D RKNN: [16:27:56.480] ID User Tensor DataType OrigShape | [Start End) Size
D RKNN: [16:27:56.480] ---------------------------------------------------------------------------+---------------------------------
D RKNN: [16:27:56.480] 1 ConvRelu conv_layer1.0.weight INT8 (16,1,3,3) | 0x00000000 0x00000240 0x00000240
D RKNN: [16:27:56.480] 1 ConvRelu conv_layer1.0.bias INT32 (16) | 0x00000240 0x000002c0 0x00000080
D RKNN: [16:27:56.480] 3 ConvRelu conv_layer2.0.weight INT8 (32,16,3,3) | 0x000002c0 0x000014c0 0x00001200
D RKNN: [16:27:56.480] 3 ConvRelu conv_layer2.0.bias INT32 (32) | 0x000014c0 0x000015c0 0x00000100
D RKNN: [16:27:56.480] 5 Conv linear_layer.weight INT8 (10,32,7,7) | 0x000015c0 0x00005300 0x00003d40
D RKNN: [16:27:56.480] 5 Conv linear_layer.bias INT32 (10) | 0x00005300 0x00005380 0x00000080
D RKNN: [16:27:56.480] 6 Reshape /linear_layer/Gemm_2conv_reshape_shape INT64 (2) | 0x00005380*0x000053c0 0x00000040
D RKNN: [16:27:56.480] ---------------------------------------------------------------------------+---------------------------------
D RKNN: [16:27:56.480] ----------------------------------------
D RKNN: [16:27:56.480] Total Internal Memory Size: 16.1875KB
D RKNN: [16:27:56.480] Total Weight Memory Size: 20.9375KB
D RKNN: [16:27:56.480] ----------------------------------------
D RKNN: [16:27:56.480] <<<<<<<< end: rknn::RKNNMemStatisticsPass
I rknn buiding done.
done
--> Export rknn model
done
The resulting rknn model is attached; a quick way to sanity-check it on the PC is sketched below.
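Before board deployment, the quantized model can be exercised on rknn-toolkit2's PC simulator. A hedged sketch: it assumes the simulator accepts the rv1106 target, that 3.png is a 28x28 grayscale test image, and the default NHWC input layout; adjust as needed for your toolkit version:

import cv2
import numpy as np
from rknn.api import RKNN

rknn = RKNN()
rknn.config(mean_values=[[0]], std_values=[[1]], target_platform='rv1106')
rknn.load_onnx(model="new.onnx")
rknn.build(do_quantization=True, dataset="dataset.txt")
rknn.init_runtime()  # no target given: run on the PC simulator
img = cv2.imread("3.png", cv2.IMREAD_GRAYSCALE)
img = img.reshape(1, 28, 28, 1).astype(np.float32)  # NHWC
outputs = rknn.inference(inputs=[img])
print("predicted digit:", int(np.argmax(outputs[0])))
rknn.release()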
- 2024-05-09
-
Replied to a thread:
Shortlist announced: Embedded Engineer AI Challenge Camp (beginner), recipients of the RV1106 Linux board + camera
Personal information confirmed; I will claim the board and continue to complete and share the Station 2 and Station 3 tasks.
- 2024-04-12
-
Replied to a thread:
[AI Challenge Camp Station 1] Model Training: Train a Handwritten Digit Model on a PC and Apply for a Free RV1106 Development Board
1. The essence of model training is using data to adjust a model's parameters so as to optimize its performance; the result is a better-performing model.
2. PyTorch is an open-source Python machine learning library. It can be viewed both as numpy with GPU support and as a powerful deep neural network framework with automatic differentiation (see the short example after this list). PyTorch supports mainstream operating systems such as Windows, Linux, and macOS, and runs on CPUs as well as GPUs and NPUs.
3. #AI Challenge Camp Station 1# MNIST Handwritten Digit Recognition https://bbs.eeworld.com.cn/thread-1277400-1-1.html
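A minimal illustration of the automatic differentiation mentioned in point 2:

import torch

# y = x^2, so dy/dx = 2x; autograd computes this via backward()
x = torch.tensor(3.0, requires_grad=True)
y = x ** 2
y.backward()
print(x.grad)  # tensor(6.)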
-
Replied to a thread:
[AI Challenge Camp Station 1] Model Training: Train a Handwritten Digit Model on a PC and Apply for a Free RV1106 Development Board
1. The essence of deep learning model training is adjusting model parameters to optimize performance. During training, the model predicts on the input data with its current parameters and computes a loss from the gap between predictions and ground-truth labels. Backpropagation then updates the parameters to reduce that loss, improving prediction accuracy.
-
Posted a thread:
#AI Challenge Camp Station 1# MNIST Handwritten Digit Recognition
This experiment implements MNIST handwritten digit recognition with a convolutional neural network of two convolutional layers and one fully connected layer; the resulting model reaches 98.76% accuracy.
Because the network is small, no GPU was used; the CPU is an R5-7530U (6 cores / 12 threads, 2 GHz).
1. The model built for this experiment is shown below; full code is in the attached main.py:
import torch

class ConvolutionalNeuralWork(torch.nn.Module):
    def __init__(self, num_classes=10):
        super(ConvolutionalNeuralWork, self).__init__()
        # 1x28x28 -> 16x28x28 -> 16x14x14
        self.conv_layer1 = torch.nn.Sequential(
            torch.nn.Conv2d(1, 16, kernel_size=3, stride=1, padding=1),
            torch.nn.ReLU(),
            torch.nn.MaxPool2d(kernel_size=2, stride=2))
        # 16x14x14 -> 32x14x14 -> 32x7x7
        self.conv_layer2 = torch.nn.Sequential(
            torch.nn.Conv2d(16, 32, kernel_size=3, stride=1, padding=1),
            torch.nn.ReLU(),
            torch.nn.MaxPool2d(kernel_size=2, stride=2))
        # 32 * 7 * 7 = 1568 flattened features feed the classifier
        self.linear_layer = torch.nn.Linear(1568, num_classes)

    def forward(self, x):
        x = self.conv_layer1(x)
        x = self.conv_layer2(x)
        x = x.reshape(x.size(0), -1)  # flatten to (batch, 1568)
        x = self.linear_layer(x)
        return x
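A quick sanity check of the flattened feature size (32 * 7 * 7 = 1568) and output shape, assuming the definition above:

model = ConvolutionalNeuralWork()
dummy = torch.zeros(1, 1, 28, 28)  # one MNIST-sized grayscale image
print(model(dummy).shape)          # torch.Size([1, 10])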
2. The training code is shown below; the model was trained for 100 epochs. Full code is in the attached main.py:
import openpyxl
import torch
import torchvision

# Record per-epoch metrics in a spreadsheet
wb = openpyxl.Workbook()
ws = wb.active
ws.cell(1, 1).value = "epoch"
ws.cell(1, 2).value = "train loss"
ws.cell(1, 3).value = "val accuracy"

num_epochs = 100
batch_size = 100
num_class = 10
learning_rate = 0.0001

# Load the data
train_dataset = torchvision.datasets.MNIST(root="./data0", train=True, transform=torchvision.transforms.ToTensor(), download=False)
test_dataset = torchvision.datasets.MNIST(root="./data0", train=False, transform=torchvision.transforms.ToTensor(), download=False)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset, batch_size=batch_size, shuffle=False)

model = ConvolutionalNeuralWork(num_class)  # defined above
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

best_accuracy = 0
for epoch in range(num_epochs):
    loss_sum = 0
    # Train
    for image, label in train_loader:
        outputs = model(image)
        loss = criterion(outputs, label)
        loss_sum += loss.item()
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
    print("epoch:", epoch)
    print("train loss:", loss_sum / train_dataset.data.shape[0])
    ws.cell(epoch + 2, 1).value = epoch + 1
    ws.cell(epoch + 2, 2).value = loss_sum / train_dataset.data.shape[0]
    total = 0
    correct = 0
    # Test (no_grad avoids building autograd graphs during evaluation)
    with torch.no_grad():
        for image, label in test_loader:
            outputs = model(image)
            _, predicted = torch.max(outputs, 1)
            total = label.size(0) + total
            for i in range(label.size(0)):
                if label[i] == predicted[i]:
                    correct = correct + 1
    print("val accuracy:", correct / total)
    ws.cell(epoch + 2, 3).value = correct / total
    if correct / total > best_accuracy:
        best_accuracy = correct / total
        print("better model")
        # Save the whole model object whenever validation accuracy improves
        torch.save(model, "best.pth")
        ws.cell(epoch + 2, 4).value = "better model"
    print('\n')
wb.save("result.xlsx")
3. The training-set loss and test-set accuracy over the course of training are recorded in the attached result.xlsx.
4. The final model file (*.pth) and the converted ONNX model are attached; a sketch of the export step follows.
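For reference, a minimal sketch of exporting the saved .pth model to ONNX (the actual export script is not shown in this post; the dummy-input approach is my assumption, and the output name new.onnx matches what the Station 2 conversion script loads):

import torch

# Load the full-model checkpoint saved during training (the class
# ConvolutionalNeuralWork must be importable) and export it to ONNX
model = torch.load("best.pth")
model.eval()
dummy_input = torch.zeros(1, 1, 28, 28)  # one MNIST-sized input
torch.onnx.export(model, dummy_input, "new.onnx")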