- 2024-12-19
-
Replied to topic:
Fitted an automatic roll-up tarp system on a greenhouse for the boss's friend; didn't understand three-phase power, burned out the varistor, almost...
Just use a programmable time switch and be done with it; building it on an MCU means you also have to worry about stability.
- 2024-12-15
-
Posted topic:
Embedded Engineer AI Challenge Camp RV1106 Face Recognition + Running Log (6)
Continuing from the previous post.
File path:
Luckfox Pico\示例程序\RKNN示例程序\luckfox_rknn.zip\luckfox_rknn\scripts\luckfox_onnx_to_rknn\sim\retinaface\
Make a copy of the retinaface.py file.
I found that this file is identical to the convert script up until rknn.build; the difference is what comes after build: one exports the converted model, the other calls rknn.inference to run inference.
After modifying target_platform and dynamic_input in rknn.config, I ran it straight away.
Hey, it can run inference, just with lots of float-to-int8 warnings, and it finally stopped at
bboxes, kpss = outputs  # get the output data
The error said outputs has too many values. Wanting to see the result, I printed outputs and found the structure seems different from the original Luckfox one. I lost a few more days here, then added a starred name before the equals sign to absorb the extra return values, as in the sketch below.
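A minimal, self-contained illustration of that starred unpacking (the nine-tensor layout is my assumption, matching scores/boxes/keypoints at strides 8/16/32):
```
# stand-in for the list returned by rknn.inference(...)
outputs = [f"tensor{i}" for i in range(9)]
bboxes, kpss, *extras = outputs    # the starred name absorbs the 7 extra tensors
print(bboxes, kpss, len(extras))   # tensor0 tensor1 7
```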
But the computation and conversion below still threw errors. Headache.
In the end I went back to the outputs and, in "put the elephant in the fridge" step-by-step spirit, went digging in insightface for inspiration.
Following the readme in insightface\python-package:
insightface already wraps all of this.
app = FaceAnalysis(providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
initializes the models.
It calls the face_analysis.py file under insightface\python-package\insightface\app.
app.prepare(ctx_id=0, det_size=(640, 640))
sets the model input size.
faces = app.get(img)
feeds in the image and returns the results.
app.get is the key; its flow is roughly face_analysis under app -> retinaface under model_zoo.
retinaface.py is the core of FaceAnalysis face detection.
app.get calls retinaface's detect,
detect -> forward, and inside forward self.session.run performs the inference.
OK, the inference part is found. So can insightface's parsing of inference results be reused to parse the rknn inference results?
Let's try: moving the code that follows self.session.run in insightface over to the rknn side gives the code below (not cleaned up):
import cv2
import numpy as np
# Assumed context: rknn is the RKNN object already configured and built earlier in this
# script, and infer_img is the preprocessed 640x640 input image.
# distance2bbox, distance2kps and nms are the helper functions copied from
# insightface (model_zoo/retinaface.py); Face comes from insightface.app.common,
# and draw_on is adapted from FaceAnalysis.draw_on.
outputs = rknn.inference(inputs=[infer_img])  # , data_format=['nhwc'])
#print(outputs)
scores_list = []
bboxes_list = []
kpss_list = []
print("forward-------------------")
input_height = 640
input_width = 640
fmc = 3
threshold = 0.5
feat_stride_fpn = [8, 16, 32]
num_anchors = 2
center_cache = {}  # hoisted out of the loop so the anchor cache actually persists
for idx, stride in enumerate(feat_stride_fpn):
    print(feat_stride_fpn, idx, stride)
    scores = outputs[idx]
    bbox_preds = outputs[idx + fmc]
    bbox_preds = bbox_preds * stride
    kps_preds = outputs[idx + fmc * 2] * stride
    height = input_height // stride
    width = input_width // stride
    K = height * width
    key = (height, width, stride)
    print(key, stride)
    if key in center_cache:
        anchor_centers = center_cache[key]
    else:
        #solution-1, c style:
        #anchor_centers = np.zeros((height, width, 2), dtype=np.float32)
        #for i in range(height):
        #    anchor_centers[i, :, 1] = i
        #for i in range(width):
        #    anchor_centers[:, i, 0] = i
        #solution-2:
        #xv, yv = np.meshgrid(np.arange(width), np.arange(height))
        #anchor_centers = np.stack([xv, yv], axis=-1).astype(np.float32)
        #solution-3:
        anchor_centers = np.stack(np.mgrid[:height, :width][::-1], axis=-1).astype(np.float32)
        #print(anchor_centers.shape)
        anchor_centers = (anchor_centers * stride).reshape((-1, 2))
        if num_anchors > 1:
            anchor_centers = np.stack([anchor_centers] * num_anchors, axis=1).reshape((-1, 2))
        if len(center_cache) < 100:
            center_cache[key] = anchor_centers
    pos_inds = np.where(scores >= threshold)[0]
    bboxes = distance2bbox(anchor_centers, bbox_preds)
    pos_scores = scores[pos_inds]
    pos_bboxes = bboxes[pos_inds]
    scores_list.append(pos_scores)
    bboxes_list.append(pos_bboxes)
    kpss = distance2kps(anchor_centers, kps_preds)
    #kpss = kps_preds
    kpss = kpss.reshape((kpss.shape[0], -1, 2))
    pos_kpss = kpss[pos_inds]
    kpss_list.append(pos_kpss)
# end of the part taken from self.forward
print("bboxes_list---------------")
print(bboxes_list)
'''
print("scores_list---------------")
print(scores_list)
print("bboxes_list---------------")
print(bboxes_list)
print("kpss_list-----------------")
print(kpss_list)
print("-----------------------")
'''
det_scale = 0.5
scores = np.vstack(scores_list)
scores_ravel = scores.ravel()
order = scores_ravel.argsort()[::-1]
bboxes = np.vstack(bboxes_list) / det_scale
kpss = np.vstack(kpss_list) / det_scale
pre_det = np.hstack((bboxes, scores)).astype(np.float32, copy=False)
pre_det = pre_det[order, :]
keep = nms(pre_det, 0.4)
det = pre_det[keep, :]
kpss = kpss[order, :, :]
kpss = kpss[keep, :, :]
max_num = 0
print(max_num, det.shape[0])
if max_num > 0 and det.shape[0] > max_num:
    # insightface's center-weighted filtering from detect(); skipped here since max_num = 0
    area = (det[:, 2] - det[:, 0]) * (det[:, 3] - det[:, 1])
    img_center = img.shape[0] // 2, img.shape[1] // 2
    offsets = np.vstack([
        (det[:, 0] + det[:, 2]) / 2 - img_center[1],
        (det[:, 1] + det[:, 3]) / 2 - img_center[0]
    ])
    offset_dist_squared = np.sum(np.power(offsets, 2.0), 0)
    if metric == 'max':
        values = area
    else:
        values = area - offset_dist_squared * 2.0  # some extra weight on the centering
    bindex = np.argsort(values)[::-1]  # some extra weight on the centering
    bindex = bindex[0:max_num]
    det = det[bindex, :]
    if kpss is not None:
        kpss = kpss[bindex, :]
# end of the part taken from self.det_model.detect
bboxes = det
print("bboxes-------------------")
print(bboxes)
print("kpss-------------------")
print(kpss)
'''
if bboxes.shape[0] == 0:
    return []
'''
ret = []
for i in range(bboxes.shape[0]):
    bbox = bboxes[i, 0:4]
    det_score = bboxes[i, 4]
    kps = None
    if kpss is not None:
        kps = kpss[i]
    face = Face(bbox=bbox, kps=kps, det_score=det_score)
    '''
    for taskname, model in self.models.items():
        if taskname=='detection':
            continue
        model.get(img, face)
    '''
    #model.get(img, face)
    ret.append(face)
print("ret---------------")
print(ret)
faces = ret
img = cv2.imread('./test.jpg')
rimg = draw_on(img, faces)
cv2.imwrite("./ldh_output.jpg", rimg)
It runs and prints results, but why is the Y axis offset?
Of course I also ran this image through the rknn example's own inference; the result is shown below.
Currently debugging the coordinate-offset issue.
To be continued...
-
Posted topic:
Embedded Engineer AI Challenge Camp RV1106 Face Recognition + Running Log (5)
This post was last edited by 90houyidai on 2024-12-15 21:54
From here I felt rather lost, so I went back through the digit-recognition posts, which talked about datasets and training models, then rummaged around the insightface project folder.
I found a model_zoo folder whose readme provides download links, but the files are huge, gigabytes each, and with an unstable network they simply wouldn't download.
Then I realized this challenge apparently doesn't require training your own model, so I wondered: the buffalo_l models are ONNX, can they be used directly?
Let's try.
- det_10g: RetinaFace face detection
- w600k_r50: ResNet50 recognition
- 2d106det: 2D 106-point alignment
- 1k3d68: 3D 68-point alignment
- genderage: gender & age attributes
Here I use miniconda3; to avoid environment errors I installed insightface and RKNN-Toolkit2 into the same environment.
Then converted the RKNN models following "RKNN Inference Test | LUCKFOX WIKI".
The script files can be found in the Luckfox netdisk package downloaded earlier.
1. E:\BaiduNetdiskDownload\Luckfox Pico\示例程序\RKNN示例程序\luckfox_rknn.zip\luckfox_rknn\scripts\luckfox_onnx_to_rknn\convert\
In rknn.config, modify target_platform and dynamic_input:
rknn.config(mean_values=[[104, 117, 123]], std_values=[[1, 1, 1]], target_platform='rv1106',remove_reshape=True,
quantized_algorithm="normal", quant_img_RGB2BGR=True,optimization_level=0,dynamic_input=[[[1,3,224,224]]])
python convert.py ../model/det_10g.onnx ../dataset/retinaface_dataset.txt ../model/det_10g.rknn Retinaface
2. else:
rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]], target_platform='rv1106',dynamic_input=[[[1,3,112,112]]])
python convert.py ../model/w600k_r50.onnx ../dataset/retinaface_dataset.txt ../model/w600k_r50.rknn ResNet50
3. else:
rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]], target_platform='rv1106',dynamic_input=[[[1,3,192,192]]])
python convert.py ../model/2d106det.onnx ../dataset/retinaface_dataset.txt ../model/2d106det.rknn 2d106
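For context, the convert.py that these commands drive follows the standard RKNN-Toolkit2 flow; a minimal sketch of that flow as I understand it (the det_10g paths are from above, the 640x640 shape is illustrative):
```
from rknn.api import RKNN

rknn = RKNN()
# normalization, target platform and a fixed dynamic-input shape, as configured above
rknn.config(mean_values=[[104, 117, 123]], std_values=[[1, 1, 1]],
            target_platform='rv1106',
            dynamic_input=[[[1, 3, 640, 640]]])
rknn.load_onnx(model='../model/det_10g.onnx')
# quantized build, calibrated with the image list in the dataset file
rknn.build(do_quantization=True, dataset='../dataset/retinaface_dataset.txt')
rknn.export_rknn('../model/det_10g.rknn')
rknn.release()
```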
When converting the det_10g model, it reported the dynamic_input error below. How to deal with that?
Stuck for several days, I wondered whether the ONNX model file itself could be modified, so I installed onnx-simplifier and onnx-modifier to simplify the ONNX model and change its input layer:
pip install onnx-simplifier
python simplified.py
python -m onnxsim det_10g.onnx simp_sim.onnx --overwrite-input-shape 1,3,640,640
Then I converted again and found it made no difference at all, still the same error (maybe I was using it wrong):
E inference: The input(ndarray) shape (1, 640, 640, 3) is wrong, expect 'nchw' like (1, 3, 640, 640)!
outputs = rknn.inference(inputs=[infer_img],data_format='nhwc')
So, following the hint, I configured the dynamic-input parameter in rknn.config, and hey, it actually passed.
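As an aside, this NHWC-vs-NCHW mismatch can usually also be fixed on the input side, without touching the model config, by transposing the array before inference; a minimal numpy sketch (shapes per the error message above):
```
import numpy as np

infer_img = np.zeros((640, 640, 3), dtype=np.uint8)            # stand-in for the cv2 HWC image
nchw = np.transpose(infer_img[np.newaxis, ...], (0, 3, 1, 2))  # (1, 640, 640, 3) -> (1, 3, 640, 640)
print(nchw.shape)
# then: outputs = rknn.inference(inputs=[nchw])
```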
The conversion seems fine now, but is it really usable? Let's try it on the PC first.
Open the netdisk archive Luckfox Pico\示例程序\RKNN示例程序\luckfox_rknn.zip\luckfox_rknn\scripts\luckfox_onnx_to_rknn\sim\retinaface\
cd sim/retinaface
Test by modifying the retinaface example.
Conversion script path:
Luckfox Pico\示例程序\RKNN示例程序\luckfox_rknn.zip\luckfox_rknn\scripts\luckfox_onnx_to_rknn\convert\
Conversion script
-
Replied to topic:
Embedded Engineer AI Challenge Camp RV1106 Face Recognition + Running Log (1)
Jacktang posted on 2024-12-15 09:39
Done 10 days ago and you can still recall that the biggest compile-time problem was downloading packages over the network; great memory, nice.
I cloned it several times without getting everything; in the end I gave up and pulled it from Gitee instead.
-
Replied to topic:
Embedded Engineer AI Challenge Camp RV1106 Face Recognition + Running Log (2)
Jacktang posted on 2024-12-15 09:41
Does Wiki 8.1 have to be run in a virtual machine?
According to the WIKI it should also run in Docker; since the SDK is needed, it presumably uses the toolchain inside the SDK.
- 2024-12-13
-
Posted topic:
Embedded Engineer AI Challenge Camp RV1106 Face Recognition + Running Log (4)
This post was last edited by 90houyidai on 2024-12-15 11:29
The previous post got the LuckFox side running; on-board inference may still be a little lacking.
Next up is running InsightFace. For convenience I set the InsightFace environment up together with the RKNN-Toolkit2 environment, to ease the later model conversion.
First, environment setup as usual, another round of pip. Initially I followed the IOTWORD (物联沃) article "Face detection and face recognition with insightface" for the setup:
pip install -U insightface -i https://pypi.tuna.tsinghua.edu.cn/simple/
pip install onnxruntime -i https://pypi.tuna.tsinghua.edu.cn/simple/
Choose onnxruntime-gpu or onnxruntime depending on whether you run on GPU or CPU.
With the environment ready, download the InsightFace project:
git clone https://github.com/deepinsight/insightface
There is a Readme file in the \insightface\python-package folder.
Following its instructions, create a test .py file:
import cv2
import numpy as np
import insightface
from insightface.app import FaceAnalysis
from insightface.data import get_image as ins_get_image
app = FaceAnalysis(allowed_modules=['detection'],providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
print("prepare::::")
app.prepare(ctx_id=0, det_size=(640, 640))
img = ins_get_image('t1')  # no file extension needed; put images in ./insightface/python-package/insightface/data/images
faces = app.get(img)
print("faces::::", faces)
rimg = app.draw_on(img, faces)
cv2.imwrite("./ldh_output.jpg", rimg)
handler = insightface.model_zoo.get_model('/home/ubuntu/.insightface/models/buffalo_l/w600k_r50.onnx', providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
handler.prepare(ctx_id=0)
img = ins_get_image('t1')
feature = handler.get(img, faces[0])
print("size of feature:", len(feature))
print("feature:", feature)
feature = handler.get(img, faces[1])
print("size of feature:", len(feature))
print("feature:", feature)
After running it you get an output image in the current directory, with every face in the picture boxed and the face landmarks marked.
The first run is fairly slow; the output shows it downloading the models (not sure about this step, I can't recall exactly).
The buffalo_l model pack is used; it lives under ~/.insightface/models/, and you need to show hidden files to see it.
You can select which model pack to use at the top of the test file: antelopev2, buffalo_l, buffalo_m, buffalo_s, buffalo_sc
```
model_pack_name = 'buffalo_l'
app = FaceAnalysis(name=model_pack_name)
```
From the log you can see what each model does:
1k3d68 detects 3D face landmarks
2d106det detects 2D face landmarks
det_10g detects faces
genderage estimates age and gender
w600k_r50 extracts the face embedding features
find model: /home/ubuntu/.insightface/models/buffalo_l/det_10g.onnx detection [1, 3, '?', '?'] 127.5 128.0
In that log line, det_10g.onnx is the model file, detection is model.taskname, [1, 3, '?', '?'] is model.input_shape, 127.5 is model.input_mean, and 128.0 is model.input_std.
(insightface_chw) ubuntu@ubuntu:~/insightface/python-package$ python3 test1.py
Detection and recognition, returning 512-dimensional features; a sample face entry:
{'bbox': array([466.0821 , 268.6164 , 573.58923, 415.5331 ], dtype=float32),
'kps': array([[491.85046, 321.8314 ],[541.85266, 332.11188],[507.67114, 366.41312],[485.91965, 369.691 ],[533.74945, 378.3811 ]], dtype=float32),
'det_score': 0.9196533},
bbox is the face bounding-box coordinates
kps are the keypoint coordinates
det_score is the detection score
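Not in the original post, but for later reference: two of these 512-dimensional embeddings are usually compared by cosine similarity; a minimal sketch (the function name is mine, and the threshold is a rough ballpark, not a calibrated value):
```
import numpy as np

def cosine_sim(a, b):
    # cosine similarity of two embeddings; closer to 1 means more likely the same person
    a = np.asarray(a, dtype=np.float32)
    b = np.asarray(b, dtype=np.float32)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# e.g. cosine_sim(feature_of_face0, feature_of_face1); same-identity pairs
# typically score well above unrelated pairs (often thresholded around 0.3-0.5)
```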
With that, insightface is more or less tested; next is working out how to combine insightface with luckfox.
-
Posted topic:
Embedded Engineer AI Challenge Camp RV1106 Face Recognition + Running Log (3)
RKNN-Toolkit2 + RKNN inference test
First, download RKNN-Toolkit2:
git clone https://github.com/rockchip-linux/rknn-toolkit2.git
The one above kept failing to clone, so I switched to this one:
git clone https://github.com/airockchip/rknn-toolkit2.git
In the end it still didn't work, so I dug the RKNN-Toolkit2 tool out of the LubanCat resource pack; looking back later, it was already in the netdisk package downloaded at the very start. Several days wasted for nothing.
Then set up the environment following "RKNN Inference Test | LUCKFOX WIKI"; note that the Python version and package versions must match the requirements file, and watch the file paths.
Of course a domestic mirror is more stable:
+ Enter the RKNN-Toolkit2 Conda development environment
pip install -r rknn-toolkit2/rknn-toolkit2/packages/x86_64/requirements_cp38-2.3.0.txt -i https://pypi.tuna.tsinghua.edu.cn/simple/
pip install rknn-toolkit2/rknn-toolkit2/packages/x86_64/rknn_toolkit2-2.3.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl -i https://pypi.tuna.tsinghua.edu.cn/simple/
This tool converts all kinds of network models into RKNN models, and can also be combined with the SDK to build board-side executables.
PC-side inference can be run in Luckfox Pico\示例程序\RKNN示例程序\luckfox_rknn.zip\luckfox_rknn\scripts\luckfox_onnx_to_rknn\sim\retinaface\
python3 retinaface1.py  # run the script
It reported the error below:
E build: Traceback (most recent call last):
File "rknn/api/rknn_log.py", line 344, in rknn.api.rknn_log.error_catch_decorator.error_catch_wrapper
File "rknn/api/rknn_base.py", line 1962, in rknn.api.rknn_base.RKNNBase.build
File "rknn/api/graph_optimizer.py", line 2083, in rknn.api.graph_optimizer.GraphOptimizer.fuse_ops
File "rknn/api/rules/reduce.py", line 4162, in rknn.api.rules.reduce._p_reduce_reshape_op_around_axis_op
File "rknn/api/rknn_utils.py", line 937, in rknn.api.rknn_utils.gen_gather_for_change_split_shape
AttributeError: module 'torch' has no attribute 'arange'
A web search suggested a PyTorch problem, so reinstall it in conda:
conda install pytorch
Then restart the environment and run the script again; this time it reported:
RuntimeError: module compiled against API version 0xe but this version of numpy is 0xd
E build: Traceback (most recent call last):
File "rknn/api/rknn_log.py", line 344, in rknn.api.rknn_log.error_catch_decorator.error_catch_wrapper
File "rknn/api/rknn_base.py", line 1945, in rknn.api.rknn_base.RKNNBase.build
File "rknn/api/graph_optimizer.py", line 627, in rknn.api.graph_optimizer.GraphOptimizer.fold_constant
File "/home/ubuntu/miniconda3/envs/insightface_chw/lib/python3.8/site-packages/onnxruntime/__init__.py", line 57, in <module>
raise import_capi_exception
File "/home/ubuntu/miniconda3/envs/insightface_chw/lib/python3.8/site-packages/onnxruntime/__init__.py", line 23, in <module>
from onnxruntime.capi._pybind_state import ExecutionMode # noqa: F401
File "/home/ubuntu/miniconda3/envs/insightface_chw/lib/python3.8/site-packages/onnxruntime/capi/_pybind_state.py", line 32, in <module>
from .onnxruntime_pybind11_state import * # noqa
ImportError
Looks like the numpy version is wrong, so update numpy:
pip install -U numpy -i https://pypi.tuna.tsinghua.edu.cn/simple/
This time inference ran normally.
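A quick way to confirm the environment is consistent afterwards is simply importing the stack in one interpreter, since the C-API mismatch above fails at import time; a minimal sketch:
```
# if onnxruntime was built against a newer numpy C-API than the installed numpy,
# the import itself raises, so this doubles as an environment check
import numpy
import onnxruntime
print("numpy", numpy.__version__, "| onnxruntime", onnxruntime.__version__)
```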
-
Posted topic:
Embedded Engineer AI Challenge Camp RV1106 Face Recognition + Running Log (2)
The previous post covered building the SDK, which has little to do with the face recognition theme.
Continuing with the WIKI experiments: first I wanted to see how well the board's face detection works, so I reproduced "Drawing RKNN inference results with opencv-mobile" from "RKMPI Example Usage Guide | LUCKFOX WIKI".
Following Wiki 8.1, in the virtual machine:
Get the git repository source: git clone https://github.com/LuckfoxTECH/luckfox_pico_rkmpi_example.git
Set the environment variable:
export LUCKFOX_SDK_PATH=<luckfox-pico SDK path>
Note: use an absolute path.
Run ./build.sh and select the example to build:
1) luckfox_pico_rtsp_opencv
2) luckfox_pico_rtsp_opencv_capture
3) luckfox_pico_rtsp_retinaface
4) luckfox_pico_rtsp_retinaface_osd
5) luckfox_pico_rtsp_yolov5
Enter your choice [1-5]:3
Use the luckfox_pico_rtsp_retinaface example.
In the file .\luckfox_pico_rkmpi_example.7z\luckfox_pico_rkmpi_example\example\luckfox_pico_rtsp_retinaface\src\main.cc, add FPS display and face boxes:
int main(int argc, char *argv[]) {
    system("RkLunch-stop.sh");
    RK_S32 s32Ret = 0;
    int width = DISP_WIDTH;
    int height = DISP_HEIGHT;
    int model_width = 640;
    int model_height = 640;
    // scale factors to map 640x640 model-space boxes back to the display resolution
    float scale_x = (float)width / (float)model_width;
    float scale_y = (float)height / (float)model_height;
    int sX, sY, eX, eY;
    char fps_text[16];
    float fps = 0;
    memset(fps_text, 0, 16);
    char text[16];
    memset(text, 0, 16);
    // Rknn model
    rknn_app_context_t rknn_app_ctx;
    object_detect_result_list od_results;
    // init model
    const char *model_path = "./model/retinaface.rknn";
    memset(&rknn_app_ctx, 0, sizeof(rknn_app_context_t));
    if (init_retinaface_model(model_path, &rknn_app_ctx) != RK_SUCCESS)
    {
        RK_LOGE("rknn model init fail!");
        return -1;
    }
    // h264_frame
    VENC_STREAM_S stFrame;
    stFrame.pstPack = (VENC_PACK_S *)malloc(sizeof(VENC_PACK_S));
    RK_U64 H264_PTS = 0;
    RK_U32 H264_TimeRef = 0;
    VIDEO_FRAME_INFO_S stViFrame;
    // Create Pool
    MB_POOL_CONFIG_S PoolCfg;
    memset(&PoolCfg, 0, sizeof(MB_POOL_CONFIG_S));
    PoolCfg.u64MBSize = width * height * 3;
    PoolCfg.u32MBCnt = 1;
    PoolCfg.enAllocType = MB_ALLOC_TYPE_DMA;
    //PoolCfg.bPreAlloc = RK_FALSE;
    MB_POOL src_Pool = RK_MPI_MB_CreatePool(&PoolCfg);
    printf("Create Pool success !\n");
    // Get MB from Pool
    MB_BLK src_Blk = RK_MPI_MB_GetMB(src_Pool, width * height * 3, RK_TRUE);
    // Build h264_frame
    VIDEO_FRAME_INFO_S h264_frame;
    h264_frame.stVFrame.u32Width = width;
    h264_frame.stVFrame.u32Height = height;
    h264_frame.stVFrame.u32VirWidth = width;
    h264_frame.stVFrame.u32VirHeight = height;
    h264_frame.stVFrame.enPixelFormat = RK_FMT_RGB888;
    h264_frame.stVFrame.u32FrameFlag = 160;
    h264_frame.stVFrame.pMbBlk = src_Blk;
    unsigned char *data = (unsigned char *)RK_MPI_MB_Handle2VirAddr(src_Blk);
    cv::Mat frame(cv::Size(width, height), CV_8UC3, data);
    // rkaiq init
    RK_BOOL multi_sensor = RK_FALSE;
    const char *iq_dir = "/etc/iqfiles";
    rk_aiq_working_mode_t hdr_mode = RK_AIQ_WORKING_MODE_NORMAL;
    //hdr_mode = RK_AIQ_WORKING_MODE_ISP_HDR2;
    SAMPLE_COMM_ISP_Init(0, hdr_mode, multi_sensor, iq_dir);
    SAMPLE_COMM_ISP_Run(0);
    // rkmpi init
    if (RK_MPI_SYS_Init() != RK_SUCCESS) {
        RK_LOGE("rk mpi sys init fail!");
        return -1;
    }
    // rtsp init
    rtsp_demo_handle g_rtsplive = NULL;
    rtsp_session_handle g_rtsp_session;
    g_rtsplive = create_rtsp_demo(554);
    g_rtsp_session = rtsp_new_session(g_rtsplive, "/live/0");
    rtsp_set_video(g_rtsp_session, RTSP_CODEC_ID_VIDEO_H264, NULL, 0);
    rtsp_sync_video_ts(g_rtsp_session, rtsp_get_reltime(), rtsp_get_ntptime());
    // vi init
    vi_dev_init();
    vi_chn_init(0, width, height);
    // venc init
    RK_CODEC_ID_E enCodecType = RK_VIDEO_ID_AVC;
    venc_init(0, width, height, enCodecType);
    printf("init success\n");
    while (1)
    {
        // get vi frame
        h264_frame.stVFrame.u32TimeRef = H264_TimeRef++;
        h264_frame.stVFrame.u64PTS = TEST_COMM_GetNowUs();
        s32Ret = RK_MPI_VI_GetChnFrame(0, 0, &stViFrame, -1);
        if (s32Ret == RK_SUCCESS)
        {
            void *vi_data = RK_MPI_MB_Handle2VirAddr(stViFrame.stVFrame.pMbBlk);
            cv::Mat yuv420sp(height + height / 2, width, CV_8UC1, vi_data);
            cv::Mat bgr(height, width, CV_8UC3, data);
            cv::Mat model_bgr(model_height, model_width, CV_8UC3);
            cv::cvtColor(yuv420sp, bgr, cv::COLOR_YUV420sp2BGR);
            cv::resize(bgr, frame, cv::Size(width, height), 0, 0, cv::INTER_LINEAR);
            cv::resize(bgr, model_bgr, cv::Size(model_width, model_height), 0, 0, cv::INTER_LINEAR);
            //model
            memcpy(rknn_app_ctx.input_mems[0]->virt_addr, model_bgr.data, model_width * model_height * 3);
            inference_retinaface_model(&rknn_app_ctx, &od_results);
            //model
            sprintf(fps_text, "fps = %.2f", fps);
            cv::putText(frame, fps_text,
                        cv::Point(40, 40),
                        cv::FONT_HERSHEY_SIMPLEX, 1,
                        cv::Scalar(0, 255, 0), 2);
            for (int i = 0; i < od_results.count; i++)
            {
                if (od_results.count >= 1)
                {
                    object_detect_result *det_result = &(od_results.results[i]);
                    // map model-space box corners back to display coordinates
                    sX = (int)((float)det_result->box.left * scale_x);
                    sY = (int)((float)det_result->box.top * scale_y);
                    eX = (int)((float)det_result->box.right * scale_x);
                    eY = (int)((float)det_result->box.bottom * scale_y);
                    printf("%d %d %d %d\n", sX, sY, eX, eY);
                    cv::rectangle(frame, cv::Point(sX, sY),
                                  cv::Point(eX, eY), cv::Scalar(0, 255, 0), 3);
                    sprintf(text, " %.1f%%", det_result->prop * 100);
                    cv::putText(frame, text, cv::Point(sX, sY - 8),
                                cv::FONT_HERSHEY_SIMPLEX, 1,
                                cv::Scalar(0, 255, 0), 2);
                }
            }
        }
        memcpy(data, frame.data, width * height * 3);
        // encode H264
        RK_MPI_VENC_SendFrame(0, &h264_frame, -1);
        // rtsp
        s32Ret = RK_MPI_VENC_GetStream(0, &stFrame, -1);
        if (s32Ret == RK_SUCCESS)
        {
            if (g_rtsplive && g_rtsp_session)
            {
                //printf("len = %d PTS = %d \n", stFrame.pstPack->u32Len, stFrame.pstPack->u64PTS);
                void *pData = RK_MPI_MB_Handle2VirAddr(stFrame.pstPack->pMbBlk);
                rtsp_tx_video(g_rtsp_session, (uint8_t *)pData, stFrame.pstPack->u32Len,
                              stFrame.pstPack->u64PTS);
                rtsp_do_event(g_rtsplive);
            }
            // frame time = now - capture PTS, so fps = 1e6 / elapsed microseconds
            RK_U64 nowUs = TEST_COMM_GetNowUs();
            fps = (float)1000000 / (float)(nowUs - h264_frame.stVFrame.u64PTS);
        }
        // release frame
        s32Ret = RK_MPI_VI_ReleaseChnFrame(0, 0, &stViFrame);
        if (s32Ret != RK_SUCCESS) {
            RK_LOGE("RK_MPI_VI_ReleaseChnFrame fail %x", s32Ret);
        }
        s32Ret = RK_MPI_VENC_ReleaseStream(0, &stFrame);
        if (s32Ret != RK_SUCCESS) {
            RK_LOGE("RK_MPI_VENC_ReleaseStream fail %x", s32Ret);
        }
    }
    // Destroy MB
    RK_MPI_MB_ReleaseMB(src_Blk);
    // Destroy Pool
    RK_MPI_MB_DestroyPool(src_Pool);
    RK_MPI_VI_DisableChn(0, 0);
    RK_MPI_VI_DisableDev(0);
    SAMPLE_COMM_ISP_Stop(0);
    RK_MPI_VENC_StopRecvFrame(0);
    RK_MPI_VENC_DestroyChn(0);
    free(stFrame.pstPack);
    if (g_rtsplive)
        rtsp_del_demo(g_rtsplive);
    RK_MPI_SYS_Exit();
    // Release rknn model
    release_retinaface_model(&rknn_app_ctx);
    return 0;
}
Cross-compile luckfox_pico_rtsp_retinaface_demo (face detection test, with FPS added to the RTSP video stream):
export LUCKFOX_SDK_PATH=/home/ubuntu/luckfox-pico
sudo chmod -R 777 ../luckfox_pico_rkmpi_example
ubuntu@ubuntu:~/luckfox_pico_rkmpi_example$ ./build.sh
Transfer the files to the board and run them on the board per WIKI 8.2.
After the build completes, the corresponding deployment folders are generated under luckfox_pico_rkmpi_example/install:
luckfox_pico_rtsp_opencv_demo
luckfox_pico_rtsp_opencv_capture_demo
luckfox_pico_rtsp_retinaface_demo
luckfox_pico_rtsp_retinaface_osd_demo
luckfox_pico_rtsp_yolov5_demo
Upload the generated deployment folder in full to the Luckfox Pico (via adb, ssh, etc.), then enter the folder on the board and run:
# Run on the Luckfox Pico board; <Demo Target> is the executable in the deployment folder
chmod a+x <Demo Target>
./<Demo Target>
Open the network stream rtsp://172.32.0.93/live/0 in VLC (adjust the IP address to your setup to pull the image).
PC-side files
Board-side files
Buildroot
Login: root
Password: luckfox
Static IP: 172.32.0.93
Ubuntu
Login: pico
Password: luckfox
Static IP: 172.32.0.70
-
Posted topic:
Embedded Engineer AI Challenge Camp RV1106 Face Recognition + Running Log (1)
I was lucky enough to get a Luckfox RV1106 dev board for EEWorld's Embedded Engineer AI Challenge Camp. Before getting hands-on, I read Luckfox's Quick_Start for a one-step introduction.
This post follows the "Getting Started tutorial | LUCKFOX WIKI" to prepare the environment.
The image file covers basically all the Luckfox materials needed, and also includes a VMware virtual machine file that can be imported directly.
The board ships with a Buildroot system on SPI NAND, good for a quick check of board functions.
Worried that NAND capacity wouldn't be enough for face recognition later on, I switched to the SD-card Buildroot system per the WIKI.
A few days were lost here: the problem was that the configuration chosen when running ./build.sh lunch didn't match when building the image; after selecting custom, the SDK compiled normally. The first build is slow and downloads a lot of files.
This is important!!! It can save a lot of time:
Q: How to fix failures downloading packages such as expat and Python while building buildroot, caused by network issues?
A: Download the offline package and replace the dl folder under your luckfox-pico/sysdrv/source/buildroot/buildroot-2023.02.6/ directory.
Since the camera is needed, confirm CSI is enabled per "luckfox-config Configuration | LUCKFOX WIKI".
Written on 2024-12-13. The SDK build was about 10 days ago; thinking back, the biggest problem during compilation seems to have been downloading packages over the network, plus some other error I can't remember, and I forgot to save the screenshots.
Just writing a little for now; this post doesn't have much content, it's here to record the debugging process.
- 2024-11-22
-
Joined the course "Littelfuse: Diverse New Technologies Enabling Safety, Reliability and Efficiency", watched LIT-3762-22-25_MATE12B-MITI7L-ReedSwitches-Captions-CN
-
Joined the course "Littelfuse: Diverse New Technologies Enabling Safety, Reliability and Efficiency", watched LIT-3762-08-eFuse_USB_Captions-CN
-
Joined the course "Littelfuse: Diverse New Technologies Enabling Safety, Reliability and Efficiency", watched LIT-3762-24_LoadSwitchesICs_2024UpdateFINAL_CN
-
Joined the course "Littelfuse: Diverse New Technologies Enabling Safety, Reliability and Efficiency", watched LIT-3762-23_TTapeCaptions-CN
-
Joined the course "Littelfuse: Diverse New Technologies Enabling Safety, Reliability and Efficiency", watched LIT-3762-09 CSR Series_Captions-CN
-
Joined the course "Littelfuse: Diverse New Technologies Enabling Safety, Reliability and Efficiency", watched LIT-3762-20-SIDACtor-Captions_CN
-
Joined the course "Littelfuse: Diverse New Technologies Enabling Safety, Reliability and Efficiency", watched LIT-3762-11_SZSMF4L_Captions-CN
-
Joined the course "Littelfuse: Diverse New Technologies Enabling Safety, Reliability and Efficiency", watched LIT-3762-10-AEC-Q200-CN-Captions
- 2024-11-21
-
Replied to topic:
Shortlist announced: challengers of the Embedded Engineer AI Challenge Camp (Advanced), come claim your boards
Personal information confirmed; claiming the board and will continue with the tasks.
- 2024-11-20
-
Replied to topic:
Embedded Engineer AI Challenge Camp (Advanced): hands-on multi-person real-time face recognition by deploying the InsightFace algorithm on RV1106
InsightFace, also known as ArcFace, is a deep-learning-based face recognition algorithm whose core is an additive angular margin softmax loss (Arc-Softmax), introduced to learn more discriminative separation between face features. The model performs excellently on several public datasets, especially in large-scale face recognition tasks.
The pipeline: first detect the face -> then detect the face keypoints -> then align the face, crop it to a suitable size and extract the key features -> then search the feature library using those features.
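For intuition about that margin, a minimal numpy sketch of the additive angular margin idea (all names are mine, not InsightFace's actual training code):
```
import numpy as np

def arcface_logits(emb, w, labels, s=64.0, m=0.5):
    # normalize embeddings (rows) and class weights (columns) so dot products are cosines
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    w = w / np.linalg.norm(w, axis=0, keepdims=True)
    cos = emb @ w                                        # (N, C) = cos(theta)
    theta = np.arccos(np.clip(cos, -1.0, 1.0))
    out = cos.copy()
    rows = np.arange(len(labels))
    out[rows, labels] = np.cos(theta[rows, labels] + m)  # margin m on the true class only
    return s * out                                       # scaled logits for softmax cross-entropy
```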
I plan to build a face-login system, plus body-gesture recognition.
- 2024-11-06
-
Replied to topic:
On-board AC slow charging and repair for new energy vehicles
In the national standard, the PWM seems to be a ± voltage signal.