

NPU和CPU對(duì)比運(yùn)行速度有何不同?基于i.MX 8M Plus處理器的MYD-JX8MPQ開(kāi)發(fā)板

MYIR Electronics · 2022-05-09 16:46

References

https://www.toradex.cn/blog/nxp-imx8ji-yueiq-kuang-jia-ce-shi-machine-learning

IMX-MACHINE-LEARNING-UG.pdf


CPU vs. NPU image classification

cd /usr/bin/tensorflow-lite-2.4.0/examples

CPU運(yùn)行

./label_image -m mobilenet_v1_1.0_224_quant.tflite -i grace_hopper.bmp -l labels.txt

INFO: Loaded model mobilenet_v1_1.0_224_quant.tflite

INFO: resolved reporter

INFO: invoked

INFO: average time: 50.66 ms

INFO: 0.780392: 653 military uniform

INFO: 0.105882: 907 Windsor tie

INFO: 0.0156863: 458 bow tie

INFO: 0.0117647: 466 bulletproof vest

INFO: 0.00784314: 835 suit
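Each result line has the form `confidence: class_id label`. As a small illustration (not part of the eIQ tools; the sample lines are copied from the CPU run above), the top-5 output can be parsed like this:

```python
# Parse label_image result lines of the form "INFO: <confidence>: <class_id> <label>".
def parse_results(lines):
    results = []
    for line in lines:
        body = line.replace("INFO:", "").strip()
        conf, rest = body.split(":", 1)
        class_id, label = rest.strip().split(" ", 1)
        results.append((float(conf), int(class_id), label))
    return results

cpu_top5 = parse_results([
    "INFO: 0.780392: 653 military uniform",
    "INFO: 0.105882: 907 Windsor tie",
    "INFO: 0.0156863: 458 bow tie",
    "INFO: 0.0117647: 466 bulletproof vest",
    "INFO: 0.00784314: 835 suit",
])
print(cpu_top5[0])  # top-1: (0.780392, 653, 'military uniform')
```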


GPU/NPU加速運(yùn)行

./label_image -m mobilenet_v1_1.0_224_quant.tflite -i grace_hopper.bmp -l labels.txt -a 1

INFO: Loaded model mobilenet_v1_1.0_224_quant.tflite

INFO: resolved reporter

INFO: Created TensorFlow Lite delegate for NNAPI.

INFO: Applied NNAPI delegate.

INFO: invoked

INFO: average time:2.775ms

INFO: 0.768627: 653 military uniform

INFO: 0.105882: 907 Windsor tie

INFO: 0.0196078: 458 bow tie

INFO: 0.0117647: 466 bulletproof vest

INFO: 0.00784314: 835 suit

Alternatively, run through the VX external delegate (USE_GPU_INFERENCE=0 selects the NPU rather than the GPU):

USE_GPU_INFERENCE=0 ./label_image -m mobilenet_v1_1.0_224_quant.tflite -i grace_hopper.bmp -l labels.txt --external_delegate_path=/usr/lib/libvx_delegate.so

Python運(yùn)行

python3 label_image.py

INFO: Created TensorFlow Lite delegate for NNAPI.

Applied NNAPI delegate.

Warm-up time: 6628.5 ms

Inference time: 2.9 ms

0.870588: military uniform

0.031373: Windsor tie

0.011765: mortarboard

0.007843: bow tie

0.007843: bulletproof vest


基準(zhǔn)測(cè)試CPU單核運(yùn)行

./benchmark_model --graph=mobilenet_v1_1.0_224_quant.tflite

STARTING!

Log parameter values verbosely: [0]

Graph: [mobilenet_v1_1.0_224_quant.tflite]

Loaded model mobilenet_v1_1.0_224_quant.tflite

The input model file size (MB): 4.27635

Initialized session in 15.076ms.

Running benchmark for at least 1 iterations and at least 0.5 seconds but terminate if exceeding 150 seconds.

count=4 first=166743 curr=161124 min=161054 max=166743 avg=162728 std=2347

Running benchmark for at least 50 iterations and at least 1 seconds but terminate if exceeding 150 seconds.

count=50 first=161039 curr=161030 min=160877 max=161292 avg=161039 std=94

Inference timings in us: Init: 15076, First inference: 166743, Warmup (avg): 162728, Inference (avg): 161039

Note: as the benchmark tool itself affects memory footprint, the following is only APPROXIMATE to the actual memory footprint of the model at runtime. Take the information at your discretion.

Peak memory footprint (MB): init=2.65234 overall=9.00391
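benchmark_model prints its statistics as `key=value` pairs with timings in microseconds. A small helper (a sketch for post-processing, not part of the benchmark tool) can pull them apart:

```python
import re

# Parse a benchmark_model statistics line such as
# "count=50 first=161039 curr=161030 min=160877 max=161292 avg=161039 std=94".
def parse_stats(line):
    return {k: float(v) for k, v in re.findall(r"(\w+)=([\d.]+)", line)}

stats = parse_stats("count=50 first=161039 curr=161030 min=160877 max=161292 avg=161039 std=94")
print(stats["avg"] / 1000, "ms per inference")  # 161.039 ms per inference
```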

CPU多核運(yùn)行

./benchmark_model --graph=mobilenet_v1_1.0_224_quant.tflite --num_threads=4

With four cores, performance is best with --num_threads set to 4.

STARTING!

Log parameter values verbosely: [0]

Num threads: [4]

Graph: [mobilenet_v1_1.0_224_quant.tflite]

#threads used for CPU inference: [4]

Loaded model mobilenet_v1_1.0_224_quant.tflite

The input model file size (MB): 4.27635

Initialized session in 2.536ms.

Running benchmark for at least 1 iterations and at least 0.5 seconds but terminate if exceeding 150 seconds.

count=11 first=48722 curr=44756 min=44597 max=49397 avg=45518.9 std=1679

Running benchmark for at least 50 iterations and at least 1 seconds but terminate if exceeding 150 seconds.

count=50 first=44678 curr=44591 min=44590 max=50798 avg=44965.2 std=1170

Inference timings in us: Init: 2536, First inference: 48722, Warmup (avg): 45518.9, Inference (avg): 44965.2

Note: as the benchmark tool itself affects memory footprint, the following is only APPROXIMATE to the actual memory footprint of the model at runtime. Take the information at your discretion.

Peak memory footprint (MB): init=1.38281 overall=8.69922

Benchmark: GPU/NPU acceleration

./benchmark_model --graph=mobilenet_v1_1.0_224_quant.tflite --num_threads=4 --use_nnapi=true

STARTING!

Log parameter values verbosely: [0]

Num threads: [4]

Graph: [mobilenet_v1_1.0_224_quant.tflite]

#threads used for CPU inference: [4]

Use NNAPI: [1]

NNAPI accelerators available: [vsi-npu]

Loaded model mobilenet_v1_1.0_224_quant.tflite

INFO: Created TensorFlow Lite delegate for NNAPI.

Explicitly applied NNAPI delegate, and the model graph will be completely executed by the delegate.

The input model file size (MB): 4.27635

Initialized session in 3.968ms.

Running benchmark for at least 1 iterations and at least 0.5 seconds but terminate if exceeding 150 seconds.

count=1 curr=6611085

Running benchmark for at least 50 iterations and at least 1 seconds but terminate if exceeding 150 seconds.

count=369 first=2715 curr=2623 min=2572 max=2776 avg=2634.2 std=20

Inference timings in us: Init: 3968, First inference: 6611085, Warmup (avg): 6.61108e+06, Inference (avg): 2634.2

Note: as the benchmark tool itself affects memory footprint, the following is only APPROXIMATE to the actual memory footprint of the model at runtime. Take the information at your discretion.

Peak memory footprint (MB): init=2.42188 overall=28.4062

結(jié)果對(duì)比

CPU運(yùn)行CPU多核多線程NPU加速
圖像分類50.66 ms2.775 ms
基準(zhǔn)測(cè)試161039uS44965.2uS2634.2uS

OpenCV DNN

cd /usr/share/OpenCV/samples/bin

./example_dnn_classification --input=dog416.png --zoo=models.yml squeezenet

Download the models

cd /usr/share/opencv4/testdata/dnn/

python3 download_models_basic.py

Image classification

cd /usr/share/OpenCV/samples/bin

./example_dnn_classification --input=dog416.png --zoo=models.yml squeezenet



Enter the following in the file browser address bar:

ftp://ftp.toradex.cn/Linux/i.MX8/eIQ/OpenCV/Image_Classification.zip

Download the archive; extracting it yields models.yml and squeezenet_v1.1.caffemodel.

cd /usr/share/OpenCV/samples/bin

將文件導(dǎo)入到開(kāi)發(fā)板的/usr/share/OpenCV/samples/bin目錄下

$ cp /usr/share/opencv4/testdata/dnn/dog416.png /usr/share/OpenCV/samples/bin/
$ cp /usr/share/opencv4/testdata/dnn/squeezenet_v1.1.prototxt /usr/share/OpenCV/samples/bin/
$ cp /usr/share/OpenCV/samples/data/dnn/classification_classes_ILSVRC2012.txt /usr/share/OpenCV/samples/bin/
$ cd /usr/share/OpenCV/samples/bin/

Image input

./example_dnn_classification --input=dog416.png --zoo=models.yml squeezenet

報(bào)錯(cuò)

root@myd-jx8mp:/usr/share/OpenCV/samples/bin# ./example_dnn_classification --input=dog416.png --zoo=model.yml squeezenet

ERRORS:

Missing parameter: 'mean'

Missing parameter: 'rgb'

Add the --rgb and --mean=1 parameters.

還是報(bào)錯(cuò)加入?yún)?shù)--mode

root@myd-jx8mp:/usr/share/OpenCV/samples/bin# ./example_dnn_classification --rgb --mean=1 --input=dog416.png --zoo=models.yml squeezenet

[WARN:0] global /usr/src/debug/opencv/4.4.0.imx-r0/git/modules/videoio/src/cap_gstreamer.cpp (898) open OpenCV | GStreamer warning: unable to query duration of stream

[WARN:0] global /usr/src/debug/opencv/4.4.0.imx-r0/git/modules/videoio/src/cap_gstreamer.cpp (935) open OpenCV | GStreamer warning: Cannot query video position: status=1, value=0, duration=-1

root@myd-jx8mp:/usr/share/OpenCV/samples/bin# ./example_dnn_classification --rgb --mean=1 --input=dog416.png --zoo=models.yml squeezenet --mode

[WARN:0] global /usr/src/debug/opencv/4.4.0.imx-r0/git/modules/videoio/src/cap_gstreamer.cpp (898) open OpenCV | GStreamer warning: unable to query duration of stream

[WARN:0] global /usr/src/debug/opencv/4.4.0.imx-r0/git/modules/videoio/src/cap_gstreamer.cpp (935) open OpenCV | GStreamer warning: Cannot query video position: status=1, value=0, duration=-1
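For context, `--mean` and `--scale` control how the classification sample normalizes pixels before inference (roughly `out = scale * (pixel - mean)`), and `--rgb` swaps OpenCV's default BGR channel order. A rough numeric sketch of that idea, with made-up pixel values, not the OpenCV implementation itself:

```python
# Approximate the classification sample's preprocessing: out = scale * (pixel - mean),
# with an optional BGR -> RGB channel swap (what --rgb requests).
def preprocess(pixel_bgr, mean=1.0, scale=1.0, swap_rb=False):
    channels = list(reversed(pixel_bgr)) if swap_rb else list(pixel_bgr)
    return [scale * (c - mean) for c in channels]

print(preprocess([10, 20, 30], mean=1.0, swap_rb=True))  # [29.0, 19.0, 9.0]
```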

Video input

./example_dnn_classification --device=2 --zoo=models.yml squeezenet

問(wèn)題

If there are no files under the testdata directory, locate them in the Yocto build tree:

lhj@DESKTOP-BINN7F8:~/myd-jx8mp-yocto$ find . -name "dog416.png"

./build-xwayland/tmp/work/cortexa53-crypto-mx8mp-poky-linux/opencv/4.4.0.imx-r0/extra/testdata/dnn/dog416.png

再將相應(yīng)的文件復(fù)制到開(kāi)發(fā)板

cd ./build-xwayland/tmp/work/cortexa53-crypto-mx8mp-poky-linux/opencv/4.4.0.imx-r0/extra/testdata/

tar -cvf /mnt/e/dnn.tar ./dnn/

cd /usr/share/opencv4/testdata (create the directory first if it does not exist)

rz導(dǎo)入dnn.tar

Extract it: tar -xvf dnn.tar

terminate called after throwing an instance of 'cv::Exception'

what(): OpenCV(4.4.0) /usr/src/debug/opencv/4.4.0.imx-r0/git/samples/dnn/classification.cpp: error: (Assertion failed) !model.empty() in function 'main'

Aborted

lhj@DESKTOP-BINN7F8:~/myd-jx8mp-yocto/build-xwayland$ find . -name classification.cpp

lhj@DESKTOP-BINN7F8:~/myd-jx8mp-yocto/build-xwayland$ cp ./tmp/work/cortexa53-crypto-mx8mp-poky-linux/opencv/4.4.0.imx-r0/packages-split/opencv-src/usr/src/debug/opencv/4.4.0.imx-r0/git/samples/dnn/classification.cpp /mnt/e

lhj@DESKTOP-BINN7F8:~/myd-jx8mp-yocto/build-xwayland$

YOLO對(duì)象檢測(cè)

cd /usr/share/OpenCV/samples/bin

./example_dnn_object_detection --width=1024 --height=1024 --scale=0.00392 --input=dog416.png --rgb --zoo=models.yml yolo
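Note that `--scale=0.00392` is just 1/255 rounded: it maps 8-bit pixel values into [0, 1], as Darknet-trained YOLO models expect, while `--width`/`--height` set the network input size. A quick numeric check:

```python
# --scale=0.00392 approximates 1/255: it normalizes 8-bit pixels to roughly [0, 1].
scale = 0.00392
print(scale * 255)           # ~0.9996, i.e. a 255-valued pixel maps to about 1.0
print(abs(scale - 1 / 255))  # tiny rounding difference
```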



Download the cfg and weights files from https://pjreddie.com/darknet/yolo/.

cd /usr/share/OpenCV/samples/bin/

導(dǎo)入上面下載的文件

cp /usr/share/OpenCV/samples/data/dnn/object_detection_classes_yolov3.txt /usr/share/OpenCV/samples/bin/

cp /usr/share/opencv4/testdata/dnn/yolov3.cfg /usr/share/OpenCV/samples/bin/

./example_dnn_object_detection --width=1024 --height=1024 --scale=0.00392 --input=dog416.png --rgb --zoo=models.yml yolo

OpenCV經(jīng)典機(jī)器學(xué)

cd /usr/share/OpenCV/samples/bin

Linear SVM

./example_tutorial_introduction_to_svm


Non-linear SVM

./example_tutorial_non_linear_svms


PCA analysis

./example_tutorial_introduction_to_pca ../data/pca_test1.jpg
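The PCA sample estimates an object's dominant orientation from the eigenvectors of the data covariance matrix. The same idea on a tiny made-up 2-D point set (a sketch, not the OpenCV sample itself):

```python
import numpy as np

# Toy PCA: points scattered roughly along the y = x direction.
pts = np.array([[0.0, 0.0], [1.0, 1.1], [2.0, 1.9], [3.0, 3.0], [4.0, 4.1]])
centered = pts - pts.mean(axis=0)

# Eigen-decompose the covariance matrix; the eigenvector with the largest
# eigenvalue is the principal axis (the orientation the OpenCV sample draws).
eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))
principal = eigvecs[:, np.argmax(eigvals)]

# The principal axis should point roughly along (1, 1)/sqrt(2), up to sign.
print(principal)
```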


Logistic regression

./example_cpp_logistic_regression
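The logistic regression sample fits a linear decision boundary by gradient descent. The core update rule on a tiny made-up 1-D problem (a sketch, unrelated to the sample's training data):

```python
import math

# Toy 1-D logistic regression trained with plain stochastic gradient descent:
# points below 0 are class 0, points above 0 are class 1.
xs = [-3.0, -2.0, -1.0, 1.0, 2.0, 3.0]
ys = [0, 0, 0, 1, 1, 1]

w, b, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    for x, y in zip(xs, ys):
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # sigmoid prediction
        w -= lr * (p - y) * x                     # gradient of the log-loss w.r.t. w
        b -= lr * (p - y)                         # gradient w.r.t. b

def predict(x):
    return 1 if 1.0 / (1.0 + math.exp(-(w * x + b))) > 0.5 else 0

print([predict(x) for x in xs])  # [0, 0, 0, 1, 1, 1]
```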


聲明:本文內(nèi)容及配圖由入駐作者撰寫(xiě)或者入駐合作網(wǎng)站授權(quán)轉(zhuǎn)載。文章觀點(diǎn)僅代表作者本人,不代表電子發(fā)燒友網(wǎng)立場(chǎng)。文章及其配圖僅供工程師學(xué)習(xí)之用,如有內(nèi)容侵權(quán)或者其他違規(guī)問(wèn)題,請(qǐng)聯(lián)系本站處理。 舉報(bào)投訴