Running PyTorch models on Jetson Nano (2023)

by Jeff Tang, Hamid Shojanazeri, Geeta Chauhan

Overview

NVIDIA Jetson Nano, part of the Jetson family of products or Jetson modules, is a small yet powerful Linux (Ubuntu) based embedded computer with a 2/4 GB GPU. With it, you can run many PyTorch models efficiently. This document summarizes our experience of running different deep learning models with three different engines on the Jetson Nano:

  1. Jetson Inference, NVIDIA's higher-level API with built-in support for running the most common computer vision models, which can be transfer-learned with PyTorch on the Jetson platform.

  2. TensorRT, a high-performance inference SDK from NVIDIA that requires converting a PyTorch model to ONNX and then to the TensorRT engine file that the TensorRT runtime can execute.

  3. PyTorch, using the PyTorch API torch.nn directly for inference.

Jetson Nano Setup

After purchasing a Jetson Nano here, simply follow the clear step-by-step instructions to download and write the Jetson Nano Developer Kit SD card image to a microSD card, and complete the setup. Once the setup is done and the Nano is booted, you'll see the standard Linux prompt along with the username and the Nano name used during the setup.

Run the following commands to check the GPU status on the Nano:

sudo pip3 install jetson-stats
sudo jtop

You'll see information including:

[Screenshot: jtop output showing the Nano's GPU status]

You can also see the installed CUDA version:

$ ls -lt /usr/local
lrwxrwxrwx  1 root root   22 Aug  2 01:47 cuda -> /etc/alternatives/cuda
lrwxrwxrwx  1 root root   25 Aug  2 01:47 cuda-10 -> /etc/alternatives/cuda-10
drwxr-xr-x 12 root root 4096 Aug  2 01:47 cuda-10.2

To use a camera on the Jetson Nano, for example an Arducam 8MP IMX219, follow the instructions here or run the commands below after installing a camera module:

cd ~
wget https://github.com/ArduCAM/MIPI_Camera/releases/download/v0.0.3/install_full.sh
chmod +x install_full.sh
./install_full.sh -m arducam

Another way to do this is to use the original Jetson Nano camera driver:

sudo dpkg -r arducam-nvidia-l4t-kernel
sudo shutdown -r now

Then use ls /dev/video0 to confirm that the camera was found:

$ ls /dev/video0
/dev/video0

And finally, run the following command to see the camera in action:

nvgstcapture-1.0 --orientation=2
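To grab frames from the camera in your own code, a minimal sketch looks like the following (this is an addition for illustration and assumes OpenCV is installed on the Nano, which is not covered above):

import cv2

# Open the camera at /dev/video0 and capture a single frame
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    cv2.imwrite("camera_test.jpg", frame)
    print("Saved camera_test.jpg, frame shape:", frame.shape)
else:
    print("Could not read a frame from /dev/video0")
cap.release()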

Using Jetson Inference

NVIDIA's Jetson Inference API provides the easiest way to run image recognition, object detection, semantic segmentation, and pose estimation models on Jetson Nano. Jetson Inference has TensorRT built in, so it's very fast.

To test running Jetson Inference, first clone the repository and download the models:

git clone --recursive https://github.com/dusty-nv/jetson-inference
cd jetson-inference

Then use the pre-built Docker container that already has PyTorch installed to test run the models:

docker/run.sh --volume ~/jetson_inference:/jetson_inference

To run image recognition, object detection, semantic segmentation, and pose estimation models on test images, use the following:

cd build/aarch64/bin
./imagenet.py images/jellyfish.jpg /jetson_inference/jellyfish.jpg
./segnet.py images/dog.jpg /jetson_inference/dog.jpeg
./detectnet.py images/peds_0.jpg /jetson_inference/peds_0.jpg
./posenet.py images/humans_0.jpg /jetson_inference/pose_humans_0.jpg

Four result images from running the four different models will be generated. Exit the Docker image to see them:


$ ls -lt ~/jetson_inference/
-rw-r--r-- 1 root root  68834 Oct 15 21:30 pose_humans_0.jpg
-rw-r--r-- 1 root root 914058 Oct 15 21:30 peds_0.jpg
-rw-r--r-- 1 root root 666239 Oct 15 21:30 dog.jpeg
-rw-r--r-- 1 root root 179760 Oct 15 21:29 jellyfish.jpg

[Result images: jellyfish.jpg (image recognition), dog.jpeg (segmentation), peds_0.jpg (object detection), pose_humans_0.jpg (pose estimation)]

You can also use the Docker image to run PyTorch models, as the image has PyTorch, torchvision, and torchaudio installed:

# pip list | grep torch
torch (1.9.0)
torchaudio (0.9.0a0+33b2469)
torchvision (0.10.0a0+300a8a4)

Although Jetson Inference includes models already converted to the TensorRT engine file format, you can also fine-tune the models by following the steps in Transfer Learning with PyTorch (for Jetson Inference) here.
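For a sense of what transfer learning with PyTorch involves, here is a minimal generic sketch (it is not the jetson-inference training script; the data directory and the ResNet-18 backbone are placeholder choices):

import torch
import torchvision
from torch import nn, optim
from torchvision import datasets, transforms

# Hypothetical ImageFolder dataset; replace "data/train" with your own images
transform = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=8, shuffle=True)

device = "cuda" if torch.cuda.is_available() else "cpu"

# Start from a pre-trained ResNet-18 and replace only the classifier head
model = torchvision.models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))
model = model.to(device)

optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # a single pass over the data as a demo
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()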

Using TensorRT

TensorRT is a high-performance inference SDK from NVIDIA. Jetson Nano supports TensorRT via the Jetpack SDK, which is included in the SD card image used to set up the Jetson Nano. To confirm that TensorRT is already installed on the Nano, run dpkg -l|grep -i tensorrt:

[Screenshot: dpkg output listing the installed TensorRT packages]

Theoretically, TensorRT can be used to "take a trained PyTorch model and optimize it to run more efficiently during inference on an NVIDIA GPU". Follow the instructions and code in the notebook to see how PyTorch is used with TensorRT through ONNX on a torchvision Resnet50 model (a minimal sketch of the export and engine-build steps follows the list below):

  1. How to convert the PyTorch model to ONNX;

  2. How to convert the ONNX model to a TensorRT engine file;

  3. How to run the engine file with the TensorRT runtime for the performance improvement: the inference time improved from the original 31.5ms/19.4ms (FP32/FP16 precision) to 6.28ms (TensorRT).
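Below is a minimal sketch (not the notebook's exact code) of the first two steps, assuming the Resnet50 example above; the engine is built with NVIDIA's trtexec tool, which ships with JetPack:

import torch
import torchvision

# Export a pre-trained Resnet50 to ONNX with a fixed 1x3x224x224 input
resnet50 = torchvision.models.resnet50(pretrained=True).eval()
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(resnet50, dummy_input, "resnet50_pytorch.onnx",
                  opset_version=11, verbose=False)

# Then, on the Nano, build a TensorRT engine file from the ONNX model, e.g.:
#   /usr/src/tensorrt/bin/trtexec --onnx=resnet50_pytorch.onnx --saveEngine=resnet50.trt --fp16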

You can replace the Resnet50 model in the notebook code with another PyTorch model, go through the conversion process above, and run the finally converted TensorRT engine file with the TensorRT runtime to see the optimized performance. But be aware that, due to the memory size of the Nano GPU, models larger than 100MB are likely to fail to run, with the following error information:

Error Code 1: Cuda Runtime (all CUDA-capable devices are busy or unavailable)

You may also get an error converting a PyTorch model to an ONNX model, which can be fixed by replacing:

torch.onnx.export(resnet50, dummy_input, "resnet50_pytorch.onnx", verbose=False)


with:

torch.onnx.export(model, dummy_input, "deeplabv3_pytorch.onnx", opset_version=11, verbose=False)

Using PyTorch

To download and install PyTorch 1.9 on the Nano, run the commands below (see here for more information):

wget https://nvidia.box.com/shared/static/p57jwntv436lfrd78inwl7iml6p13fzh.whl -O torch-1.9.0-cp36-cp36m-linux_aarch64.whl
sudo apt-get install python3-pip libopenblas-base libopenmpi-dev
pip3 install Cython
pip3 install numpy torch-1.9.0-cp36-cp36m-linux_aarch64.whl

To download Torchvision 0.10 and install it on the Nano, run the following commands:

https://drive.google.com/uc?id=1tU6YlPjrP605j4z8PMnqwCSoP6sSC91Z
pip3 install torchvision-0.10.0a0+300a8a4-cp36-cp36m-linux_aarch64.whl

After the steps above, run this to confirm:

$ pip3 list | grep torch
torch (1.9.0)
torchvision (0.10.0)
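As an extra sanity check (not part of the original steps), you can also confirm that the installed wheels see the Nano's GPU:

import torch
import torchvision

# Print the versions and whether CUDA (the Nano's GPU) is visible to PyTorch
print(torch.__version__, torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))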

You can also use the Docker image described in the section Using Jetson Inference (which also has PyTorch and torchvision installed) to skip the manual steps above.

The official YOLOv5 repo is used to run the PyTorch YOLOv5 model on Jetson Nano. After logging in to the Jetson Nano, follow these steps:

  • Get the repo and install what's required:
git clone https://github.com/ultralytics/yolov5
cd yolov5
pip install -r requirements.txt
  • Run python3 detect.py, which by default uses the PyTorch yolov5s.pt model. You should see something like:
detect: weights=yolov5s.pt, source=data/images, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False
YOLOv5 🚀 v5.0-499-g48b00db torch 1.9.0 CUDA:0 (NVIDIA Tegra X1, 3956.1015625MB)
Fusing layers...
Model Summary: 224 layers, 7266973 parameters, 0 gradients
image 1/5 /home/jeff/repos/yolov5-new/yolov5/data/images/bus.jpg: 640x480 4 persons, 1 bus, 1 fire hydrant, Done. (0.142s)
...

The inference time on the Jetson Nano GPU is about 140ms, more than twice as fast as the inference time on iOS or Android (about 330ms).
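If you prefer to call the model from your own Python code rather than the repo's detect.py, a minimal sketch using torch.hub looks like this (it assumes network access to download the yolov5s weights on first use):

import torch

# Load the small YOLOv5 model from the Ultralytics hub
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Run inference on one of the repo's test images
results = model('data/images/bus.jpg')
results.print()  # detected classes, counts, and inference time
results.save()   # saves an annotated copy under runs/detect/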


If you get the error "ImportError: The _imagingft C module is not installed." then you need to reinstall Pillow:

sudo apt-get install libpng-dev
sudo apt-get install libfreetype6-dev
pip3 uninstall Pillow
pip3 install --no-cache-dir Pillow

After python3 detect.py completes successfully, the object detection results for the test images located in data/images will be in the runs/detect/exp directory. To test detection with a live webcam instead of local images, use the --source 0 parameter when running python3 detect.py:

~/repos/yolov5$ ls -lt runs/detect/exp10
total 1456
-rw-rw-r-- 1 jeff jeff 254895 Oct 15 16:12 zidane.jpg
-rw-rw-r-- 1 jeff jeff 202674 Oct 15 16:12 test3.png
-rw-rw-r-- 1 jeff jeff 217117 Oct 15 16:12 test2.jpg
-rw-rw-r-- 1 jeff jeff 305826 Oct 15 16:12 test1.png
-rw-rw-r-- 1 jeff jeff 495760 Oct 15 16:12 bus.jpg

Using the same test files used in the PyTorch iOS YOLOv5 demo app or Android YOLOv5 demo app, you can compare the results generated by running the YOLOv5 PyTorch model on mobile devices and on Jetson Nano:


Figure 1. PyTorch YOLOv5 on Jetson Nano.


Figure 2. PyTorch YOLOv5 on iOS.


Figure 3. PyTorch YOLOv5 on Android.

Summary

Based on our experience of running different PyTorch models for potential demo apps on Jetson Nano, we see that even Jetson Nano, a lower-end member of the Jetson family of products, provides a powerful GPU and embedded system that can directly and efficiently run some of the latest PyTorch models, pre-trained or transfer-learned.

Building PyTorch demo apps on Jetson Nano can be similar to building PyTorch apps on Linux, but you can also choose to use TensorRT after converting the PyTorch models to the TensorRT engine file format for faster inference.

But if you just need to run some common computer vision models on Jetson Nano using NVIDIA's Jetson Inference, which supports image recognition, object detection, semantic segmentation, and pose estimation models, then Jetson Inference is the easiest way.

References

Torch-TensorRT, a compiler for PyTorch via TensorRT: https://github.com/NVIDIA/Torch-TensorRT/

Jetson Inference Docker image details: https://github.com/dusty-nv/jetson-inference/blob/master/docs/aux-docker.md


A guide on using TensorRT on NVIDIA Jetson Nano: https://docs.donkeycar.com/guide/robot_sbc/tensorrt_jetson_nano/ including:

  1. Use Jetson as a portable GPU device to run an NN chess engine model: https://medium.com/@ezchess/jetson-lc0-running-leela-chess-zero-on-nvidia-jetson-a-portable-gpu-device-a213afc9c018

  2. A MaskEraser app using PyTorch and torchvision, installed directly with pip: https://github.com/INTEC-ATI/MaskEraser#install-pytorch

