YOLOv5 Detection on Milk-V Duo

Docker Development Environment

Install Docker

In a Windows environment, you can install Docker Desktop for Windows; see the Docker download link.

Running Docker on Windows requires a specific backend, as shown in the figure: either the WSL 2 backend or the Hyper-V backend.

To enable the Hyper-V backend, follow these steps:

  1. Go to Control Panel → Programs → Turn Windows features on or off.
  2. Find Hyper-V, and check both “Hyper-V Management Tools” and “Hyper-V Platform.” Wait for the system to configure the necessary files, then restart your computer.

Once you have enabled the Hyper-V backend, you can proceed to download and install Docker Desktop for Windows. During the installation, select the appropriate options for your chosen backend.

After the installation is complete, you’ll need to restart your computer. Once it’s restarted, you should be able to use Docker on your Windows system.

1. Pull the necessary Docker images

Pull the image from Docker Hub:

docker pull sophgo/tpuc_dev:v2.2

Start Container

docker run --privileged --name <container_name> -v /workspace -it sophgo/tpuc_dev:v2.2

<container_name> is the name you defined for the container.

Download TPU SDK

You can download the package tpu-mlir_xxxx.tar.gz using the credentials below, where xxxx is the version, e.g. tpu-mlir_v1.2.89-g77a2268f-20230703.

username: cvitek_mlir_2023
password: 7&2Wd%cu5k

Alternatively, you can download it from GitHub: github link.

Copy SDK

To copy your development toolkit from Windows to a Docker container, you can use the docker cp command. Here’s a general example of how to do it:

docker cp <path>/tpu-mlir_xxxx.tar.gz <container_name>:/tpu-mlir_xxxx.tar.gz

Replace the following with your specific information:

  • <path>: The directory on your Windows machine containing the file you want to copy.
  • <container_name>: The name or ID of your running Docker container.

Extract SDK and add it to the environment variables

In a Docker container, you can follow these steps to extract a toolkit and add it to the environment variables:

$ tar -zxvf tpu-mlir_xxxx.tar.gz
$ source ./tpu-mlir_xxxx/envsetup.sh
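
The exact contents of envsetup.sh depend on the SDK version, but its net effect is to export TPUC_ROOT (an environment variable used later in this guide) and put the toolchain's scripts on PATH. A rough Python illustration of that effect, using a placeholder path rather than the script's real contents:

```python
import os

# Placeholder SDK location inside the container (replace xxxx with your version).
TPUC_ROOT = "/workspace/tpu-mlir_xxxx"

# envsetup.sh (roughly) exports TPUC_ROOT and prepends the SDK's tool
# directories to PATH, which is why model_transform.py and model_deploy.py
# can be invoked from any directory afterwards.
os.environ["TPUC_ROOT"] = TPUC_ROOT
os.environ["PATH"] = os.pathsep.join(
    [TPUC_ROOT + "/bin", TPUC_ROOT + "/python/tools", os.environ.get("PATH", "")]
)

print(os.environ["PATH"].split(os.pathsep)[0])
```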

2. Preparing the raw model file on Windows

Preparing the yolov5 Developer Kit and yolov5n.pt file

Download the yolov5 kit and the yolov5n.pt file. After downloading, extract the toolkit and place yolov5n.pt in the yolov5-master/ directory.

Conda Environment

Open a new Anaconda Prompt terminal, create a conda virtual environment with Python 3.9.0, then install PyTorch 1.12.1 with one of the following commands. Choose the variant that matches your setup; the subsequent steps only require the CPU.

# CUDA 10.2
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=10.2 -c pytorch
# CUDA 11.3
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.3 -c pytorch
# CUDA 11.6
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.6 -c pytorch
# CPU Only
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cpuonly -c pytorch

Then cd into the development kit's yolov5-master/ directory and run pip install -r requirements.txt to install the remaining dependencies.

Obtain Model File

Create a new main.py file in the yolov5-master/ directory, and write the following code in the file:

import torch
from models.experimental import attempt_download

# Download yolov5n.pt if it is not present, then load the float model on CPU.
model = torch.load(attempt_download("./yolov5n.pt"), map_location=torch.device('cpu'))['model'].float()
# Switch the Detect head to export mode (together with the yolo.py edit below,
# this makes the head return raw feature maps instead of decoded boxes).
model.model[-1].export = True
# Trace the model with a dummy 1x3x640x640 input and save the TorchScript file.
torch.jit.trace(model, torch.rand(1, 3, 640, 640), strict=False).save('./yolov5n_jit.pt')

Then, open the file yolov5-master/models/yolo.py, comment out lines 63 to 79, and add return x at line 80, as shown in the following figure:

After completing the modifications, run the main.py file, and it will generate the yolov5n_jit.pt file in the yolov5-master directory. This file is the required raw model file.

3. Preparing the Work Directory in Docker

Create a yolov5n_torch working directory, noting that it should be at the same level as tpu-mlir_xxxx, and place both the model file and image files in this directory.

$ mkdir yolov5n_torch && cd yolov5n_torch

Open a new terminal and copy the `yolov5n_jit.pt` file from Windows to Docker:

docker cp <path>/yolov5-master/yolov5n_jit.pt <container_name>:/workspace/yolov5n_torch/yolov5n_jit.pt

where <path> is the file directory where the yolov5 development kit is located in the Windows system, and <container_name> is the container name.

Then, place the image files in the yolov5n_torch/ directory and create a work directory.

$ cp -rf $TPUC_ROOT/regression/dataset/COCO2017 .
$ cp -rf $TPUC_ROOT/regression/image .
$ mkdir work && cd work

Here, $TPUC_ROOT is an environment variable that corresponds to the tpu-mlir_xxxx directory.


4. Torch Model to MLIR

In this example, the model takes RGB input, with mean set to 0,0,0 and scale set to 0.0039216,0.0039216,0.0039216. The command to convert the torch model to an MLIR model is as follows:

$ model_transform.py \
  --model_name yolov5n \
  --model_def ../yolov5n_jit.pt \
  --input_shapes [[1,3,640,640]] \
  --pixel_format "rgb" \
  --keep_aspect_ratio \
  --mean 0,0,0 \
  --scale 0.0039216,0.0039216,0.0039216 \
  --test_input ../image/dog.jpg \
  --test_result yolov5n_top_outputs.npz \
  --output_names 1219,1234,1249 \
  --mlir yolov5n.mlir

Successful Running Example:

After converting to MLIR model, a yolov5n.mlir file will be generated, which is the MLIR model file, and a yolov5n_in_f32.npz file will also be generated, which is the input file for subsequent model conversion.
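
The flags above encode the preprocessing the model expects: --keep_aspect_ratio letterboxes the image into the 640x640 input, and --mean/--scale map 8-bit pixel values into [0, 1] via (pixel - mean) * scale, since 0.0039216 ≈ 1/255. A minimal pure-Python sketch of that arithmetic (the tool's actual implementation may differ in details such as the padding value):

```python
def letterbox_size(w, h, target=640):
    """Scale (w, h) to fit inside a target x target canvas, preserving aspect ratio."""
    r = min(target / w, target / h)
    return round(w * r), round(h * r)

def normalize(pixel, mean=0.0, scale=0.0039216):
    """Map an 8-bit pixel value into roughly [0, 1]: (pixel - mean) * scale."""
    return (pixel - mean) * scale

# A 1280x720 image letterboxed onto a 640x640 canvas keeps its aspect ratio;
# the remaining rows are padding.
print(letterbox_size(1280, 720))  # -> (640, 360)

# scale = 0.0039216 is just 1/255, so pixel value 255 maps to ~1.0.
print(round(normalize(255), 4))   # -> 1.0
```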

5. MLIR to INT8 Model

Generate Calibration Table

Before converting to INT8 model, a calibration table needs to be generated. Here, we use 100 existing images from COCO2017 as an example to execute the calibration:

$ run_calibration.py yolov5n.mlir \
  --dataset ../COCO2017 \
  --input_num 100 \
  -o ./yolov5n_cali_table

After the operation completes, a `yolov5n_cali_table` file is generated, which will be used for the subsequent compilation of the INT8 model.
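
Conceptually, calibration runs the float model over the sample images and records per-tensor activation ranges, from which INT8 thresholds are derived. The toy sketch below shows the simplest possible variant, max-abs calibration; run_calibration.py's actual algorithm is more sophisticated, so this is only an illustration of the concept:

```python
def calibrate_threshold(activation_batches):
    """Toy max-abs calibration: the threshold for a tensor is the largest
    absolute activation value observed across all calibration samples."""
    threshold = 0.0
    for batch in activation_batches:
        threshold = max(threshold, max(abs(v) for v in batch))
    return threshold

# Pretend these are one tensor's activations over three calibration images.
samples = [
    [0.1, -2.5, 0.7],
    [1.9, 0.3, -0.2],
    [-3.1, 0.8, 1.1],
]
print(calibrate_threshold(samples))  # -> 3.1
```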

Compile to INT8 Model

The command to convert the MLIR model to INT8 model is as follows:

$ model_deploy.py \
  --mlir yolov5n.mlir \
  --quantize INT8 \
  --calibration_table ./yolov5n_cali_table \
  --chip cv180x \
  --test_input ../image/dog.jpg \
  --test_reference yolov5n_top_outputs.npz \
  --compare_all \
  --tolerance 0.96,0.72 \
  --fuse_preprocess \
  --debug \
  --model yolov5n_int8_fuse.cvimodel

Example of successful compilation output:

After compilation, a yolov5n_int8_fuse.cvimodel file is generated.
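
The --tolerance 0.96,0.72 pair bounds how closely the INT8 model's outputs must track the float reference given by --test_reference; the comparison is similarity-based. As an illustration of the underlying ideas rather than model_deploy.py's exact metrics, the sketch below quantizes values symmetrically to INT8 against a calibration threshold, dequantizes them, and checks cosine similarity against the original:

```python
import math

def quantize_int8(values, threshold):
    """Symmetric INT8 quantization: map [-threshold, threshold] onto [-127, 127]."""
    scale = threshold / 127.0
    return [max(-127, min(127, round(v / scale))) for v in values]

def dequantize(qvalues, threshold):
    scale = threshold / 127.0
    return [q * scale for q in qvalues]

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

ref = [0.5, -1.2, 3.0, 0.01]
q = quantize_int8(ref, threshold=3.1)   # all entries land in [-127, 127]
deq = dequantize(q, threshold=3.1)

# Quantization loses a little precision, but the round-tripped values stay
# highly similar to the float reference, which is what the tolerance checks.
print(round(cosine_similarity(ref, deq), 4))
```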

6. Validating on the Duo development board

Connecting the Duo development board

Follow the previous tutorials to connect the Duo development board to your computer, then use MobaXterm to open a terminal on the Duo development board.

Obtaining cvitek_tpu_sdk

You can obtain the development toolkit cvitek_tpu_sdk_cv180x_musl_riscv64_rvv.tar.gz from the download platform; note that you need to select the cv180x toolkit. The credentials are as follows:

username: cvitek_mlir_2023
password: 7&2Wd%cu5k

After the download is complete, copy it into Docker and extract it there:

$ tar -zxvf cvitek_tpu_sdk_cv180x_musl_riscv64_rvv.tar.gz

After the decompression is completed, a folder named “cvitek_tpu_sdk” will be generated.

Copying the development toolkit and model files to the development board

In the terminal of the Duo development board, create a new /home/milkv/ directory.

$ mkdir /home/milkv && cd /home/milkv

In the Docker terminal, copy the development toolkit and model file to the development board:

$ scp -r cvitek_tpu_sdk root@
$ scp /workspace/yolov5n_torch/work/yolov5n_int8_fuse.cvimodel root@

Setting environment variables

In the terminal of the Duo development board, set the environment variables:

$ cd ./cvitek_tpu_sdk
$ source ./envs_tpu_sdk.sh

Performing Object Detection

In the terminal of the duo development board, enter the following command to perform object detection:

$ ./samples/samples_extra/bin/cvi_sample_detector_yolo_v5_fused_preprocess \
  ./yolov5n_int8_fuse.cvimodel \
  ./samples/samples_extra/data/dog.jpg

After a successful run, a detection result file yolov5n_out.jpg is generated.


The files referenced in the main text are summarized below:

  • TPU-MLIR model conversion toolchain: tpu-mlir_v1.2.89-g77a2268f-20230703.tar.gz
  • TPU SDK development toolkit: cvitek_tpu_sdk_cv180x_musl_riscv64_rvv.tar.gz
  • (Appendix) Sample test code source: cvitek_tpu_samples.tar.gz
  • (Appendix) Converted cvimodel package: cvimodel_samples_cv180x.tar.gz

The package files required for TPU development mentioned in the main text can be obtained from the following sftp site:

user: cvitek_mlir_2023
password: 7&2Wd%cu5k

Or use wget to get:

# TPU-MLIR model conversion toolchain
wget --user='cvitek_mlir_2023' --password='7&2Wd%cu5k'

# TPU SDK development toolkit
wget --user='cvitek_mlir_2023' --password='7&2Wd%cu5k'

# (Appendix)  Sample test code source
wget --user='cvitek_mlir_2023' --password='7&2Wd%cu5k'

# (Appendix) Converted cvimodel package
wget --user='cvitek_mlir_2023' --password='7&2Wd%cu5k'