RoboMaster Robot Armor Plate Recognition

Thanks to the Elecfans Forum and SuanNeng for providing the opportunity to trial the Milk-V Duo development board.

Last time, we introduced the porting of the OpenCV image processing library. This time, we port a RoboMaster robot armor plate recognition program to the board to test its image processing capabilities.

Introduction to Armor Plate Recognition Algorithm

The image below is an isometric view of a RoboMaster infantry unit with armor plates installed. The armor plates are mounted vertically around the vehicle, and each armor plate consists of two parallel LED strips of known dimensions separated by a fixed distance.

The armor plate recognition algorithm is already very mature. The basic idea is to first use threshold segmentation, dilation, and similar operations to find the LED strips in the image; then filter the candidates by aspect ratio, area, and solidity; and finally match the surviving strips into suitable pairs. Each pair is a candidate armor plate, and the pattern between the two strips can be extracted and checked for a digit as a further filter. For details, see: RoboMaster Vision Tutorial (4): Armor Plate Recognition Algorithm (CSDN blog).

Considerations for Code Porting

The CPU of the Milk-V Duo runs at 1 GHz, but the board has only 64 MB of RAM, far less than a typical Linux system. When porting programs, memory usage must therefore be managed carefully. Our program processes one image frame at a time, and we minimize image cloning during processing, because even a modest increase in memory usage can crash the program.

For video handling, we didn't use OpenCV's VideoCapture and VideoWriter classes. Both consume a lot of memory, and at our video resolution (1280x1024) errors occur after processing only a few frames. Instead, we store the video as a sequence of images, which keeps the memory used by each processing step to a minimum.

The core code is as follows:

#include<iostream>
#include<opencv2/opencv.hpp>
#include<opencv2/imgproc/types_c.h>
#include<vector>
#include "ArmorParam.h"
#include "ArmorDescriptor.h"
#include "LightDescriptor.h"
#include <sys/time.h>

using namespace std;
using namespace cv;

template<typename T>
float distance(const cv::Point_<T>& pt1, const cv::Point_<T>& pt2)
{
    return std::sqrt(std::pow((pt1.x - pt2.x), 2) + std::pow((pt1.y - pt2.y), 2));
}

class ArmorDetector
{
public:
    // Initialize the parameters and set our side's color
    void init(int selfColor){
        if(selfColor == RED){
            _enemy_color = BLUE;
            _self_color = RED;
        }
        else{
            _enemy_color = RED;
            _self_color = BLUE;
        }
    }

    void loadImg(Mat& img ){
        _srcImg = img;

        // Crop a 50 px margin on every side as the ROI
        Rect imgBound = Rect(cv::Point(50, 50), Point(_srcImg.cols - 50, _srcImg.rows - 50));

        _roi = imgBound;
        _roiImg = _srcImg(_roi).clone(); // Note: after the ROI crop, the origin moves to the top-left corner of the cropped image
    }

    // Main armor plate detection routine
    int detect(){
        // Color separation (used to distinguish friend from foe by color)
        _grayImg = separateColors();
        int brightness_threshold = 120; // brightness threshold; depends on your exposure
        Mat binBrightImg;
        // Thresholding: keep only the bright regions
        threshold(_grayImg, binBrightImg, brightness_threshold, 255, cv::THRESH_BINARY);

        // Dilation
        Mat element = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(3, 3));
        dilate(binBrightImg, binBrightImg, element);

        // Find contours
        vector<vector<Point> > lightContours;
        findContours(binBrightImg.clone(), lightContours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

        ////// debug //////
        _debugImg = _roiImg.clone();
        for(size_t i = 0; i < lightContours.size(); i++){
            drawContours(_debugImg, lightContours, static_cast<int>(i), Scalar(0, 0, 255), 3, 8);
        }
        ///////////////////


        // Filter the light strips
        vector<LightDescriptor> lightInfos;
        filterContours(lightContours, lightInfos);
        // Return -1 if no light strips were found
        if(lightInfos.empty()){
            return -1;
        }

        // debug: draw the light strip contours
        drawLightInfo(lightInfos);

        // Match light strips into armor plates
        _armors = matchArmor(lightInfos);
        if(_armors.empty()){
            return -1;
        }

        // Draw the armor plate regions
        for(size_t i = 0; i < _armors.size(); i++){
            vector<Point2i> points;
            for(int j = 0; j < 4; j++){
                points.push_back(Point(static_cast<int>(_armors[i].vertex[j].x), static_cast<int>(_armors[i].vertex[j].y)));
            }

            polylines(_debugImg, points, true, Scalar(0, 255, 0), 3, 8, 0); // draw unfilled polygons
        }

        return 0;
    }

    // Separate the color channels, keep the color we need (i.e., the enemy's), and return a grayscale image
    Mat separateColors(){
        vector<Mat> channels;
        // Split a 3-channel image into three single-channel images
        split(_roiImg, channels);

        Mat grayImg;

        // Remove the colors we don't want.
        // For a red object, the R component is the largest and G/B are ideally 0; likewise, a blue
        // object has the largest B component. Subtracting the unwanted channel leaves the color we want.
        if(_enemy_color == RED){
            grayImg = channels.at(2) - channels.at(0); // R - B
        }
        else{
            grayImg = channels.at(0) - channels.at(2); // B - R
        }
        return grayImg;
    }

    // Filter the contours that satisfy the light strip criteria
    // Input: the contour list; output: the list of light strip descriptors
    void filterContours(vector<vector<Point> >& lightContours, vector<LightDescriptor>& lightInfos){
        for(const auto& contour : lightContours){
            // Compute the area
            float lightContourArea = contourArea(contour);
            // Discard contours whose area is too small
            if(lightContourArea < _param.light_min_area) continue;
            // fitEllipse needs at least 5 points
            if(contour.size() < 5) continue;
            // Fit an ellipse to get a rotated bounding rectangle
            RotatedRect lightRec = fitEllipse(contour);
            // Normalize the strip's angle into the range -45 to 45 degrees
            adjustRec(lightRec);
            // Filter by aspect ratio and solidity (solidity = contour area / bounding rect area)
            if(lightRec.size.width / lightRec.size.height > _param.light_max_ratio ||
               lightContourArea / lightRec.size.area() < _param.light_contour_min_solidity)
                continue;
            // Slightly enlarge the light strip region
            lightRec.size.width *= _param.light_color_detect_extend_ratio;
            lightRec.size.height *= _param.light_color_detect_extend_ratio;

            // The channel subtraction already filtered out our own strips, so no color check is needed here
            lightInfos.push_back(LightDescriptor(lightRec));
        }
    }



    // Draw the rotated rectangles of the light strips
    void drawLightInfo(vector<LightDescriptor>& LD){
        vector<std::vector<cv::Point> > cons;
        int idx = 0;
        for(auto& lightinfo : LD){
            RotatedRect rotate = lightinfo.rec();
            cv::Point2f vertices[4]; // stack array instead of new[] to avoid a memory leak
            rotate.points(vertices);
            vector<Point> con;
            for(int j = 0; j < 4; j++){
                con.push_back(vertices[j]);
            }
            cons.push_back(con);
            drawContours(_debugImg, cons, idx, Scalar(0, 255, 255), 3, 8);
            idx++;
        }
    }

    // Match light strips in pairs to find armor plates
    vector<ArmorDescriptor> matchArmor(vector<LightDescriptor>& lightInfos){
        vector<ArmorDescriptor> armors;
        // Sort the light strips by center x, ascending
        sort(lightInfos.begin(), lightInfos.end(), [](const LightDescriptor& ld1, const LightDescriptor& ld2){
            // lambda comparator for sort
            return ld1.center.x < ld2.center.x;
        });
        // Try to match every pair of light strips
        for(size_t i = 0; i < lightInfos.size(); i++){
            for(size_t j = i + 1; j < lightInfos.size(); j++){
                const LightDescriptor& leftLight  = lightInfos[i];
                const LightDescriptor& rightLight = lightInfos[j];

                // Angle difference
                float angleDiff_ = abs(leftLight.angle - rightLight.angle);
                // Length difference ratio
                float LenDiff_ratio = abs(leftLight.length - rightLight.length) / max(leftLight.length, rightLight.length);
                // Filter
                if(angleDiff_ > _param.light_max_angle_diff_ ||
                   LenDiff_ratio > _param.light_max_height_diff_ratio_){
                    continue;
                }
                // Distance between the left and right strip centers
                float dis = distance(leftLight.center, rightLight.center);
                // Mean length of the two strips
                float meanLen = (leftLight.length + rightLight.length) / 2;
                // Difference of the center y coordinates
                float yDiff = abs(leftLight.center.y - rightLight.center.y);
                // y difference ratio
                float yDiff_ratio = yDiff / meanLen;
                // Difference of the center x coordinates
                float xDiff = abs(leftLight.center.x - rightLight.center.x);
                // x difference ratio
                float xDiff_ratio = xDiff / meanLen;
                // Ratio of the center distance to the strip length
                float ratio = dis / meanLen;
                // Filter
                if(yDiff_ratio > _param.light_max_y_diff_ratio_ ||
                   xDiff_ratio < _param.light_min_x_diff_ratio_ ||
                   ratio > _param.armor_max_aspect_ratio_ ||
                   ratio < _param.armor_min_aspect_ratio_){
                    continue;
                }

                // Classify big/small armor by the ratio
                int armorType = ratio > _param.armor_big_armor_ratio ? BIG_ARMOR : SMALL_ARMOR;
                // Compute the rotation score
                float ratiOff = (armorType == BIG_ARMOR) ? max(_param.armor_big_armor_ratio - ratio, float(0)) : max(_param.armor_small_armor_ratio - ratio, float(0));
                float yOff = yDiff / meanLen;
                float rotationScore = -(ratiOff * ratiOff + yOff * yOff);
                // Build the matched armor plate
                ArmorDescriptor armor(leftLight, rightLight, armorType, _grayImg, rotationScore, _param);

                armors.emplace_back(armor);
                break;
            }
        }
        return armors;
    }

    void adjustRec(cv::RotatedRect& rec)
    {
        using std::swap;

        float& width = rec.size.width;
        float& height = rec.size.height;
        float& angle = rec.angle;



        while(angle >= 90.0) angle -= 180.0;
        while(angle < -90.0) angle += 180.0;


        if(angle >= 45.0)
        {
            swap(width, height);
            angle -= 90.0;
        }
        else if(angle < -45.0)
        {
            swap(width, height);
            angle += 90.0;
        }


    }
    cv::Mat _debugImg;
private:
    int _enemy_color;
    int _self_color;

    cv::Rect _roi; // ROI region

    cv::Mat _srcImg; // the loaded source image
    cv::Mat _roiImg; // the cropped ROI image
    cv::Mat _grayImg; // grayscale image of the ROI region
    vector<ArmorDescriptor> _armors;

    ArmorParam _param;
};


int main(int argc, char *argv[])
{
    std::string img_folder = "/media/user/png/"; // folder containing the input image sequence
    std::string out_folder = "/media/user/output/"; // folder for the output images

    // Frame to start reading from
    long frameToStart = 1;
    cout << "Reading from frame " << frameToStart << endl;
    int frameToStop = 100;
    if (frameToStop < frameToStart)
    {
        cout << "Error: the stop frame is earlier than the start frame" << endl;
        return -1;
    }
    int count = 0;

    Mat img;
    int i = frameToStart;
    struct timeval start, end;
    double elapsed_time;

    while(i < frameToStop)
    {
        count++;
        std::string img_path = img_folder + "output_" + std::to_string(i) + ".png";
        img = imread(img_path);
        if (img.empty())
        {
            std::cerr << "Failed to read image file " << img_path << std::endl;
            return -1;
        }

        gettimeofday(&start, NULL); // record the start time

        ArmorDetector detector;
        detector.init(RED);
        detector.loadImg(img);
        detector.detect();

        gettimeofday(&end, NULL); // record the end time
        elapsed_time = (end.tv_sec - start.tv_sec) + (end.tv_usec - start.tv_usec) / 1000000.0;

        img_path = out_folder + "result_" + std::to_string(i) + ".png";
        imwrite(img_path, detector._debugImg);
        i++;
        cout << count << ": " << elapsed_time << endl;
    }
}

Test Results

You can view the test result video on Bilibili: https://www.bilibili.com/video/BV1gz4y1t7QB/.

The original video we used was recorded at 12 frames per second. During processing, we printed the processing time for each frame:

[root@milkv]/media/user# ./armor
Reading from the 1st frame
1: 0.304027
2: 0.313252
3: 0.314704
4: 0.407992
5: 0.357159
6: 0.324584
7: 0.322873
8: 0.333659
9: 0.312911
10: 0.313014
11: 0.318744
12: 0.574995
13: 0.366383
14: 0.311588
15: 0.414218
16: 0.354194
17: 0.33607
18: 0.314957
19: 0.501529
20: 0.364166
21: 0.394598
22: 0.44688

From the processing results, it appears that the armor plate detection in the video is as effective as on a computer, but the processing speed is significantly slower. It’s processing at approximately 3 frames per second, which doesn’t meet the real-time detection requirements. The processing capability of the Milk-V Duo processing board may be better suited for scenarios involving camera capture and compressed transmission. For algorithms like video detection, if real-time requirements are not strict, it may still be worth trying.