Open Access Paper
21 October 2024
Research on the detection of safflower filaments in natural scenarios based on deep learning algorithms
Bangbang Chen, Baojian Ma, Xiangdong Liu, Di Yan
Proceedings Volume 13401, International Conference on Automation and Intelligent Technology (ICAIT 2024); 1340107 (2024) https://doi.org/10.1117/12.3036156
Event: 2024 International Conference on Automation and Intelligent Technology (ICAIT 2024), 2024, Wuhan, China
Abstract
In response to the challenges posed by high labor costs and low mechanization in the harvesting of safflower filaments in Xinjiang, this study introduces an intelligent detection method based on YOLOv8s. A safflower filament dataset was built and augmented as the basis for a detection model constructed around the C2f, SPPF, and Detect modules and the associated loss function. The model was evaluated using recall (R), precision (P), and mean average precision (mAP). We compared 12 object detection algorithm variants drawn from the YOLOv3, YOLOv5, YOLOv6, and YOLOv8 families. The results show that YOLOv8s achieved a precision of 82.8%, a recall of 78.2%, and a mAP of 86.2%. Relative to YOLOv3-tiny and YOLOv5s, YOLOv8s demonstrated higher recall and mAP, and despite its compact size of only 5.96 MB it produced higher-confidence detections with no missed or false detections. To further confirm its reliability, YOLOv8s was tested on safflower filaments under wide-angle, sunny, backlit, occluded, and shaking conditions, achieving mAP values of 91.8%, 92.8%, 90.3%, 79.0%, and 92.5%, respectively. These results demonstrate rapid and accurate detection while remaining lightweight and robust, offering a technical reference for the development of intelligent safflower harvesting robots.

1. INTRODUCTION

Safflower, celebrated as one of Xinjiang's four esteemed medicinal herbs, offers therapeutic benefits including anti-inflammatory effects, promotion of blood circulation, and blood pressure reduction [1]. It is predominantly cultivated in locales such as Changji, Hami, and Yumin. At present, the collection of safflower filaments is largely manual, a process both inefficient and costly, often resulting in significant economic losses for farmers due to harvesting delays. In recent years, the mechanical harvesting of safflower filaments has emerged as a key research area. For example, Weibin Cao's [2] team at Shihezi University developed a combing and clamping harvesting head assembly that uses multiple sets of combing and clamping heads to harvest the filaments, achieving a total plant clean rate of 79.55%. The handheld rotary cutting harvester head designed by Zhengguo Zhang's [3] team at Xinjiang Agricultural University handles flowers point by point, using a slanted rotating cutter blade to separate the filaments from the fruit at the harvesting point. Meanwhile, Yun Ge's [4] team developed a manually operated roller-type safflower filament harvesting head that requires laborers to align the filaments by hand and carry the device on their backs, resulting in low efficiency. Given the rapid progress of artificial intelligence and the growing maturity of machine learning in agricultural machinery, research on the intelligent mechanized harvesting of safflower filaments, which would improve on manual harvesting efficiency and further advance mechanization, is of considerable value for engineering applications.

Research on the intelligent mechanical harvesting of safflower filaments is still nascent, with no mature technologies established. Zhang Zhengguo [5] and others proposed a detection method for safflower filaments in complex environments using an improved YOLOv3 model, which uses the lightweight GhostNet as the backbone network, incorporates the SPP structure to enhance feature representation, and integrates the CBAM module into the feature pyramid. Compared to algorithms such as Faster R-CNN, YOLOv3, and YOLOv4, this approach offers advantages in the accuracy of safflower filament detection. Another contribution is the C-YOLOv5m network model proposed by Hui Guo and others [6], which improves detection accuracy by inserting attention modules into the backbone and neck networks of YOLOv5m. Despite these advances, challenges such as low detection precision and high model memory consumption persist. While research on the intelligent detection of safflower filaments remains sparse, mature detection technologies exist in other fields. For instance, Jicheng Zhang [7] proposed a method based on deep residual learning for detecting mature strawberries with an accuracy of 94.28%. Xiao He [8] improved the SSD network for detecting diseases and pests on watermelon leaves, raising detection accuracy by 1.1% to 3.8%. Wenjing Zhang and others [9], addressing mutual occlusion between tomato branches and fruits, proposed a detection method based on the Faster R-CNN network structure, achieving an average detection rate of 95.2% for tomatoes. Jinwen Qi [10] raised apple detection accuracy to 95.7% using data augmentation techniques and an improved YOLOv5 algorithm. Yubin Lan and others [11] introduced the Ghost module and Ghost BottleNeck structure from the GhostNet network into the YOLOv5s model, along with the CA attention module, to improve the detection accuracy of ginger diseases and pests. Jiacai Huang and others [12] developed an improved lightweight Mobile-YOLOv5s strawberry detection algorithm that incorporates the Alpha-IoU loss function and anchors reclustered with the K-Means++ algorithm, achieving a detection precision of 99.5% for mature strawberries. Hongping Zhou and others [13] proposed a classification and detection method for camellia fruits based on transfer learning and the YOLOv8n algorithm, reaching a best mAP of 92.7%. Finally, Baoxia Du and others [14] improved the existing YOLOv8 algorithm to develop a new apple detection model that raised average precision by 1.7%.

Building on the aforementioned findings, this study proposes a safflower filament detection method based on YOLOv8s. Images of safflower filaments collected from various natural environments are augmented and labeled, then processed through the fine-tuned YOLOv8s network model to achieve recognition in natural settings. This approach aims to develop a lightweight and rapid detection model for filament detection, providing technical support for the design of vision systems in intelligent harvesting robots.

2. MATERIALS AND METHODS

2.1 Materials

This study focuses on the dryland safflower extensively cultivated in Xinjiang. Experimental data were collected at the safflower planting base of Xinjiang University of Technology in early August 2023. Images were captured under various natural conditions (sunny, cloudy, occluded, shaking, etc.) to avoid dependence on a single scene type and to reduce overfitting. Data augmentation techniques such as brightness adjustment, mirroring, Gaussian noise addition, and rotation were applied to the original images, expanding the dataset to 1600 images; some examples are shown in Figure 1. Based on the actual situations faced by intelligent picking robots, the safflower filaments to be recognized were labeled in two classes: "Blooming Filaments" (Safflower-B) for naturally open filaments and "Withered Filaments" (Safflower-D) for already faded filaments. The 1600 labeled images were divided into a training set (1120 images), a validation set (320 images), and a test set (160 images) in a 7:2:1 ratio.

Figure 1. Examples from the safflower filament dataset.
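As a minimal sketch of the augmentation pipeline described above (brightness adjustment, mirroring, Gaussian noise, rotation), assuming OpenCV and NumPy; file names and parameter values are illustrative, not those used in the paper:

```python
# Illustrative augmentation sketch: brightness, mirror, noise, rotation.
import cv2
import numpy as np

def adjust_brightness(img: np.ndarray, factor: float) -> np.ndarray:
    """Scale pixel intensities and clip back to the valid 8-bit range."""
    return np.clip(img.astype(np.float32) * factor, 0, 255).astype(np.uint8)

def mirror(img: np.ndarray) -> np.ndarray:
    """Horizontal flip."""
    return cv2.flip(img, 1)

def add_gaussian_noise(img: np.ndarray, sigma: float = 10.0) -> np.ndarray:
    """Add zero-mean Gaussian noise with standard deviation sigma."""
    noise = np.random.normal(0.0, sigma, img.shape).astype(np.float32)
    return np.clip(img.astype(np.float32) + noise, 0, 255).astype(np.uint8)

def rotate(img: np.ndarray, angle_deg: float) -> np.ndarray:
    """Rotate about the image center, keeping the original canvas size."""
    h, w = img.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)
    return cv2.warpAffine(img, m, (w, h))

if __name__ == "__main__":
    src = cv2.imread("safflower_0001.jpg")  # hypothetical file name
    variants = [
        adjust_brightness(src, 1.3),
        mirror(src),
        add_gaussian_noise(src),
        rotate(src, 15.0),
    ]
    for i, v in enumerate(variants):
        cv2.imwrite(f"safflower_0001_aug{i}.jpg", v)
```

Note that for detection datasets the bounding-box labels must be transformed along with the images for geometric operations such as mirroring and rotation.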

2.2 Experimental platform

The experiments were conducted on the Windows 11 operating system with an AMD Ryzen Threadripper PRO 3975WX 32-core 3.50 GHz processor and an NVIDIA RTX A5000 graphics card. PyTorch was used as the deep learning framework, PyCharm 2023 as the programming platform, and Python as the programming language. All algorithms were executed in this environment.
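A quick sanity check of such an environment, assuming a CUDA-enabled PyTorch build, might look like the following:

```python
# Verify that PyTorch can see the GPU before training.
import torch

print(torch.__version__)
print(torch.cuda.is_available())       # expect True on this platform
print(torch.cuda.get_device_name(0))   # expect an "NVIDIA RTX A5000" string
```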

2.3 Application of the YOLOv8s network to safflower filament detection

This study employed the YOLOv8 network model for safflower filament detection. Compared to the YOLOv5 network, YOLOv8 replaces the C3 module with the C2f module, reducing the parameter count and computational load and further emphasizing a lightweight design while improving model performance.

2.3.1 YOLOv8 network structure

The YOLOv8 network structure [15] is illustrated in Figure 2. It consists mainly of a Backbone network, a Neck network, and a Head network. The Backbone network, responsible for feature extraction, comprises 5 convolutional modules, 4 C2f modules, and 1 SPPF module. The convolutional modules perform downsampling and nonlinear representation, while the C2f modules reduce the parameter count and thus the computational load. The Backbone extracts features using a series of convolutional and deconvolutional layers, employing residual connections [16] and bottleneck structures to reduce network size and enhance performance. The Neck network, a crucial component of the YOLOv8 model, performs multi-scale feature fusion, enhancing feature representation by merging feature maps from different stages of the Backbone. The Head network carries the core modification of YOLOv8, a decoupled head that divides the original detection head into two parts responsible for the final object localization and classification tasks.

Figure 2. YOLOv8 network structure.

A central building block of the Backbone network is the C2f module (Figure 3), designed to increase feature richness and network expressiveness by adding more skip connections and extra Split operations, allowing the network to learn and represent image information more effectively and thereby improving accuracy and performance.

Figure 3. C2f module.
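The C2f design can be sketched in PyTorch as follows. This is a simplified paraphrase of the public Ultralytics design, not the authors' exact code: a 1x1 convolution splits the features, a chain of bottleneck blocks adds the extra skip paths, and all branches are concatenated before a final 1x1 convolution.

```python
# Simplified C2f sketch following the public Ultralytics YOLOv8 design.
import torch
import torch.nn as nn

class ConvBNSiLU(nn.Module):
    """Conv2d + BatchNorm + SiLU, the basic conv block in YOLOv8."""
    def __init__(self, c_in, c_out, k=1, s=1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, s, k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()
    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class Bottleneck(nn.Module):
    """Two 3x3 convs with an optional residual connection."""
    def __init__(self, c, shortcut=True):
        super().__init__()
        self.cv1 = ConvBNSiLU(c, c, 3)
        self.cv2 = ConvBNSiLU(c, c, 3)
        self.add = shortcut
    def forward(self, x):
        y = self.cv2(self.cv1(x))
        return x + y if self.add else y

class C2f(nn.Module):
    def __init__(self, c_in, c_out, n=1, shortcut=False):
        super().__init__()
        self.c = c_out // 2
        self.cv1 = ConvBNSiLU(c_in, 2 * self.c, 1)
        self.cv2 = ConvBNSiLU((2 + n) * self.c, c_out, 1)
        self.blocks = nn.ModuleList(Bottleneck(self.c, shortcut) for _ in range(n))
    def forward(self, x):
        y = list(self.cv1(x).chunk(2, dim=1))   # split into two branches
        for block in self.blocks:
            y.append(block(y[-1]))              # extra skip connections
        return self.cv2(torch.cat(y, dim=1))    # fuse all branches
```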

The SPPF module’s main function is multi-scale feature extraction and enhancement to improve the robustness of object detection. Compared to the SPP module used in YOLOv5 (Figure 4), employing the SPPF module can reduce computational load to a certain extent and has advantages in multi-scale feature extraction and enhanced feature representation.

Figure 4. SPPF module.
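The efficiency gain over SPP comes from cascading a single small max-pool rather than running several large pools in parallel. A sketch, reusing the ConvBNSiLU block defined in the C2f sketch above:

```python
# SPPF sketch: three cascaded 5x5 max-pools reuse each other's output,
# matching the receptive fields of SPP's parallel 5/9/13 pools at lower cost.
import torch
import torch.nn as nn

class SPPF(nn.Module):
    def __init__(self, c_in, c_out, k=5):
        super().__init__()
        c_hidden = c_in // 2
        self.cv1 = ConvBNSiLU(c_in, c_hidden, 1)
        self.cv2 = ConvBNSiLU(c_hidden * 4, c_out, 1)
        self.pool = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)
    def forward(self, x):
        x = self.cv1(x)
        y1 = self.pool(x)    # equivalent to one 5x5 pool
        y2 = self.pool(y1)   # two stacked pools emulate a 9x9 pool
        y3 = self.pool(y2)   # three emulate a 13x13 pool
        return self.cv2(torch.cat((x, y1, y2, y3), dim=1))
```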

2.3.2 Evaluation metrics

The evaluation of the YOLOv8s safflower filament detection model primarily uses Recall, Precision, and mean Average Precision (mAP). The formulas for these metrics are given in equations (1) to (3):

$$P = \frac{TP}{TP + FP} \times 100\% \quad (1)$$

$$R = \frac{TP}{TP + FN} \times 100\% \quad (2)$$

$$mAP = \frac{1}{k} \sum_{i=1}^{k} AP_i \times 100\%, \qquad AP_i = \int_0^1 P_i(R_i)\, dR_i \quad (3)$$

In the formula, TP stands for true positive, FP for false positive, FN for false negative, and TN for true negative. A true negative means that, within a certain sample, both the actual and predicted labels are 0. A true positive means that both the actual and predicted labels are 1 within a certain sample. A false positive indicates that the actual label is 0 but the predicted label is 1 in a sample. A false negative means that in a certain sample, the actual label is 1 but the predicted label is 0. The parameter k represents the number of categories of detected safflower filaments, and this study only identifies Safflower-B and Safflower-D, so k = 2.
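A minimal sketch of these metrics from detection counts follows; the counts used in the example are hypothetical, and in practice each per-class AP comes from integrating the precision-recall curve as in equation (3):

```python
# Evaluation metrics of equations (1)-(3) computed from detection counts.
def precision(tp: int, fp: int) -> float:
    """Equation (1): fraction of predicted positives that are correct."""
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(tp: int, fn: int) -> float:
    """Equation (2): fraction of actual positives that were found."""
    return tp / (tp + fn) if (tp + fn) else 0.0

def mean_average_precision(ap_per_class: list[float]) -> float:
    """Equation (3): average of per-class APs; here k = 2
    (Safflower-B and Safflower-D)."""
    return sum(ap_per_class) / len(ap_per_class)

# Hypothetical counts and AP values, for illustration only:
print(precision(tp=80, fp=20))               # 0.8
print(recall(tp=80, fn=25))                  # ~0.762
print(mean_average_precision([0.90, 0.82]))  # 0.86
```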

3. RESULTS AND ANALYSIS

3.1 Model training

The model was trained for 200 epochs. As shown in Figure 5, the P, R, and mAP indicators of the YOLOv8s model change with the number of training epochs. By around 150 epochs the model had basically stabilized, with a P value of 82.8%, an R value of 78.2%, and a mAP of 86.2% for safflower filament detection.

Figure 5. Training results.
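A training run of this kind can be sketched with the Ultralytics YOLOv8 package; the dataset YAML path and image size below are assumptions, not values reported in the paper:

```python
# Sketch of a 200-epoch YOLOv8s training run using the Ultralytics API.
from ultralytics import YOLO

model = YOLO("yolov8s.pt")       # pretrained YOLOv8s weights
results = model.train(
    data="safflower.yaml",       # hypothetical dataset config (paths + 2 classes)
    epochs=200,
    imgsz=640,                   # assumed input resolution
)
metrics = model.val()            # reports P, R, and mAP on the validation split
```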

3.2 Detection result analysis

3.2.1 Performance comparison of different object detection algorithms

To evaluate the detection performance of the YOLOv8s network on safflower filament images, the safflower filament training set was trained separately, under identical conditions, with the YOLOv3, YOLOv3-spp, YOLOv3-tiny, YOLOv5s, YOLOv5m, YOLOv5n, YOLOv6s, YOLOv6m, YOLOv6n, YOLOv8s, YOLOv8m, and YOLOv8n object detection algorithms for 200 epochs each. The performance of the 12 detection algorithms was then evaluated on the safflower filament test set, as shown in the sketch below. Table 1 compares the detection results of the 12 algorithms.
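A hedged sketch of such a comparison loop follows. The model config identifiers are assumptions based on the public Ultralytics model zoo and vary by release; some variants may need to be trained from their original repositories:

```python
# Sketch: train each variant for 200 epochs, then evaluate on the test split.
from ultralytics import YOLO

VARIANTS = [
    "yolov3.yaml", "yolov3-spp.yaml", "yolov3-tiny.yaml",   # assumed config names
    "yolov5s.yaml", "yolov5m.yaml", "yolov5n.yaml",
    "yolov6s.yaml", "yolov6m.yaml", "yolov6n.yaml",
    "yolov8s.yaml", "yolov8m.yaml", "yolov8n.yaml",
]

for cfg in VARIANTS:
    model = YOLO(cfg)
    model.train(data="safflower.yaml", epochs=200, imgsz=640)  # hypothetical YAML
    metrics = model.val(split="test")    # P, R, mAP on the held-out test set
    print(cfg, metrics.box.map50)        # mAP at IoU 0.5
```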

Table 1. Performance comparison of 12 object detection algorithms

Algorithm     P/%    R/%    mAP/%   Model size (MB)   Time (ms)
YOLOv3        83.1   74.1   84.5    207.8             2.3
YOLOv3-spp    79.5   80.0   85.5    209.9             2.3
YOLOv3-tiny   83.0   76.4   84.1    23.2              2.7
YOLOv5s       83.9   76.1   85.9    18.5              2.7
YOLOv5m       82.3   78.6   78.4    50.5              2.7
YOLOv5n       77.9   78.7   84.6    5.3               2.6
YOLOv8s       82.8   78.2   86.2    5.96              2.8
YOLOv8m       82.4   79.2   85.5    49.6              2.6
YOLOv8n       77.1   81.9   86.0    5.95              2.7
YOLOv6s       82.3   78.5   84.4    31.3              2.6
YOLOv6m       82.7   77.9   85.3    99.4              2.5
YOLOv6n       79.3   80.4   85.4    8.7               2.7

Analyzing the test results of the three YOLOv3 algorithms, the detection precision P of the YOLOv3-tiny algorithm for safflower filaments was 3.5 percentage points higher than that of YOLOv3-spp, and its R value was 2.3 percentage points higher than that of YOLOv3. Its model size is 184.6 MB and 186.7 MB smaller than those of YOLOv3 and YOLOv3-spp, respectively. The precision of YOLOv3-tiny is slightly lower than that of YOLOv3, but it maintains relatively high precision while being lightweight.

Among the three YOLOv5 algorithms, the detection precision P of YOLOv5s is 1.6 and 6 percentage points higher than that of YOLOv5m and YOLOv5n, respectively, and its mAP is 7.5 and 1.3 percentage points higher, respectively. Its model size is 32 MB smaller than YOLOv5m and only slightly larger than YOLOv5n; since the P value of YOLOv5n is much lower than the other models, YOLOv5s maintains relatively high precision for safflower filament recognition.

Among the three YOLOv6 algorithms, the recall R of YOLOv6n is clearly the highest, 1.9 and 2.5 percentage points above YOLOv6s and YOLOv6m, respectively, and its mAP is 1.0 and 0.1 percentage points higher, respectively. Its model size is 22.6 MB and 90.7 MB smaller than YOLOv6s and YOLOv6m, respectively; this small size makes YOLOv6n suitable for model transfer learning while retaining relatively high precision.

Among the three YOLOv8 algorithms, the detection precision P of YOLOv8s is 0.4 and 5.7 percentage points higher than that of YOLOv8m and YOLOv8n, respectively, and its mAP is 0.7 and 0.2 percentage points higher, respectively. Its model size is 43.64 MB smaller than YOLOv8m and only slightly larger than YOLOv8n; since the precision of YOLOv8n is too low, YOLOv8s better ensures recognition precision for safflower filaments.

In summary, the YOLOv8s, YOLOv5s, YOLOv6n, and YOLOv3-tiny models achieved high recognition precision for safflower filaments.

3.2.2 Confidence comparison

Using the YOLOv3-tiny, YOLOv5s, YOLOv6n, and YOLOv8s models, we conducted confidence testing on three randomly selected images. The results are shown in Table 2, and the detection effects are illustrated in Figure 6. The three images of safflower filaments are labeled as T1, T2, and T3 respectively. In the images, purple arrows indicate missed detections.

Figure 6. Confidence comparison effect.

Table 2. Confidence comparison results

Algorithm     Image   Detections   Confidence
YOLOv3-tiny   T1      3            0.64, 0.77, 0.78
              T2      2            0.81, 0.84
              T3      6            0.75, 0.79, 0.71, 0.33, 0.81, 0.70
YOLOv5s       T1      3            0.52, 0.79, 0.80
              T2      2            0.91, 0.85
              T3      7            0.77, 0.78, 0.77, 0.49, 0.78, 0.80, 0.30
YOLOv6n       T1      3            0.51, 0.79, 0.78
              T2      2            0.89, 0.86
              T3      7            0.82, 0.5, 0.75, 0.58, 0.85, 0.82, 0.8
YOLOv8s       T1      4            0.58, 0.54, 0.81, 0.82
              T2      2            0.93, 0.84
              T3      7            0.82, 0.78, 0.8, 0.76, 0.75, 0.49, 0.45
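Reading the per-detection confidences off a trained model can be sketched as follows; the weights path and confidence threshold are illustrative:

```python
# Sketch of the confidence test: run each sample image and list confidences.
from ultralytics import YOLO

model = YOLO("runs/detect/train/weights/best.pt")  # hypothetical weights path
for name in ("T1.jpg", "T2.jpg", "T3.jpg"):        # hypothetical image files
    result = model.predict(name, conf=0.25)[0]     # assumed threshold
    confidences = [float(c) for c in result.boxes.conf]
    print(name, len(confidences), confidences)
```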

As Figure 6 shows, comparing the four models on the same safflower filament images: the YOLOv3-tiny model shows a purple arrow in image T1, indicating a missed filament detection, no anomalies in image T2, and a purple arrow in image T3, indicating a missed detection of a red filament; the YOLOv5s model shows a purple arrow in image T1, indicating a missed detection of a red filament, with no missed detections in images T2 and T3; the YOLOv6n model shows a purple arrow in image T1 indicating a missed detection, while no missed or false detections are observed in T2 and T3; the YOLOv8s model shows no missed or false detections in images T1, T2, and T3, with higher confidence overall.

3.2.3 Safflower filament detection performance tests in different scenarios

To further determine which of the four models, YOLOv8s, YOLOv5s, YOLOv6n, and YOLOv3-tiny, performs best at recognizing and detecting safflower filaments, images of safflower filaments under wide-angle, backlit, sunny, occluded, and shaky conditions were selected for validation and testing. The results are shown in Table 3.

Table 3. Model detection results in different scenarios

Algorithm     Scenario     P/%    R/%    mAP/%
YOLOv3-tiny   Wide-angle   86.7   77.3   87.8
              Backlit      91.5   78.6   89.9
              Sunny        96.0   75.6   91.7
              Occluded     86.3   64.0   79.2
              Shaky        89.0   79.5   90.0
YOLOv5s       Wide-angle   88.1   78.0   91.0
              Backlit      88.3   79.2   91.5
              Sunny        94.9   81.9   93.0
              Occluded     89.4   66.8   83.9
              Shaky        93.0   81.1   92.5
YOLOv6n       Wide-angle   87.8   80.1   90.8
              Backlit      98.1   68.9   88.7
              Sunny        91.9   82.7   92.8
              Occluded     93.3   59.8   80.1
              Shaky        92.3   79.7   92.0
YOLOv8s       Wide-angle   87.8   84.0   91.8
              Backlit      90.1   79.4   90.3
              Sunny        93.8   81.6   92.8
              Occluded     87.0   65.0   79.0
              Shaky        93.6   81.2   92.5

As Table 3 shows, under wide-angle conditions the YOLOv5s model has a higher recognition precision for safflower filaments than the YOLOv8s model, but the recall rate R and mAP of YOLOv8s are higher than those of all other models.

In backlit conditions, the precision of YOLOv8s is higher than that of YOLOv5s, and its recall rate R is the highest of the four models, indicating that the YOLOv8s object detection algorithm can effectively identify safflower filaments against backlighting.

On sunny days, the P value, R value, and mAP of YOLOv8s are slightly lower than those of YOLOv5s; however, the 18.5 MB YOLOv5s model is much larger than YOLOv8s, and the recall rate R and mAP of YOLOv8s exceed those of YOLOv3-tiny, so YOLOv8s remains well suited to lightweight applications while detecting safflower filaments reliably in sunny conditions.

In occluded environments, the precision of YOLOv8s is slightly lower than that of YOLOv5s, but the large YOLOv5s model does not meet the lightweight requirement, whereas the precision P of YOLOv8s is higher than that of YOLOv3-tiny and its recall rate R is higher than those of both YOLOv3-tiny and YOLOv6n, suggesting that YOLOv8s is better suited to identifying and picking safflower filaments in densely grown safflower fields.

Under shaking and other external disturbances, the recognition precision of YOLOv8s is higher than those of YOLOv5s, YOLOv6n, and YOLOv3-tiny, its R value exceeds the other models, and its mAP matches YOLOv5s while exceeding the other two; thus YOLOv8s detects safflower filaments better under external disturbances.

Overall, the comparison of model metrics across environments shows that the YOLOv8s model can effectively recognize safflower filaments, which is significant for the harvesting of safflower filaments.

4. CONCLUSION

  • (1) The YOLOv8s detection algorithm achieves a precision P of 82.8%, a recall R of 78.2%, and a mean average precision mAP of 86.2%. The model size is 5.96 MB, and the detection time for safflower filaments is 2.8 ms per image. Compared with YOLOv3-tiny and YOLOv5s, the R value of YOLOv8s is 1.8 and 2.1 percentage points higher, respectively, and its mAP is 2.1, 0.3, and 0.8 percentage points higher than those of YOLOv3-tiny, YOLOv5s, and YOLOv6n, respectively. The model remains lightweight while detecting quickly, offering clear advantages in training, minimal memory usage, and easy model transfer and learning.

  • (2) For safflower filaments under wide viewing angles, backlighting, sunny conditions, occlusion, and shaking, the detection method proposed in this study can effectively recognize and detect safflower filaments, indicating that the method based on the YOLOv8s detection network is robust.

ACKNOWLEDGEMENTS

This work was supported by the second batch of the Tianshan Talent Cultivation Plan, Young Talent Support Project (2023TSYCQNTJ0040).

REFERENCES

[1] National Pharmacopoeia Commission, "Pharmacopoeia of the People's Republic of China," 232–233, China Medical Science and Technology Press, Beijing, China (2020).

[2] Cao Weibin, Yang Shuangping, Li Shufeng, et al., "Parameter optimization of height limiting device for comb-type safflower harvesting machine," Transactions of the Chinese Society of Agricultural Engineering, 35(14), 48–56 (2019). https://doi.org/10.11975/j.issn.1002-6819.2019.14.006

[3] Zhang Zhenguo, Lv Quangui, Ren Jieyu, et al., "Design of critical components for safflower harvesting machinery by rotary shear," Journal of Chinese Agricultural Mechanization, 40(07), 1–6 (2019). https://doi.org/10.13733/j.jcam.issn.2095-5553.2019.07.01

[4] Ge Yun, Zhang Lixin, Ying Qing, et al., "Dynamic model for sucking process of pneumatic cutting-type safflower harvest device," International Journal of Agricultural and Biological Engineering, 9(5), 43–50 (2016).

[5] Zhang Zhenguo, Xing Zhenyu, Zhao Minyi, et al., "Detecting safflower filaments using an improved YOLOv3 under complex environments," Transactions of the Chinese Society of Agricultural Engineering, 39(3), 162–170 (2023). https://doi.org/10.11975/j.issn.1002-6819.202211204

[6] Guo Hui, Chen Haiyang, Gao Guomin, et al., "Safflower Corolla Object Detection and Spatial Positioning Methods Based on YOLOv5m," Transactions of the Chinese Society for Agricultural Machinery, 54(07), 272–281 (2023). https://doi.org/10.6041/j.issn.1000-1298.2023.07.027

[7] Zhang Jicheng, Li Deshun, "Ripe strawberry recognition method based on deep residual learning," Journal of Chinese Agricultural Mechanization, 43(02), 136–142 (2022). https://doi.org/10.13733/j.jcam.issn.2095-5553.2022.02.019

[8] He Xiao, "Deep Learning Based Recognition and Detection Method of Watermelon Leaf Diseases," Hunan Agricultural University, China (2021).

[9] Zhang Wenjing, Zhao Xingxiang, Ding Ruirou, et al., "A Detection and Recognition Method for Tomato on Faster R-CNN Algorithm," Journal of Shandong Agricultural University (Natural Science Edition), 52(04), 624–630 (2021). https://doi.org/10.3969/j.issn.1000-2324.2021.04.017

[10] Qi Jinwen, "Research on Apple Target Recognition and Localization Algorithm Based on Deep Learning," Electronic Technology & Software Engineering, (08), 189–192 (2022). https://kns.cnki.net

[11] Lan Yubin, Sun Binshu, Zhang Lechun, et al., "Identifying diseases and pests in ginger leaf under natural scenes using improved YOLOv5s," Transactions of the Chinese Society of Agricultural Engineering, 40(1), 210–216 (2024). https://doi.org/10.11975/j.issn.1002-6819.202310124

[12] Huang Jiacai, Zhao Xuedi, Gao Fangzheng, et al., "Recognizing and detecting the strawberry at multi-stages using improved lightweight YOLOv5s," Transactions of the Chinese Society of Agricultural Engineering, 39(21), 181–187 (2023). https://doi.org/10.11975/j.issn.1002-6819.202307186

[13] Zhou Hongping, Jin Shouxiang, Zhou Lei, et al., "Classification and recognition of camellia oleifera fruit in the field based on transfer learning and YOLOv8n," Transactions of the Chinese Society of Agricultural Engineering, 39(20), 159–166 (2023). https://doi.org/10.11975/j.issn.1002-6819.202307244

[14] Du Baoxia, Tang You, Xin Peng, et al., "Apple detection method based on improved YOLOv8," Wireless Internet Science and Technology, 20(13), 119–122 (2023). https://kns.cnki.net

[15] Lou Haitong, Duan Xuehu, Guo Junmei, et al., "DC-YOLOv8: small-size object detection algorithm based on camera sensor," Electronics, 12(10), 2323 (2023). https://doi.org/10.3390/electronics12102323

[16] Szegedy, Christian, et al., "Inception-v4, Inception-ResNet and the impact of residual connections on learning," Proceedings of the AAAI Conference on Artificial Intelligence, 4278–4284 (2017). arXiv:1602.07261
KEYWORDS: Object detection; Detection and tracking algorithms; Performance modeling; Backlighting; Education and training; Deep learning; Head
