Open Access | 21 June 2024
Special Section Guest Editorial: Quality Control by Artificial Vision VII
Abstract

Guest Editors Igor Jovančević and Jean-José Orteu introduce the Special Section on Quality Control by Artificial Vision VII.

Quality control by artificial vision has evolved rapidly thanks to advances in artificial intelligence, 2-D and 3-D vision sensors, image processing, and nonconventional optics. In recent years, new acquisition methods combined with smart image processing algorithms and deep learning have allowed quality control by artificial vision to emerge as a distinct scientific domain. Based largely upon the 16th International Conference on Quality Control by Artificial Vision, held in 2023 in Albi, France (https://qcav2023.sciencesconf.org/), this special section offers insights into this emerging research domain.

In the field of surface anomaly detection (AD) by deep learning, Rački et al. propose coupling unsupervised and supervised approaches. They use an unsupervised approach to build a model for generating pseudo labels, followed by a supervised approach to increase the robustness of AD. The proposed approach yields results comparable to those of a fully supervised approach, with a reduced need for labeled anomalous samples.
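As a rough illustration of this two-stage idea (not the authors' implementation), the sketch below uses an autoencoder's reconstruction error to generate pseudo anomaly labels that then supervise a lightweight classifier; the architectures and the threshold are placeholder choices.

```python
# A minimal sketch of pseudo-label-driven anomaly detection: the reconstruction
# error of an autoencoder (which would be trained on defect-free images in
# practice) produces pseudo labels, and those labels supervise a small classifier.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                                 nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid())
    def forward(self, x):
        return self.dec(self.enc(x))

def pseudo_labels(ae, images, threshold=0.02):
    """Label an image anomalous (1) if its mean reconstruction error exceeds a threshold."""
    with torch.no_grad():
        err = ((ae(images) - images) ** 2).mean(dim=(1, 2, 3))
    return (err > threshold).long()

# Stage 2: the pseudo labels supervise a lightweight classifier.
classifier = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                           nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2))

ae = ConvAutoencoder()
images = torch.rand(4, 1, 64, 64)          # stand-in for unlabeled surface images
labels = pseudo_labels(ae, images)         # unsupervised stage -> pseudo labels
loss = nn.CrossEntropyLoss()(classifier(images), labels)  # supervised stage
loss.backward()
```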

Ueda et al. propose a multi-object tracking method to estimate the 3D shape of individual wires inside electrical cables from X-ray CT images. Knowing the 3D shape of each individual wire is essential for precisely analyzing cable properties such as bending stiffness. The 3D shape of each wire is estimated by tracking its position across the cross-sectional images using a long short-term memory neural network. The effectiveness of the proposed method is demonstrated through experiments on actual annotated cables, even in the presence of noisy data.
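The following sketch illustrates the underlying tracking idea under simplifying assumptions: an LSTM consumes a wire's 2D positions slice by slice and predicts where the wire should appear in the next cross-section, which can then be matched to the nearest detection. The layer sizes and the greedy association step are illustrative, not taken from the paper.

```python
# A minimal sketch of LSTM-based wire tracking across CT cross-sections.
import torch
import torch.nn as nn

class WireTracker(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)   # predicted (x, y) in the next slice
    def forward(self, track):              # track: (batch, n_slices, 2)
        out, _ = self.lstm(track)
        return self.head(out[:, -1])       # prediction for the next cross-section

tracker = WireTracker()
history = torch.rand(1, 20, 2)             # one wire observed over 20 slices
pred = tracker(history)                    # where the wire should appear next

# Greedy association: pick the detection in the next slice closest to the prediction.
detections = torch.rand(5, 2)              # candidate wire centers in the next slice
best = torch.argmin(torch.cdist(pred, detections)).item()
```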

Helvig et al. propose an open-access annotated database for crack detection and localization on metallic materials using the flying-spot laser infrared thermography method and deep learning approaches. The database is used to benchmark several state-of-the-art machine learning architectures. The authors propose a transfer learning approach and show that performance increases when the models are pretrained on their publicly available dataset.
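A minimal transfer-learning sketch follows, assuming a binary crack/no-crack head and a hypothetical checkpoint produced by pretraining on the public dataset; the backbone, checkpoint name, and fine-tuning strategy are placeholders, not details from the paper.

```python
# A minimal transfer-learning sketch: load a backbone pretrained on the
# (hypothetical) thermography checkpoint, then fine-tune only the head.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)      # crack vs. no-crack

# Hypothetical checkpoint from pretraining on the public thermography dataset.
# state = torch.load("pretrained_on_flying_spot_dataset.pt")
# model.load_state_dict(state)

# Fine-tune only the classification head on the target data.
for p in model.parameters():
    p.requires_grad = False
for p in model.fc.parameters():
    p.requires_grad = True

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```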

In the field of control and monitoring of industrial crystallization processes, Rahmani et al. propose an innovative image analysis method specifically designed for analyzing crystallization videos. The proposed method involves the dynamic segmentation of observed aggregates, provides access to the particle size distribution of the aggregates in the reactor over time, and highlights the key stages of crystallization.
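To make the measurement concrete, here is a minimal sketch of turning segmented video frames into a particle size distribution over time; the simple Otsu threshold stands in for the dynamic segmentation described above, and all values are synthetic placeholders.

```python
# A minimal sketch: segment each frame, measure aggregate areas, and convert
# them to equivalent circular diameters, one distribution per time step.
import numpy as np
import cv2

def frame_size_distribution(frame_gray, pixel_size_um=1.0):
    """Return equivalent diameters (micrometers) of segmented aggregates in one frame."""
    _, mask = cv2.threshold(frame_gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    areas = np.array([cv2.contourArea(c) for c in contours if cv2.contourArea(c) > 5])
    return np.sqrt(4.0 * areas / np.pi) * pixel_size_um   # equivalent circular diameter

# Stand-in video: one synthetic grayscale frame per time step.
video = [np.random.randint(0, 255, (256, 256), dtype=np.uint8) for _ in range(10)]
distribution_over_time = [frame_size_distribution(f) for f in video]
```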

Pižurica et al. introduce a novel neural architecture search toolkit (GT-NAS) that can produce faster and smaller CNN architectures while matching or even exceeding state-of-the-art accuracy. An application of GT-NAS to surface defect detection is showcased to demonstrate its effectiveness. Moreover, the toolkit is generic, i.e., not limited to one specific use case, and can be used in other domains as well.
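The general idea of searching for small, fast architectures can be illustrated with a toy random search; this is not the GT-NAS toolkit or its API, and the accuracy proxy below is a placeholder for real validation accuracy.

```python
# A toy random-search sketch of neural architecture search: sample candidate
# CNNs, score them on an accuracy proxy penalized by parameter count, keep the best.
import random
import torch
import torch.nn as nn

def build_cnn(depth, width):
    layers, in_ch = [], 3
    for _ in range(depth):
        layers += [nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU()]
        in_ch = width
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(width, 2)]
    return nn.Sequential(*layers)

def evaluate(model):
    # Placeholder proxy: in a real search this would be validation accuracy
    # after (partial) training on the defect-detection data.
    with torch.no_grad():
        return model(torch.rand(1, 3, 64, 64)).std().item()

best = None
for _ in range(10):                                  # sample candidate architectures
    depth, width = random.choice([2, 3, 4]), random.choice([8, 16, 32])
    model = build_cnn(depth, width)
    params = sum(p.numel() for p in model.parameters())
    score = evaluate(model) - 1e-6 * params          # favor accurate *and* small models
    if best is None or score > best[0]:
        best = (score, depth, width, params)
```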

Došljak et al. propose a novel method to enhance the robustness of deep-learning models to the domain gap between synthetic 3D data generated from CAD models and real 3D point clouds acquired with a 3D scanner. They test their method by applying it to the conformity check of complex mechanical assemblies, using neural networks for point cloud classification.
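One common way to narrow such a synthetic-to-real gap is to perturb the CAD-sampled point clouds with scanner-like noise and dropout before training; the sketch below shows this under assumed augmentation magnitudes and is not necessarily the authors' method.

```python
# A minimal sketch: make a clean CAD-sampled point cloud look more like a real
# 3D scan (sensor noise plus missing points) before it is fed to the classifier.
import numpy as np

def simulate_scan(points, noise_std=0.002, keep_ratio=0.8, rng=None):
    """Perturb an (N, 3) point cloud with Gaussian noise and random point dropout."""
    rng = rng or np.random.default_rng()
    noisy = points + rng.normal(scale=noise_std, size=points.shape)   # sensor noise
    keep = rng.random(len(noisy)) < keep_ratio                        # occlusion / dropout
    return noisy[keep]

cad_cloud = np.random.rand(2048, 3)           # stand-in for points sampled from a CAD model
training_cloud = simulate_scan(cad_cloud)     # fed to the point-cloud classifier
```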

Hachem et al. present a method for measuring dimensional and shape defects in wire-laser additive manufacturing using a global stereocorrelation approach. The proposed method achieves a 1.65% error margin compared to the reference system (ATOS Core) and demonstrates good measurement repeatability. A Canon EOS 7D camera is used, whose intrinsic and extrinsic parameters are determined through separate calibration processes. The influence of texture patterns on measurement accuracy is examined, with a projected speckle pattern yielding results closest to the reference system.
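The measurement principle can be sketched as calibrated stereo triangulation: once intrinsic and extrinsic parameters are known, matched image points from the two views are lifted to 3D coordinates. All matrices and point coordinates below are synthetic placeholders, not calibration results from the paper.

```python
# A minimal sketch of the stereo measurement step behind stereocorrelation:
# triangulate matched image points using calibrated projection matrices.
import numpy as np
import cv2

K = np.array([[1000.0, 0, 320], [0, 1000.0, 240], [0, 0, 1]])   # intrinsics (placeholder)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                # left camera at the origin
R, t = np.eye(3), np.array([[100.0], [0], [0]])                  # extrinsics: 100 mm baseline
P2 = K @ np.hstack([R, t])

# One pair of matched points (obtained by digital image correlation in practice).
x1 = np.array([[350.0], [260.0]])
x2 = np.array([[450.0], [260.0]])
X_h = cv2.triangulatePoints(P1, P2, x1, x2)      # homogeneous 3D point
X = (X_h[:3] / X_h[3]).ravel()                   # metric coordinates in the left-camera frame
```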

The work of Lemghari et al. addresses the challenge of noisy labels in classification datasets with a framework that combines set-valued classifiers with Venn-Abers predictors. The proposed method detects noisy samples and then relabels them, demonstrating superior accuracy on the MNIST, CIFAR-10, and Clothing1M datasets compared to existing techniques such as T-forward, VolMinNet, and DivideMix. The authors discuss how this work can be expanded further, e.g., by using belief functions to model uncertainty, extending the approach to multi-class classification, or adapting it to handle noisy labels in datasets with long-tail distributions, thereby enhancing its applicability in industrial anomaly detection scenarios.
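For readers unfamiliar with Venn-Abers predictors, the simplified binary sketch below builds the characteristic probability interval by refitting an isotonic regression with the test sample labeled 0 and then 1; a wide or ambiguous interval can flag a candidate for relabeling. The calibration data, thresholds, and the flagging heuristic are illustrative, not the authors' full framework.

```python
# A simplified binary Venn-Abers sketch: the interval (p0, p1) comes from two
# isotonic-regression fits, one per hypothetical label of the test sample.
import numpy as np
from sklearn.isotonic import IsotonicRegression

def venn_abers_interval(cal_scores, cal_labels, test_score):
    """Return the (p0, p1) probability interval for one test score."""
    interval = []
    for hypothetical_label in (0, 1):
        scores = np.append(cal_scores, test_score)
        labels = np.append(cal_labels, hypothetical_label)
        iso = IsotonicRegression(y_min=0, y_max=1, out_of_bounds="clip")
        iso.fit(scores, labels)
        interval.append(float(iso.predict([test_score])[0]))
    return tuple(interval)

cal_scores = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 0.9])   # classifier scores on calibration data
cal_labels = np.array([0, 0, 0, 1, 1, 1])
p0, p1 = venn_abers_interval(cal_scores, cal_labels, test_score=0.55)
suspect = (p1 - p0) > 0.2 or min(p0, p1) < 0.5 < max(p0, p1)  # candidate for relabeling
```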

In the study by Sanou et al., the authors use a deep learning model to create a semi-automated method for annotating objects in electron microscopy images, with the goal of semiconductor inspection and precise defect characterization. The authors improve existing state-of-the-art deep learning models and adapt them to the challenges inherent in electron microscopy. To improve the quality of segmentation contours, they propose an innovative C-DML loss function, which incorporates constraints inherent to the physical properties of electron microscopy images.

In the work by Toigo et al., the authors propose a microservice architecture for the easy deployment of computer vision algorithms combined with deep-learning algorithms on camera devices. The study analyzes two real-world applications of the proposed microservice system: one with hard time constraints but relatively simple shapes to inspect for defects, and another with more relaxed time constraints but more complex shapes. In both cases, the authors meet the required time constraints while achieving nearly perfect results.
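As a rough sketch of the microservice idea, the snippet below wraps a single inspection step in a small HTTP service that accepts image bytes from a camera and returns defect results as JSON; the endpoint, framework choice (Flask), and the dummy edge-based "model" are assumptions, not the paper's architecture.

```python
# A minimal inspection microservice sketch: decode the posted image, run a
# placeholder defect-detection step, and return the result as JSON.
import numpy as np
import cv2
from flask import Flask, request, jsonify

app = Flask(__name__)

def detect_defects(image):
    """Placeholder for the actual deep-learning inspection model."""
    edges = cv2.Canny(image, 100, 200)
    return {"defect_pixels": int(np.count_nonzero(edges))}

@app.route("/inspect", methods=["POST"])
def inspect():
    data = np.frombuffer(request.data, dtype=np.uint8)
    image = cv2.imdecode(data, cv2.IMREAD_GRAYSCALE)   # raw JPEG/PNG bytes from the camera
    if image is None:
        return jsonify({"error": "could not decode image"}), 400
    return jsonify(detect_defects(image))

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```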

Biography

Igor Jovančević graduated from the computer science program of the Faculty of Natural Sciences and Mathematics at the University of Montenegro with a degree in mathematics in 2008. He graduated in 2011 from a joint Erasmus Mundus Master program in Computer Vision and Robotics (VIBOT) conducted by the University of Burgundy, the University of Girona, and Heriot-Watt University. He received his PhD in computer vision in 2016 from IMT Mines Albi, a French “Grande Ecole” specializing in process engineering. He then worked at Diotasoft as a research engineer, focusing on computer vision applications for industrial inspection and manufacturing process monitoring. He is currently an assistant professor at the University of Montenegro, focusing on applying computer vision to real-world use cases.

Jean-José Orteu graduated in 1987 from a French “Grande Ecole” (ENSEIRB, Bordeaux, France) with an engineering degree in electrical and software engineering and a master’s thesis in automatic control. He received his PhD in computer vision in 1991 from Université Paul Sabatier (Toulouse, France). Currently, he is a full professor at IMT Mines Albi (Albi, France), a French “Grande Ecole” specializing in process engineering. He carries out his research work in the Institut Clément Ader (ICA) laboratory (250 people). For more than 15 years, he has developed computer-vision-based solutions for 3D measurements in experimental mechanics (PhotoMechanics) and, since 2010, he has been more specifically involved in the application of computer vision to NDE, inspection, and manufacturing process monitoring.

© 2024 SPIE and IS&T
Igor Jovančević and Jean-José Orteu "Special Section Guest Editorial: Quality Control by Artificial Vision VII," Journal of Electronic Imaging 33(3), 031201 (21 June 2024). https://doi.org/10.1117/1.JEI.33.3.031201
Published: 21 June 2024
KEYWORDS: Machine vision, Quality control, Computer vision technology, Deep learning, 3D modeling, Data modeling, 3D image processing