In-situ Laser Powder Bed Fusion (LPBF) sensor packages seek to enable both the commercial and Department of Defense (DoD) supply chains via process monitoring for qualification and machine feedback. Automated material identification and geometric segmentation would be valuable for LPBF process monitoring. In this paper, various segmentation approaches are presented and compared to determine the best approach. Deep learning methods to classify the materials as either AlSi10Mg or IN718 are then presented. Diverse videos (in terms of shape, size, structure, and camera angle) of both materials are captured and labeled as either AlSi10Mg (24,357 frames) or IN718 (9,222 frames). A given frame can contain one or more parts of a material. The segmentation approach is applied to extract each part, yielding 121,036 images. The dataset is randomly split into 72%, 8%, and 20% subsets for training, validation, and testing, respectively. Classification performance using the proposed Convolutional Neural Network (CNN), as well as transfer-learning approaches based on established networks such as AlexNet, ResNet, and SqueezeNet, is studied. An overall accuracy of 99.6% is obtained on a set of 24,214 test images. In addition, the efficacy of the proposed classification model is demonstrated by testing the algorithm on a completely different variant (in terms of shape, size, structure, or camera angle) of either material. The class activation mapping results of these networks are presented, yielding insight into each network's decisions and assisting manufacturers in their decision-making process.
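The abstract describes a CNN that classifies segmented part crops into two material classes. As a minimal sketch only, the following shows what such a two-class CNN could look like in PyTorch; the layer sizes, input resolution, and class count here are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class PartClassifier(nn.Module):
    """Illustrative two-class CNN for segmented part crops.

    The architecture (channel counts, input size) is an assumption for
    demonstration and is not the network proposed in the paper.
    """

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),  # global average pooling, as used before CAM-style heads
            nn.Flatten(),
            nn.Linear(32, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

model = PartClassifier()
# A batch of 4 hypothetical grayscale 64x64 part crops.
logits = model(torch.randn(4, 1, 64, 64))
# logits has shape (4, 2): one score per class (AlSi10Mg vs IN718).
```

In practice, the transfer-learning variants mentioned in the abstract (AlexNet, ResNet, SqueezeNet) would replace this hand-built feature extractor with a pretrained backbone and a retrained final layer.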
Identifying defective builds early in Additive Manufacturing (AM) processes is a cost-effective way to reduce scrap and ensure that machine time is utilized efficiently. In this paper, we present an automated method to classify 3D-printed polymer parts as either good or defective based on images captured during Fused Filament Fabrication (FFF), using independent machine learning and deep learning approaches. Either approach could be useful for manufacturers and hobbyists alike. Machine learning is implemented via Principal Component Analysis (PCA) and a Support Vector Machine (SVM), whereas deep learning is implemented using a Convolutional Neural Network (CNN). We capture videos of the FFF process on a small selection of polymer parts and label each frame as good or defective (2,674 good frames and 620 defective frames). We divide this dataset for holdout validation by using 70% of the images in each class for training, leaving the rest for blind testing. We obtain overall accuracies of 98.2% and 99.5% for the classification of polymer parts using the machine learning and deep learning techniques, respectively.
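The machine learning pipeline described above (PCA for dimensionality reduction followed by an SVM, with a 70/30 class-wise holdout split) can be sketched with scikit-learn. This is a minimal illustration on synthetic stand-in data, not the paper's actual features or parameters; the component count, kernel, and class sizes are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Synthetic stand-in for flattened frame images: two well-separated classes
# (0 = "good", 1 = "defective"). Real inputs would be pixel features from
# FFF video frames.
rng = np.random.default_rng(0)
good = rng.normal(loc=0.0, scale=1.0, size=(200, 64))
defective = rng.normal(loc=4.0, scale=1.0, size=(60, 64))
X = np.vstack([good, defective])
y = np.array([0] * 200 + [1] * 60)

# 70/30 stratified holdout split, mirroring the per-class protocol above.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

# PCA reduces dimensionality before the SVM classifies in the reduced space.
clf = make_pipeline(PCA(n_components=10), SVC(kernel="linear"))
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)  # fraction of held-out frames classified correctly
```

The pipeline object keeps the PCA projection learned on training data and applies it consistently to test data, which avoids leaking test-set statistics into the dimensionality reduction step.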