Open Access Paper
17 September 2019
Feature detection in unorganized pointclouds
Marc Preissler, Gunther Notni
Author Affiliations +
Proceedings Volume 11144, Photonics and Education in Measurement Science 2019; 111440M (2019) https://doi.org/10.1117/12.2530809
Event: Joint TC1 - TC2 International Symposium on Photonics and Education in Measurement Science 2019, 2019, Jena, Germany
Abstract
Feature detection in pointclouds is becoming increasingly important for a wide range of applications. For example, the fields of robotics, autonomous driving and medical image processing use acquisition sensors with 3-dimensional output. The resulting points are usually defined by x, y, and z coordinates and represent the outer surface or shell of an object. The challenge is to develop effective methods that extract only the relevant information from pointclouds and accelerate postprocessing steps. The approaches presented in this paper are based on innervolumetric 3D data, which are described by chaotically organized points. The data used are generated by the layer-by-layer acquisition and composition of individual pointclouds in additive manufacturing. The data are merged and preprocessed, and subsequently the characteristics are extracted.

1. INTRODUCTION

The digitalization of information plays a major role at present and in the future. Digitalization originally refers to the conversion of analogue values into digital formats. This transformation has enormous potential to simplify existing processes and procedures. Digitalization is also a major topic under the term “Industry 4.0”, a designation reminiscent of software versioning. Manufacturing processes, supply chains and complete factories should become better controllable and analysable through the digital linking of sensors and actuators. This high degree of cross-linking should support the idea of a digital factory and contribute to more efficiency.1 Storing, sharing and processing digital information is much easier. The common result of any digitalization is a file that consists of a sequence of bits and bytes. These days, information is captured in a digital way and the conversion from analogue to digital is usually omitted. One obvious example is the field of photography and imaging: almost all images are captured in digital form. This paper also focuses on digital image information, but uses a further spatial dimension, the depth, and is therefore 3-dimensional. 3-dimensional information can be found in many areas of daily life. The applications range from gesture control,2 digitalization of components,3 tasks in forensics,4 and archaeology5 to various fields of robotics. 3-dimensional information captured in real time is used to navigate autonomous vehicles.6 This includes the correct detection of the environment as well as of unexpected influences, such as collision detection. Real-time 3-dimensional position detection is also essential for the interaction between humans and robots in order to guarantee correct operation and safety.7 The transformation of 3-dimensional information into more abstract information is the same for all applications.
This means that relevant and irrelevant information must be differentiated automatically in order to obtain the target information with resource-saving and accelerated data processing. This paper therefore presents a possibility for feature detection in 3-dimensional space, which is becoming increasingly important for a wide range of applications. The addressed fields of robotics, autonomous driving and industrial image processing use acquisition sensors with 3-dimensional output. Usually the raw data of most 3D acquisition systems are output as pointclouds. A pointcloud is a set of unorganized data points in 3-dimensional space. These points are usually defined by x, y, and z coordinates and represent the outer surface or shell of an object. Owing to the increasing performance of these sensors, the information content and the file size of the datasets are growing. On the other hand, hardware performance is also increasing, but it definitely reaches specified limits. For this reason, alternative and effective methods are necessary to analyse 3-dimensional information from the corresponding sensors. The aim of this paper is to show different ways of extracting the information relevant to a measurement task from pointclouds and of reducing the dataset size for further tasks. A high-performance hardware demonstrator for capturing 3-dimensional information is developed and appropriate steps for image preprocessing are researched. The technical sensor principle is based on fringe projection and offers a wide range of configuration possibilities. Low-budget hardware is also a subject of this work. Furthermore, different methods for reducing the dataset size of pointclouds are approached and presented. Finally, results are demonstrated in an example field of application, and ways toward process control for different additive manufacturing processes are introduced briefly.

2. METHOD

2.1 Hardware setup

A hardware platform for additive manufacturing processes8 was developed for a Fused Filament Fabrication (FFF) machine and is used for the method described here. Fused Filament Fabrication is currently one of the most widespread additive manufacturing methods. It works on an additive principle by laying down filament material in layers. The filament can be plastic, metal-filled or wood-filled material. Reasons for the high degree of acceptance are the small space required for an FFF machine and the manageable use of resources. Furthermore, the costs are relatively low and the material variety is increasing continuously. On the other hand, this can often imply an unstable additive manufacturing process when the necessary experience is missing. The self-made and adaptable hardware platform for inline process monitoring is able to output 3-dimensional information about the manufactured layer during process time. The assembly position chosen is above the additive manufacturing machine, in a bird's-eye view, to capture the information layer by layer. The working distance to the additive layer is kept fixed by moving the building platform downwards. Other assembly positions are not acceptable, because a modification of the additive manufacturing machine would be necessary and might decrease the efficiency of the manufacturing process. Furthermore, the hardware platform should also be usable for other additive manufacturing processes and inline process monitoring tasks in the future. The hardware platform has a stereoscopic camera system, and the projector required for fringe projection is placed between both cameras. The stereoscopic camera calibration necessary for validated data is performed beforehand with a 5 mm checkerboard.9 The GigE Vision cameras and C-mount lenses are replaceable to diversify the field of view. At the same time, the field of view defines the accuracy of the measurement system, and the pointcloud resolution depends on it.
The experimental design for this paper comprises a Sony IMX249 2.4 MP image sensor and a lens with a focal length of 25 mm. This setup offers a maximum available workspace in the manufacturing machine of 220 × 220 mm. The maximum depth of focus depends on the selected lens and the selected aperture, but the depth of focus has to cover at least the chosen layer thickness. Common fused filament fabrication machines are able to manufacture layer thicknesses of 50–400 μm.
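As a plausibility check, the optical geometry implied by these figures can be sketched with a simple thin-lens model. The sensor geometry used below (1920 × 1200 pixels at 5.86 µm pitch for the IMX249) is taken from the publicly available sensor specification, not from the paper, so the resulting numbers are estimates only:

```python
# Back-of-the-envelope optics sketch (not from the paper): estimates the
# working distance and object-space pixel size of the described setup.
# Assumes IMX249 geometry (1920 x 1200 px, 5.86 um pitch) and a thin lens.

PIXEL_PITCH_MM = 5.86e-3   # assumed IMX249 pixel pitch
N_PX_H = 1920              # assumed horizontal pixel count
FOCAL_MM = 25.0            # focal length of the chosen lens (from the paper)
FOV_MM = 220.0             # required horizontal field of view (from the paper)

sensor_width = N_PX_H * PIXEL_PITCH_MM              # ~11.25 mm
magnification = sensor_width / FOV_MM               # optical magnification
working_distance = FOCAL_MM * (1.0 / magnification + 1.0)  # thin-lens estimate
lateral_resolution = FOV_MM / N_PX_H                # mm of object space per pixel

print(f"working distance ~{working_distance:.0f} mm, "
      f"lateral resolution ~{lateral_resolution * 1e3:.0f} um/px")
```

Under these assumptions the camera sits roughly half a metre above the build platform and one pixel covers on the order of 0.1 mm of object space, which is consistent with resolving 50–400 µm layers laterally.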

2.2 Data preprocessing

Data acquisition begins after the first additive layer has been manufactured and is continued after each further additive layer. At the end of the manufacturing process, the number of individual pointclouds equals the number of layers the finished object consists of. Each individual pointcloud represents the surface of the corresponding manufactured layer. Afterwards, the individual layers must be correctly merged to obtain innervolumetric 3d-data.10 Innervolumetric 3d-data are characterized by the fact that they describe not only the outer surface of an object, but also geometric dimensions from the inside of the object. The generation of these data presupposes that certain information about the manufactured object is available. Therefore, the machine-readable Gcode is used, for example, to determine the layer height and to carry out the transformation between the coordinate systems of the manufacturing machine and the 3d-capturing system.11 After successful generation of the innervolumetric data, the next steps of data preparation have to be executed. The aim of this work is to automatically recognize geometrical features in pointclouds. The focus is placed on the hull curve of the object, which has to be determined. The workflow used is shown in Figure 1. Pointclouds are characterized by the chaotic distribution of a set S of points in 3-dimensional space. These points describe n surfaces, where n is the number of layers of the manufactured object, which need to be reconstructed for further preprocessing steps. This is done with the Alpha shape,12 which is based on the Delaunay triangulation. This surface reconstruction also triangulates the inner data, which is irrelevant and must be removed. This is where the Ambient Occlusion process comes in. Ambient Occlusion takes a number of well-distributed view directions and, for each point of the surface, computes how many times it is visible from these directions.
The resulting value is saved as vertex quality. Points located inside the innervolumetric data thus have a very low vertex quality and can be eliminated by selecting a suitable threshold. Only the surface that describes the hull curve of the object is left. This now describes quite exactly the outermost points of the initial innervolumetric data. The Hausdorff distance δ is used to filter the points and computes the distance between two meshes (1).

$$\delta(A,B) \;=\; \max\Bigl\{\, \sup_{a \in A}\, \inf_{b \in B}\, d(a,b),\ \sup_{b \in B}\, \inf_{a \in A}\, d(a,b) \,\Bigr\} \tag{1}$$

Figure 1. Workflow from the single-layer pointclouds to the hull-curve pointcloud.

The closest point for each sample is determined, and the rest of the points are eliminated. The result consists only of points that describe the hull curve and are used for further processing steps. All geometric information within the object is removed and the data are reduced to the essential.
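This distance-based filtering step can be sketched as follows. The implementation below is an assumption, not the authors' code: it uses SciPy KD-trees for the nearest-neighbour queries, and the function names are purely illustrative:

```python
# Sketch of Hausdorff-distance-based point filtering (illustrative only).
import numpy as np
from scipy.spatial import cKDTree


def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two point sets, as in Eq. (1)."""
    d_ab, _ = cKDTree(b).query(a)   # distance from each point of a to set b
    d_ba, _ = cKDTree(a).query(b)   # distance from each point of b to set a
    return max(d_ab.max(), d_ba.max())


def keep_near_surface(points: np.ndarray, surface: np.ndarray,
                      tol: float) -> np.ndarray:
    """Keep only the points lying within tol of the hull-surface samples,
    discarding interior points of the innervolumetric data."""
    d, _ = cKDTree(surface).query(points)
    return points[d <= tol]
```

With a sufficiently small tolerance, only points close to the reconstructed hull survive, which matches the reduction to the hull curve described above.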

2.3 Feature extraction from pointcloud

The previously created hull curve is the basis for the feature detection in the further steps. The next step determines which points all lie in an indeterminate plane. With the help of a defined maximum tolerance for the vertical distance to the plane, a surface element in the pointcloud is defined.13 These points are then taken out of the remaining pointcloud, which is searched sequentially for further surface elements. At the end, all points should be assigned to individual planes and no points should be left. These procedures allow pre-filtering of the given raw data and simplify further data processing. Furthermore, a reduction of the data to be evaluated always involves an acceleration of the data processing. The individual planes localized in the pointcloud have no local boundary and a theoretically infinite two-dimensional extension, i.e. also beyond the area of the pointcloud. Afterwards, it is verified where the intersection edges of the planes can be found. Potentially, each intersection line could contain a physical edge of the object. With appropriate selection criteria, it can then be checked whether a body edge exists or whether the intersection line can be discarded. The schematic sequence is shown in Figure 2. Each intersection line found is described by a support vector and a direction vector. This has the advantage that the direction vector already describes the direction in which a potential physical edge could be located. By using suitable exclusion criteria, e.g. the assumption that a physical edge can only be present where points are actually represented, free intersection lines in three-dimensional space can be excluded. Under the assumption of consistent data, all physical edges in a pointcloud should be found and represented by this method. In the next program step, all found physical edges are compared with regard to common intersection points.
Straight lines have a diameter of zero, so the probability that two straight lines touch exactly is infinitesimally small. Therefore, it is necessary to work with tolerances and to allow minimum distances between the straight lines. After all intersection points of the found edges have been identified, closed contours have to be found and evaluated accordingly.
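The two geometric operations described above, intersecting fitted planes and testing intersection lines for near-common points, can be sketched as follows. This is an illustrative implementation under the stated plane/line parameterization, not the authors' code:

```python
# Illustrative sketch: plane-plane intersection lines and the line-line
# minimum distance that is compared against a tolerance to accept a corner.
import numpy as np


def plane_intersection(n1, d1, n2, d2):
    """Intersection line of the planes n1.x = d1 and n2.x = d2.
    Returns (support point, unit direction vector)."""
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    direction = np.cross(n1, n2)
    norm = np.linalg.norm(direction)
    if norm < 1e-12:
        raise ValueError("planes are parallel")
    direction /= norm
    # Solve the two plane equations plus one constraint fixing the
    # position along the line (point closest to the origin along it).
    A = np.vstack([n1, n2, direction])
    b = np.array([d1, d2, 0.0])
    support = np.linalg.solve(A, b)
    return support, direction


def line_distance(p1, u1, p2, u2):
    """Minimum distance between the lines p1 + t*u1 and p2 + s*u2
    (u1, u2 assumed to be unit vectors)."""
    p1, u1, p2, u2 = (np.asarray(v, float) for v in (p1, u1, p2, u2))
    n = np.cross(u1, u2)
    nn = np.linalg.norm(n)
    if nn < 1e-12:                 # parallel lines: distance of p2 from line 1
        w = p2 - p1
        return np.linalg.norm(w - np.dot(w, u1) * u1)
    return abs(np.dot(p2 - p1, n)) / nn
```

Two intersection lines would then be treated as meeting in a common corner whenever `line_distance` falls below the chosen tolerance, exactly in the spirit of the tolerance argument above.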

Figure 2. Program flow chart.

3. RESULTS AND DISCUSSION

The result of the preprocessing steps can be seen in Figure 3. The initial data are based on the innervolumetric 3d-data and have been processed accordingly. The generated hull curve, described by the filtered points, is used for plane fitting. In the example shown here, 10 planes have been found. The intersection lines between the individual planes have been determined, and the exact body edges are shown in red. Furthermore, the length of each individual edge is determined and visualized. The evaluation of the number of corner points, the lengths of the individual edges and the corresponding angles then allows the conclusion that an octagon could be detected at this point. In the same way, other geometric primitives can be classified and combined to form more complex structures. Challenges occur when handling more irregular pointclouds or varying point densities, which make it more difficult to identify the individual planes and find the correct body edges.
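The evaluation of corner count, edge lengths and angles can be sketched as follows; the helper `classify_polygon` is hypothetical and simply measures an ordered closed contour, which for a regular octagon yields eight equal edges and 135° interior angles:

```python
# Hypothetical sketch of the final classification step: measure edge
# lengths and interior angles of an ordered closed contour of corners.
import numpy as np


def classify_polygon(corners: np.ndarray):
    """Return (corner count, edge lengths, interior angles in degrees)
    for an ordered closed contour of corner points."""
    n = len(corners)
    edges = np.roll(corners, -1, axis=0) - corners   # edge vectors
    lengths = np.linalg.norm(edges, axis=1)
    angles = []
    for i in range(n):
        a = -edges[i - 1]        # vector back to the previous corner
        b = edges[i]             # vector toward the next corner
        cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        angles.append(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
    return n, lengths, np.array(angles)


# Example: a regular octagon on the unit circle.
t = np.linspace(0, 2 * np.pi, 8, endpoint=False)
octagon = np.column_stack([np.cos(t), np.sin(t)])
n, lengths, angles = classify_polygon(octagon)
```

A contour with eight corners, near-equal edge lengths and interior angles near 135° would then be labelled an octagon; analogous thresholds cover other primitives.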

Figure 3. Result of edge detection.

4. CONCLUSION

The work presented here shows how innervolumetric data are generated and captured during additive manufacturing. Subsequently, it is shown how the data are preprocessed accordingly and how a hull curve is created, which is used for further data processing and feature extraction. The feature extraction is based on finding planes in the hull curve, for which an approach is also presented here. The intersection lines and the corresponding intersection points are then determined using suitable selection criteria. As a result, physically existing body edges are visually displayed and parameters such as the lengths and angles of the edges are output. The results are ready within several seconds and can be used for further identification of 3-dimensional objects.

ACKNOWLEDGMENTS

We thank the Federal Ministry of Education and Research for supporting this work. The work is related to the project Qualimess next generation (03IPT709X).

REFERENCES

[1] Lasi, H., Fettke, P., Kemper, H.-G., Feld, T., and Hoffmann, M., "Industry 4.0," Business & Information Systems Engineering 6(4), 239–242 (2014). https://doi.org/10.1007/s12599-014-0334-4
[2] Villaroman, N., Rowe, D., and Swan, B., "Teaching natural user interaction using OpenNI and the Microsoft Kinect sensor," in Proceedings of the 2011 Conference on Information Technology Education, 227–232 (2011).
[3] Kuş, A., "Implementation of 3D optical scanning technology for automotive applications," Sensors 9(3), 1967–1979 (2009). https://doi.org/10.3390/s90301967
[4] Thali, M. J., Braun, M., and Dirnhofer, R., "Optical 3D surface digitizing in forensic medicine: 3D documentation of skin and bone injuries," Forensic Science International 137(2–3), 203–208 (2003). https://doi.org/10.1016/j.forsciint.2003.07.009
[5] Brutto, M. L. and Meli, P., "Computer vision tools for 3D modelling in archaeology," International Journal of Heritage in the Digital Era 1(1_suppl), 1–6 (2012). https://doi.org/10.1260/2047-4970.1.0.1
[6] Pomerleau, F., Colas, F., Siegwart, R., et al., "A review of point cloud registration algorithms for mobile robotics," Foundations and Trends in Robotics 4(1), 1–104 (2015). https://doi.org/10.1561/2300000035
[7] Hägele, M., Schaaf, W., and Helms, E., "Robot assistants at manual workplaces: Effective co-operation and safety aspects," in Proceedings of the 33rd ISR (International Symposium on Robotics) (2002).
[8] Preissler, M., Zhang, C., Rosenberger, M., and Notni, G., "Platform for 3D inline process control in additive manufacturing," in Optical Measurement Systems for Industrial Inspection X, 10329, 103290R, International Society for Optics and Photonics (2017).
[9] Zhang, Z., "A flexible new technique for camera calibration," IEEE Transactions on Pattern Analysis and Machine Intelligence 22 (2000). https://doi.org/10.1109/34.888718
[10] Preissler, M., Zhang, C., and Notni, G., "Approach for optical innervolumetric 3-dimensional data acquisition," Journal of Physics: Conference Series 1065(3), 032005 (2018).
[11] Preissler, M., Zhang, C., Rosenberger, M., and Notni, G., "Approach for process control in additive manufacturing through layer-wise analysis with 3-dimensional pointcloud information," in 2018 Digital Image Computing: Techniques and Applications (DICTA), 1–6 (2018).
[12] Edelsbrunner, H. and Mücke, E. P., "Three-dimensional alpha shapes," ACM Transactions on Graphics 13(1), 43–72 (1994). https://doi.org/10.1145/174462.156635
[13] Torr, P. H. and Zisserman, A., "MLESAC: A new robust estimator with application to estimating image geometry," Computer Vision and Image Understanding 78(1), 138–156 (2000). https://doi.org/10.1006/cviu.1999.0832
© (2019) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Marc Preissler and Gunther Notni "Feature detection in unorganized pointclouds", Proc. SPIE 11144, Photonics and Education in Measurement Science 2019, 111440M (17 September 2019); https://doi.org/10.1117/12.2530809
KEYWORDS: Additive manufacturing, Manufacturing, Sensors, Data processing, 3D acquisition, Data acquisition, 3D image processing