Jonas Bals1,Matthias Epple1,2
University of Duisburg-Essen1,CeNIDE - Center for Nanointegration Duisburg-Essen2
Scanning electron microscopy (SEM) is a standard method in the morphological analysis of materials, from the nanometer to the centimeter scale. An automated analysis of SEM images helps to save time and also reduces operator bias when extracting sample parameters such as average particle size, particle size distribution, and diversity of particle shapes.<br/>Deep learning, and especially deep convolutional neural networks (CNNs), has entered materials science, contributing to image segmentation [1], image classification [2], and object detection [3]. However, successful training of such machine learning approaches requires a large number of manually annotated training images, and their manual annotation can be extremely time-consuming as human evaluators are needed.<br/>The 3D graphics software Blender 3.1.2 [4] was used for artificial image creation, assembling several thousand objects to obtain height maps and error-free binary annotation maps. In a second step, we trained a generative adversarial network (GAN) [5] to create a variety of photorealistic SEM images of nanoparticles from these height maps.<br/>We have prepared synthetic SEM images as training data by a combination of Blender and GANs. We first created artificial height maps by simulating the dropping of several thousand objects (“nanoparticles”) onto a planar surface with Blender. These objects were typical nanoparticle shapes such as spheres, deformed spheres, cubes and truncated cubes, hexagonal plates, rods, or star-shaped particles. The resulting annotation maps and height maps were used to train two networks: first, a GAN to create artificial SEM images, and second, a U-Net model trained entirely on artificial SEM images and binary annotation maps [1]. The U-Net was then validated against human-labelled real data, reaching an intersection over union (IoU) of 81.7%. This still underperforms human-labelled data by 11.76 points.
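The validation metric used above, intersection over union, can be computed directly from two binary segmentation masks. A minimal sketch with NumPy (the function name and toy masks are illustrative, not taken from the study):

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over union of two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    # By convention, two empty masks agree perfectly.
    return float(intersection / union) if union else 1.0

# Two toy 4x4 "particle masks": 3 overlapping pixels out of 4 in total.
predicted = np.array([[1, 1, 0, 0],
                      [1, 0, 0, 0],
                      [0, 0, 0, 0],
                      [0, 0, 0, 0]])
annotated = np.array([[1, 1, 0, 0],
                      [1, 1, 0, 0],
                      [0, 0, 0, 0],
                      [0, 0, 0, 0]])
print(iou(predicted, annotated))  # 3 / 4 = 0.75
```

In practice the score is averaged over all validation images, and a value of 0.817 corresponds to the 81.7% reported above.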
Our method builds on the previously published GAN of [6] and outperforms their annotation-map approach by 0.69 points. Notably, we found that a major obstacle to obtaining better training data is adding the “right amount” of noise to an image created by the GAN. This will require further refinement of the method.<br/>In conclusion, synthetic SEM images prepared by Blender and GANs can be used to create large training data sets for the automated analysis of SEM images by machine learning.<br/><br/>[1] Ronneberger, O., Fischer, P., & Brox, T. (2015). In: Navab, N. et al. (Eds.): MICCAI 2015, 234-241<br/>[2] He, K., Zhang, X., Ren, S., & Sun, J. (2016). In: Proceedings of the IEEE CVPR 2016, 770-778<br/>[3] Ren, S., He, K., Girshick, R., & Sun, J. (2015). In: Proceedings of the NIPS 2015<br/>[4] Blender Online Community. (2018). http://www.blender.org, Last visited: 06.05.2022<br/>[5] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., & Bengio, Y. (2014). Proceedings of the NIPS 2014<br/>[6] Rühle, B., Krumrey, J. F., & Hodoroaba, V.-D. (2021). Scientific Reports, 11, 4942