It is very challenging to accurately understand and characterize the internal structure of three-dimensional (3D) rock masses using geological monitoring and conventional laboratory measurements. One important method for obtaining 3D core images involves reconstructing their 3D structure from two-dimensional (2D) core images. However, traditional 2D–3D reconstruction methods are mostly designed for binary core images rather than grayscale images, and the reconstructed structure cannot reflect the gray-level distribution of the core. Here, by combining the dimension promotion theory of super-dimension (SD) reconstruction with a deep learning framework, we propose a novel convolutional neural network framework, the cascaded progressive generative adversarial network (CPGAN), to reconstruct 3D grayscale core images. Within this network, we propose a loss function based on the gray-level distribution and pattern distribution to preserve the texture information of the reconstructed structure. Simultaneously, by adopting SD dimension promotion theory, we set the input and output of every single node of the CPGAN to be deep gray-padded structures of equal size. By cascading the single-node networks, we ensure continuity and variability between the reconstructed layers. In addition, we use 3D convolution to capture the spatial characteristics of the core. The reconstructed 3D results show that the gray-level information in the 2D images is accurately reflected in 3D space. The proposed method can help us understand and analyze various parameter characteristics of cores. Copyright © 2022 Li, Jian and Han.
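The abstract only summarizes the network; the exact loss and node architecture are not specified here. As a minimal illustration of two of the stated ideas, a gray-level-distribution loss and single nodes whose 3D input and output have the same size so they can be cascaded, the following PyTorch sketch uses a differentiable (soft) histogram and a small 3D-convolutional block. All names, layer sizes, and the soft-histogram formulation are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn


def soft_gray_histogram(volume, n_bins=64, sigma=0.02):
    """Differentiable gray-level histogram of a volume (illustrative assumption).

    volume: tensor of shape (B, 1, D, H, W), gray levels normalized to [0, 1].
    Returns a (B, n_bins) tensor of normalized bin weights.
    """
    centers = torch.linspace(0.0, 1.0, n_bins, device=volume.device)  # (n_bins,)
    v = volume.reshape(volume.shape[0], -1, 1)                        # (B, N, 1)
    weights = torch.exp(-0.5 * ((v - centers) / sigma) ** 2)          # (B, N, n_bins)
    hist = weights.sum(dim=1)                                         # (B, n_bins)
    return hist / hist.sum(dim=1, keepdim=True)


class GrayDistributionLoss(nn.Module):
    """L1 distance between gray-level histograms of generated and reference volumes."""

    def forward(self, fake, real):
        return (soft_gray_histogram(fake) - soft_gray_histogram(real)).abs().sum(dim=1).mean()


class SingleNode3D(nn.Module):
    """One hypothetical cascade node: maps a gray-padded 3D block to an output block
    of the same size using 3D convolutions, so nodes can be chained one after another."""

    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),  # keep output gray levels in [0, 1]
        )

    def forward(self, x):  # x: (B, 1, D, H, W)
        return self.net(x)
```

Because each node preserves the block size, the output of one node can be fed directly into the next, which is the cascading behavior the abstract describes; the histogram loss above is one simple way to penalize mismatches in gray-level distribution during training.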