Deep learning-based dehazing of remote sensing images faces two major problems: distortion of remote sensing images and the lack of large-scale real paired datasets. To address these problems, we propose a remote sensing image dehazing method based on data mixing and a Laplacian network. The Laplacian pyramid divides a remote sensing image into different frequency layers (the low-frequency layer retains global color information, while the high-frequency layers retain texture details from coarse to fine), and these features are fed into a lightweight multilayer perceptron to learn long-range dependencies. A backbone network built from spatially weighted residual channel attention modules helps the residual haze removal module learn the distribution of haze in remote sensing images for effective haze removal. To address the lack of large-scale real datasets, we cross-mix and restructure a synthetic dataset with a small-sample real dataset and train on the resulting mixed dataset; the trained model can effectively recover the color information of real remote sensing images. After validating the effectiveness and superiority of our model on synthetic, hybrid, and synthetic hyperspectral datasets, we conduct generalization experiments, and the results show the potential of our method for advanced vision tasks.
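As an illustration of the frequency decomposition this method builds on, the following is a minimal Python sketch of a Laplacian pyramid using OpenCV. The function names and the number of levels are assumptions; the network components themselves (the multilayer perceptron and the attention-based backbone) are not shown.

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    """Split an image into high-frequency detail layers plus a low-frequency residual."""
    pyramid, current = [], img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(current)
        up = cv2.pyrUp(down, dstsize=(current.shape[1], current.shape[0]))
        pyramid.append(current - up)   # high-frequency layer: texture details
        current = down
    pyramid.append(current)            # low-frequency layer: global color
    return pyramid

def reconstruct(pyramid):
    """Invert the decomposition by upsampling and adding the detail layers back."""
    current = pyramid[-1]
    for detail in reversed(pyramid[:-1]):
        current = cv2.pyrUp(current, dstsize=(detail.shape[1], detail.shape[0])) + detail
    return current
```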
This article presents a dust image enhancement method based on color correction and haze removal. For color correction, a novel image prior, the histogram similarity prior, is proposed; with this prior, image color is recovered by histogram matching. For contrast enhancement, a haze removal method based on boundary condition constraints and Retinex theory is developed, where the boundary condition constraints are used for initial transmission map estimation and Retinex is employed to reduce the effect of the incident light on output images. Extensive evaluations on both synthetic and real dust images show that this method performs favorably against other existing similar methods.
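The color correction step hinges on histogram matching. Below is a minimal single-channel sketch in Python/NumPy, assuming 8-bit images; the histogram similarity prior that selects the reference, the boundary-constrained transmission estimation, and the Retinex step are not reproduced here.

```python
import numpy as np

def match_histogram(source, reference):
    """Remap a uint8 channel so its histogram approximates the reference's."""
    src_hist, _ = np.histogram(source.ravel(), bins=256, range=(0, 256))
    ref_hist, _ = np.histogram(reference.ravel(), bins=256, range=(0, 256))
    src_cdf = np.cumsum(src_hist) / source.size
    ref_cdf = np.cumsum(ref_hist) / reference.size

    # For each source intensity, pick the reference intensity whose CDF value
    # is closest, then remap through the resulting lookup table.
    lut = np.interp(src_cdf, ref_cdf, np.arange(256))
    return lut[source].astype(np.uint8)

# Hypothetical per-channel usage for color correction:
# corrected = np.dstack([match_histogram(img[..., c], ref[..., c]) for c in range(3)])
```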
We present a hierarchical extraction algorithm to extract pole-like objects (PLOs) from scene point clouds. First, the point clouds are divided into a set of data blocks along the x- and y-axes after computing the dimensionality structure of each point. An effective height-constrained voxel-based segmentation algorithm is proposed to segment the scene point clouds: adjacent voxels with similar heights are grouped into individual objects based on spatial proximity and height information. Objects consisting of a small number of voxels and composed mostly of linear points are extracted and regarded as pole-like candidates (PLCs). A Euclidean distance clustering algorithm is then adopted to segment the PLCs and remove floating and short segments. Next, each PLC is divided along the z-axis to extract its vertical structure, and the straightness of the vertical structure is computed to remove false PLOs. Finally, a collection of characteristics, such as point distribution and size, is applied to classify the PLOs into street light poles, high-mast lights, beacon lights, and single poles. The experimental results demonstrate that our method can extract PLOs quickly and effectively.
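For the Euclidean distance clustering step, a minimal Python sketch using SciPy's k-d tree is given below. The radius and minimum cluster size are assumed values, and the preceding voxel segmentation as well as the later straightness and classification rules are not shown.

```python
import numpy as np
from scipy.spatial import cKDTree

def euclidean_cluster(points, radius=0.2, min_size=30):
    """Group an N x 3 point array into clusters by Euclidean proximity."""
    tree = cKDTree(points)
    labels = -np.ones(len(points), dtype=int)
    current = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        # Region-grow from the seed through the radius neighborhood graph.
        frontier = [seed]
        labels[seed] = current
        while frontier:
            idx = frontier.pop()
            for nb in tree.query_ball_point(points[idx], r=radius):
                if labels[nb] == -1:
                    labels[nb] = current
                    frontier.append(nb)
        current += 1
    # Discard clusters that are too small (e.g., floating or short segments).
    sizes = np.bincount(labels)
    keep = np.flatnonzero(sizes >= min_size)
    return [points[labels == k] for k in keep]
```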
Image restoration is a significant task in the fields of computer vision and image processing, and its research consists of two aspects: kernel estimation and image restoration. A single-image restoration method based on L1-regularized blur kernel estimation is proposed in this paper. First, a bilateral filter is used to remove image noise effectively. Second, an improved shock filter is used to enhance the edge information of the image. Subsequently, an L1-regularization method is used to alternately estimate the blur kernel of the blurred image, with the Split-Bregman algorithm used to optimize the solution process. Finally, Hyper-Laplacian and sparse priors are applied to the image obtained from the non-blind deconvolution process. Experimental results show that, compared with other methods, the proposed method achieves better restoration results as well as improved computational efficiency.
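As a sketch of the two preprocessing steps (denoising and edge enhancement), the following Python/OpenCV snippet applies a bilateral filter followed by a basic shock filter. Parameter values are assumptions, and the L1-regularized kernel estimation with Split-Bregman optimization and the final non-blind deconvolution are not shown.

```python
import cv2
import numpy as np

def preprocess_for_kernel_estimation(blurred, shock_iters=5, dt=0.1):
    """Denoise with a bilateral filter, then steepen edges with a shock filter."""
    # Edge-preserving denoising (filter diameter and sigmas are assumed values).
    denoised = cv2.bilateralFilter(blurred, 9, 75, 75)

    img = denoised.astype(np.float32) / 255.0
    for _ in range(shock_iters):
        # Shock filter update: move intensities against the sign of the
        # Laplacian, scaled by the gradient magnitude, which sharpens edges.
        lap = cv2.Laplacian(img, cv2.CV_32F)
        grad_x = cv2.Sobel(img, cv2.CV_32F, 1, 0)
        grad_y = cv2.Sobel(img, cv2.CV_32F, 0, 1)
        grad_mag = np.sqrt(grad_x ** 2 + grad_y ** 2)
        img = img - dt * np.sign(lap) * grad_mag
    sharpened = np.clip(img * 255.0, 0, 255).astype(np.uint8)
    return denoised, sharpened
```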
KEYWORDS: 3D modeling, Clouds, Reconstruction algorithms, Binary data, Optical engineering, Chemical elements, Data modeling, Laser scanners, Systems modeling, 3D scanning
We present a method to reconstruct a three-dimensional (3-D) Tang Dynasty building model from raw point clouds. Unlike previous building modeling techniques, our method is developed for Tang Dynasty buildings, which do not exhibit the planar primitives, facades, and repetitive structural elements of residential low- or high-rise buildings. The proposed method exploits the structural properties of the Tang Dynasty building to process the original point clouds. First, the raw point clouds are sliced into many parallel layers to generate a top-bottom hierarchical structure, and each layer is resampled to obtain a purified subset of the 3-D point clouds. A series of building components is then recognized by clustering these purified subsets; in particular, the tree-structured topology of the components is obtained during slicing and clustering. Second, different solutions are explored to reconstruct 3-D models of the different building components, and the overall building model is obtained from these components and the tree-structured topology. Experimental results demonstrate that the proposed method efficiently generates a highly realistic 3-D model of the Tang Dynasty building.
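A minimal Python sketch of the initial slicing step is given below, assuming the point cloud is an N x 3 NumPy array with z as the vertical axis. The layer height is an assumed value, and the resampling, component clustering, and per-component reconstruction are not shown.

```python
import numpy as np

def slice_into_layers(points, layer_height=0.1):
    """Slice an N x 3 point cloud into parallel horizontal layers along z."""
    z = points[:, 2]
    layer_idx = np.floor((z - z.min()) / layer_height).astype(int)
    # Return layers ordered from bottom to top (some may be empty).
    return [points[layer_idx == k] for k in range(layer_idx.max() + 1)]
```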