The aim of this paper is to investigate cervical cancer image processing technology based on deep learning. Cervical cancer is one of the most prevalent malignancies in women, and the precise identification and localization of cancer cells are of paramount importance for treatment and prognosis evaluation. The rapid advancement of deep learning has brought new concepts and approaches to this field. First, the basic workflow of cervical cancer image processing is introduced, including image acquisition, preprocessing, feature extraction, and target detection. The application of deep learning in cervical cancer image processing is then discussed in detail. As one of the core deep learning technologies, convolutional neural networks (CNNs) have achieved significant results in image classification, segmentation, and detection; the paper presents the fundamental principles and prevalent architectures of CNNs alongside their applications in cervical cancer image processing. The use of other deep learning approaches, such as autoencoders and generative adversarial networks, is also introduced. Finally, the paper contrasts the strengths and weaknesses of these techniques in cervical cancer image processing and discusses the challenges and future directions of the field.
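To make the CNN discussion concrete, here is a minimal NumPy-only sketch of the three building blocks an image-classification CNN stacks (convolution, ReLU, pooling). The kernel, sizes, and the random "cell image" are illustrative assumptions, not drawn from any cervical cancer dataset or from this paper.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D cross-correlation, the core CNN operation."""
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Element-wise nonlinearity."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling for spatial downsampling."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

# Toy 8x8 "cell image" passed through one conv + ReLU + pool stage
img = np.random.default_rng(0).random((8, 8))
edge_kernel = np.array([[1.0, 0.0, -1.0]] * 3)  # vertical-edge detector
features = max_pool(relu(conv2d(img, edge_kernel)))
# conv: 8x8 -> 6x6, pool: 6x6 -> 3x3
```

Real classifiers stack many such stages and learn the kernels by backpropagation; the sketch only shows the forward data flow.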
Traditional imaging lidar exhibits an obvious trade-off between resolution and the size of its optical system. To realize a miniaturized super-resolution (SR) imaging lidar, Fourier ptychography (FP) has been introduced to break through the diffraction limit of the camera lens. FP, derived from the synthetic aperture method, can acquire high-resolution, large field-of-view reconstructed images without increasing the aperture size, by capturing multiple images at diverse incident angles and then computationally combining them with a phase retrieval algorithm. In this work, an SR imaging lidar system based on reflective-type FP is proposed; it mainly consists of an sCMOS camera, an Nd:YAG laser, and a 2-D translation stage that performs aperture scanning along the x and y axes. To validate the technique experimentally, a set of images of a positive USAF chrome-on-glass target was obtained for quantitative analysis, and an uneven 1-yuan nickel-on-steel RMB coin was used to demonstrate the applicability of the SR imaging lidar in practical scenarios. The results show that images reconstructed with the FP technique exhibit an obvious improvement in resolution, contrast, and clarity; in the USAF-target experiment, the resolution of the reconstructed images is increased more than threefold. Moreover, images under different apertures were collected, processed, and analyzed, which suggests that the initial image quality has a non-negligible influence on the reconstructed results. This technique not only improves the performance of imaging lidar while keeping costs low, but also brings new vitality to remote image recognition and analysis.
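The FP reconstruction loop can be sketched roughly as follows. This is a heavily simplified stand-in for the actual reflective-type FP pipeline: it assumes each low-resolution measurement constrains one shifted sub-window of the high-resolution spectrum, ignores pupil-function estimation, and all names and sizes are illustrative.

```python
import numpy as np

def fp_stitch(lowres_imgs, centers, hi_shape, n_iter=5):
    """Toy FP phase-retrieval loop: each intensity measurement enforces a
    magnitude constraint on one sub-aperture of the high-res spectrum."""
    hr_spec = np.zeros(hi_shape, dtype=complex)
    sh = lowres_imgs[0].shape
    for _ in range(n_iter):
        for img, (cy, cx) in zip(lowres_imgs, centers):
            y0, x0 = cy - sh[0] // 2, cx - sh[1] // 2
            sub = hr_spec[y0:y0 + sh[0], x0:x0 + sh[1]]
            lr = np.fft.ifft2(np.fft.ifftshift(sub))
            # replace the magnitude with the measurement, keep the phase
            lr = np.sqrt(np.maximum(img, 0)) * np.exp(1j * np.angle(lr))
            hr_spec[y0:y0 + sh[0], x0:x0 + sh[1]] = np.fft.fftshift(np.fft.fft2(lr))
    return np.abs(np.fft.ifft2(np.fft.ifftshift(hr_spec)))

# Toy demo: two 8x8 "measurements" constraining a 16x16 spectrum
rng = np.random.default_rng(0)
imgs = [rng.random((8, 8)) for _ in range(2)]
hr = fp_stitch(imgs, centers=[(8, 8), (8, 10)], hi_shape=(16, 16))
```

In the real system the sub-aperture positions are set by the 2-D translation stage, and many overlapping apertures are needed for the synthesized spectrum to support the reported resolution gain.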
KEYWORDS: Clouds, LIDAR, Data modeling, Visual process modeling, 3D modeling, Data fusion, 3D vision, Target detection, Image registration, Data processing
When completing on-orbit servicing and control tasks, it is indispensable to obtain additional information, such as the 3D structure of a space target, by detecting and identifying the target. Both lidar and binocular stereo vision can provide three-dimensional information about the environment. However, binocular stereo vision in space is very sensitive to the illumination of the environment and struggles with image registration in weakly textured regions, while lidar suffers from sparse data and a low scanning frequency. Lidar and binocular stereo vision should therefore be used together, with their data fused to compensate for each other's flaws.
In this paper, a uniform point drift registration method is used to fuse the point clouds sampled by lidar and by binocular stereo vision. In this method, one of the two point clouds is treated as the centroids of a mixture probability distribution and the other as samples drawn from that distribution; the transformation between the two point clouds is then obtained by maximum likelihood estimation. The transformation is required to be globally smooth; in other words, the point clouds should move coherently. The uniform point drift method solves the registration problem efficiently for 3D reconstruction, typically compressing the processing time by 10%.
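The probabilistic formulation can be illustrated with a translation-only toy version: one cloud supplies the Gaussian mixture centroids, the other the samples, and the shift is the maximum-likelihood estimate obtained by EM. This is a sketch under those assumptions, not the uniform point drift method itself, which also enforces motion coherence and handles full transformations.

```python
import numpy as np

def gmm_translation(x, y, sigma=1.0, n_iter=20):
    """x supplies the GMM centroids, y the samples; returns the ML shift t."""
    t = np.zeros(x.shape[1])
    for _ in range(n_iter):
        d = y[:, None, :] - (x[None, :, :] + t)               # (N, M, D) residuals
        p = np.exp(-(d ** 2).sum(axis=2) / (2 * sigma ** 2))
        p /= p.sum(axis=1, keepdims=True) + 1e-12             # E-step responsibilities
        t += (p[:, :, None] * d).sum(axis=(0, 1)) / p.sum()   # M-step translation update
    return t

x = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [5.0, 5.0]])
y = x + np.array([1.0, -2.0])     # ground-truth shift to recover
t = gmm_translation(x, y)
```

Because the correspondences are soft (responsibilities), no explicit point matching is needed, which is the key property the registration method exploits.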
Because extended objects are affected by occlusion and blurred edges, target tracking based on figure or corner algorithms alone is not stable. To solve this problem, an improved multi-resolution (MR) fuzzy clustering algorithm based on a Markov random field (MRF) is first used to segment the candidate extended targets from the observed images. A newly proposed target tracking structure algorithm, which exploits the stability of the extended objects' skeletons and their partially un-occluded and un-blurred edge features, is then applied to extract the skeletons, corners, intersection points, and spatial relationships of the candidate extended targets and to determine whether each candidate is the true tracking target. The experimental results show that the proposed algorithm can effectively segment and extract partially occluded and blurred extended objects with satisfactory reliability and robustness.
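The clustering stage can be illustrated with plain fuzzy c-means on grey levels; the multi-resolution structure and the MRF spatial prior of the improved algorithm are omitted, so this is only a stand-in for the membership/center iteration at the heart of the segmentation.

```python
import numpy as np

def fuzzy_cmeans_1d(vals, c=2, m=2.0, n_iter=30):
    """Plain fuzzy c-means on grey levels (no MR pyramid, no MRF prior)."""
    centers = np.linspace(vals.min(), vals.max(), c)
    for _ in range(n_iter):
        d = np.abs(vals[:, None] - centers[None, :]) + 1e-9      # (N, c) distances
        u = 1.0 / (d ** (2 / (m - 1)))                           # fuzzy memberships
        u /= u.sum(axis=1, keepdims=True)
        centers = (u ** m * vals[:, None]).sum(axis=0) / (u ** m).sum(axis=0)
    return u.argmax(axis=1), centers

# Two well-separated grey-level populations (background vs. target)
vals = np.r_[np.full(20, 0.1), np.full(20, 0.9)]
labels, centers = fuzzy_cmeans_1d(vals)
```

The MRF prior in the paper's improved version additionally penalizes label disagreement between neighboring pixels, which is what stabilizes the segmentation at occluded and blurred edges.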
An ordinary space optical remote sensing camera is a diffraction-limited optical system and, by the theory of Fourier optics, a low-pass filter; digital imaging sensors, whether CCD or CMOS, are low-pass filters as well. Therefore, when an optical image rich in high-frequency components passes through such an imaging system, the abundant mid-frequency information is attenuated and the high-frequency information is lost, blurring the remote sensing image. To overcome this shortcoming, an online compensation approach for the Modulation Transfer Function (MTF) of space cameras is designed. The method is realized by an analog hardware circuit placed before the A/D converter, composed of adjustable low-pass filters with a calculated quality factor Q. By adjusting the quality factor Q of the filters, the MTF of the processed image is compensated. The experimental results show that the realized compensating circuit in a space optical camera can improve the MTF of an optical remote sensing imaging system by 30% compared with no compensation. This quantified principle can efficiently guide MTF compensating circuit design in practice.
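A quick numerical sketch of why the quality factor Q matters: a second-order low-pass stage with Q > 1/sqrt(2) has a resonance peak near its cutoff, which lifts the attenuated mid-frequency MTF. The Gaussian optical-MTF model and all parameter values below are assumptions for illustration, not the paper's measured values.

```python
import numpy as np

def biquad_lowpass_mag(f, fc, q):
    """Magnitude response of a 2nd-order low-pass stage; for Q > 1/sqrt(2)
    it peaks near fc, lifting the attenuated mid frequencies."""
    r = f / fc
    return 1.0 / np.sqrt((1 - r ** 2) ** 2 + (r / q) ** 2)

f = np.linspace(0.0, 0.5, 6)          # spatial frequency, cycles/pixel
mtf_optics = np.exp(-8.0 * f ** 2)    # assumed Gaussian roll-off (toy model)
mtf_comp = mtf_optics * biquad_lowpass_mag(f, fc=0.4, q=2.0)
```

At DC the filter is transparent, while near fc the peaked response multiplies the rolled-off optical MTF back up; choosing Q trades mid-frequency boost against ringing, which is why the paper computes Q rather than maximizing it.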
KEYWORDS: Image segmentation, Digital signal processing, Image processing, Field programmable gate arrays, Data processing, Parallel processing, Data communications, Image processing algorithms and systems, Data acquisition, Interfaces
To realize real-time segmentation of multispectral images in practice, a real-time multispectral image segmentation system composed of four TMS320C6455 DSPs, two Virtex-4 (V4 XC4VLX80) FPGAs, and one Virtex-2 Pro (V2 Pro20) FPGA is designed. By optimizing the cooperation among the multiple DSPs and FPGAs, the system makes full use of the parallel multitask processing ability of the DSPs and the efficient interface coordination of the FPGAs. To demonstrate the processing ability, segmentation experiments on ten-band visible images of 1024×1024 pixels were performed in the built system using the Multi-scale Image Segmentation Method. The experimental results prove that the multi-DSP and multi-FPGA multispectral image processing system designed in this paper satisfies the real-time processing requirements of engineering practice.
It is quite difficult to automatically detect and determine small moving targets from image data captured by a mono-aperture imaging system. Therefore, a five-ocular composite optical imaging system is designed, consisting of an infrared imaging subsystem in the center surrounded by four visible imaging subsystems. Based on the inherent overlap of the fields of view in this five-ocular system, an automatic detection and recognition algorithm for small moving targets is built. The algorithm has four steps: first, preprocessing removes low-frequency background pixels; second, candidate small moving targets are detected in the high-frequency images that remain after preprocessing; third, the true small moving targets are determined from the candidates; and finally, the detection results of the visible subsystems are combined with those of the infrared subsystem to determine whether the small moving targets are "live" or "dead", because "dead" targets with lower temperature are not observed by the infrared subsystem. Tests in the experimental system indicated that the designed algorithm increased the detection probability of small moving targets and decreased the false alarm probability, proving the method feasible and effective.
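Steps 1, 2, and 4 of the pipeline can be sketched as follows; the mean-filter background estimate, the k-sigma threshold, and the IR fusion rule are plausible stand-ins for illustration, not the paper's exact operators.

```python
import numpy as np

def detect_candidates(frame, k=4.0):
    """Steps 1-2: remove the low-frequency background with a 5x5 mean-filter
    estimate, then threshold the high-frequency residual at mean + k*sigma."""
    pad = np.pad(frame, 2, mode='edge')
    bg = np.zeros_like(frame, dtype=float)
    for dy in range(5):
        for dx in range(5):
            bg += pad[dy:dy + frame.shape[0], dx:dx + frame.shape[1]]
    bg /= 25.0
    resid = frame - bg
    thresh = resid.mean() + k * resid.std()
    return np.argwhere(resid > thresh)

def is_live(candidate_yx, ir_frame, ir_thresh):
    """Step 4 (hypothetical fusion rule): a visible-band candidate is 'live'
    only if the infrared subsystem also sees it above threshold."""
    y, x = candidate_yx
    return ir_frame[y, x] > ir_thresh

# One bright target pixel; IR sees it, so it should be classified 'live'
frame = np.zeros((16, 16)); frame[8, 8] = 10.0
cands = detect_candidates(frame)
ir = np.zeros((16, 16)); ir[8, 8] = 5.0
```

Step 3 (candidate verification) would in practice use temporal consistency over frames, which this single-frame sketch omits.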
A detection algorithm for small moving targets is proposed. The algorithm first applies convolution filtering for noise smoothing; then a proposed preprocessing method based on the norm of the difference vectors of the processed image sequence removes most of the low-frequency background. Optical flow is then used to segment the candidate small moving targets from the subimage remaining after preprocessing. Finally, statistical information is calculated for each candidate, and a criterion based on these statistical features determines whether each candidate is a true target. Because the preprocessing removes most of the low-frequency background effectively, the computational load of the subsequent optical flow processing is greatly reduced. Experiments in a designed test system prove that the proposed algorithm can detect small moving targets in 30 fps, 512×512-pixel staring image sequences with an SNR of no less than 3 dB, with a correct detection probability of up to 96%, which satisfies real-time processing requirements in practice.
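The difference-vector-norm preprocessing is easy to state concretely: per pixel, take the norm of the vector of temporal differences over the sequence and keep only pixels whose norm exceeds a threshold. A minimal sketch, with the threshold value chosen for illustration:

```python
import numpy as np

def diff_norm_mask(frames, thresh):
    """Per-pixel norm of the temporal difference vector; pixels below
    `thresh` are treated as low-frequency background and discarded
    before the (more expensive) optical flow stage."""
    diffs = np.diff(frames.astype(float), axis=0)   # shape (T-1, H, W)
    norm = np.sqrt((diffs ** 2).sum(axis=0))
    return norm > thresh

# Toy sequence: a point target drifting one pixel per frame
frames = np.zeros((4, 8, 8))
for t in range(4):
    frames[t, 3, 2 + t] = 10.0
mask = diff_norm_mask(frames, thresh=5.0)
```

Only the few pixels the target visits survive the mask, which is why the subsequent optical flow computation shrinks so dramatically.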
The relationship between the threshold of high-frequency extrapolation and the entropy of the corresponding reconstructed image in the Wavelet Bicubic Interpolation Algorithm is analyzed. Using information entropy as a cost function, a Maximal Entropy Wavelet Bicubic Interpolation Search Algorithm is proposed, which automatically searches for the extrapolation threshold that reconstructs the image with maximal entropy. Although the reconstructed maximal-entropy image contains more detail than the original image, it may also introduce much uncertain and incorrect information. To remedy this shortcoming, a new cost function based on the old one is established. The new cost function not only remedies the weakness of entropy as a cost function, but also contains a weight that can be adjusted to reconstruct different super-resolution images for different practical requirements; this yields a Weighted Wavelet Bicubic Interpolation Search Algorithm. The experimental results show that if the distribution of the processed images is close to the maximum likelihood distribution, a large weight should be selected to reconstruct a relatively better super-resolution image with richer detail, whereas if the distribution is far from the maximum likelihood distribution, a small weight should be selected to reconstruct a relatively better super-resolution image with better visual quality. The weight in the new algorithm can therefore be chosen to satisfy different practical cases.
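A sketch of the threshold search with entropy as the cost function. Here `reconstruct` is a hypothetical callable standing in for the wavelet bicubic interpolation with extrapolation threshold t, and the detail pattern is synthetic; only the search structure matches the description above.

```python
import numpy as np

def entropy(img, bins=64):
    """Shannon entropy of the grey-level histogram (the cost function)."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def search_threshold(lowres, reconstruct, thresholds):
    """Grid search for the extrapolation threshold maximizing entropy."""
    scores = [entropy(reconstruct(lowres, t)) for t in thresholds]
    return thresholds[int(np.argmax(scores))]

# Synthetic stand-in: larger t injects more high-frequency "detail"
yy, xx = np.mgrid[0:32, 0:32]
detail = np.sin(0.7 * xx) * np.cos(0.5 * yy)
flat = np.full((32, 32), 0.5)
recon = lambda img, t: np.clip(img + t * detail, 0.0, 1.0)
best = search_threshold(flat, recon, [0.0, 0.1, 0.2, 0.4])
```

The weighted variant replaces the pure entropy score with a weighted combination penalizing spurious detail, which is exactly the adjustment the new cost function provides.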
Exploiting a key feature of remote sensing staring binocular imaging systems, namely that the information passed through the overlapped field of view (FOV) is greater than that passed through the non-overlapped FOV, a new parallel high-speed automatic detection algorithm for moving point targets is proposed. In the proposed algorithm, the difference vector norm of the detected image sequence is used as a preprocessing step to remove low-frequency noise and background pixels, and the optical flow algorithm is then applied to segment the candidate moving point targets from the subimage remaining after preprocessing. If candidate moving point targets are detected by optical flow, the binocular system is rotated so that the overlapped FOV points at each candidate, and a newly proposed space-time parallel determining approach decides whether each is a true moving point target. Because the preprocessing removes most of the low-frequency noise and background pixels, the computational load of the subsequent optical flow is greatly reduced; at the same time, the space-time parallel determining algorithm greatly decreases the determining time. The experimental results prove that the average detection time of moving point targets with the proposed algorithm in a staring infrared binocular imaging system is reduced by 50% compared with the traditional detection approach, and that if the SNR of the processed images is no less than 3 dB, the correct determination probability is 97%.
When the required scene exceeds the field of view of a single optical sensor, it is difficult to capture the whole scene at once. In this case, the scene can be captured by several optical sensors simultaneously, producing a set of overlapped images from which the whole scene is reproduced. This paper presents a robust image mosaic method based on the wavelet transform. The newly developed registration and fusion algorithm is implemented automatically and simultaneously, without knowledge of camera motion or focal length. The wavelet transform guarantees not only a globally optimal solution but also scale and translation invariance for image alignment, giving the scheme higher performance than traditional mosaic techniques. The hardware structure and software design principles of the Digital Signal Processor-based Image Mosaics System (IMS) are also expounded, and an image enhancement approach is employed to further improve mosaic quality. The concept, algorithm, and experiments are described. The test results show that the IMS is efficient and accurate for acquiring a seamless mosaic of overlapped images while meeting real-time requirements; a seamless, wide field-of-view image with adaptive resolution can be acquired.
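Blending in the overlap region can be illustrated with simple linear feathering; the system above fuses in the wavelet domain, so this is only a stand-in for the seam-removal idea, with all sizes illustrative.

```python
import numpy as np

def feather_mosaic(left, right, overlap):
    """Blend two horizontally overlapping images: outside the overlap each
    image is copied; inside, the left weight ramps linearly from 1 to 0."""
    h, wl = left.shape
    wr = right.shape[1]
    out = np.zeros((h, wl + wr - overlap))
    out[:, :wl - overlap] = left[:, :wl - overlap]
    out[:, wl:] = right[:, overlap:]
    alpha = np.linspace(1.0, 0.0, overlap)          # left-image weight ramp
    out[:, wl - overlap:wl] = (alpha * left[:, wl - overlap:]
                               + (1 - alpha) * right[:, :overlap])
    return out

# Toy demo: a bright and a dark strip joined with a 2-pixel overlap
mosaic = feather_mosaic(np.ones((4, 6)), np.zeros((4, 6)), overlap=2)
```

Wavelet-domain fusion refines this idea by blending each frequency band with a band-appropriate transition width, which suppresses both seams and ghosting.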
It is very difficult to measure the distance of a non-cooperative moving point target in the remote sensing field, because a point target presents no usable geometric dimensions or texture and is easily missed. In this paper, a new algorithm based on the image sequence and a nonlinear regressive filtering algorithm is proposed to determine the 3-D parameters of the moving point target in an efficient, passive way. A new multi-channel optical imaging system is also designed, composed of a high-resolution central imaging system and four low-resolution sub-imaging systems. From the geometrical relationship of the four sub-imaging systems, the initial values for the nonlinear regressive filtering algorithm can be obtained easily. Finally, experiments with the proposed algorithm were performed on a real system, and the results prove that it can passively and efficiently obtain the 3-D parameters of the moving point target. Furthermore, the recursive nature of the nonlinear regressive filtering algorithm saves a large number of memory units and reduces the computational load of the estimation procedure.
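The memory and computation argument is easiest to see in a linear special case: a constant-velocity Kalman filter, used here as a simplified stand-in for the nonlinear regressive filter, carries only the state vector and its covariance between frames. All parameter values are illustrative.

```python
import numpy as np

def kalman_track(zs, dt=1.0, q=1e-3, r=0.25):
    """Recursive constant-velocity filter on scalar range measurements.
    Only x (state) and P (covariance) persist between frames, which is
    why recursive filtering needs so little memory."""
    x = np.array([zs[0], 0.0])                  # state: [position, velocity]
    P = np.eye(2)
    F = np.array([[1.0, dt], [0.0, 1.0]])       # constant-velocity model
    H = np.array([[1.0, 0.0]])                  # we measure position only
    for z in zs[1:]:
        x = F @ x                               # predict
        P = F @ P @ F.T + q * np.eye(2)
        y = z - H @ x                           # innovation
        S = H @ P @ H.T + r
        K = P @ H.T / S                         # Kalman gain (S is 1x1)
        x = x + (K * y).ravel()                 # update
        P = (np.eye(2) - K @ H) @ P
    return x

# Noise-free ramp: position 2.0 growing by 0.5 per frame
zs = 2.0 + 0.5 * np.arange(20)
state = kalman_track(zs)
```

The paper's filter is nonlinear because range enters the camera measurements through projection, but the same predict/update recursion applies.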
A new concept, the Discontinuous Frame Difference of image sequences, is proposed in this paper and applied to the optical flow algorithm. The modified optical flow algorithm overcomes a shortcoming of the traditional method, which cannot detect a small moving target whose displacement between two consecutive frames is less than one pixel. The infinite norm of the Discontinuous Frame Difference vector is used to preprocess the image sequence, discarding most of the pixels that do not belong to the moving target; the instantaneous velocities of the remaining pixels are then calculated by the optical flow algorithm. If the pixels in an area show continuity and consistency of motion, a moving object is declared. Because the preprocessing discards most of the pixels, the computational load of the optical flow is greatly reduced; however, since the preprocessing may lose some candidate target pixels, a gray intensity analysis is used to recover them. The Discontinuous Frame Difference optical flow field algorithm can be implemented as a parallel system that detects different kinds of moving objects with different velocities. The experimental results prove the effectiveness of the method.
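The Discontinuous Frame Difference itself is simple to write down: difference frames k apart rather than adjacent frames, so a target moving less than one pixel per frame accumulates a detectable displacement. A sketch, taking the infinite norm over the temporal difference vector:

```python
import numpy as np

def discontinuous_diff(frames, k):
    """Per-pixel infinite norm (max of absolute values) over all frame
    differences taken k frames apart; slow sub-pixel motion that adjacent
    differencing misses shows up clearly for larger k."""
    f = frames.astype(float)
    return np.abs(f[k:] - f[:-k]).max(axis=0)

# Target drifting 0.25 pixel/frame along a 1-D strip, simulated with
# linear interpolation between the two nearest pixels
T, W = 8, 16
frames = np.zeros((T, W))
for t in range(T):
    pos = 4 + 0.25 * t
    i, frac = int(pos), pos - int(pos)
    frames[t, i] += 1 - frac
    frames[t, i + 1] += frac

slow = discontinuous_diff(frames, k=1).max()   # adjacent frames: weak response
fast = discontinuous_diff(frames, k=4).max()   # k frames apart: full-strength response
```

With k chosen so that the accumulated displacement exceeds one pixel, the difference reaches the target's full amplitude, which is why different k values in parallel cover different target velocities.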