To enhance space-based surveillance capability, this paper describes detailed modeling of the visible imaging characteristics of space objects. First, a space-based imaging detection model is built from the visible radiation scattered by the space object. The model consists of radiation transmission based on the bidirectional reflectance distribution function (BRDF) and a 256-level grayscale transformation. Then, imaging conditions such as the imaging angle and size are analyzed according to the positions of the sun, the object, and the detector. Finally, grayscale images of the HuanJing-1 satellite are simulated. The results show that grayscale values differ greatly across regions of the object, indicating that a space-based detector requires a large dynamic range.
KEYWORDS: Satellites, Bidirectional reflectance distribution function, Sun, Digital signal processing, Surveillance, Mathematical modeling, Sensors, Solar cells, Electromagnetic scattering theory
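As a minimal illustration of the 256-level grayscale transformation step described above: the linear radiance-to-digital-number mapping and all radiance values below are illustrative assumptions, not the paper's actual detector calibration.

```python
import numpy as np

def to_grayscale(radiance, lo, hi):
    """Linearly map scene radiance onto the detector's 256 grayscale levels.

    lo and hi are the radiances mapped to DN 0 and DN 255; radiances
    outside that dynamic range clip (saturate).
    """
    dn = (np.asarray(radiance, float) - lo) / (hi - lo)
    return np.round(np.clip(dn, 0.0, 1.0) * 255).astype(np.uint8)

# Illustrative radiances for different regions of an object (arbitrary units):
# a shadowed surface, a diffusely lit panel, and a specular glint.
radiance = np.array([0.02, 0.50, 30.0])

print(to_grayscale(radiance, lo=0.0, hi=1.0))   # narrow range: the glint saturates
print(to_grayscale(radiance, lo=0.0, hi=30.0))  # wide range: dim regions crush to 0
```

Either choice of range loses detail somewhere, which is why a large dynamic range is needed when region-to-region grayscale differences are great.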
To enhance space-based surveillance capability, this paper describes detailed non-resolved space object characterization using brightness data. First, based on optical scattering theory, a mathematical model for the brightness characteristics of a space object is established with the bidirectional reflectance distribution function (BRDF), using region classification and grid division. Then, the brightness of typical geosynchronous satellites is simulated, and the influences of shape, size, and status on brightness are analyzed. A characterization method based on brightness data is proposed. The results show that the shape, size, and status of the object can be deduced from brightness data over a range of time-space periods. Finally, several specific fields of non-resolved space object characterization are discussed.
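A minimal sketch of the grid-division brightness model: the constant Lambertian BRDF, the facet geometry, and the visibility test below are simplifying assumptions; the paper's actual BRDF and region classification are more detailed.

```python
import numpy as np

def brightness(normals, areas, sun, view, albedo=0.3):
    """Sum scattered-flux contributions over surface facets (grid division).

    Each facet contributes BRDF * cos(incidence) * cos(reflection) * area,
    and only facets that are both sunlit and visible to the detector count.
    A constant Lambertian BRDF f = albedo / pi is assumed here.
    """
    sun = np.asarray(sun, float) / np.linalg.norm(sun)
    view = np.asarray(view, float) / np.linalg.norm(view)
    cos_i = normals @ sun    # cosine of solar incidence angle per facet
    cos_r = normals @ view   # cosine of detector direction angle per facet
    lit_and_seen = (cos_i > 0) & (cos_r > 0)
    f = albedo / np.pi
    return float(np.sum(f * areas[lit_and_seen]
                        * cos_i[lit_and_seen] * cos_r[lit_and_seen]))

# One unit facet facing +z, with sun and detector both along +z:
n = np.array([[0.0, 0.0, 1.0]])
a = np.array([1.0])
print(brightness(n, a, sun=[0, 0, 1], view=[0, 0, 1]))  # -> albedo / pi
```

Sweeping the sun and view directions over time with a fixed facet grid is what lets shape, size, and status be deduced from a brightness history.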
This paper presents an approach to enhance the resolution of refocused images by super-resolution
methods. In plenoptic imaging, we demonstrate that the raw sensor image can be divided into a number
of low-resolution angular images with sub-pixel shifts between one another. The sub-pixel shift, which
defines the super-resolving ability, is mathematically derived by treating the plenoptic camera as an
equivalent camera array. We implement a simulation to demonstrate the imaging process of a plenoptic
camera. A high-resolution image is then reconstructed using maximum a posteriori (MAP) super-resolution
algorithms. Without other degradation effects in the simulation, the super-resolved image
achieves a resolution as high as predicted by the proposed model. We also build an experimental setup
to acquire light fields. With traditional refocusing methods, the image is rendered at a rather low
resolution. In contrast, we implement the super-resolved refocusing methods and recover an image with
more spatial detail. To evaluate the performance of the proposed method, we finally compare the
reconstructed images using image quality metrics such as peak signal-to-noise ratio (PSNR).
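The division of the raw plenoptic sensor image into low-resolution angular (sub-aperture) images can be sketched as follows; the S-by-S pixel-per-microlens layout and the strided-slicing extraction describe an idealized sensor, not the authors' exact pipeline.

```python
import numpy as np

S, H, W = 4, 32, 48  # angular samples per microlens; microlens grid size

# Synthetic 4D light field L[u, v, y, x]: angular indices (u, v),
# spatial (microlens) indices (y, x).
rng = np.random.default_rng(0)
L = rng.random((S, S, H, W))

# Idealized plenoptic raw image: each microlens (y, x) projects an S x S
# patch of angular samples, so raw[y*S + u, x*S + v] = L[u, v, y, x].
raw = np.transpose(L, (2, 0, 3, 1)).reshape(H * S, W * S)

# A sub-aperture (angular) image is recovered by strided slicing: one pixel
# under every microlens, all sharing the same angular coordinate (u, v).
u, v = 1, 3
sub = raw[u::S, v::S]

print(sub.shape)                     # (32, 48): one low-resolution angular image
print(np.array_equal(sub, L[u, v]))  # True
```

Each of the S*S sub-aperture images sees the scene from a slightly different viewpoint, which is what produces the sub-pixel shifts exploited by the MAP super-resolution step.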
Light field photography captures the 4D radiance information of a scene. Digital refocusing and digital correction of
aberrations can be performed after the photograph is taken. However, capturing a 4D light field is costly, and tradeoffs
between different image quality metrics must be made and evaluated. This paper explores the effects of light field
photography on image quality by quantitatively evaluating some basic criteria for an imaging system. A simulation
approach was first developed by ray-tracing a designed light field camera. A standard test chart conforming to ISO
12233 was provided as the input scene. A sequence of light field raw images was acquired and then processed by light field
rendering methods. Through-focus visual resolution and MTF were calculated and analyzed. As a
comparison, the same tests were performed for the same main lens system under conventional photography. An
experimental light field system was built and its performance was tested. This work helps in better understanding the
pros and cons of light field photography compared with conventional imaging methods, and in perceiving how to
optimize the joint digital-optical design of the system.
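As a sketch of the MTF calculation behind such evaluations, the example below uses a synthetic Gaussian-blurred edge rather than the paper's ray-traced ISO 12233 chart; the sigma value and grid size are illustrative assumptions.

```python
import numpy as np

# Synthetic edge spread function (ESF): an ideal step edge blurred by a
# Gaussian PSF of known sigma, sampled on a 1D pixel grid.
n, sigma = 256, 2.0
x = np.arange(n) - n / 2
psf = np.exp(-0.5 * (x / sigma) ** 2)
psf /= psf.sum()
esf = np.cumsum(psf)          # blurred step edge

lsf = np.diff(esf)            # line spread function = derivative of the ESF
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                 # normalize so MTF(0) = 1

# For a Gaussian PSF the MTF is exp(-2 * pi^2 * sigma^2 * f^2),
# with f in cycles per pixel; the numerical curve should match it.
f = 10 / (n - 1)
print(mtf[10], np.exp(-2 * np.pi**2 * sigma**2 * f**2))
```

With a ray-traced chart instead of a synthetic edge, the same derivative-then-FFT chain yields the through-focus MTF curves the abstract refers to.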
The Temporally and Spatially Modulated Fourier Transform Imaging Spectrometer (TSMFTIS) is a new imaging
spectrometer without a moving mirror or slit. Through scanning, it acquires sequential images superposed with
interference fringes. The interferogram can be obtained by orderly arranging the interference information extracted for the
same spatial point from the sequential images, and the spectrum can be recovered using the FFT. Consequently, the attitude
of the bearing platform affects the images and thus reduces the accuracy of the recovered spectra. Since current
attitude measurement accuracy cannot meet the needs of error correction, this paper applies an image registration method
to acquire accurate translations between two sequential images for subsequent correction. The single-step DFT
registration method is applied to register selected window areas away from the null optical path difference position in
the sequential images. This makes full use of common information while reducing the impact of interference fringes,
improving registration accuracy and efficiency. In the simulation experiment, a common large remote sensing image is
used as the ground object. The Fourier shift principle is applied to generate simulated scanning images with sub-pixel
displacements. An artificial spectral data cube produced from the RGB values of each image is used as the input data of
the TSMFTIS, and sequential images superposed with interference fringes are acquired. Registration with the
method described above is performed, and the results are compared with the true values. The results show that the method is
feasible and can achieve sub-pixel accuracy.
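The two Fourier-domain ingredients of this simulation can be sketched as below: generating a sub-pixel-shifted image via the Fourier shift principle, then estimating the shift by correlation. Note that the frequency-zero-padded phase correlation here stands in for the single-step DFT method (which refines only a small neighborhood of the peak instead of upsampling the whole plane), and the test image is synthetic.

```python
import numpy as np

def fourier_shift(img, dy, dx):
    """Circularly shift an image by a (possibly sub-pixel) amount using
    the Fourier shift theorem: out(y, x) = img(y - dy, x - dx)."""
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    phase = np.exp(-2j * np.pi * (fy * dy + fx * dx))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * phase))

def register(a, b, up=10):
    """Estimate the (dy, dx) shift of b relative to a by phase correlation,
    upsampled by zero-padding the cross-power spectrum (sub-pixel peak)."""
    ny, nx = a.shape
    cps = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    cps /= np.abs(cps) + 1e-15                   # normalized cross-power spectrum
    big = np.zeros((ny * up, nx * up), complex)  # zero-pad in frequency ...
    cy, cx = (ny * up - ny) // 2, (nx * up - nx) // 2
    big[cy:cy + ny, cx:cx + nx] = np.fft.fftshift(cps)
    cc = np.abs(np.fft.ifft2(np.fft.ifftshift(big)))  # ... = upsampled correlation
    py, px = np.unravel_index(np.argmax(cc), cc.shape)
    dy, dx = py / up, px / up
    if dy > ny / 2: dy -= ny                     # unwrap circular shifts
    if dx > nx / 2: dx -= nx
    return -dy, -dx                              # peak sits at the negated shift

rng = np.random.default_rng(1)
a = rng.random((64, 64))
b = fourier_shift(a, 0.3, -0.7)  # known sub-pixel displacement
print(register(a, b))            # estimate close to (0.3, -0.7)
```

Comparing the estimated translations against the known displacements applied by `fourier_shift` is how sub-pixel accuracy can be verified in simulation.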
This paper presents a computer simulation of light field photography that records the light field by
inserting a microlens array into a conventional camera. A computational model is configured to emulate
how the 4D light field is distributed in the camera and then captured on a 2D sensor. Based on the
recorded light field, refocused images are calculated by spatial integration at different depths. In the
Fourier domain, a refocused photograph can be obtained by taking an appropriate 2D slice of the 4D Fourier transform of the light field. Based on this Fourier slice theorem, another refocusing algorithm in the Fourier domain is explored in particular in this paper. After reconstructing a focal stack of images at all depths in the scene, a photograph with extended depth of field can be calculated by wavelet-based image fusion methods.