Image and Signal Processing Methods

Evaluation of positioning error-induced pixel shifts on satellite linear push-broom imagery

Author Affiliations
Xiangjun Wang, Yang Li, Feng Liu

Tianjin University, State Key Laboratory of Precision Measuring Technology and Instruments, 92 Weijin Road, Nankai District, Tianjin 300072, China

Hong Wei

University of Reading, School of Systems Engineering, Computer Vision Group, Whiteknights Reading, Berkshire RG6 6AY, United Kingdom

Guimin Jia

Civil Aviation University of China, Department of Aviation Automation, 2898 Jinbei Road, Dongli District, Tianjin 300300, China

J. Appl. Remote Sens. 9(1), 095061 (Sep 11, 2015). doi:10.1117/1.JRS.9.095061
History: Received May 18, 2015; Accepted July 31, 2015

Open Access

Abstract.  Georeferencing is one of the major tasks of satellite-borne remote sensing. Compared to traditional indirect methods, direct georeferencing through a Global Positioning System/inertial navigation system requires fewer and simpler steps to obtain the exterior orientation parameters of remotely sensed images. However, the pixel shift caused by geographic positioning error, which generally derives from boresight angle as well as terrain topography variation, can have a great impact on the precision of georeferencing. The distribution of pixel shifts introduced by positioning error on a satellite linear push-broom image is quantitatively analyzed. We use the variation of the object space coordinate to simulate different kinds of positioning errors and terrain topography. A total differential method is then applied to establish a rigorous sensor model in order to mathematically obtain the relationship between pixel shift and positioning error. Finally, two simulation experiments are conducted using the imaging parameters of the Chang’ E-1 satellite to evaluate two different kinds of positioning errors. The experimental results show that, with the experimental parameters, the maximum pixel shift can reach 1.74 pixels. The proposed approach can be extended to a generic application for imaging error modeling in remote sensing with terrain variation.


An important task of satellite-borne remote sensing is georeferencing, which uses internal orientation parameters (IOPs) given by precalibration in a laboratory together with exterior orientation parameters (EOPs) to coregister remotely sensed images with a real-world geodetic coordinate system pixel by pixel.1 Traditionally, indirect georeferencing has been performed on frame images to acquire the EOPs, which contain position and attitude information, by applying either spatial resection for a single view or the well-known aerial triangulation for multiple views.2 However, this procedure requires numerous ground control points (GCPs), which are expensive to acquire and time-consuming to process.3 Furthermore, with the increasing demand for high-spatial-resolution remote sensing imagery, classical frame sensors are gradually being replaced by linear push-broom sensors. Traditional indirect georeferencing then becomes unsuitable, since every scan line of a linear push-broom sensor has an independent set of EOPs and too many constraints would be required to satisfy the mathematical relationship.4 As a result, a new method that can measure, in real time, the EOPs corresponding to each scan line is inevitable.

Fortunately, with the rapid development of the Global Positioning System/inertial navigation system (GPS/INS), the EOPs of a linear array sensor at the moment of recording can be determined instantaneously without GCPs or additional processing. Consequently, a technology called direct georeferencing (DG) was proposed, which takes advantage of GPS/INS-derived EOPs to obtain the three-dimensional (3-D) object coordinates corresponding to pixels on a linear push-broom image.5,6 DG is much faster and cheaper than the traditional indirect method when collinearity equations are employed; even without any GCPs, the absolute georeferencing accuracy can be better than 20 m with a standard deviation of less than 1 pixel, so it has drawn a great deal of attention.7–11

Although DG has many advantages, it is heavily dependent on the position and orientation precision of its components, which are affected by the impreciseness of the instruments themselves and alignment uncertainties between sensors and the outer environment,3 such as inaccuracy of the interior and exterior parameters, boresight offsets between the GPS/INS and the imaging instruments, imperfect mutual location of the lens system and the CCD image receiver, inherent noise of the video sensor, lens system error, optical inhomogeneity of the atmosphere, the effect of the earth’s curvature, and variations of the terrain elevation.8,12–14 Among these factors, boresight offsets and terrain variation have a significant impact on the accuracy of georeferencing,15 since they lead to differences between the computed detecting light ray and the true light ray, as shown in Fig. 1, resulting in orientation uncertainty and pixel shifts. Because of these pixel shifts, the sensed positions of objects deviate from their actual positions, seriously degrading the quality of the georeferencing. However, to our knowledge, only a few researchers have explicitly focused on this issue. In Refs. 16 and 17, the orientation uncertainty was linearly compensated using a few GCPs; nevertheless, the distribution of orientation uncertainty-induced pixel shifts is not strictly linear. In Ref. 8, only five points on a linear CCD array were analyzed, which cannot lead to a comprehensive understanding of the pattern of pixel shifts.

Fig. 1: Illustration of positioning error.

In this study, the relationship between positioning error and the corresponding pixel shifts is quantitatively assessed. The object space coordinate variation of ground points is introduced to formulate the model of the real terrain topography. Under the assumption that the attitude and orbit of the satellite-borne imaging system are stable, the mathematical model between the positioning error and image pixel shifts on a linear push-broom image is established by the rigorous sensor model (RSM). Finally, simulation experiments are conducted on the basis of the Chinese Chang’ E-1 satellite imaging parameters to demonstrate the distribution of pixel shifts.

The rest of the paper is organized as follows. Section 2 concisely introduces the related principles of a linear push-broom imaging system and the imaging parameters of the Chang’ E-1 satellite. Section 3 describes the modeling of positioning error-induced pixel shifts on the CCD image plane. Section 4 details the two simulation experiments to give a quantitative illustration of pixel shifts under different positioning errors. Section 5 gives the simulation results and discussion. Finally, Sec. 6 concludes the paper.

Brief Introduction of Linear Push-Broom Imaging

Different from traditional perspective frame imaging, linear push-broom imaging makes use of line-central projection to obtain images as the linear CCD flies over an observed area. Based on their structure, CCD sensors can be categorized into single-linear, dual-linear, three-line array, and multiple-line array sensors. Suppose that one linear image Ii is acquired by a single line of a linear CCD camera at time ti. Then a push-broom image I can be mathematically defined as a data set of sequential linear images Ii acquired from the same linear CCD sensor, described as follows:18

$I = \{\, I_i \mid i = 1, 2, \ldots, n \,\},\qquad(1)$
where i = 1, 2, …, n is the push-broom image serial number at each imaging position corresponding to time ti. Each Ii has its own perspective projection center and its own IOPs and EOPs.
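Eq. (1) amounts to stacking scan lines row by row; a minimal sketch in Python (the line data here are arbitrary synthetic placeholders, not real imagery):

```python
import numpy as np

# Eq. (1) in code: a push-broom image I as a stack of sequential
# single-line images I_i, one row per acquisition time t_i.
rng = np.random.default_rng(0)
n, pixels = 100, 1024                    # n scan lines, a 1024-pixel linear CCD
lines = [rng.integers(0, 256, size=pixels) for _ in range(n)]   # I_1 ... I_n
image = np.stack(lines)                  # I = {I_i | i = 1, ..., n}
```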

In the image plane, the x-axis is along the push-broom imaging pixel array as the cross-track direction, and the flight direction, namely the along-track direction, is defined as the y-axis. As shown in Fig. 2, a single-line push-broom image has one row in the y-axis, and as many columns as the number of linear CCD pixels in the x-axis. Therefore, it is not difficult to infer that y is equal to 0 in the single-line push-broom image.

Fig. 2: Coordinate system of single-line linear CCD image.

The most commonly used physical camera model for DG is the RSM,19 which can be expressed as

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} X_{si} \\ Y_{si} \\ Z_{si} \end{bmatrix} + \lambda M_i \begin{bmatrix} x_i \\ y_i \\ -f \end{bmatrix},\qquad(2)$$
where (X,Y,Z) is the object space coordinate of an arbitrary ground point P, (Xsi,Ysi,Zsi) is the object space coordinate of the camera optical center at time ti, λ is the projection scaling coefficient, and Mi is the rotation matrix containing the exterior orientation angles (yaw, pitch, and roll) of the imaging CCD platform. (xi,yi) are the image coordinates of the ground point P in the image space, and f is the focal length of the optical imaging system. The perspective spatial model of the image plane, object space coordinate, and camera coordinate system is shown in Fig. 3, in which O is the center of the CCD image line, Ow-XYZ is the object space coordinate system, and Oc-XcYcZc is the camera coordinate system.
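As a concrete illustration of Eq. (2), the sketch below intersects the viewing ray with a horizontal plane to recover a ground point. The focal length, attitude matrix, and altitude are illustrative stand-ins, not the paper's calibration values.

```python
import numpy as np

def ground_from_pixel(cam_center, M, x, y, f, ground_z=0.0):
    """Rigorous sensor model, Eq. (2): the ground point lies on the ray
    [X, Y, Z] = [Xs, Ys, Zs] + lambda * M @ [x, y, -f]; solving the Z
    component for lambda intersects the ray with the plane Z = ground_z."""
    ray = M @ np.array([x, y, -f])
    lam = (ground_z - cam_center[2]) / ray[2]
    return cam_center + lam * ray

# Illustrative nadir geometry: identity attitude matrix, 200 km altitude,
# an assumed ~23.3 mm focal length; not Chang' E-1 calibration data.
cam = np.array([0.0, 0.0, 200_000.0])
P = ground_from_pixel(cam, np.eye(3), x=0.0, y=0.0, f=0.0233)
# The principal ray of a nadir view hits the ground directly below the camera.
```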

Fig. 3: The perspective spatial model.

Chang’ E-1 Three-Line Array CCD Imaging Model

The Chinese Chang’ E-1 satellite adopts the three-line array push-broom asynchronous stereoscopic method to acquire image data of the lunar surface.20 The Chang’ E-1 linear push-broom imaging system is composed of an area CCD with a solid-state array measuring 1024×1024 pixels. The 11th, 512th, and 1013th linear CCD arrays in the along-track direction are selected to construct the three-line array CCD imaging system. The three linear arrays share the same focal plane at the forward, nadir, and backward views, respectively, and the looking angle between adjacent views is 16.7 deg. The lunar surface is continuously scanned by the three views with the same imaging cycle during flight. The acquired two-dimensional images contain information from three viewing channels and can be used for 3-D surface reconstruction.

Figure 4 shows the definition of Chang’ E-1 exterior attitude angles, where φ, ω, and κ stand for yaw, pitch, and roll angles, respectively. Figure 5 shows the Chang’ E-1 imaging geometry and its ground covering track.

Fig. 4: Configuration of Chang’ E-1 exterior attitude angles.

Fig. 5: Chang’ E-1 three-line array CCD camera imaging process.

As mentioned above, the main objective of this research is to evaluate the effects of positioning error on push-broom imagery and to establish a model of the corresponding pixel shifts. Figure 6 illustrates a pixel shift induced by positioning error in the cross-track direction. P is the point at which the light ray should truly point; however, because of the positioning error, the calculated light ray points at Q rather than P. Accordingly, the horizontal and vertical differences in the object space coordinates are dX and dZ, respectively, resulting in the pixel shift dx on the push-broom image plane.

Fig. 6: Pixel shift induced by positioning error.

Since the topographic morphology of the terrain surface can be described by the combination of the coordinate values X, Y, and Z in the object space, and taking into account the uncertainty of the positioning accuracy, the problem discussed in this paper can be transformed into building an error model between the pixel shift (Δx,Δy) in the image plane coordinate and the positioning error-induced variation of the ground point coordinates (X,Y,Z) in the object space.

According to the RSM, the imaging process of a push-broom linear CCD can be described by a collinearity equation,21 given by

$$\begin{cases}
x_i = -f\,\dfrac{a_1(X - X_{si}) + b_1(Y - Y_{si}) + c_1(Z - Z_{si})}{a_3(X - X_{si}) + b_3(Y - Y_{si}) + c_3(Z - Z_{si})}\\[8pt]
y_i = -f\,\dfrac{a_2(X - X_{si}) + b_2(Y - Y_{si}) + c_2(Z - Z_{si})}{a_3(X - X_{si}) + b_3(Y - Y_{si}) + c_3(Z - Z_{si})}
\end{cases}\qquad(3)$$
where (xi,yi) is the image plane coordinate of the ground point P at time ti, and (X,Y,Z) and (Xsi,Ysi,Zsi) have the same definitions as in Eq. (2). aj, bj, and cj (j = 1, 2, 3) are the nine elements of the rotation matrix, defined as22

$$\begin{cases}
a_1 = \cos\varphi_i\cos\kappa_i - \sin\varphi_i\sin\omega_i\sin\kappa_i\\
a_2 = -\cos\varphi_i\sin\kappa_i - \sin\varphi_i\sin\omega_i\cos\kappa_i\\
a_3 = -\sin\varphi_i\cos\omega_i\\
b_1 = \cos\omega_i\sin\kappa_i\\
b_2 = \cos\omega_i\cos\kappa_i\\
b_3 = -\sin\omega_i\\
c_1 = \sin\varphi_i\cos\kappa_i + \cos\varphi_i\sin\omega_i\sin\kappa_i\\
c_2 = -\sin\varphi_i\sin\kappa_i + \cos\varphi_i\sin\omega_i\cos\kappa_i\\
c_3 = \cos\varphi_i\cos\omega_i
\end{cases}\qquad(4)$$
where φi, ωi, and κi are the yaw, pitch, and roll angles of the linear CCD at time ti. For the definitions of the attitude angles of Chang’ E-1, refer to Fig. 4.
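The nine elements of Eq. (4) can be checked numerically: assembled into a matrix, they must be orthonormal and reduce to the identity for zero attitude angles. A sketch (the nonzero angle values are arbitrary test inputs):

```python
import numpy as np

def rotation_elements(phi, omega, kappa):
    """The nine elements a_j, b_j, c_j of Eq. (4), assembled row-wise
    as [[a1, a2, a3], [b1, b2, b3], [c1, c2, c3]]."""
    sp, cp = np.sin(phi), np.cos(phi)
    so, co = np.sin(omega), np.cos(omega)
    sk, ck = np.sin(kappa), np.cos(kappa)
    return np.array([
        [cp * ck - sp * so * sk, -cp * sk - sp * so * ck, -sp * co],
        [co * sk,                 co * ck,                -so],
        [sp * ck + cp * so * sk, -sp * sk + cp * so * ck,  cp * co],
    ])

M0 = rotation_elements(0.0, 0.0, 0.0)          # identity for zero angles
M = rotation_elements(0.1, -0.05, 0.02)        # arbitrary small angles
orthonormal = np.allclose(M.T @ M, np.eye(3))  # rotations satisfy M^T M = I
```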

It is known that a push-broom image is a combination of lines captured by a linear camera, and the pixel shifts in the along-track direction can be derived from the cross-track pixel shifts. In other words, under the assumption that the imaging platform is stable, it is adequate to model only the pixel shifts along the x-direction in the image plane. Moreover, since the relative positions of the three linear arrays are fixed, the relationship between the positioning error and the pixel shifts of any one array can be derived from that of another. Therefore, according to Eqs. (3) and (4), the value of xi is determined by the arbitrary ground point (X,Y,Z) and the camera optical center (Xsi,Ysi,Zsi) in the object space coordinate, the camera attitude angles (φi,ωi,κi), and the focal length f. Thus, xi can be expressed as follows:

$x_i = F(X, Y, Z, X_{si}, Y_{si}, Z_{si}, \varphi_i, \omega_i, \kappa_i, f).\qquad(5)$
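Eq. (5) can be written directly as a function. The sketch below evaluates the cross-track coordinate of Eq. (3) and checks that, for a nadir view with zero attitude angles, a point directly below the optical center maps to x = 0; the focal length is an assumed illustrative value.

```python
import numpy as np

def F(X, Y, Z, Xs, Ys, Zs, phi, omega, kappa, f):
    """x_i = F(...) of Eq. (5): the collinearity equation, Eq. (3),
    for the cross-track image coordinate."""
    sp, cp = np.sin(phi), np.cos(phi)
    so, co = np.sin(omega), np.cos(omega)
    sk, ck = np.sin(kappa), np.cos(kappa)
    dX, dY, dZ = X - Xs, Y - Ys, Z - Zs
    num = (cp*ck - sp*so*sk)*dX + co*sk*dY + (sp*ck + cp*so*sk)*dZ
    den = -sp*co*dX - so*dY + cp*co*dZ
    return -f * num / den

# Nadir view, zero attitude angles, assumed f = 23.3 mm, 200 km altitude.
x_nadir = F(0.0, 0.0, 0.0, 0.0, 0.0, 200_000.0, 0.0, 0.0, 0.0, 0.0233)
x_off = F(120.0, 0.0, 0.0, 0.0, 0.0, 200_000.0, 0.0, 0.0, 0.0, 0.0233)
```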

All variables on the right-hand side of Eq. (5) are independent; thus, according to the total differential analysis, Eq. (5) becomes

$dx_i = m_1\,dX + m_2\,dY + m_3\,dZ + m_4\,dX_{si} + m_5\,dY_{si} + m_6\,dZ_{si} + m_7\,d\varphi_i + m_8\,d\omega_i + m_9\,d\kappa_i + m_{10}\,df,\qquad(6)$
where m_k (k = 1, 2, …, 10) are the weighting coefficients of each element. In satellite remote sensing, the satellite has a stable orbit and is not subject to atmospheric turbulence, so its movement can be assumed to be uniform circular motion; thus the orbit height and the attitude angles are constant:

$dX_{si} = dY_{si} = dZ_{si} = d\varphi_i = d\omega_i = d\kappa_i = 0.\qquad(7)$
Then Eq. (6) becomes

$dx_i = m_1\,dX + m_2\,dY + m_3\,dZ.\qquad(8)$
The next step is to calculate the weighting coefficients in Eq. (8), which can be obtained by the following steps:

  1. substitute Eq. (4) into Eq. (3) to acquire the polynomial form of function F in Eq. (5);
  2. expand the polynomial obtained in step 1 and calculate the partial derivatives of xi;
  3. substitute Eq. (7) into the derivative expressions from step 2 to acquire the three weighting coefficients m1, m2, and m3.
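The coefficients produced by these steps can be verified numerically: the analytic m1, m2, m3 of Eq. (9) should agree with central finite differences of the collinearity function. A sketch with illustrative nadir-view values (assumed focal length and geometry, not the paper's calibration data):

```python
import numpy as np

f, Xs, Ys, Zs = 0.0233, 0.0, 0.0, 200_000.0   # assumed illustrative values
phi = omega = kappa = 0.0                      # stable nadir attitude
sp, cp = np.sin(phi), np.cos(phi)
so, co = np.sin(omega), np.cos(omega)
sk, ck = np.sin(kappa), np.cos(kappa)

def S_R(X, Y, Z):
    """Numerator S and denominator R of Eq. (3), as defined in Eq. (10)."""
    dX, dY, dZ = X - Xs, Y - Ys, Z - Zs
    S = (cp*ck - sp*so*sk)*dX + co*sk*dY + (sp*ck + cp*so*sk)*dZ
    R = -sp*co*dX - so*dY + cp*co*dZ
    return S, R

def x_img(X, Y, Z):
    S, R = S_R(X, Y, Z)
    return -f * S / R

def analytic_m(X, Y, Z):
    """Weighting coefficients m1, m2, m3 of Eq. (9)."""
    S, R = S_R(X, Y, Z)
    m1 = -(f / R**2) * (R*(cp*ck - sp*so*sk) + S*sp*co)
    m2 = -(f / R**2) * (R*co*sk + S*so)
    m3 = -(f / R**2) * (R*(sp*ck + cp*so*sk) - S*co*cp)
    return np.array([m1, m2, m3])

# Central finite differences of x_i with respect to X, Y, Z.
P, h = np.array([500.0, 0.0, 0.0]), 1e-2
numeric_m = np.array([
    (x_img(*(P + h*e)) - x_img(*(P - h*e))) / (2*h) for e in np.eye(3)
])
match = np.allclose(analytic_m(*P), numeric_m, rtol=1e-5, atol=1e-14)
```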

After the above three steps, the unique solutions of the weighting coefficients can be calculated as

$$\begin{cases}
m_1 = -\dfrac{f}{R^2}\left[R(\cos\varphi_i\cos\kappa_i - \sin\varphi_i\sin\omega_i\sin\kappa_i) + S\sin\varphi_i\cos\omega_i\right]\\[6pt]
m_2 = -\dfrac{f}{R^2}\left(R\cos\omega_i\sin\kappa_i + S\sin\omega_i\right)\\[6pt]
m_3 = -\dfrac{f}{R^2}\left[R(\sin\varphi_i\cos\kappa_i + \cos\varphi_i\sin\omega_i\sin\kappa_i) - S\cos\omega_i\cos\varphi_i\right]
\end{cases}\qquad(9)$$
where R and S are defined as

$$\begin{cases}
R = -\sin\varphi_i\cos\omega_i\,(X - X_{si}) - \sin\omega_i\,(Y - Y_{si}) + \cos\varphi_i\cos\omega_i\,(Z - Z_{si})\\[4pt]
S = (\cos\varphi_i\cos\kappa_i - \sin\varphi_i\sin\omega_i\sin\kappa_i)(X - X_{si}) + \cos\omega_i\sin\kappa_i\,(Y - Y_{si}) + (\sin\varphi_i\cos\kappa_i + \cos\varphi_i\sin\omega_i\sin\kappa_i)(Z - Z_{si}).
\end{cases}\qquad(10)$$

This completes the mathematical model between the positioning error and the corresponding pixel shifts on linear push-broom imagery.

To simulate different kinds of terrain surface variation and positioning errors, the experiment consists of two parts: the first part focuses on the pixel shifts introduced by constant positioning errors at different terrain surface points in a given area. The second part demonstrates the pixel shifts caused by different positioning errors at a fixed terrain surface point. The simulation was implemented in MATLAB.

The Chang’ E-1 satellite imaging parameters were applied to simulate the pixel shifts corresponding to positioning errors. To make use of the model derived in Sec. 3, the following information is necessary:

  1. the focal length of the linear push-broom imaging CCD camera;
  2. the initial attitude angles;
  3. the satellite and ground point positions in the object space coordinate.

The parameters of the Chang’ E-1 satellite imaging system23 are shown in Table 1, and Fig. 7 shows the configuration of the parameters. The linear push-broom imaging sensor covers 60 km in the cross-track direction at a 120-m spatial resolution, which means that only 500 pixels of a scan line are actually used. We assume that the satellite moves along a straight line and keeps its attitude angles and flight height unchanged. Therefore, the initial exterior angles can be given as φ = ω = κ = 0. Meanwhile, if we select the nadir view for simulation, it is easy to derive that Xs = Ys = 0 and Zs = 200 km, the satellite orbit altitude.
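A quick consistency check on these quantities, assuming only the values quoted in the text (200 km altitude, 60 km swath, 120 m ground resolution); the swath-derived field of view is a quantity computed here, not one quoted by the paper:

```python
import math

orbit_altitude_m = 200_000.0   # Chang' E-1 orbit height
swath_m = 60_000.0             # cross-track coverage of a scan line
ground_res_m = 120.0           # spatial resolution

pixels_used = swath_m / ground_res_m                     # scan-line pixels used
fov_deg = 2 * math.degrees(math.atan((swath_m / 2) / orbit_altitude_m))
```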

Table 1: Parameters of Chang’ E-1 imaging system.

Fig. 7: Configuration of Chang’ E-1 imaging parameters. (a) The pixel size of the linear CCD sensor. (b) The ground resolution and swath width of a scan line. (c) The configuration of forward, nadir, and backward views.

Pixel Shifts Caused by Constant Positioning Errors in a Given Area

Since the swath width of the Chang’ E-1 linear push-broom image is 60 km, the X range of the ground points in the object space coordinate can be predefined as (−30,000, 30,000) (unit: m). The corresponding Z is set in the range of (−6000, 6000) (unit: m), which means that the altitude Z of the terrain surface point varies from −6000 m to 6000 m relative to the base level (Z = 0). A set of points T = {Pi,j | i = 1, 2, 3, …, 61; j = 1, 2, 3, …, 61} uniformly distributed in the given area is selected, so the intervals between adjacent points are 1000 m along the cross-track direction and 200 m along the altitude direction. The object space coordinate of Pi,j is denoted by (Xi, Zj). Since the spatial resolution of Chang’ E-1 is 120 m, the positioning error of each point in T is set to a constant value of dX = 120 m and dZ = 120 m.
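A sketch of this setup under simplifying assumptions: zero attitude angles so Eq. (3) reduces to x = fX/(Zs − Z), an assumed 23.3 mm focal length, and an assumed 14 µm pixel pitch (consistent with 120 m resolution at 200 km); these are stand-ins, not Table 1 values.

```python
import numpy as np

f, Zs = 0.0233, 200_000.0           # assumed focal length (m), orbit height (m)
pixel_pitch = 14e-6                  # assumed CCD pixel pitch (m)

X = np.linspace(-30_000, 30_000, 61)   # cross-track grid, 1000 m spacing
Z = np.linspace(-6_000, 6_000, 61)     # altitude grid, 200 m spacing
XX, ZZ = np.meshgrid(X, Z)

def x_img(X, Z):
    """Eq. (3) for a nadir view with zero attitude angles."""
    return f * X / (Zs - Z)

# Pixel shift when every ground point is displaced by dX = dZ = 120 m.
dx_plane = x_img(XX + 120.0, ZZ + 120.0) - x_img(XX, ZZ)   # metres on focal plane
shift_px = np.abs(dx_plane) / pixel_pitch
center_shift = shift_px[30, 30]      # the point directly below the camera
```

Under these assumptions the shift near the nadir point comes out close to 1 pixel, the same order as the values reported in the text.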

Pixel Shifts Caused by Different Positioning Errors at a Fixed Ground Point

In order to analyze the pixel shifts introduced by different positioning errors at a fixed point, the following parameters were set. A specific point X = 500, Z = 0 (unit: m) in the object space coordinate was selected as the fixed point; the range of positioning errors is set as dX ∈ [−200, 200], dZ ∈ [−100, 100] (unit: m), which means the maximum variation of the positioning error is 200 m along the cross-track direction and 100 m along the altitude direction. The variation interval is set as Δ(dX) = 5, Δ(dZ) = 2.5 (unit: m).
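The same simplified nadir-view model sketches this second setup; as before, the focal length and pixel pitch are assumed illustrative values, not the paper's calibration data.

```python
import numpy as np

f, Zs, pixel_pitch = 0.0233, 200_000.0, 14e-6   # assumed values
X0, Z0 = 500.0, 0.0                              # the fixed ground point (m)

dX = np.arange(-200.0, 200.0 + 5.0, 5.0)         # cross-track errors, 5 m steps
dZ = np.arange(-100.0, 100.0 + 2.5, 2.5)         # altitude errors, 2.5 m steps
EX, EZ = np.meshgrid(dX, dZ)

def x_img(X, Z):
    """Eq. (3) for a nadir view with zero attitude angles."""
    return f * X / (Zs - Z)

shift_px = np.abs(x_img(X0 + EX, Z0 + EZ) - x_img(X0, Z0)) / pixel_pitch
zero_error_shift = shift_px[40, 40]              # dX = dZ = 0: no shift
```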

The distribution of the absolute values of pixel shifts introduced by the constant positioning error is shown in Fig. 8. The X-axis and Z-axis indicate the cross-track direction and the altitude direction in the object space coordinate, respectively. It can be seen that with the increase in altitude, the pixel shift became larger, although the effect of the X value variation on the pixel shift was not as great as that of the altitude.

Fig. 8: Distribution of pixel shifts induced by constant positioning error.

Figure 9 shows the pixel shifts at different Z values along the X-axis. The pixel shifts are nonlinear and symmetrically distributed along the X-axis like a single valley. The minimum point is reached at X=0; in other words, within the same altitude value of Z, the nearer the imaging point is to the principal point, the smaller the pixel shift it will have. This relationship can also be seen in Fig. 10; with the increase of the absolute value of X, the pixel shifts were also increased. Because the pixel shifts are symmetrically distributed along the X-axis, we only illustrate the minus values of X in Fig. 10. It can be seen that the pixel shifts had a linearly positive correlation on the Z-axis.

Fig. 9: Comparison of pixel shifts introduced by constant positioning error along the altitude direction.

Fig. 10: Comparison of pixel shifts introduced by constant positioning error along the cross-track direction.

Figure 11 shows the pixel shifts induced by different positioning errors at a fixed ground point, where the axes are defined as in Fig. 8. It is obvious that the positioning errors along both the cross-track direction dX and the altitude direction dZ affect the final distribution significantly. If we set dX and dZ to 100 and 50 m, respectively, as illustrated in Table 2, it can be seen that the pixel shifts introduced by dZ are more serious than those introduced by dX. In addition, the pixel shifts are symmetrically distributed around the zero values of dX and dZ. Figures 12 and 13 illustrate the pixel shifts introduced by different positioning errors in the altitude direction and the cross-track direction at a fixed ground point, respectively. The ranges of the two kinds of positioning errors shown are from 0 to −100 m and from 0 to −200 m, each half of its full range. With the increase of the absolute values of dX and dZ, the pixel shifts increased nonlinearly. It is interesting to notice that when only one of the positioning errors exists, either the altitude or the cross-track, the pixel shifts were piecewise linearly correlated to the positioning errors, as shown in Fig. 12 for dZ = 0 and Fig. 13 for dX = 0.

Fig. 11: Distribution of pixel shifts caused by different positioning errors at a fixed point.

Table 2: Pixel shift of fixed dX or dZ.
Fig. 12: Comparison of pixel shifts introduced by different altitude positioning errors.

Fig. 13: Comparison of pixel shifts introduced by different cross-track positioning errors.

Here, we applied dX = 120 m and dZ = 120 m in the first experiment; given that the orbit height of Chang’ E-1 is 200 km, the angle of this positioning error is only about 0.0344 deg. However, the pixel shift in the first experiment reached 0.96 pixels near the nadir point at low altitude and 1.04 pixels away from the nadir point at high altitude. In the second experiment, the angle of the positioning error varies from 0 deg to 0.057 deg, while the pixel shift varies from 0 to 1.72 pixels. In other words, though the angles of the positioning errors are small, the induced pixel shifts can have a significant impact on satellite remote sensing applications such as registration and change detection.24,25
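The angle figures quoted here follow directly from the geometry; a minimal check:

```python
import math

orbit_m = 200_000.0   # Chang' E-1 orbit height

# Positioning errors expressed as angles subtended at the satellite.
angle_exp1 = math.degrees(math.atan(120.0 / orbit_m))   # dX = dZ = 120 m case
angle_exp2 = math.degrees(math.atan(200.0 / orbit_m))   # maximum dX = 200 m case
```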

This study focused on the analysis of the impact of satellite positioning errors on linear push-broom remotely sensed imagery in DG. Given that the satellite-borne imaging system moves at a stable orbit height and attitude, this research establishes a mathematical model that describes the relationship between positioning errors and pixel shifts. Experiments were conducted using the Chang’ E-1 imaging parameters in order to obtain a reasonable result. Quantitative descriptions of pixel shifts induced either by a fixed positioning error at different terrain ground points or by varied positioning errors at a fixed terrain ground point were illustrated. The experimental results indicate that, on linear push-broom imagery, positioning errors along the altitude direction cause larger pixel shifts than those along the cross-track direction. Furthermore, the pixel shifts are nonlinearly related to the deviation of the positioning errors.

Future study will explore the effects of positioning errors under different flight attitudes and imaging angles on linear push-broom imagery, which will help determine the relationship between positioning errors and pixel shifts more completely. A pixel shift compensation model for improving the quality of satellite-borne remote sensing imagery will also be studied.

This work was supported by the National Natural Science Foundation of China (Grant Nos. 60872097 and 61179043), Tianjin Science and Technology Pillar Program (Grant No. 08ZCKFJC27900), and National High Technology Research and Development Program 863 (Grant No. 2010AA122200).

References

1. Beekhuizen J. et al., “Effect of DEM uncertainty on the positional accuracy of airborne imagery,” IEEE Trans. Geosci. Remote Sens. 49, 1567–1577 (2011).
2. Yastikli N. and Jacobsen K., “Direct sensor orientation for large scale mapping—potential, problems, solutions,” Photogramm. Rec. 20, 274–284 (2005).
3. Mumtaz R., Palmer P. L., and Waqar M. M., “Georeferencing of UK DMC stereo-images without ground control points by exploiting geometric distortions,” Int. J. Remote Sens. 35, 2136–2169 (2014).
4. Haala N. et al., “On the performance of digital airborne pushbroom cameras for photogrammetric data processing—a case study,” Int. Arch. Photogramm. Remote Sens. XXXIII (Part B4), 324–331 (2000).
5. Schwarz K. P. et al., “An integrated INS/GPS approach to the georeferencing of remotely sensed data,” Photogramm. Eng. Remote Sens. 59, 1667–1674 (1993).
6. Heipke C., Jacobsen K., and Wegmann H., “The OEEPE test on integrated sensor orientation—results of phase I,” in Photogrammetric Week ’01, Fritsch D. and Spiller R., Eds., pp. 195–204, Herbert Wichmann Verlag, Heidelberg (2001).
7. Yousefzadeh M. and Mojaradi B., “Combined rigorous-generic direct orthorectification procedure for IRS-p6 sensors,” ISPRS J. Photogramm. Remote Sens. 74, 122–132 (2012).
8. Bettemir O. H., “Prediction of georeferencing precision of pushbroom scanner images,” IEEE Trans. Geosci. Remote Sens. 50, 831–838 (2012).
9. Hernández-López D. et al., “Calibration and direct georeferencing analysis of a multi-sensor system for cultural heritage recording,” Photogramm. Fernerkund. Geoinf. 3, 237–250 (2012).
10. Zhao H. et al., “Direct georeferencing of oblique and vertical imagery in different coordinate systems,” ISPRS J. Photogramm. Remote Sens. 95, 122–133 (2014).
11. Kempeneers P. et al., “Geometric errors of remote sensing images over forest and their propagation to bidirectional studies,” IEEE Geosci. Remote Sens. Lett. 10, 1459–1463 (2013).
12. Wolf P. R. and Dewitt B. A., Elements of Photogrammetry with Applications in GIS, 3rd ed., McGraw-Hill, New York (2000).
13. Friedrich J., Leloglu U. M., and Tunali E., “Geometric camera calibration of the BilSAT small satellite: preliminary results,” in ISPRS Workshop Topographic Mapping from Space (with Special Emphasis on Small Satellites), 14–16 February 2006, Vol. XXXVI-1/W41, ISPRS, Ankara, Turkey.
14. Mikhail E. M., Bethel J. S., and McGlone J. C., Introduction to Modern Photogrammetry, Wiley, New York (2001).
15. Breuer M. and Albertz J., “Geometric correction of airborne whiskbroom scanner imagery using hybrid auxiliary data,” Int. Arch. Photogramm. Remote Sens. XXXIII (Part B3), 93–100 (2000).
16. Leprince S. et al., “Automatic and precise orthorectification, coregistration, and subpixel correlation of satellite images, application to ground deformation measurements,” IEEE Trans. Geosci. Remote Sens. 45, 1529–1558 (2007).
17. Schowengerdt R. A., Remote Sensing: Models and Methods for Image Processing, Academic Press, Waltham, Massachusetts (2006).
18. Poli D., Zhang L., and Gruen A., “Orientation of satellite and airborne imagery from multi-line pushbroom sensors with a rigorous sensor model,” Int. Arch. Photogramm. Remote Sens. XXXV (Part B1), 130–135 (2004).
19. Leprince S., Muse P., and Avouac J. P., “In-flight CCD distortion calibration for pushbroom satellites based on subpixel correlation,” IEEE Trans. Geosci. Remote Sens. 46, 2675–2683 (2008).
20. Baochang Z., Jianfeng Y., and Desheng W., “Design and on-orbit measurement of Chang’ E-1 satellite CCD stereo camera,” Spacecr. Eng. 18, 30–36 (2009).
21. Gruen A. et al., “Sensor modelling for aerial mobile mapping with three-line-scanner (TLS) imagery,” Int. Arch. Photogramm. Remote Sens. XXXIV (Part 2), 139–146 (2002).
22. Fraser C. S. and Yamakawa T., “Insights into the affine model for high-resolution satellite sensor orientation,” ISPRS J. Photogramm. Remote Sens. 58, 275–288 (2004).
23. Wang M., Hu F., and Li J., “Epipolar resampling of linear pushbroom satellite imagery by a new epipolarity model,” ISPRS J. Photogramm. Remote Sens. 66, 347–355 (2011).
24. Brown K. M., Foody G. M., and Atkinson P. M., “Modelling geometric and misregistration error in airborne sensor data to enhance change detection,” Int. J. Remote Sens. 28, 2857–2879 (2007).
25. Shi W. and Hao M., “Analysis of spatial distribution pattern of change-detection error caused by misregistration,” Int. J. Remote Sens. 34, 6883–6897 (2013).

Xiangjun Wang received his BS, MS, and PhD degrees in precision measurement technology and instruments from Tianjin University, Tianjin, China, in 1980, 1985, and 1990, respectively. Currently, he is a professor and director of the precision measurement system research group at Tianjin University. His research interests include photoelectric sensors and testing, computer vision, image analysis, MOEMS, and MEMS.

Yang Li received his BS degree in electrical engineering and automation from Yanshan University, Qinhuangdao, China, in 2009. Currently, he is pursuing his PhD in precision measurement technology and instruments from Tianjin University, Tianjin, China, in the master-doctor program for students. His research interest includes remote sensing and image analysis.

Hong Wei obtained her PhD from Birmingham University in 1996. Then she worked as a postdoctoral research assistant on a Hewlett Packard sponsored project, high-resolution CMOS camera systems. She also worked as a research fellow on an EPSRC-funded Faraday project, model from movies. She joined the University of Reading in 2000. Her current research interest includes intelligent computer vision and its applications in remotely sensed images and face recognition (biometric).

Guimin Jia received her BS degree from the School of Instrument and Electronics, Zhongbei University, Shanxi, China, in 2005, and her PhD in precision measurement technology and instruments from Tianjin University, Tianjin, China, in 2011 and 2013. Currently, she is a lecturer with the School of Aviation Automation, Civil Aviation University of China, Tianjin, China. Her research interest includes remote sensing and image processing.

Feng Liu received his BS degree in measurement technology and instruments from Dalian Jiaotong University, Dalian, China, in 2001, and his MS and PhD degrees in precision measurement technology and instruments from Tianjin University, Tianjin, China, in 2006 and 2009. Currently, he is a lecturer with the Department of Precision Measurement Technology and Instruments at Tianjin University, Tianjin, China. His research interests include image processing and computer vision.

© The Authors. Published by SPIE under a Creative Commons Attribution 3.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.

Citation

Xiangjun Wang ; Yang Li ; Hong Wei ; Guimin Jia and Feng Liu
"Evaluation of positioning error-induced pixel shifts on satellite linear push-broom imagery", J. Appl. Remote Sens. 9(1), 095061 (Sep 11, 2015). ; http://dx.doi.org/10.1117/1.JRS.9.095061


Figures

Fig. 1: Illustration of positioning error.

Fig. 2: Coordinate system of single-line linear CCD image.

Fig. 3: The perspective spatial model.

Fig. 4: Configuration of Chang’ E-1 exterior attitude angles.

Fig. 5: Chang’ E-1 three-line array CCD camera imaging process.

Fig. 6: Pixel shift induced by positioning error.

Fig. 7: Configuration of Chang’ E-1 imaging parameters. (a) The pixel size of the linear CCD sensor. (b) The ground resolution and swath width of a scan line. (c) The configuration of forward, nadir, and backward views.

Fig. 8: Distribution of pixel shifts induced by constant positioning error.

Fig. 9: Comparison of pixel shifts introduced by constant positioning error along the altitude direction.

Fig. 10: Comparison of pixel shifts introduced by constant positioning error along the cross-track direction.

Fig. 11: Distribution of pixel shifts caused by different positioning errors at a fixed point.

Fig. 12: Comparison of pixel shifts introduced by different attitude positioning errors.

Fig. 13: Comparison of pixel shifts introduced by different cross-track positioning errors.

Tables

Table 1: Parameters of Chang’ E-1 imaging system.

Table 2: Pixel shift of fixed dX or dZ.

References

Beekhuizen, J., et al., “Effect of DEM uncertainty on the positional accuracy of airborne imagery,” IEEE Trans. Geosci. Remote Sens. 49, 1567–1577 (2011).
Yastikli, N., and Jacobsen, K., “Direct sensor orientation for large scale mapping—potential, problems, solutions,” Photogramm. Rec. 20, 274–284 (2005).
Mumtaz, R., Palmer, P. L., and Waqar, M. M., “Georeferencing of UK DMC stereo-images without ground control points by exploiting geometric distortions,” Int. J. Remote Sens. 35, 2136–2169 (2014).
Haala, N., et al., “On the performance of digital airborne pushbroom cameras for photogrammetric data processing—a case study,” Int. Arch. Photogramm. Remote Sens. XXXIII (Part B4), 324–331 (2000).
Schwarz, K. P., et al., “An integrated INS/GPS approach to the georeferencing of remotely sensed data,” Photogramm. Eng. Remote Sens. 59, 1667–1674 (1993).
Heipke, C., Jacobsen, K., and Wegmann, H., “The OEEPE test on integrated sensor orientation—results of phase I,” in Photogrammetric Week ’01, Fritsch, D., and Spiller, R., Eds., pp. 195–204, Herbert Wichmann Verlag, Heidelberg (2001).
Yousefzadeh, M., and Mojaradi, B., “Combined rigorous-generic direct orthorectification procedure for IRS-p6 sensors,” ISPRS J. Photogramm. Remote Sens. 74, 122–132 (2012).
Bettemir, O. H., “Prediction of georeferencing precision of pushbroom scanner images,” IEEE Trans. Geosci. Remote Sens. 50, 831–838 (2012).
Hernández-López, D., et al., “Calibration and direct georeferencing analysis of a multi-sensor system for cultural heritage recording,” Photogramm. Fernerkund. Geoinf. 3, 237–250 (2012).
Zhao, H., et al., “Direct georeferencing of oblique and vertical imagery in different coordinate systems,” ISPRS J. Photogramm. Remote Sens. 95, 122–133 (2014).
Kempeneers, P., et al., “Geometric errors of remote sensing images over forest and their propagation to bidirectional studies,” IEEE Geosci. Remote Sens. Lett. 10, 1459–1463 (2013).
Wolf, P. R., and Dewitt, B. A., Elements of Photogrammetry with Applications in GIS, 3rd ed., McGraw-Hill, New York (2000).
Friedrich, J., Leloglu, U. M., and Tunali, E., “Geometric camera calibration of the BilSAT small satellite: preliminary results,” in ISPRS Workshop on Topographic Mapping from Space (with Special Emphasis on Small Satellites), 14–16 February 2006, Vol. XXXVI-1/W41, ISPRS, Ankara, Turkey.
Mikhail, E. M., Bethel, J. S., and McGlone, J. C., Introduction to Modern Photogrammetry, Wiley, New York (2001).
Breuer, M., and Albertz, J., “Geometric correction of airborne whiskbroom scanner imagery using hybrid auxiliary data,” Int. Arch. Photogramm. Remote Sens. XXXIII (Part B3), 93–100 (2000).
Leprince, S., et al., “Automatic and precise orthorectification, coregistration, and subpixel correlation of satellite images, application to ground deformation measurements,” IEEE Trans. Geosci. Remote Sens. 45, 1529–1558 (2007).
Schowengerdt, R. A., Remote Sensing: Models and Methods for Image Processing, Academic Press, Waltham, Massachusetts (2006).
Poli, D., Zhang, L., and Gruen, A., “Orientation of satellite and airborne imagery from multi-line pushbroom sensors with a rigorous sensor model,” Int. Arch. Photogramm. Remote Sens. XXXV (Part B1), 130–135 (2004).
Leprince, S., Muse, P., and Avouac, J. P., “In-flight CCD distortion calibration for pushbroom satellites based on subpixel correlation,” IEEE Trans. Geosci. Remote Sens. 46, 2675–2683 (2008).
Baochang, Z., Jianfeng, Y., and Desheng, W., “Design and on-orbit measurement of Chang’ E-1 satellite CCD stereo camera,” Spacecr. Eng. 18, 30–36 (2009).
Gruen, A., et al., “Sensor modelling for aerial mobile mapping with three-line-scanner (TLS) imagery,” Int. Arch. Photogramm. Remote Sens. XXXIV (Part 2), 139–146 (2002).
Fraser, C. S., and Yamakawa, T., “Insights into the affine model for high-resolution satellite sensor orientation,” ISPRS J. Photogramm. Remote Sens. 58, 275–288 (2004).
Wang, M., Hu, F., and Li, J., “Epipolar resampling of linear pushbroom satellite imagery by a new epipolarity model,” ISPRS J. Photogramm. Remote Sens. 66, 347–355 (2011).
Brown, K. M., Foody, G. M., and Atkinson, P. M., “Modelling geometric and misregistration error in airborne sensor data to enhance change detection,” Int. J. Remote Sens. 28, 2857–2879 (2007).
Shi, W., and Hao, M., “Analysis of spatial distribution pattern of change-detection error caused by misregistration,” Int. J. Remote Sens. 34, 6883–6897 (2013).
