Open Access Paper
4 May 2018 A crash course in basic single-scan target tracking (abridged)
Abstract
This talk goes through the components in generic single-scan target tracking algorithms from filtering to data association, track initiation, and termination. In many areas, reference is made to functions in the open-source copyleft-free Tracker Component Library (available online) so that attendees can rapidly apply the algorithms that are discussed.

Different Impressions Obtained from the Literature:

  • A control systems problem to point an antenna towards an object of interest.

  • The prediction of the future state of a dynamical system based on measurements and models.

  • The act of connecting a vehicle’s consecutive positions over time.

  • A problem that was solved by Rudolph E. Kálmán in 1960.


Target Tracking Is:

  • An aid to reduce the workload of radar operators.

  • A process of finding objects of interest where humans couldn’t discern them.

  • An optional part of a radar/sonar system.

  • An indispensable part of a radar system.

  • A critical part of a missile control system or of a counter-missile system.

  • A trivial connecting of the dots.

  • Something that people can do better than the computer.

  • Something that the computer can do better than people.


The difficulty and utility of target tracking methods depend on the application.

  • Getting started can be difficult.

  • No comprehensive textbooks on tracking exist.

  • Some useful books:

    • (Bar-Shalom, Li, and Kirubarajan) Estimation with Applications to Tracking and Navigation: Theory, Algorithms and Software

    • (Crassidis, Junkins) Optimal Estimation of Dynamic Systems

    • (Bar-Shalom, Willett, Tian) Tracking and Data Fusion: A Handbook of Algorithms

    • (Blackman, Popoli) Design and Analysis of Modern Tracking Systems

    • (Maybeck) Stochastic Models, Estimation, and Control

    • (Stone, Streit, Corwin, Bell) Bayesian Multiple Target Tracking

    • (Challa, Morelande, Mušicki, Evans) Fundamentals of Object Tracking

    • (Mahler) Statistical Multisource-Multitarget Information Fusion

  • The International Conference on Information Fusion by the International Society of Information Fusion (ISIF) is the most relevant to target tracking, especially networked/multistatic tracking.

  • The Tracker Component Library (TCL) offers over 1,000 free, commented Matlab routines related to Tracking, Coordinate Systems, Mathematics, Statistics, Combinatorics, Astronomy, etc.

  • 1. Mathematical Preliminaries

  • 2. Coordinate Systems (in the unabridged slides)

  • 3. Measurements and Noise

  • 4. Measurement Conversion (in the unabridged slides)

  • 5. Bayes’ Theorem and the Linear Kalman Filter Update

  • 6. Stochastic Calculus and Linear Dynamic Models (in the unabridged slides)

  • 7. The Linear Kalman Filter Prediction

  • 8. Linear Initial State Estimation and the Information Filter

  • 9. Nonlinear Measurement Updates

  • 10. Square Root Filters (in the unabridged slides)

  • 11. Direct Filtering Versus Measurement Conversion

  • 12. Data Association

  • 13. Integrated and Cascaded Logic Trackers

  • 14. Dealing with Beams (in the unabridged slides)

  • 15. Summary


Useful Mathematical Tools

  • 1. Univariate and Multivariate Taylor Series Expansions

    • Given in the unabridged version of the slides.

  • 2. Useful Probability Distributions.

  • 3. Cubature Integration.

  • 4. The Cramér-Rao Lower Bound.

  • The four most prevalent probability distributions in target tracking tend to be:

    • 1. The Multivariate Gaussian Distribution.

      • Usual assumed noise distribution; discussed in the unabridged slides.

    • 2. The Central Chi-Square Distribution.

    • 3. The Binomial Distribution.

    • 4. The Poisson Distribution.

  • In the TCL, functions relating to these and many other distributions are given in “Mathematical Functions/Statistics/Distributions.”

  • For the above distributions, see GaussianD, ChiSquareD, BinomialD, and PoissonD in the TCL.

  • The central chi-squared distribution with k degrees of freedom is

    χ²(x, k) = x^(k/2 − 1) e^(−x/2) / (2^(k/2) Γ(k/2)),  x ≥ 0,

    where Γ is the gamma function.

  • Plotted is χ2(x, 3).

  • Confidence regions of a desired % are easily determined using Gaussian approximations, Mahalanobis distances, and chi-squared statistics.

  • Given a Gaussian PDF estimate of a target with mean x̂ and covariance P, a point x is within the first pth-percentile if

    (x − x̂)ᵀ P⁻¹ (x − x̂) ≤ γp

    where γp depends on p and on dx, the dimensionality of x.

    Values of γp for confidence region p and dimensionality dx:

    dx    p=0.9     p=0.99    p=0.999   p=0.9999  p=0.99999
    1     2.7055    6.6349    10.8276   15.1367   19.5114
    2     4.6052    9.2103    13.8155   18.4207   23.0259
    3     6.2514    11.3449   16.2662   21.1075   25.9017
    6     10.6446   16.8119   22.4577   27.8563   33.1071
    9     14.6837   21.6660   27.8772   33.7199   39.3407

  • Use ChiSquareD.invCDF in the TCL to determine γp.
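The table entries can be reproduced with any inverse chi-squared CDF; a Python sketch using SciPy as a stand-in for the TCL's ChiSquareD.invCDF (the function name confidence_threshold is illustrative):

```python
# Sketch: reproducing the gamma_p thresholds with SciPy's inverse
# chi-squared CDF (the TCL equivalent is ChiSquareD.invCDF).
from scipy.stats import chi2

def confidence_threshold(p, dx):
    """Return gamma_p such that a point inside the p-confidence region
    has Mahalanobis distance at most gamma_p (dx degrees of freedom)."""
    return chi2.ppf(p, df=dx)

# A couple of entries from the table:
print(round(confidence_threshold(0.9, 1), 4))   # 2.7055
print(round(confidence_threshold(0.99, 2), 4))  # 9.2103
```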

  • The chi-squared distribution plays a role in assessing covariance consistency.

  • The covariance is consistent if it realistically models the error.

  • The Normalized Estimation Error Squared (NEES) is the simplest of multiple methods for assessing consistency.

    NEES = (1 / (N dx)) Σᵢ₌₁ᴺ (xᵢ − x̂ᵢ)ᵀ Pᵢ⁻¹ (xᵢ − x̂ᵢ)
    • x̂ᵢ and Pᵢ are the estimated mean and covariance from the ith random trial.

    • xᵢ is the true value from the ith random trial.

  • If the estimator is unbiased, the covariance is always correct, and the errors are truly Gaussian, then the NEES is 1/(N dx) times a central chi-squared random variable with N dx degrees of freedom.

  • The function calcNEES in the TCL can be useful.
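A minimal NEES computation might look like the following Python sketch (calc_nees is a hypothetical stand-in for the TCL's calcNEES; the simulated estimator here is consistent by construction, so the statistic lands near 1):

```python
# Sketch of a NEES consistency check over Monte Carlo trials;
# assumes unbiased estimates with truly Gaussian errors.
import numpy as np

def calc_nees(x_true, x_est, P):
    """Average normalized estimation error squared over N trials.
    x_true, x_est: (N, dx) arrays; P: (N, dx, dx) covariances."""
    N, dx = x_true.shape
    e = x_est - x_true
    nees = sum(e[i] @ np.linalg.solve(P[i], e[i]) for i in range(N))
    return nees / (N * dx)  # near 1 for a consistent estimator

rng = np.random.default_rng(0)
dx, N = 3, 2000
P = np.tile(np.diag([4.0, 2.0, 1.0]), (N, 1, 1))
x_true = rng.normal(size=(N, dx))
err = rng.multivariate_normal(np.zeros(dx), P[0], size=N)
print(calc_nees(x_true, x_true + err, P))  # close to 1
```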

  • Consider constant false alarm rate (CFAR) detector with a given PFA per cell, such as the ones given by the CACFAR or OSCFAR functions in the TCL.

  • Grid of N cells (e.g. in range and range-rate).

  • Probability of n false alarms is binomially distributed.

    Pr{n} = (N choose n) PFA^n (1 − PFA)^(N−n)
    with mean
    λ = N PFA

  • The binomial distribution is almost never used in trackers.

  • It is approximated by a Poisson distribution with the same mean λ.

    • The asymptotic equivalence is derived in the unabridged slides.
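The quality of the approximation is easy to check numerically; a short Python sketch (the N and PFA values are illustrative):

```python
# Sketch: the binomial false-alarm count versus its Poisson
# approximation with the same mean lambda = N * PFA.
from scipy.stats import binom, poisson

N, PFA = 1000, 0.01          # grid cells and per-cell false-alarm rate
lam = N * PFA                # mean number of false alarms
for n in (5, 10, 15):
    print(n, binom.pmf(n, N, PFA), poisson.pmf(n, lam))
# For large N and small PFA the two PMFs are nearly identical.
```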


Example:

  • In both plots, the mean λ is the same for both distributions.

  • At N = 1000, the binomial and Poisson plots look the same.

  • Many integrals cannot be solved analytically with a finite number of terms.

  • Try to evaluate a Fresnel integral:

    S(t) = ∫₀ᵗ sin(x²) dx

  • Quadrature integration is a technique for efficient numerical evaluation of univariate integrals.

  • Cubature integration is multivariate quadrature integration.

  • The TCL has many functions related to cubature integration in “Mathematical Functions/Numerical Integration” and “Mathematical Functions/Numerical Integration/Cubature Points.”


Numerically integrate the function from 0 to 2.

  • Evaluate

    ∫₀² sin(x²) dx



  • Basic calculus solution: A Riemann sum:

    ∫₀² f(x) dx ≈ Σᵢ₌₁ᴺ f(xᵢ) Δx,  Δx = 2/N

  • Riemann sums converge very slowly.

  • The idea in quadrature/cubature is the relation

    ∫_S w(x) f(x) dx ≈ Σᵢ₌₁ᴺ ωᵢ f(xᵢ)
    is exact for a particular weighting function w for all polynomials f up to a certain order and approximate for other functions f.
    • S is a region, such as ℝn or the surface of a hypersphere.

    • Unlike a Riemann sum, N is finite.

    • Cubature weights ωi and points xi depend on w and the order.

  • Efficient: For a fifth-order integral with a multivariate Gaussian weighting function, one can choose points such that N = 12.
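A one-dimensional illustration of the idea in Python: Gauss–Hermite quadrature points integrate polynomials exactly under a Gaussian weighting function (here the standard normal; a 3-point rule is exact through order 5):

```python
# Sketch of quadrature with a Gaussian weighting function:
# probabilists' Gauss-Hermite points are exact for polynomials
# up to order 2N - 1 under the N(0,1) weight.
import numpy as np

x, w = np.polynomial.hermite_e.hermegauss(3)  # 3 points: exact to order 5
w = w / np.sqrt(2 * np.pi)                    # normalize to the N(0,1) PDF

# E[x^4] under N(0,1) is 3; three points recover it exactly.
print(np.sum(w * x**4))  # 3.0 (up to rounding)
```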

  • Many parts of target tracking involve solving difficult multivariate integrals.

  • Many algorithms fall into one of two categories:

    • 1. Use cubature integration for the integrals.

    • 2. Use a Taylor series expansion to turn the problem polynomial and solvable.

  • This comes up again and again.

  • The Cramér-Rao Lower Bound (CRLB) is a lower bound on the variance (or covariance matrix) of an unbiased estimator.

  • The CRLB and the posterior CRLB (PCRLB) are widely used to assess tracker performance.

  • Under certain conditions, the CRLB states

    E{ (T(z) − x)(T(z) − x)ᵀ } ≥ J⁻¹
    • A matrix inequality refers to sorted eigenvalues.

    • x is the quantity being estimated.

    • T(z) is the best unbiased estimator.

    • J is the Fisher information matrix.

    • The expectation is taken over the conditional PDF p(z|x) if x is deterministic.

  • The Fisher information matrix has two equivalent formulations:

    J = E{ (∇x ln p(z|x)) (∇x ln p(z|x))ᵀ }
    J = −E{ ∇x ∇xᵀ ln p(z|x) }

  • Are these points false alarms or a possible track over time?

  • Are they accurate measurements that are far apart?

  • Are false alarms very unlikely or highly likely?

  • Are these points false alarms or a possible track over time?

  • These are the same points as before at a different scale.

  • Measurements are inherently noisy.

  • Knowledge of measurement noise level determines scale.

  • The blue line is “connect-the-dots.” The orange line just adds interpolation.

  • The blue/orange lines are only good if the points are very accurate.

  • The green line is much more reasonable if the points are inaccurate.

  • The noise level determines the best fit.

  • Given a PDF p(x) representing the target state estimate at a particular time.

  • Given a measurement z and a conditional PDF of the measurement p(z|x).

  • Bayes’ theorem states that

    p(x|z) = p(z|x) p(x) / p(z)

  • The value p(z) is essentially a normalizing constant.

    p(z) = ∫_S p(z|x) p(x) dx

    where S is whatever space x is in (For discrete variables, the integral becomes a sum).

  • Bayes’ theorem underlies all rigorous measurement update algorithms in tracking.

  • The Kalman filter measurement update is just Bayes’ theorem applied to a linear/Gaussian measurement model assuming a Gaussian prior.

  • Notation change for standard tracking:

    • The “prior” subscript will be replaced by “k|k − 1” to indicate that one has an estimate of a current (step k) state given prior (step k − 1) information.

    • The “posterior” subscript will be replaced by “k|k” to indicate that one has an estimate of a current state given current information.

  • The discrete measurement update step of the Kalman filter, with common notation/terminology:

    ẑk|k−1 = H x̂k|k−1 (predicted measurement)
    Sk = H Pk|k−1 Hᵀ + R (innovation covariance)
    Wk = Pk|k−1 Hᵀ Sk⁻¹ (filter gain)
    x̂k|k = x̂k|k−1 + Wk (zk − ẑk|k−1)
    Pk|k = (I − Wk H) Pk|k−1 (I − Wk H)ᵀ + Wk R Wkᵀ

  • The updated covariance estimate has been reformulated in Joseph’s form for numerical stability.

  • See KalmanUpdate in “Dynamic Estimation/Measurement Update” in the TCL.
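The update in Joseph's form can be sketched in a few lines of Python (a hypothetical, minimal stand-in for the TCL's KalmanUpdate; the matrices below are illustrative):

```python
# Sketch of the linear Kalman measurement update, with the covariance
# computed in Joseph's form for numerical stability.
import numpy as np

def kalman_update(x_pred, P_pred, z, H, R):
    S = H @ P_pred @ H.T + R                # innovation covariance
    W = P_pred @ H.T @ np.linalg.inv(S)     # filter gain
    x_upd = x_pred + W @ (z - H @ x_pred)
    IWH = np.eye(len(x_pred)) - W @ H
    P_upd = IWH @ P_pred @ IWH.T + W @ R @ W.T  # Joseph's form
    return x_upd, P_upd

# Position-only measurement of a 2D (position, velocity) state.
x, P = kalman_update(np.zeros(2), np.eye(2), np.array([1.0]),
                     np.array([[1.0, 0.0]]), np.array([[1.0]]))
print(x)  # the position moves halfway toward the measurement: [0.5, 0]
```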

  • The Kalman filter update is optimal for measurements that are linear combinations of the target state.

  • However, why not just apply Bayes’ theorem more precisely?

  • Bayes’ theorem is again:

    p(x|z) = p(z|x) p(x) / p(z)
    • Just multiply two known functions and normalize the result.

  • Bayes’ theorem is trivial. Why not always do it optimally?

  • Bayes’ theorem is just normalized multiplication. Why not just discretize space and do everything almost optimally on a grid?

  • Simplest “optimal” Bayesian filter:

    • 1. Discretize the entire estimation space

    • 2. Evaluate probabilities on a discrete grid for given distributions

    • 3. Multiply matrices of probabilities to get posterior; normalize

  • It is simple.

  • With parallelization over GPUs, couldn’t it be done quickly?
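In one dimension the brute-force grid filter really is this simple; a Python sketch (grid bounds, spacing, and noise levels are illustrative):

```python
# Sketch of the brute-force grid Bayes update on a tiny 1D problem:
# multiply prior and likelihood on the grid, then normalize.
import numpy as np

grid = np.linspace(-5, 5, 1001)             # discretized state space
prior = np.exp(-0.5 * grid**2)              # Gaussian prior, mean 0
z = 1.0                                     # measurement of the state
lik = np.exp(-0.5 * (z - grid)**2)          # Gaussian likelihood
post = prior * lik
post /= post.sum()                          # normalize (Bayes' theorem)
print(grid[np.argmax(post)])                # posterior peaks near 0.5
```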

  • Why the brute-force grid approach is seldom done:

    • One target with 3D position and velocity; position within a 50 km cube in all directions about the sensor; speed up to Mach 4 (1372 m/s) in any direction; discretized to 5 m and 1 m/s.

    • The grid for a single probability density function (PDF) has more than 2 × 10²² points (and we need two grids).

    • Stored as double-precision floats, one grid requires more than 82 zettabytes of RAM (1 ZB = 1 trillion GB).

    • 64GB RAM stick ≈ $255 so cost ≈ $330 trillion ($660 trillion for two grids, US GDP ≈ $53 trillion).

    • Computing power to multiply two grids in 1 ms is ≈ 20 exaflops.

    • Most powerful supercomputer (Tianhe-2, China) 33.85 petaflops. We need 612 of them.

  • A smarter approach would be to use some type of adaptive grid or set of points.

    • This is the basis of particle filters (to be discussed later).

  • The Kalman filter is much faster than the most efficient particle filter.

  • The stochastic dynamic models describe prediction when the initial state x is deterministic.

  • The prediction step of the standard Kalman filter is derived in the unabridged slides and handles random x.

  • See the discKalPred function in the TCL.
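A minimal Python sketch of the discrete prediction step for a 1D nearly-constant-velocity model (F and Q below are the standard continuous-white-noise-acceleration discretizations; the PSD value and initial state are illustrative):

```python
# Sketch of the discrete Kalman prediction step for a 1D
# constant-velocity model with process noise covariance Q.
import numpy as np

T = 0.5                                     # scan period in seconds
F = np.array([[1.0, T], [0.0, 1.0]])        # state transition matrix
Q = 0.48 * np.array([[T**3 / 3, T**2 / 2],
                     [T**2 / 2, T]])        # discretized process noise

def kalman_predict(x, P):
    return F @ x, F @ P @ F.T + Q

x_pred, P_pred = kalman_predict(np.array([0.0, 100.0]), np.eye(2))
print(x_pred)  # position advances by velocity * T; velocity unchanged
```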


Two common approaches to starting the filter are

  • 1. One-point initiation.

    • See the functions in “Dynamic Models/One-Point Initialization” in the TCL.

  • 2. Using an information filter.

    • See infoFilterUpdate and infoFilterDiscPred in the TCL.

    • This is discussed in the unabridged slides.

  • One-point initiation is the simplest approach:

    • The initial state and covariance matrix are

      x̂k|k = [z̄ᵀ, 0ᵀ]ᵀ
      Pk|k = blkdiag(R, diag(σ²_{dz+1}, …, σ²_{dx}))

      where

      • dx and dz are the dimensionalities of the state and the Cartesian-converted measurement.

      • The σ²ᵢ are large variances based on the maximum velocity, acceleration, etc. of the target.

  • Known position, other components “uninformative”.

  • Updates and predictions can then be done using the standard Kalman filter.

  • A rule of thumb for σᵢ is to use the maximum value of the moment divided by 2 or 3.
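One-point initiation is only a few lines; a Python sketch assuming a Cartesian-converted position measurement and using the (vmax/2)² rule of thumb for the velocity variance (all numeric values are illustrative):

```python
# Sketch of one-point initiation: position from the converted
# measurement, velocity "uninformative" with a large variance.
import numpy as np

def one_point_init(z_cart, R_cart, v_max):
    dz = len(z_cart)
    x0 = np.concatenate([z_cart, np.zeros(dz)])   # zero initial velocity
    sigma2_v = (v_max / 2.0)**2                   # rule-of-thumb variance
    P0 = np.block([[R_cart, np.zeros((dz, dz))],
                   [np.zeros((dz, dz)), sigma2_v * np.eye(dz)]])
    return x0, P0

x0, P0 = one_point_init(np.array([1e3, 2e3]), 25.0 * np.eye(2), 300.0)
print(x0)        # position from the measurement, zero velocity
print(P0[2, 2])  # 22500.0 = (300/2)^2
```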

  • Measurement updates are possible without Cartesian conversion.

  • A chart of the major nonlinear filtering algorithms is given in the slides.

  • We focus on the Extended Kalman Filter and variants of the cubature Kalman filter (which include the “unscented” KF).

  • See EKFUpdate and cubKalUpdate in the TCL.

  • The Kalman filter arose from a Bayesian update given that a linear measurement and the state are jointly Gaussian.

  • Approximating a nonlinear measurement

    z = h(x) + w,

    where w is Gaussian, as jointly Gaussian with the state, one still has the same basic update equations as the Kalman filter

    x̂k|k = x̂k|k−1 + Wk (z − ẑk|k−1)
    Pk|k = Pk|k−1 − Wk Pzz Wkᵀ, with gain Wk = Pxz (Pzz)⁻¹,
    but the quantities ẑk|k−1, Pzz, and Pxz are now integrals.

  • The simplest solution to the nonlinear integrals is to use cubature integration.

  • The square root is a lower-triangular Cholesky decomposition.

  • The vector formulation above requires all cubature weights be positive, but allows for Joseph’s form to be used.

  • A Joseph’s formulation supporting negative cubature weights is probably impossible.

  • An alternative approach is to use a Taylor series expansion of the nonlinear function.

  • The result is the extended Kalman filter (EKF).

  • Two common approaches for basic tracking exist:

    • 1. Cartesian converting measurements (and covariances) and using a linear filter.

    • 2. Directly using measurements in a nonlinear filter.

  • These shall be compared in a simple example.

  • A flat Earth.

  • All ships on the surface travel at 10 m/s in the −z direction.

  • The target is initially at an altitude of 7 km going 100 m/s.

  • Radars on ships pointed 15° up from the horizontal.

  • The process noise power spectral density is 0.4802 m²/s³.

  • Measurements every T = 0.5 s.

  • Tracks initialized via an information filter with 2 converted measurements.


  • The positional RMSE of three different tracking algorithms. The CKF used 5th-order points.

  • The CKF has the best RMSE performance.

  • The NEES of three different tracking algorithms.

  • The EKF is bad; the CKF is the best over time; converted measurements are initially the best.

  • Common algorithms for assigning measurements to targets are charted in the slides.

  • We focus on non random finite set (RFS)-based single scan approaches.


Topics considered are:

  • 1. The Likelihood Function.

  • 2. Naïve Nearest Neighbor, the Score Function, and Global Nearest Neighbor (GNN)

  • 3. Probabilistic Data Association (PDA) and Joint Probabilistic Data Association (JPDA) variants

  • Consider one known target with a Gaussian prediction x̂k|k−1, Pk|k−1 with a 100% detection probability and with NM measurements present.

  • Which measurement should be assigned to the target?

  • Single-scan data association algorithms make this decision based only on the current state prediction x̂k|k−1, Pk|k−1.

  • Multiple-scan data association algorithms look at multiple sets of measurements.

  • Let Hp be a matrix so Hpx extracts the position components of a Cartesian state.

  • Given Cartesian-converted measurements z̄1, …, z̄NM, one might assign the ith one such that

    i = arg minⱼ ‖z̄ⱼ − Hp x̂k|k−1‖²

  • This is usually bad:

    • Measurements are more accurate in range than cross range.

      • Cross-range becomes worse farther away from sensor, as illustrated (monostatic).

    • The shape of the uncertainty region of the state can matter.

      • Target ellipse crosses multiple range cells in image.

  • One cannot convert the state to the measurement coordinate system and use a similar l2 norm.

    • Mixing units (e.g. range, angle, and even range rate) makes no sense.

  • Valid distance measures can be derived from likelihood functions and likelihood ratios.

    • Another reason that measurement covariance matrices matter.

  • Let Zk−1 be the set of all measurements up to discrete time k − 1 and Θk−1 be the information of which measurements are assigned to the track up to time k − 1.

  • A valid cost function is the likelihood p(z|Zk−1, Θk−1).

  • Written out, the likelihood of the ith measurement:

    p(zᵢ | Zk−1, Θk−1) = N(zᵢ; ẑk|k−1, Sᵢ)

  • The innovation covariance Sᵢ depends on the covariance matrix Rᵢ of the ith measurement.

  • Taking the negative logarithm of the likelihood and dropping the normalizing constant terms and 1/2 scale factor, one has a Mahalanobis distance:

    (zᵢ − ẑk|k−1)ᵀ Sᵢ⁻¹ (zᵢ − ẑk|k−1)

  • From the mathematics section, we know that Mahalanobis distances can be used for chi-squared testing to determine whether measurements can even be considered valid.

    • The exclusion of measurements from possible assignments is gating.
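Gating can be sketched as exactly this chi-squared test on the Mahalanobis distance (the γp threshold is the one tabulated in the mathematics section; the covariance and measurement values are illustrative):

```python
# Sketch of gating: keep a measurement only if its Mahalanobis
# distance to the predicted measurement passes a chi-squared test.
import numpy as np
from scipy.stats import chi2

def gate(z, z_pred, S, p=0.999):
    """True if measurement z gates with prediction z_pred given
    innovation covariance S at confidence level p."""
    v = z - z_pred
    d2 = v @ np.linalg.solve(S, v)          # Mahalanobis distance
    return d2 <= chi2.ppf(p, df=len(z))     # gamma_p threshold

S = np.diag([4.0, 1.0])
print(gate(np.array([2.0, 0.5]), np.zeros(2), S))   # True
print(gate(np.array([20.0, 5.0]), np.zeros(2), S))  # False
```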

  • For multiple targets, one is tempted to assign the highest likelihood measurement to each target.

  • In the above scenario, both targets would be assigned to measurement m1.

  • Naïve nearest neighbor leads to track coalescence and ultimately, needless track loss.

  • A practical algorithm must assign measurements jointly across targets, accounting for missed detections.

  • Naïve nearest neighbor is one of the options in singleScanUpdate in the TCL.

  • We want to derive a cost function (a score function) that can be used for multiple target assignment.

  • The exponential of the score function derived in the unabridged slides here is computed in makeStandardLRMatHyps and makeStandardCartOnlyLRMatHyps in the TCL.

  • Under many standard assumptions, the marginal change in the log-likelihood for assigning a measurement is

    ΔΛt,i = ln( PD N(zᵢ; ẑᵗk|k−1, St,i) / λ )
    • ẑᵗk|k−1 is the predicted measurement from the tth target,

    • St,i is the innovation covariance for the ith measurement and tth target,

    • λ is the spatial clutter density and PD the detection probability.

  • The term ΔΛt,i is the marginal score function for single-frame assignment.

  • Summing the marginals for a full target-measurement assignment, one forms the full score function Λ(θ) for a scan.

  • When using a converted-measurement filter, the units of the measurement likelihood N(zᵢ; ẑᵗk|k−1, St,i) are Cartesian, but the units of λ are usually in the radar’s local coordinates.

  • The proper conversion of λ to Cartesian coordinates yields a different λ at every point.

    • Cartesian λ is higher closer to the sensor.

  • The Cartesian version of λ given λ in the measurement coordinate system is

    λCart(z) = λ / |det J(z)|, where J is the Jacobian of the measurement-to-Cartesian conversion evaluated at z.

  • In the TCL, necessary Jacobians are in “Coordinate Systems/Jacobians/Converted Jacobians” and include calcRuvConvJacob and calcPolarConvJacob, among others.

  • One could assign measurements to targets and false alarms by choosing the assignment θ that maximizes the score function.

  • How many valid assignments are there for m measurements and NT targets?

    Σₙ₌₀^min(m, NT) (NT choose n) (m choose n) n!

  • Suppose there are 3000 measurements and targets, and no false alarms or missed detections.

    • There are 3000! ≈ 4.14 × 10^9130 hypotheses.

    • This is about one googol (10^100) raised to 91.3.

  • There are 3000! ≈ 4.14 × 10^9130 hypotheses, but only 3000² = 9 × 10⁶ marginal hypotheses (values of ΔΛt,i).

  • The efficient solution is formulated as a GNN assignment (2D assignment) problem:

    maximize Σt Σi Ct,i xt,i over the xt,i
    subject to
    Σi xt,i = 1 for each row t,  Σt xt,i ≤ 1 for each column i,
    xt,i ∈ {0, 1}

  • NR = NT and NC = NT + m, number of measurements plus missed detection hypotheses.

  • Each target gets its own missed detection hypotheses; costs for other targets’ hypotheses are −∞.

  • To use the algorithm note that the cost matrix takes the form

    C = [ΔΛ | D], where ΔΛ is the NT × m matrix of marginal score values and D is an NT × NT diagonal matrix of missed-detection scores with −∞ off the diagonal.

  • 2D assignment is a binary integer programming problem.

  • A polynomial-time solution is implemented as assign2D and kBest2DAssign in the TCL.
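The same 2D assignment can be sketched with SciPy's polynomial-time Hungarian-algorithm solver standing in for the TCL's assign2D (the score values and matrix layout below are illustrative):

```python
# Sketch of GNN 2D assignment on a small score matrix.
import numpy as np
from scipy.optimize import linear_sum_assignment

# Rows: 2 targets. Columns: 3 measurements plus one missed-detection
# column per target; a large negative value blocks the other target's
# missed-detection hypothesis (standing in for -infinity).
NEG = -1e9
C = np.array([[4.0, 1.0, 0.2, -1.0,  NEG],
              [0.5, 3.0, 0.1,  NEG, -1.0]])
rows, cols = linear_sum_assignment(C, maximize=True)
print(list(zip(rows, cols)))  # target 0 -> measurement 0, target 1 -> 1
```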

  • The GNN algorithm is a maximum-likelihood approach.

  • An alternative is to use the expected value over all possible target-measurement assignments.

  • For a single target, the expected value and the covariance of the estimate are called probabilistic data association (PDA).

  • For multiple targets, it is called Joint Probabilistic Data Association (JPDA).

  • Variants of the PDA and JPDA are implemented in singleScanUpdate in the TCL.

  • For the tth target, the JPDA update is

    x̂ᵗk|k = Σ_{i=0}^{m} βi,t x̂^{i,t}_{k|k}
    Pᵗk|k = Σ_{i=0}^{m} βi,t [ P^{i,t}_{k|k} + (x̂^{i,t}_{k|k} − x̂ᵗk|k)(x̂^{i,t}_{k|k} − x̂ᵗk|k)ᵀ ]
    x̂^{0,t}_{k|k} = x̂ᵗk|k−1 (the missed-detection hypothesis keeps the prediction)
    • βi,t is the probability of assigning measurement i to target t (0 is a missed detection).

    • Superscripts of i and t indicate measurement and target hypotheses.

    • Ip is information on the (assumed Gaussian) prior estimates.

  • The literature often uses a simpler expression for Pᵗk|k that is not quadratic in form and is subject to finite precision errors.

  • Assumptions going into the PDA/JPDA are that the prior distributions on all targets are Gaussian.

  • The covariance cross terms between targets are not zero, but are omitted.

  • The hardest part of the PDA/JPDA is the computation of the β values.

  • Gating and clustering are important parts of a large-scale JPDA implementation.

  • In the above figure, measurements are said to gate with a target if the target’s uncertainty ellipse overlaps them.

    • In practice, use a chi-squared test on the Mahalanobis distance.

  • There are three clusters of targets and measurements.

    • 1. Target t1 is in a cluster with m4.

    • 2. Targets t2 and t3 (linked by m2) cluster with m1, m2, and m3.

    • 3. Target t4 is in a cluster with m6.

  • Brute-force gating and likelihood evaluation is implemented in the TCL via the makeStandardLRMatHyps and makeStandardCartOnlyLRMatHyps functions.

  • Clustering can be computationally efficiently performed using disjoint sets, an obscure Computer Science data structure.

  • Disjoint sets for clustering are implemented in the DisjointSetM and DisjointSet classes in the TCL; DisjointSet keeps track of only targets in clusters; DisjointSetM keeps track of targets and measurements in clusters.
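A disjoint-set (union-find) structure is only a few lines; the Python sketch below is loosely modeled on the role the DisjointSet class plays (not its actual interface), merging targets that share gated measurements into clusters:

```python
# Sketch of clustering via a disjoint-set (union-find) structure:
# targets that share a gated measurement end up in the same set.
class DisjointSet:
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, i):
        while self.parent[i] != i:          # path-halving find
            self.parent[i] = self.parent[self.parent[i]]
            i = self.parent[i]
        return i
    def union(self, i, j):
        self.parent[self.find(i)] = self.find(j)

# Targets t0..t3; t1 and t2 share a gated measurement, so they merge.
ds = DisjointSet(4)
ds.union(1, 2)
clusters = {t: ds.find(t) for t in range(4)}
print(clusters)  # t1 and t2 share a root; t0 and t3 stay alone
```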

  • An illustration of how separate clusters can be processed independently.

  • Gg is the set of targets and measurements in the gth cluster.

  • When the β terms must be computed exactly, two approaches shall be considered:

    • 1. Via brute-force evaluation of all joint association events.

    • 2. Via matrix permanents.

  • The matrix permanent approach is faster, but brute force is necessary to derive some JPDAF variants.

  • Consider a matrix of likelihoods C whose entries cj,k are the non-normalized probabilities of assigning each measurement (or a missed-detection hypothesis) to each target.

  • The normalized expression for the β terms can be rewritten directly in terms of likelihoods using elements of C:

    00024_PSISDG10633_106330P_page_36_4.jpg

  • The expression simplifies to

    βj,k = cj,k · perm(C̄j,k) / perm(C)

    where C̄j,k is the matrix C after removing row j and column k.

  • The matrix permanent cannot be evaluated in polynomial time unless P=NP.

  • Efficient exponential complexity algorithms exist. In the TCL, the function perm implements an efficient algorithm.
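For reference, an exact permanent via Ryser's formula fits in a few lines of Python (a readable sketch, not the optimized routine the TCL's perm provides):

```python
# Sketch of an exact matrix permanent via Ryser's formula,
# O(2^n * n^2) rather than the naive O(n * n!) expansion.
from itertools import product

def perm(M):
    n = len(M)
    total = 0.0
    for subset in product((0, 1), repeat=n):   # all column subsets
        k = sum(subset)
        if k == 0:
            continue
        prod = 1.0
        for row in M:                          # product of row sums
            prod *= sum(v for v, s in zip(row, subset) if s)
        total += (-1)**(n - k) * prod          # inclusion-exclusion sign
    return total

print(perm([[1.0, 2.0], [3.0, 4.0]]))  # 10.0 = 1*4 + 2*3
```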

  • Functions to explicitly compute the β values are implemented in the calc2DAssignmentProbs function in the TCL.

  • Many techniques to approximate β values exist and are implemented in calc2DAssignmentProbsApprox in the TCL.

  • Methods to do the complete PDA and JPDA update are given in singleScanUpdate in the TCL.

  • However, one usually uses a variant of the JPDA algorithm rather than the JPDA algorithm itself.

  • Consider two targets whose states consist only of scalar position and have been stacked.

  • Suppose that the joint PDF for the two targets is

    p(x₁, x₂) = ½ N([x₁, x₂]ᵀ; [1, −1]ᵀ, P) + ½ N([x₁, x₂]ᵀ; [−1, 1]ᵀ, P)

  • One target is located at +1 and one target is located at −1, but we do not know which.

  • E{x} = 0, where no target is located.

    • Identity uncertainty causes track coalescence!

  • Coalescence is not a “bias”.

  • Coalescence is the result of using the expected value given uncertain identity.

  • The Set JPDAF, the GNN-JPDA and the JPDA* can reduce coalescence.

  • The GNN-JPDA is simple:

    • 1. Determine the measurement to use with a GNN filter, giving x̂k|k.

    • 2. Compute Pk|k as in the JPDA, using the GNN estimate as the mean x̂k|k.

  • The hard assignment avoids coalescence.

  • Computing Pk|k as a MSE matrix improves covariance consistency/reduces track loss.

  • Available as an option in singleScanUpdate in the TCL with exact and approximate βs.

  • The brute-force computation of the βs had loops:

    • 1. Choose how many targets are observed.

    • 2. Choose which targets are observed.

    • 3. Choose which measurements originated from targets.

    • 4. Permute all associations of observed targets to target-originated measurements.

  • The JPDA* is the same as the JPDA except in the innermost loop, only the maximum likelihood permutation is used.

    • Has the smoothing of the expected value.

    • The hard decision gets rid of identity uncertainty: Resistant to coalescence.

  • Use calcStarBetasBF for the βs in the TCL. Available as an option in singleScanUpdate in the TCL.

  • A 2D example of the JPDA* including gating and clustering is given in demo2DDataAssociation in “Sample Code/Basic Tracking Example” in the TCL.

  • A sample run is shown above. Tracks were started from two cued measurements.

  • Estimated tracks: Red. True track: Dashed black. Detections: Blue. Very resistant to false alarms.

  • The GNN and JPDA algorithms only update established tracks.

  • Most practical systems require the ability to start and terminate tracks.

  • Two main categories of algorithms exist for single-scan data association approaches:

    • Cascaded Logic Trackers

      • Confirmed-tracks, pre-tracks and hard decisions for initiation and termination.

    • Integrated Trackers

      • Lots of targets, each with a probability of existing.

  • Multiple types of cascaded logic trackers exist.

  • There are confirmed tracks and candidate tracks.

    • Sometimes pre-tracks too.

  • Scores usually updated via GNN assignments.

  • Measurements not in GNN assignments go on to the next stage.

  • Creation, promotion and deletion of tracks in purple-outlined boxes.

  • Integrated trackers maintain a probability of target existence with each possible target.

  • Usually, a track is not considered firm until its existence probability exceeds a threshold.

  • A track is not terminated until its existence probability goes below a lower threshold.

  • Measurement update implemented in the singleScanUpdateWithExistence function in the TCL.

  • A rigorous derivation of the JIPDA class of filters is usually done using finite set statistics.

  • A proper coverage of finite set statistics is beyond the scope of this presentation.

  • An example of a minimal end-to-end GNN-JIPDAF in 2D is given in demo2DIntegratedDataAssociation in “Sample Code/Basic Tracking Examples” in the TCL.

  • A plot of a run of the sample code with the detections and found tracks (green) and true tracks (red) is shown above for the simple two-target scenario.

  • Gaussian approximations and Poisson clutter are widely used.

  • Tracking algorithms need consistent measurement covariance matrices. Cross terms between range and range rate can matter.

  • The Kalman filter comes from a Bayesian update of a linear dynamic model and a linear measurement.

  • The EKF and CKF use Taylor series and cubature approximations to solve difficult integrals in an approximate nonlinear Kalman filter.

  • Approaches to measurement conversion with consistent covariances include using Taylor series and cubature approximations to solve difficult integrals.

  • The GNN filter is a maximum likelihood filter for data association.

  • The JPDA is an MMSE (expected value) filter for data association.

  • One typically uses a variant of the JPDA, because the expected value is undesirable given target identity uncertainty.

  • Cascaded logic and integrated additions to GNN and JPDA filter variants allow for track initiation and termination.

  • Lots of free, commented Matlab code for tracking can be found at https://github.com/USNavalResearchLaboratory/TrackerComponentLibrary which is also http://www.trackercomponentlibrary.com

© (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
David F. Crouse "A crash course in basic single-scan target tracking (abridged)", Proc. SPIE 10633, Radar Sensor Technology XXII, 106330P (4 May 2018); https://doi.org/10.1117/12.2307567