Since the derivation of the PHD filter, a number of track management schemes have been proposed to adapt it for determining the tracks of multiple objects. Nevertheless, such approaches can still fail when targets are too close together or are crossing. In this paper, we propose to improve the tracking by maintaining a set of locally-based trackers and managing the tracks with an assignment method. The new algorithm is based on a Gaussian mixture implementation of the CPHD filter: neighbouring Gaussians are clustered before the update step and each cluster is updated with the CPHD filter update. To remain computationally efficient, the algorithm includes gating techniques for the local trackers and constructs local cardinality distributions for the targets and clutter within the gated regions. An improvement in multi-object estimation performance is observed on both synthetic and real IR data scenarios.
KEYWORDS: Probability theory, Sensors, Target recognition, Data fusion, Chemical elements, Target detection, Surveillance systems, Systems modeling, Kinematics, Data modeling
Surveillance systems typically perform target identification by fusing target ID declarations supplied by individual sensors with a prior knowledge base. Target ID declarations are usually uncertain in the sense that: (1) their associated confidence factor is less than unity; (2) they are non-specific (the true hypothesis belongs to a subset A of the universe Θ). Prior knowledge is typically represented by a set of possibly uncertain implication rules. An example of such a rule is: if the target is a Boeing 737 then it is neutral or friendly with probability 0.8. The uncertainty again manifests itself here in two ways: the rule holds only with a certain probability (typically less than 1.0) and the rule is non-specific (neutral or friendly). The paper describes how the fusion of ID declarations and the implication rules can be handled elegantly within the framework of belief function theory as understood by the transferable belief model (TBM). Two illustrative examples are worked out in detail in order to clarify the theory.
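A toy illustration of the kind of combination the TBM performs is sketched below; the frame, the masses and the rule encoding are invented for illustration and are not the paper's worked examples:

```python
from itertools import product

# Illustrative frame of discernment for the target's allegiance.
THETA = frozenset({"friend", "neutral", "hostile"})

def conjunctive_combine(m1, m2):
    """Unnormalised conjunctive rule of the TBM: mass assigned to the empty set
    (conflict) is retained rather than renormalised away as in Dempster's rule."""
    out = {}
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        out[a & b] = out.get(a & b, 0.0) + wa * wb
    return out

# Sensor ID declaration: "friend" with confidence 0.6, the remainder on ignorance.
m_sensor = {frozenset({"friend"}): 0.6, THETA: 0.4}

# Belief induced by the rule "if the target is a Boeing 737 then it is neutral or
# friendly with probability 0.8", given the platform was recognised as a Boeing 737.
m_rule = {frozenset({"friend", "neutral"}): 0.8, THETA: 0.2}

fused = conjunctive_combine(m_sensor, m_rule)
# -> {'friend'}: 0.60, {'friend','neutral'}: 0.32, THETA: 0.08 (no conflict here)
```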
KEYWORDS: Particles, Detection and tracking algorithms, Signal to noise ratio, Sensors, Particle filters, Tin, Digital filtering, Computer simulations, Data modeling, Target recognition
In this paper, a solution to the TENET nonlinear filtering challenge is presented. The proposed approach is based on particle filtering techniques. Particle methods have already been used in this context, but our method improves over previous work in several ways: a better importance sampling distribution, variance reduction through Rao-Blackwellisation, and so on. We demonstrate the efficiency of our algorithm through simulation.
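The paper's tailored importance distribution and Rao-Blackwellisation are not reproduced in the abstract; for orientation, a plain bootstrap (SIR) step, on which such refinements are built, looks roughly as follows (the propagate and likelihood callables and the resampling threshold are placeholders):

```python
import numpy as np

def sir_step(particles, weights, propagate, likelihood, z, rng):
    """One bootstrap (SIR) particle filter step: propagate, reweight, resample.
    A better importance density and Rao-Blackwellisation would replace the
    transition-prior proposal used here for brevity."""
    particles = propagate(particles, rng)             # sample x_k ~ p(x_k | x_{k-1})
    weights = weights * likelihood(z, particles)      # w_k ∝ w_{k-1} * p(z_k | x_k)
    weights /= weights.sum()
    n = len(weights)
    if 1.0 / np.sum(weights ** 2) < n / 2:            # resample when the ESS drops
        idx = rng.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights
```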
This paper describes an application of sequential Monte Carlo estimation (particle filtering) to the problem of tracking targets occasionally hidden in the blind Doppler zones of a radar. A particle filter that incorporates prior knowledge of the blind Doppler zone limits has been designed. The simulation results suggest a significant improvement in track continuity over the standard Extended Kalman filter. As an operationally viable solution, a hybrid tracker is envisaged that can switch between the EKF (with possible built-in data association logic) and the particle filter, depending on the tracking conditions.
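One way such prior knowledge can enter a particle filter is through a state-dependent detection probability, so that a missed detection barely penalises particles whose predicted Doppler lies inside a blind zone. The sketch below is an illustration of that idea only; the zone limits, detection probabilities and interface are assumptions, not the paper's design:

```python
import numpy as np

def detection_probability(radial_velocity, blind_zones, pd_clear=0.9, pd_blind=0.05):
    """Assumed detection probability: low inside the blind Doppler zones (known from
    the radar PRF), high outside them."""
    for v_lo, v_hi in blind_zones:
        if v_lo <= radial_velocity <= v_hi:
            return pd_blind
    return pd_clear

def missed_detection_update(weights, radial_velocities, blind_zones):
    """When no detection arrives, particles predicted to sit in a blind zone keep
    most of their weight, letting the track coast through the Doppler notch."""
    pd = np.array([detection_probability(v, blind_zones) for v in radial_velocities])
    w = weights * (1.0 - pd)
    return w / w.sum()
```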
The problem is on-line target state estimation from range and range-rate measurements. The motivation for this work comes from the need to track a target in the ISAR mode of the DSTO Ingara Multi-Mode Radar during an extended data collection. The paper makes three main contributions. First, the theoretical Cramér-Rao bound for the performance of an unbiased range-only tracking algorithm is derived. Second, three algorithms are developed and compared to the theoretical bounds of performance. Third, the developed techniques are applied to real data collected in the recent trials with the Ingara radar.
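The bound itself is not reproduced in the abstract; for reference, derivations of this kind typically instantiate the recursive posterior Cramér-Rao bound (Tichavský-Muravchik-Nehorai form), sketched here in general notation rather than for the paper's specific range/range-rate model:

```latex
% Recursive posterior Cramér-Rao bound (general form); the paper instantiates a
% bound of this kind for its range/range-rate measurement model.
J_{k+1} = D_k^{22} - D_k^{21}\bigl(J_k + D_k^{11}\bigr)^{-1} D_k^{12},
\qquad \operatorname{cov}(\hat{x}_k - x_k) \succeq J_k^{-1},
\quad\text{where}
D_k^{11} = \mathbb{E}\bigl[-\nabla_{x_k}\nabla_{x_k}^{\mathsf T}\log p(x_{k+1}\mid x_k)\bigr],
\qquad
D_k^{12} = \bigl(D_k^{21}\bigr)^{\mathsf T}
         = \mathbb{E}\bigl[-\nabla_{x_k}\nabla_{x_{k+1}}^{\mathsf T}\log p(x_{k+1}\mid x_k)\bigr],
D_k^{22} = \mathbb{E}\bigl[-\nabla_{x_{k+1}}\nabla_{x_{k+1}}^{\mathsf T}\log p(x_{k+1}\mid x_k)\bigr]
         + \mathbb{E}\bigl[-\nabla_{x_{k+1}}\nabla_{x_{k+1}}^{\mathsf T}\log p(z_{k+1}\mid x_{k+1})\bigr].
```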
The paper derives a deferred logic data association algorithm based on the mixture reduction approach originally due to Salmond [SPIE vol. 1305, 1990]. The novelty of the proposed algorithm is that it provides recursive formulae for both data association and target existence (confidence) estimation, thus allowing automatic track initiation and termination. The track initiation performance of the proposed filter is investigated by computer simulations. It is observed that at moderately high levels of clutter density the proposed filter initiates tracks more reliably than the corresponding PDA filter. An extension of the proposed filter to the multi-target case is also presented. In addition, the paper compares the track maintenance performance of the MR algorithm with an MHT implementation.
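The basic operation in Salmond's mixture reduction is a moment-preserving merge of Gaussian components; a minimal sketch of merging one pair is given below (the existence-probability recursion that constitutes the paper's novelty is not shown):

```python
import numpy as np

def merge_pair(w1, m1, P1, w2, m2, P2):
    """Moment-preserving merge of two weighted Gaussian components: the result keeps
    the pair's total weight, mean and covariance (including the spread of the means)."""
    w = w1 + w2
    m = (w1 * m1 + w2 * m2) / w
    d1, d2 = m1 - m, m2 - m
    P = (w1 * (P1 + np.outer(d1, d1)) + w2 * (P2 + np.outer(d2, d2))) / w
    return w, m, P
```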
The tracking performance of the Particle Filter is compared with that of the Range-Parameterised EKF (RPEKF) and the Modified Polar coordinate EKF (MPEKF) for a single-sensor angle-only tracking problem with ownship maneuver. The Particle Filter represents the required density of the state vector as a set of random samples with associated weights. The filter is implemented for recursive estimation and works by propagating the set of samples and then updating the associated weights according to each newly received measurement. The RPEKF, which is essentially a weighted sum of multiple EKF outputs, and the MPEKF are known for their robust angle-only tracking performance. This comparative study shows that the Particle Filter performs best, although the RPEKF is only marginally worse. The superior performance of the Particle Filter is particularly evident under high noise conditions, where the EKF-type trackers generally diverge. The Particle Filter and the RPEKF are also found to be robust to the level of a priori knowledge of initial target range, whereas the MPEKF exhibits degraded performance for poor initialisation.
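For orientation, the range parameterisation underlying the RPEKF is usually set up by splitting the a priori range interval into geometrically spaced sub-intervals so that each sub-filter sees a roughly constant relative range error; the sketch below illustrates that initialisation only (the number of sub-filters and the spread factor are assumptions):

```python
import numpy as np

def range_parameterised_init(r_min, r_max, n_filters=6, sigma_factor=0.3):
    """Split the a priori range interval [r_min, r_max] into geometrically spaced
    sub-intervals and assign each sub-filter a mean and sigma; one EKF is then run
    per sub-interval and the track output is their weighted sum."""
    rho = (r_max / r_min) ** (1.0 / n_filters)         # common ratio of the sub-intervals
    edges = r_min * rho ** np.arange(n_filters + 1)
    means = 0.5 * (edges[:-1] + edges[1:])
    sigmas = sigma_factor * (edges[1:] - edges[:-1])   # assumed spread within each sub-interval
    weights = np.full(n_filters, 1.0 / n_filters)
    return means, sigmas, weights
```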
Data produced by a reproducible source contains redundant information which allows seismic inversion to simultaneously determine the high-frequency fluctuation in the p-wave velocity (or reflectivity) as well as the input energy source. The seismogram model is the plane-wave convolutional model derived from the constant density, variable sound velocity acoustic wave equation. The first step is to analyze this linearized model when the background velocity is constant. Then perturbations in the seismic data stably determine corresponding perturbations in the source and reflectivity. The stability of this determination improves as the slowness aperture over which the data is defined increases. Further, the normal operator for the convolutional seismogram model is continuous with respect to velocity. Thus the stability result for constant background velocities may be extended to more realistic background velocity models which vary slowly and smoothly with depth. The theory above is illustrated with four synthetic numerical examples derived from marine data. The examples indicate that for a wide slowness aperture, inversion is very effective in establishing the true shape of the reflectivity and the shape and location of the compactly supported energy source. As this aperture window narrows, the corresponding inversion-estimated model still describes the data quite accurately, but the inversion is not able to recover the original two distinct parameters.
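In schematic form, the plane-wave convolutional seismogram model referred to above writes the data at slowness p as the time convolution of the energy source with the plane-wave reflectivity (generic notation, not the paper's):

```latex
% Plane-wave convolutional model (schematic): the trace at slowness p is the time
% convolution of the source wavelet s with the reflectivity r(., p).
d(t, p) = \int s(t - \tau)\, r(\tau, p)\, d\tau = \bigl(s * r(\cdot, p)\bigr)(t),
```

and the inversion determines the pair (s, r) jointly from d(t, p) observed over an aperture of slownesses p.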
The paper addresses two questions of time-varying higher-order spectra (TVHOS) whose solutions are essential for the further development of these methods and their applicability to a wide range of situations. They are: (1) defining cumulant TVHOS and (2) predicting the behavior of TVHOS of composite signals. It is shown first that the cumulant Wigner-Ville trispectrum (as a particular member of the cumulant TVHOS) can preserve the essential properties of cumulant higher-order spectra (e.g., it eliminates additive Gaussian noise) and at the same time is able to characterize the time-variations of the signal's spectral (i.e., trispectral) content. Secondly, when dealing with composite FM signals, a special kind of `non-oscillating' cross-term appears in the moment TVHOS time-frequency subspace. These cross-terms cannot be eliminated by smoothing the WVT, but rather by appropriate slicing of the full time-multi-frequency space.
KEYWORDS: Fermium, Frequency modulation, Interference (communication), Time-frequency analysis, Signal to noise ratio, Modulation, Signal processing, Statistical analysis, Signal analyzers, Amplitude modulation
This paper consists of two parts. The first part reviews a class of higher-order Wigner-Ville distributions projected onto a single frequency axis. This class is referred to as polynomial Wigner-Ville distributions (PWVDs). For random processes, the expected value of a PWVD represents a time-varying higher-order moment spectrum. The second part defines and studies a particular member of a class of time-varying higher-order spectra based on PWVDs, namely a reduced Wigner-Ville trispectrum (RWVT). This novel time-frequency representation is shown to be a very efficient tool for the analysis of FM signals affected by Gaussian amplitude modulation. In the paper, we present a statistical comparison of the RWVT- and WVD-based instantaneous frequency estimates for linear FM signals in the absence and in the presence of multiplicative noise. For multicomponent signals, the RWVT has to be re-defined and calculated in the full multi-lag domain.
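As a generic baseline for the comparison mentioned above (this is a plain WVD-peak instantaneous frequency estimator, not the RWVT defined in the paper; the FFT length and the test chirp are arbitrary), one can sketch:

```python
import numpy as np
from scipy.signal import hilbert

def wvd_if_estimate(x, n_freq=256):
    """Crude instantaneous-frequency estimate taken as the per-time peak of a
    discrete pseudo Wigner-Ville distribution."""
    z = hilbert(np.asarray(x, dtype=float))            # analytic signal
    N = len(z)
    if_hat = np.zeros(N)
    for n in range(N):
        L = min(n, N - 1 - n, n_freq // 2 - 1)         # symmetric lag support at this time
        m = np.arange(-L, L + 1)
        kernel = np.zeros(n_freq, dtype=complex)
        kernel[m % n_freq] = z[n + m] * np.conj(z[n - m])
        spec = np.fft.fft(kernel).real                 # the WVD is real-valued
        if_hat[n] = np.argmax(spec) / (2.0 * n_freq)   # WVD frequency axis: f = k / (2 n_freq)
    return if_hat

# Example: linear FM sweeping from 0.1 to 0.35 cycles/sample; away from the signal
# edges the estimate should follow the linear frequency law.
t = np.arange(512)
f_hat = wvd_if_estimate(np.cos(2 * np.pi * (0.1 * t + 0.5 * (0.25 / 512) * t ** 2)))
```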