One of the challenges in evaluating multi-object video detection, tracking and classification systems is the scarcity of publicly available data sets with which to compare different systems. However, the measures of performance for tracking and
classification are different. Data sets that are suitable for evaluating tracking systems may not be appropriate for
classification. Tracking video data sets typically only have ground truth track IDs, while classification video data sets
only have ground truth class-label IDs. The former identifies the same object over multiple frames, while the latter
identifies the type of object in individual frames. This paper describes an enhancement of the ground truth meta-data for the DARPA Neovision2 Tower data set that allows the evaluation of both tracking and classification. The ground truth data
sets presented in this paper contain unique object IDs across 5 different classes of object (Car, Bus, Truck, Person,
Cyclist) for 24 videos of 871 image frames each. In addition to the object IDs and class labels, the ground truth data also
contains the original bounding box coordinates together with new bounding boxes in instances where un-annotated
objects were present. The unique IDs are maintained during occlusions between multiple objects or when objects re-enter
the field of view. This will provide: a solid foundation for evaluating the performance of multi-object tracking of
different types of objects, a straightforward comparison of tracking system performance using the standard Multiple Object Tracking (MOT) framework, and evaluation of classification performance using the Neovision2 metrics. These data have been hosted publicly.
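The combined annotation schema described above (a unique object ID, a class label and a bounding box per frame) can be sketched as a simple record type, together with the bounding-box overlap (IoU) used when matching detections to ground truth in MOT-style evaluation. The released annotation files' exact layout is not specified here, so the field names below are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class GroundTruthRecord:
    # Hypothetical field names; the released files may use a different layout.
    frame: int        # frame index within the 871-frame video
    object_id: int    # unique track ID, stable across occlusions/re-entries
    class_label: str  # one of: Car, Bus, Truck, Person, Cyclist
    x1: float; y1: float; x2: float; y2: float  # bounding box corners

def iou(a: GroundTruthRecord, b: GroundTruthRecord) -> float:
    """Intersection-over-union between two axis-aligned bounding boxes."""
    ix1, iy1 = max(a.x1, b.x1), max(a.y1, b.y1)
    ix2, iy2 = min(a.x2, b.x2), min(a.y2, b.y2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a.x2 - a.x1) * (a.y2 - a.y1)
    area_b = (b.x2 - b.x1) * (b.y2 - b.y1)
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0
```

An IoU threshold (commonly 0.5) on this overlap is what the MOT framework uses to decide whether a tracker's output box corresponds to a ground-truth object.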
J. L. Dournaux, A. Abchiche, D. Allan, J. P. Amans, T. P. Armstrong, A. Balzer, D. Berge, C. Boisson, J.-J. Bousquet, A. Brown, M. Bryan, G. Buchholtz, P. Chadwick, H. Costantini, G. Cotter, L. Dangeon, M. Daniel, A. De Franco, F. De Frondat, D. Dumas, J. P. Ernenwein, G. Fasola, S. Funk, J. Gironnet, J. Graham, T. Greenshaw, B. Hameau, O. Hervet, N. Hidaka, J.A. Hinton, J.M. Huet, I. Jégouzo, T. Jogler, T. Kawashima, M. Kraush, J. Lapington, P. Laporte, J. Lefaucheur, S. Markoff, T. Melse, L. Mohrmann, P. Molyneux, S. Nolan, A. Okumura, J. Osborne, R. Parsons, S. Rosen, D. Ross, G. Rowell, C. Rulten, Y. Sato, F. Sayède, J. Schmoll, H. Schoorlemmer, M. Servillat, H. Sol, V. Stamatescu, M. Stephan, R. Stuik, J. Sykes, H. Tajima, J. Thornhill, L. Tibaldo, C. Trichard, J. Vink, J. Watson, R. White, N. Yamane, A. Zech, A. Zink
The GCT (Gamma-ray Cherenkov Telescope) is a dual-mirror prototype of the Small-Sized Telescopes proposed for the Cherenkov Telescope Array (CTA), built by an Australian-Dutch-French-German-Indian-Japanese-UK-US consortium. The integration of this end-to-end telescope was achieved in 2015. On-site tests and measurements of the first Cherenkov images on the night sky began in November 2015. This contribution describes the telescope and the plans for pre-production and large-scale production within CTA.
A. Brown, A. Abchiche, D. Allan, J.-P. Amans, T. Armstrong, A. Balzer, D. Berge, C. Boisson, J.-J. Bousquet, M. Bryan, G. Buchholtz, P. Chadwick, H. Costantini, G. Cotter, M. Daniel, A. De Franco, F. de Frondat, J.-L. Dournaux, D. Dumas, G Fasola, S. Funk, J. Gironnet, J. Graham, T. Greenshaw, O. Hervet, N. Hidaka, J. Hinton, J.-M. Huet, I. Jégouzo, T. Jogler, M. Kraus, J. Lapington, P. Laporte, J. Lefaucheur, S. Markoff, T. Melse, L. Mohrmann, P. Molyneux, S. Nolan, A. Okumura, J. Osborne, R. Parsons, S. Rosen, D. Ross, G. Rowell, Y. Sato, F. Sayede, J. Schmoll, H. Schoorlemmer, M. Servillat, H. Sol, V. Stamatescu, M. Stephan, R. Stuik, J. Sykes, H. Tajima, J. Thornhill, L. Tibaldo, C. Trichard, J. Vink, J. Watson, R. White, N. Yamane, A. Zech, A. Zink, J. Zorn
The Gamma-ray Cherenkov Telescope (GCT) is proposed for the Small-Sized Telescope component of the Cherenkov Telescope Array (CTA). GCT's dual-mirror Schwarzschild-Couder (SC) optical system allows the use of a compact camera with small form-factor photosensors. The GCT camera is ~0.4 m in diameter and has 2048 pixels; each pixel has a ~0.2° angular size, resulting in a wide field of view. The GCT camera is designed for high performance at low cost, housing 32 front-end electronics modules that provide full waveform information for all of the camera's 2048 pixels. The first GCT camera prototype, CHEC-M, was commissioned during 2015, culminating in the first Cherenkov images recorded by an SC telescope and the first light of a CTA prototype. In this contribution we give a detailed description of the GCT camera and present preliminary results from CHEC-M's commissioning.
We investigate the application of learned convolutional filters in multi-object visual tracking. The filters were learned in both a supervised and an unsupervised manner from image data using artificial neural networks. This work follows recent results in the field of machine learning that demonstrate the use of learned filters for enhanced object detection and classification. Here we employ a track-before-detect approach to multi-object tracking, in which tracking guides the detection process. The object detector provides a probabilistic input image calculated by selecting from features obtained using banks of generative or discriminative learned filters. We present a systematic evaluation of these convolutional filters using a real-world data set that examines their performance as generic object detectors.
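The "probabilistic input image" step described above can be sketched as follows: convolve the frame with a bank of 2-D kernels and squash the combined responses into a per-pixel object likelihood for the track-before-detect stage. In the paper the filters are learned by artificial neural networks; the stand-in kernels and the max-then-sigmoid combination rule below are illustrative assumptions, not the authors' exact pipeline:

```python
import numpy as np

def detection_probability_map(image, filters):
    """Combine filter-bank responses into a pseudo-probability map.

    image   : 2-D grayscale array.
    filters : list of small 2-D kernels, all the same size (stand-ins
              for the learned convolutional filters).
    """
    h, w = image.shape
    responses = []
    for k in filters:
        kh, kw = k.shape
        # 'Valid' cross-correlation of the image with one kernel.
        out = np.zeros((h - kh + 1, w - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(image[i:i + kh, j:j + kw] * k)
        responses.append(out)
    # Take the strongest response across the bank at each pixel and
    # squash it to (0, 1) as a crude per-pixel object likelihood.
    stacked = np.stack(responses)
    return 1.0 / (1.0 + np.exp(-stacked.max(axis=0)))
```

The resulting map plays the role of the probabilistic input image: the tracker reasons over these per-pixel likelihoods rather than over hard detections.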
In this paper we investigate the problem of fusing a set of features for a discriminative visual tracking algorithm, where good features are those that best discriminate an object from the local background. Using a principled Mutual Information approach, we introduce a novel online feature selection algorithm that preserves discriminative features while reducing redundant information. Applying this algorithm to a discriminative visual tracking system, we experimentally demonstrate improved tracking performance on standard data sets.
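The Mutual Information idea above, keeping features that discriminate the object while discarding redundant ones, can be sketched as a greedy relevance-minus-redundancy rule over discretised features. This is an illustrative mRMR-style sketch under assumed discrete feature values, not the paper's exact online algorithm:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """MI between two equal-length discrete sequences, in nats."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        # p(x,y) * log(p(x,y) / (p(x) p(y))), with counts scaled by n.
        mi += (c / n) * math.log(c * n / (px[x] * py[y]))
    return mi

def select_features(features, labels, k):
    """Greedily keep up to k features with high MI to the labels
    (discriminative) and low MI to already-chosen features (non-redundant).

    features : dict mapping feature name -> discrete value sequence.
    labels   : object-vs-background label sequence.
    """
    chosen = []
    remaining = list(features)
    while remaining and len(chosen) < k:
        def score(name):
            relevance = mutual_information(features[name], labels)
            redundancy = sum(mutual_information(features[name], features[c])
                             for c in chosen) / max(len(chosen), 1)
            return relevance - redundancy
        best = max(remaining, key=score)
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

Run online, a rule of this shape lets a discriminative tracker re-rank its feature pool every few frames as the local background changes.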