This PDF file contains the front matter associated with SPIE Proceedings Volume 7667, including the Title Page, Copyright information, Table of Contents, Introduction, and the Conference Committee listing.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks.
You are receiving this notice because your organization may not have SPIE eBooks access.*
*Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions.
To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
This paper presents a local feature-based method for matching facial sketch images to face photographs, which
is the first known feature-based method for performing such matching. Starting with a training set of sketch to
photo correspondences (i.e. a set of sketch and photo images of the same subjects), we demonstrate the ability
to match sketches to photos: (1) directly using SIFT feature descriptors, (2) in a "common representation" that
measures the similarity between a sketch and photo by their distance from the training set of sketch/photo pairs,
and (3) by fusing the previous two methods. For both matching methods, the first step is to sample SIFT feature
descriptors uniformly across all the sketch and photo images. In direct matching, we simply measure the distance
of the SIFT descriptors between sketches and photos. In common representation matching, the distance between
the descriptor vectors of the probe sketches and gallery photos at each local sample point is measured. This
results in a vector of distances across the sketch or photo image to each member of the training basis. Further
recognition improvements are shown by score-level fusion of the two sketch matchers. Compared with published
sketch to photo matching algorithms, experimental results demonstrate improved matching performance using
the presented feature-based methods.
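As a minimal illustration of the two matching modes described above (not the authors' implementation; SIFT descriptors are stood in for by plain feature vectors, and Euclidean distance is assumed):

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def common_representation(descriptor, training_descriptors):
    """Re-express a descriptor as its vector of distances to each
    member of the training basis."""
    return [euclidean(descriptor, t) for t in training_descriptors]

def match_score(sketch_desc, photo_desc, training_sketches, training_photos):
    # Direct matching: distance between the raw descriptors.
    direct = euclidean(sketch_desc, photo_desc)
    # Common-representation matching: compare each image's distance
    # profile against its half of the training sketch/photo pairs.
    cr_sketch = common_representation(sketch_desc, training_sketches)
    cr_photo = common_representation(photo_desc, training_photos)
    return direct, euclidean(cr_sketch, cr_photo)
```

The two returned distances could then be combined by score-level fusion, as the abstract describes.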
A reliable thermal face recognition system can enhance national security applications such as prevention of
terrorism, surveillance, monitoring and tracking, especially at nighttime. The system can be applied at airports, customs,
or high-alert facilities (e.g., nuclear power plants) 24 hours a day. In this paper, we propose a novel face recognition
approach utilizing thermal (long-wave infrared) face images that can automatically identify a subject at both daytime and
nighttime. With a properly acquired thermal image (as a query image) from the monitoring zone, the following processes are
employed: normalization and denoising, face detection, face alignment, face masking, Gabor wavelet transform, face
pattern word (FPW) creation, and face identification by similarity measure (Hamming distance). If eyeglasses are present
on a subject's face, an eyeglasses mask is automatically extracted from the query face image and then applied to
all FPWs being compared (no further transforms are needed). A high identification rate (97.44% with Top-1 match) has been achieved
on our preliminary face dataset (of 39 subjects) with the proposed approach, regardless of operating time and glasses-wearing
condition.
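The masked Hamming-distance comparison of face pattern words might be sketched as follows (a simplified stand-in: FPWs are plain bit lists, and the mask format is an assumption of this sketch):

```python
def masked_hamming(fpw_query, fpw_gallery, mask):
    """Normalized Hamming distance between two binary face pattern
    words, skipping positions covered by the eyeglasses mask
    (mask[i] == 1 marks an occluded bit)."""
    valid = [(q, g) for q, g, m in zip(fpw_query, fpw_gallery, mask) if m == 0]
    if not valid:
        return 1.0  # nothing comparable: treat as maximally distant
    return sum(q != g for q, g in valid) / len(valid)
```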
Human faces are smooth and symmetrical, making superquadrics a good choice for representation and normalization.
We present a novel approach to parameterize 3D faces using the powerful superquadric model in combination with an
Eigen decomposition which represents the finer features of faces. The superquadric fit also provides axes of symmetry
that yield a normalized face coordinate space necessary for applying PCA. Results of fitting on our data set, of two scans
each from 107 people, show reliable representation for yaw, pitch, and roll, with average rotations on the order of 10⁻³
radians about each axis.
Parameterization can be used to partition the search space into smaller bins, thus effectively reducing the search
space complexity for matching and recognition. We show that it is possible to create about 20-40 clusters with as few
as 30 parameters. The accuracy of the clustering algorithm, in some cases, is as high as 90%. We believe this approach
to indexing 3D faces is an interesting extension to existing literature.
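A possible sketch of the parameter-based binning idea, assuming the fitted parameters form a plain numeric vector and using a hypothetical fixed bin width (the paper's clustering algorithm is not specified here):

```python
def parameter_bin(params, bin_width=0.5):
    """Quantize a face's parameter vector into a coarse bin index so
    that matching only searches faces falling in the same bin."""
    return tuple(int(p // bin_width) for p in params)

def build_index(gallery, bin_width=0.5):
    """gallery maps face id -> parameter vector; returns bin -> ids."""
    index = {}
    for face_id, params in gallery.items():
        index.setdefault(parameter_bin(params, bin_width), []).append(face_id)
    return index
```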
A new face recognition algorithm has been proposed which is robust to variations in pose, expression, illumination and
occlusions such as sunglasses. The algorithm is motivated by the Edit Distance used to determine the similarity between
strings of one dimensional data such as DNA and text. The key to this approach is how to extend the concept of an Edit
Distance on one-dimensional data to two-dimensional image data. The algorithm is based on mapping one image into
another and using the characteristics of the mapping to determine a two-dimensional Pictorial-Edit Distance or P-Edit
Distance. We show how the properties of the mapping are similar to insertion, deletion and substitution errors defined in
an Edit Distance. This algorithm is particularly well suited for face recognition in uncontrolled environments such as
stand-off and other surveillance applications. We will describe an entire system designed for face recognition at a
distance including face detection, pose estimation, multi-sample fusion of video frames and identification. Here we
describe how the algorithm is used for face recognition at a distance, present some initial results, and describe future
research directions.
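For reference, the one-dimensional Edit Distance that the P-Edit Distance generalizes can be computed with the classic dynamic program:

```python
def edit_distance(a, b):
    """Levenshtein distance between two 1-D sequences, counting the
    insertion, deletion, and substitution errors named above."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]
```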
We report the development of a face recognition system which operates in the same way as humans in
that it is capable of recognizing a number of people, while rejecting everybody else as strangers. While
humans do it routinely, a particularly challenging aspect of the problem of open-world face recognition
has been the question of rejecting previously unseen faces as unfamiliar. Our approach can handle
previously unseen faces; it is based on identifying and enclosing the region(s) in the human face space
which belong to the target person(s).
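One minimal way to enclose a region of face space and reject outsiders is a centroid-plus-radius model (a sketch only; the paper's region model may be richer, and the slack factor is an assumption):

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def enroll(samples):
    """Enclose a target person's region of face space by a centroid
    and the radius covering the enrolment samples."""
    dim = len(samples[0])
    center = [sum(s[k] for s in samples) / len(samples) for k in range(dim)]
    radius = max(dist(s, center) for s in samples)
    return center, radius

def accept(probe, center, radius, slack=1.1):
    """Accept only probes inside the enclosed region; everything
    outside is rejected as a stranger, even if never seen before."""
    return dist(probe, center) <= radius * slack
```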
Automatic feature extraction in latent fingerprints is a challenging problem due to the poor quality of most latents, such as unclear ridge structures, overlapping lines and letters, and overlapping fingerprints. We propose a latent fingerprint enhancement algorithm that requires manually marked regions of interest (ROI) and singular points. The core of the proposed enhancement algorithm is a novel orientation field estimation algorithm, which fits an orientation field model to the coarse orientation field estimated from the skeleton output by a commercial fingerprint SDK. Experimental results on the NIST SD27 latent fingerprint database indicate that, by incorporating the proposed enhancement algorithm, the matching accuracy of the commercial matcher is significantly improved.
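The block-wise orientation estimation underlying such a coarse orientation field can be sketched with the standard doubled-angle gradient averaging (a generic textbook method, not necessarily the exact estimator used in the paper):

```python
import math

def coarse_orientation(gx, gy):
    """Least-squares ridge orientation of one image block from its
    pixel gradient components, via doubled-angle averaging (so
    opposite gradient directions reinforce instead of cancelling)."""
    vx = sum(2.0 * x * y for x, y in zip(gx, gy))
    vy = sum(x * x - y * y for x, y in zip(gx, gy))
    return 0.5 * math.atan2(vx, vy)
```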
In some applications such as field stations, disaster situations or similar conditions, it is desirable to have a contactless,
rugged means to collect fingerprint information. The approach described in this paper accelerates the
capture process by eliminating an otherwise necessary system and finger cleanup procedure, minimizes the chance of spreading
disease or contamination, and uses an innovative optical system able to provide rolled-equivalent fingerprint
information desirable for reliable 2D matching against existing databases. The approach captures high-resolution
fingerprints and 3D information simultaneously using a single camera. Liquid crystal polarization rotators
combined with birefringent elements provide the focus shift, and a depth-from-focus algorithm extracts the 3D data. This
imaging technique does not involve any moving parts, thus reducing cost and complexity of the system as well as
increasing its robustness. Data collection is expected to take less than 100 milliseconds, capturing all four-finger images
simultaneously to avoid sequencing errors. This paper describes the various options considered for contactless
fingerprint capture, and why the particular approach was ultimately chosen.
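The depth-from-focus step can be illustrated as follows, assuming a per-pixel focus measure (e.g., local contrast) has already been computed for each focal position:

```python
def depth_from_focus(stack):
    """stack[k][i][j] holds a per-pixel focus measure at focal
    position k; the recovered depth at each pixel is the focal
    position where the focus measure peaks."""
    rows, cols, positions = len(stack[0]), len(stack[0][0]), len(stack)
    return [[max(range(positions), key=lambda k: stack[k][i][j])
             for j in range(cols)]
            for i in range(rows)]
```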
Integrity of fingerprint data is essential to biometric and forensic applications. Accordingly, the FBI's Criminal Justice
Information Services (CJIS) Division has sponsored development of software tools to facilitate quality control functions
relative to maintaining its fingerprint data assets inherent to the Integrated Automated Fingerprint Identification System
(IAFIS) and Next Generation Identification (NGI). This paper introduces two such tools. The first FBI-sponsored
tool was developed by the National Institute of Standards and Technology (NIST) and examines and detects
the spectral signature of the ridge-flow structure characteristic of friction ridge skin. The Spectral Image
Validation/Verification (SIVV) utility differentiates fingerprints from non-fingerprints, including blank frames or
segmentation failures erroneously included in data; provides a "first look" at image quality; and can identify anomalies
in sample rates of scanned images. The SIVV utility might detect errors in individual 10-print fingerprints inaccurately
segmented from the flat, multi-finger image acquired by one of the automated collection systems increasing in
availability and usage. In such cases, the lost fingerprint can be recovered by re-segmentation from the now compressed
multi-finger image record. The second FBI-sponsored tool, CropCoeff, was developed by MITRE and thoroughly tested
by NIST. CropCoeff enables cropping of the replacement single print directly from the compressed data file, thus
avoiding decompression and recompression of images that might degrade fingerprint features necessary for matching.
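A toy version of the SIVV idea, checking a 1-D signal for a dominant non-DC spectral peak (the real utility analyzes 2-D image spectra; the naive DFT and the ratio threshold here are assumptions of this sketch):

```python
import cmath
import math

def power_spectrum(signal):
    """Naive DFT power spectrum, positive frequencies only."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * cmath.pi * f * t / n)
                    for t in range(n))) ** 2 / n
            for f in range(n // 2)]

def has_ridge_peak(signal, min_ratio=5.0):
    """Crude SIVV-style check: friction ridge structure produces a
    dominant non-DC spectral peak at the ridge frequency; a blank
    frame or non-fingerprint yields no such peak."""
    spec = power_spectrum(signal)
    baseline = sum(spec[1:]) / (len(spec) - 1)
    return max(spec[1:]) > min_ratio * baseline
```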
We discuss the problem of preserving the privacy of a digital face image stored in a central database. In the
proposed scheme, a private face image is dithered into two host face images such that it can be revealed only
when both host images are simultaneously available; at the same time, the individual host images do not reveal
the identity of the original image. In order to accomplish this, we appeal to the field of Visual Cryptography.
Experimental results confirm the following: (a) the possibility of hiding a private face image in two unrelated
host face images; (b) the successful matching of face images that are reconstructed by superimposing the host
images; and (c) the inability of the host images, known as sheets, to reveal the identity of the secret face image.
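A minimal (2,2) visual cryptography scheme on a binary signal conveys the idea (real sheets operate on images with pixel expansion; the bit-level encoding below is a simplification, and the RNG seed is arbitrary):

```python
import random

def make_shares(secret_bits, rng=random.Random(0)):
    """(2,2) visual cryptography: each share alone is uniformly
    random; stacking both shares reveals the secret."""
    share1, share2 = [], []
    for bit in secret_bits:
        a = rng.randint(0, 1)
        share1.append((a, 1 - a))
        # White pixel (0): identical subpixels; black (1): complementary.
        share2.append((a, 1 - a) if bit == 0 else (1 - a, a))
    return share1, share2

def stack(share1, share2):
    # Physical stacking corresponds to OR; a fully dark pair is black.
    return [int(all(x | y for x, y in zip(p, q)))
            for p, q in zip(share1, share2)]
```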
Biometric systems usually do not possess a cryptographic level of security: it has been deemed impossible to perform
biometric authentication in the encrypted domain because of the natural variability of biometric samples and the
intolerance of cryptography to even a single bit error. Encrypted biometric data need to be decrypted on authentication,
which creates privacy and security risks. On the other hand, the known solutions called "Biometric Encryption (BE)" or
"Fuzzy Extractors" can be cracked by various attacks, for example, by running offline a database of images against the
stored helper data in order to obtain a false match. In this paper, we present a novel approach which combines Biometric
Encryption with the classical Blum-Goldwasser cryptosystem. In the "Client - Service Provider (SP)" or in the "Client -
Database - SP" architecture, it is possible to keep the biometric data encrypted at all stages of storage and
authentication, so that the SP never has access to unencrypted biometric data. It is shown that this approach is suitable for
two of the most popular BE schemes, Fuzzy Commitment and Quantized Index Modulation (QIM). The approach has
clear practical advantages over biometric systems using "homomorphic encryption". Future work will deal with the
application of the proposed solution to one-to-many biometric systems.
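For context, the Fuzzy Commitment construction that the scheme builds on can be sketched with a simple repetition code (the actual system would use a stronger error-correcting code and keep the data under Blum-Goldwasser encryption throughout):

```python
def commit(biometric_bits, key_bits, rep=3):
    """Fuzzy commitment: XOR the biometric with an error-correcting
    codeword (a repetition code here) to form public helper data."""
    codeword = [b for b in key_bits for _ in range(rep)]
    return [b ^ c for b, c in zip(biometric_bits, codeword)]

def open_commitment(helper, probe_bits, rep=3):
    """XOR the probe with the helper data, then majority-decode each
    repetition group to absorb a limited number of bit errors."""
    noisy = [h ^ p for h, p in zip(helper, probe_bits)]
    return [int(sum(noisy[i:i + rep]) * 2 > rep)
            for i in range(0, len(noisy), rep)]
```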
Template protection techniques are used within biometric systems in order to protect the stored biometric
template against privacy and security threats. A great portion of template protection techniques are based
on extracting a key from or binding a key to a biometric sample. The achieved protection depends on the
size of the key and its closeness to being random. In the literature it can be observed that there is a large
variation in the reported key lengths at similar classification performance of the same template protection
system, even when based on the same biometric modality and database. In this work we determine the analytical
relationship between the system performance and the theoretical maximum key size given a biometric source
modeled by parallel Gaussian channels. We consider the case where the source capacity is evenly distributed
across all channels and the channels are independent. We also determine the effect of the parameters such as
the source capacity, the number of enrolment and verification samples, and the operating point selection on the
maximum key size. We show that a trade-off exists between the privacy protection of the biometric system and
its convenience for its users.
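A toy calculation of the maximum key size under the evenly distributed parallel-Gaussian-channel model (the sample-averaging effect on SNR is an assumption of this sketch, not the paper's exact relation):

```python
import math

def max_key_bits(num_channels, snr, n_samples=1):
    """Illustrative upper bound: total capacity of independent
    parallel Gaussian channels with capacity evenly distributed.
    Averaging n enrolment samples raises the effective SNR because
    the noise variance shrinks by 1/n."""
    per_channel = 0.5 * math.log2(1 + snr * n_samples)
    return num_channels * per_channel
```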
Fuzzy vault is a practical and promising scheme, which can protect biometric templates and perform secure
key management simultaneously. Aligning the query sample and the template sample in the encrypted domain
remains a challenging task in fingerprint-based fuzzy vault schemes. To some extent, all existing fingerprint
alignment methods in the encrypted domain have their own drawbacks, e.g., insufficient alignment accuracy
or information leakage through published helper data. In this paper, a novel fingerprint alignment method is
proposed, which integrates the fingerprint reference points and their neighboring region of interest (ROI) in a hierarchical
manner. The concept of mutual information (MI) from information theory is used to assess how well two
fingerprints coincide after alignment. The novel alignment method is applied to a fingerprint-based fuzzy
vault implementation. To limit information leakage, the orientation features of fingerprint minutiae
are discarded, and another distinguishing local feature, the inter-minutiae ridge count, is used in place of minutiae
orientation in the fingerprint-based fuzzy vault implementation. Experiments on FVC2002 DB2a are conducted
to show the merit of the proposed alignment method and the promising performance of the proposed fingerprint-based
fuzzy vault implementation.
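The mutual-information coincidence score can be illustrated on discrete feature sequences (how fingerprint regions are discretized into such sequences is outside the scope of this sketch):

```python
import math
from collections import Counter

def mutual_information(labels_a, labels_b):
    """MI between two equally long discrete feature sequences, usable
    as a coincidence score for a candidate alignment (higher = the
    two fingerprints agree more after alignment)."""
    n = len(labels_a)
    pa, pb = Counter(labels_a), Counter(labels_b)
    pab = Counter(zip(labels_a, labels_b))
    return sum((c / n) * math.log2((c / n) / ((pa[a] / n) * (pb[b] / n)))
               for (a, b), c in pab.items())
```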
In this paper, we present a central limit theorem (CLT) for the estimation of a false match rate for a single
matching system. The false match rate is often a significant factor in an evaluation of such a matching system. To
achieve the main result here we utilize the covariance/correlation structure for matching proposed by Schuckers.
Along with the main result we present an illustration of the methodology here on biometric authentication data
from Ross and Jain. This illustration is based on resampling match decisions for three different biometric modalities
(hand geometry, fingerprint, and face recognition) and shows that, as the number of matching pairs grows, the
sampling distribution of an FMR approaches a Gaussian distribution. These results suggest that statistical
inference for an FMR based upon a Gaussian distribution is appropriate.
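The resampling of match decisions can be sketched as a simple nonparametric bootstrap of the FMR estimate (a generic bootstrap, not the paper's correlation-aware procedure; the seed is arbitrary):

```python
import random

def fmr_bootstrap(decisions, n_boot=1000, rng=random.Random(0)):
    """Resample impostor match decisions (1 = false match, 0 =
    correct non-match) with replacement and return the bootstrap
    distribution of the false match rate estimate."""
    n = len(decisions)
    return [sum(rng.choice(decisions) for _ in range(n)) / n
            for _ in range(n_boot)]
```

As the number of match decisions grows, a histogram of the returned estimates approaches the Gaussian shape the theorem predicts.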
It is not uncommon for contemporary biometric systems to have more than one match below the matching
threshold, or to have two or more matches having close matching scores. This is especially true for those that store large
quantities of identities and/or are applied to measure loosely constrained biometric traits, such as in identification from
video or at a distance. Current biometric performance evaluation standards, however, are still largely based on measuring
single-score statistics such as False Match and False Non-Match rates and the trade-off curves based thereon. Such
methodology and reporting makes it impossible to investigate the risks and risk mitigation strategies associated with not
having a unique identifying score. To address the issue, Canada Border Services Agency has developed a novel modality-agnostic
multi-order performance analysis framework. The framework allows one to analyze the system performance at
several levels of detail, by defining the traditional single-score-based metrics as Order-1 analysis, and introducing
Order-2 and Order-3 analysis to permit the investigation of the system reliability and the confidence of its recognition decisions.
Implemented in a toolkit called C-BET (Comprehensive Biometrics Evaluation Toolkit), the framework has been applied
in a recent examination of state-of-the-art iris recognition systems, the results of which are presented, and is now
recommended to other agencies interested in testing and tuning biometric systems.
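An Order-2-style count of competing candidates might look like the following (a simplified reading of the framework, assuming distance scores where lower means a better match):

```python
def order2_counts(score_lists, threshold):
    """For each probe, count gallery candidates at or below the match
    threshold. Order-1 analysis keeps only the best score; an
    Order-2-style view also records how many plausible matches
    compete with it, exposing non-unique identification decisions."""
    return [sum(s <= threshold for s in scores) for scores in score_lists]
```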
Existing definitions for biometric testing and evaluation do not fully explain errors in a biometric system. This paper
provides a definitional framework for the Human Biometric-Sensor Interaction (HBSI) model. This paper proposes six
new definitions based around two classifications of presentations, erroneous and correct. The new terms are: defective
interaction (DI), concealed interaction (CI), false interaction (FI), failure to detect (FTD), failure to extract (FTX), and
successfully acquired samples (SAS). As with all definitions, the new terms require a modification to the general
biometric model developed by Mansfield and Wayman [1].
To evaluate the performance of fingerprint-image matching algorithms on large datasets, a receiver
operating characteristic (ROC) curve is applied. From the operational perspective, the true accept
rate (TAR) of the genuine scores at a specified false accept rate (FAR) of the impostor scores and/or
the equal error rate (EER) are often employed. Using the standard errors of these metrics, computed
with the nonparametric two-sample bootstrap based on our studies of bootstrap variability on large
fingerprint datasets, a significance test is performed to determine whether the difference between
the performance of one algorithm and a hypothesized value, or the difference between the
performances of two algorithms (with the correlation taken into account), is statistically significant.
In the case that the alternative hypothesis is accepted, the sign of the difference is employed to
determine which is better than the other. Examples are provided.
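The final significance decision reduces to a z-test on the bootstrap standard error; a minimal sketch (the 1.96 critical value assumes a two-sided 5% level):

```python
def z_statistic(diff, hypothesized, stderr):
    """Standardized difference between an observed performance gap
    (e.g., in TAR or EER) and a hypothesized value, given the
    bootstrap standard error of the difference."""
    return (diff - hypothesized) / stderr

def significant(diff, stderr, hypothesized=0.0, z_crit=1.96):
    """True when the difference is statistically significant; its
    sign then indicates which algorithm is better."""
    return abs(z_statistic(diff, hypothesized, stderr)) > z_crit
```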
Iris recognition technology has the potential to broaden to include non-ideal imaging situations, as well as scale to
national-level deployments. Hence, the study of population factors, subject intrinsics, and sensing contexts that can
individually or collectively impact iris recognition performance assumes increased importance. This presentation
summarizes recent research on a number of such "quality variables" and motivates the need for large data sets exhibiting
a variety of such non-idealities to adequately characterize performance.
Low-quality iris images, such as blurry, low-resolution images with poor illumination, create a challenge
for iris recognition systems. Therefore, efficient enhancement of iris images is needed in challenging
environments. We propose a new iris recognition algorithm for the enhancement of normalized iris images. Our
algorithm is based on logarithmic image processing (LIP) image enhancement, which is used as one of the three
stages in the enhancement process. Methods are tested on the MBGC database, comparing enrolled video iris
images from 124 subjects at 220-pixel resolution with query video portal images from 110 subjects at 120-pixel
resolution. Results from processing challenging MBGC iris data show significant improvement in the
performance of iris recognition algorithms in terms of equal error rates, compared to the original (unenhanced)
images and other fast image enhancement methods.
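The operation at the heart of a LIP-based enhancement stage can be sketched as scalar multiplication in the LIP model (the choice of lambda and the other two enhancement stages are assumptions left out of this sketch):

```python
def lip_scalar_multiply(pixel, lam, M=256.0):
    """Scalar multiplication in the logarithmic image processing
    (LIP) model: applies a model-consistent contrast change while
    keeping values in the bounded gray-tone range [0, M)."""
    return M - M * (1.0 - pixel / M) ** lam

def enhance(image, lam=0.7, M=256.0):
    """Apply the LIP contrast change to every pixel of an image."""
    return [[lip_scalar_multiply(p, lam, M) for p in row] for row in image]
```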
Conventional computer systems authenticate users only at the initial log-in session, which can be the cause of a critical
security flaw. To resolve this problem, systems need continuous user authentication methods that continuously monitor
and authenticate users based on some biometric trait(s). We propose a new method for continuous user authentication
based on a Webcam that monitors a logged in user's face and color of clothing. Our method can authenticate users
regardless of their posture in front of the workstation (laptop or PC). Previous methods for continuous user
authentication cannot authenticate users without biometric observation. To alleviate this requirement, our method uses
color information of users' clothing as an enrollment template in addition to their face information. The system cannot
pre-register the clothing color information because this information is not permanent. To deal with the problem, our
system automatically registers this information every time the user logs in and then fuses it with the conventional
(password) identification system. We report preliminary authentication results and future enhancements to the proposed
system.
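The clothing-color enrollment step can be illustrated with a coarse color histogram registered at login (the bin count and the intersection measure are assumptions of this sketch):

```python
def color_histogram(pixels, bins=8):
    """Coarse RGB histogram of the clothing region, registered as a
    temporary template each time the user logs in."""
    hist = [0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        hist[(r // step) * bins * bins + (g // step) * bins + (b // step)] += 1
    total = float(len(pixels))
    return [h / total for h in hist]

def histogram_similarity(h1, h2):
    # Histogram intersection: 1.0 for identical color distributions.
    return sum(min(a, b) for a, b in zip(h1, h2))
```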
Biometrics, such as fingerprint, iris scan, and face recognition, offer methods for identifying individuals based on a
unique physiological measurement. Recent studies indicate that a person's electrocardiogram (ECG) may also
provide a unique biometric signature. Several methods for processing ECG data have appeared in the literature and
most approaches rest on an initial detection and segmentation of the heartbeats. Various sources of noise, such as
sensor noise, poor sensor placement, or muscle movements, can degrade the ECG signal and introduce errors into
the heartbeat segmentation. This paper presents a screening technique for assessing the quality of each segmented
heartbeat. Using this technique, a higher quality signal can be extracted to support the identification task. We
demonstrate the benefits of this quality screening using a principal component technique known as eigenpulse. The
analysis confirms the improvement in performance attributable to the quality screening.
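A simplified version of the quality screen, using a single template heartbeat in place of a full eigenpulse basis (the paper projects onto principal components; a one-dimensional fit keeps this sketch self-contained):

```python
def quality_score(beat, mean_beat):
    """Residual energy after removing the best scalar fit of a
    template heartbeat from a segmented beat; a large residual marks
    a poorly segmented or noisy beat to be screened out."""
    num = sum(b * m for b, m in zip(beat, mean_beat))
    den = sum(m * m for m in mean_beat) or 1.0
    alpha = num / den
    return sum((b - alpha * m) ** 2 for b, m in zip(beat, mean_beat))
```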
Biometrics is described as the science of identifying people based on physical characteristics such as
fingerprints, facial features, hand geometry, iris patterns, palm prints, or voice. Notably, all of these
physical characteristics are visible or detectable from the exterior of the body. These external characteristics can be
lifted, photographed, copied, or recorded to gain unauthorized access to a biometric system. Individual humans are unique
internally, however, just as they are unique externally.
New biometric modalities have been developed which identify people based on their unique internal
characteristics. For example, "Boneprints™" use acoustic fields to scan the unique bone density pattern of a thumb
pressed on a small acoustic sensor. Thanks to advances in piezoelectric materials, the acoustic sensor can be placed in
virtually any device, such as a steering wheel, door handle, or keyboard. Similarly, "Imp-Prints™" measure the electrical
impedance patterns of a hand to identify or verify a person's identity. Small impedance sensors can be easily embedded
in devices such as smart cards, handles, or wall mounts.
These internal biometric modalities rely on physical characteristics which are not visible or photographable,
providing an added level of security. In addition, the acoustic and impedance methods can be combined with
physiologic measurements such as acoustic Doppler or impedance plethysmography, respectively, to obtain added verification
that the biometric pattern came from a living person. These new biometric modalities have the potential
to allay user concerns over protection of privacy, while providing a higher level of security.
We present a prototype video tracking and person categorization system that uses face and person soft biometric features
to tag people while tracking them in multiple camera views. Our approach takes advantage of the temporal aspect of video
by extracting and accumulating feasible soft biometric features for each person in every frame to build a dynamic soft
biometric feature list for each tracked person in surveillance videos. We developed algorithms for extracting face soft
biometric features to achieve gender and ethnicity classification and session soft biometric features to aid in camera
hand-off in surveillance videos with low resolution and uncontrolled illumination. To train and test our face soft
biometry algorithms, we collected over 1500 face images from both genders and three ethnicity groups with various
sizes, poses and illumination. These soft biometric feature extractors and classifiers are implemented on our existing
video content extraction platform to enhance video surveillance tasks. Our algorithms achieved promising results for
gender and ethnicity classification, and tracked person re-identification for camera hand-off on low to good quality
surveillance and broadcast videos. By utilizing the proposed system, a high-level description of each tracked person's soft
biometric data can be stored for later use, for example to provide categorical information about people, to create
database partitions that accelerate searches in response to user queries, and to track people between cameras.
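The per-frame accumulation described above can be sketched as follows; the class labels, attribute names, and votes are illustrative, and the per-frame gender/ethnicity classifiers are assumed to exist elsewhere:

```python
from collections import Counter

class SoftBiometricTrack:
    """Accumulates per-frame soft-biometric labels for one tracked person.

    Hypothetical sketch: only the accumulation and the consensus step are
    modeled; frame-level label extraction happens upstream.
    """

    def __init__(self, track_id):
        self.track_id = track_id
        self.votes = {"gender": Counter(), "ethnicity": Counter()}

    def add_frame(self, gender, ethnicity):
        # Each frame contributes one (possibly noisy) vote per attribute.
        self.votes["gender"][gender] += 1
        self.votes["ethnicity"][ethnicity] += 1

    def consensus(self):
        # The accumulated label is the most frequent per-frame prediction.
        return {attr: c.most_common(1)[0][0] for attr, c in self.votes.items()}

track = SoftBiometricTrack(track_id=7)
for g, e in [("F", "A"), ("F", "A"), ("M", "A")]:  # noisy per-frame outputs
    track.add_frame(g, e)
print(track.consensus())  # {'gender': 'F', 'ethnicity': 'A'}
```

Accumulating votes over many frames is what makes the per-track label robust to individual misclassified frames.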
The need for an automated surveillance system is pronounced at night when the capability of the human eye
to detect anomalies is reduced. While there have been significant efforts in the classification of individuals
using human metrology and gait, the majority of research assumes a day-time environment. The aim of this
study is to move beyond traditional image acquisition modalities and explore the issues of object detection and
human identification at night. To address these issues, a spatiotemporal gait curve that captures the shape
dynamics of a moving human silhouette is employed. Initially proposed by Wang et al., this representation
of the gait is expanded to incorporate modules for individual classification, backpack detection, and silhouette
restoration. Evaluation of these algorithms is conducted on the CASIA Night Gait Database, which includes 10
video sequences for each of 153 unique subjects. The video sequences were captured using a low resolution thermal
camera. Matching performance of the proposed algorithms is evaluated using a nearest neighbor classifier. The
outcome of this work is an efficient algorithm for backpack detection and human identification, and a basis for
further study in silhouette enhancement.
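The nearest-neighbor matching step can be sketched as below; the two-dimensional feature vectors stand in for real spatiotemporal gait signatures:

```python
import numpy as np

def nearest_neighbor_id(probe, gallery):
    """Assign the probe gait signature the identity of its closest
    gallery signature under Euclidean distance.

    gallery: dict mapping subject id -> feature vector (toy vectors here;
    real signatures would be spatiotemporal gait curves)."""
    ids = list(gallery)
    dists = [np.linalg.norm(probe - gallery[i]) for i in ids]
    return ids[int(np.argmin(dists))]

gallery = {"s1": np.array([0.1, 0.9]), "s2": np.array([0.8, 0.2])}
probe = np.array([0.15, 0.85])
print(nearest_neighbor_id(probe, gallery))  # s1
```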
In this paper we present a prototype for an automated deception detection system. Similar to polygraph examinations, we
attempt to take advantage of the theory that false answers will produce distinctive measurements in certain physiological
manifestations. We investigate the role of dynamic eye-based features such as eye closure/blinking and lateral movements
of the iris in detecting deceit. The features are recorded both when the test subjects are having non-threatening conversations
and when they are being interrogated about a crime they might have committed. The rates of the behavioral
changes are blindly clustered into two groups. Examining the clusters and their characteristics, we observe that the dynamic
features selected for deception detection show promising results with an overall deceptive/non-deceptive prediction
rate of 71.43% from a study consisting of 28 subjects.
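A minimal sketch of blindly clustering behavioral-change rates into two groups, using a simple 1-D k-means; the paper does not specify its clustering algorithm, and the rates below are made-up values, not the study's data:

```python
import numpy as np

def two_means_1d(rates, iters=20):
    """Split 1-D behavioral-change rates into two clusters without labels
    (a minimal k-means sketch of the unsupervised grouping above)."""
    rates = np.asarray(rates, dtype=float)
    c = np.array([rates.min(), rates.max()])  # init centroids at extremes
    for _ in range(iters):
        # assign each rate to its nearest centroid
        labels = (np.abs(rates - c[0]) > np.abs(rates - c[1])).astype(int)
        for k in (0, 1):
            if np.any(labels == k):
                c[k] = rates[labels == k].mean()
    return labels, c

# e.g. blink-rate changes between baseline chat and interrogation
labels, centroids = two_means_1d([0.1, 0.2, 0.15, 0.9, 1.1, 0.95])
print(labels)  # [0 0 0 1 1 1]
```

Examining which cluster a subject falls into, relative to the cluster characteristics, is then the basis for the deceptive/non-deceptive prediction.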
In forensic authentication, one aims to identify the perpetrator among a series of suspects or distractors. A
fundamental problem in any recognition system that aims for identification of subjects in a natural scene is
the lack of constraints on viewing and imaging conditions. In forensic applications, identification proves even
more challenging, since most surveillance footage is of abysmal quality. In this context, robust methods for pose
estimation are paramount. In this paper we will therefore present a new pose estimation strategy for very low
quality footage. Our approach uses 3D-2D registration of a textured 3D face model with the surveillance image
to obtain accurate far field pose alignment. Starting from an inaccurate initial estimate, the technique uses
novel similarity measures based on the monogenic signal to guide a pose optimization process. We will illustrate
the descriptive strength of the introduced similarity measures by using them directly as a recognition metric.
Through validation, using both real and synthetic surveillance footage, our pose estimation method is shown to
be accurate, and robust to lighting changes and image degradation.
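As a toy illustration of driving pose alignment with a scalar image similarity, the sketch below uses plain normalized cross-correlation between a rendered model view and the frame; the paper's monogenic-signal measures are more robust to lighting and degradation and are not reproduced here:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between a rendered 3D-model view and
    the surveillance frame. A deliberately simple stand-in for the paper's
    monogenic-signal similarity measures, showing how a scalar similarity
    can score candidate poses during optimization."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

frame = np.array([[0.0, 1.0], [1.0, 0.0]])
render_good = frame.copy()      # hypothetical render at the correct pose
render_bad = 1.0 - frame        # hypothetical render at a wrong pose
print(ncc(frame, render_good), ncc(frame, render_bad))  # 1.0 -1.0
```

A pose optimizer would repeatedly re-render the textured model and step toward the pose that maximizes this score.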
The increase in twin births has created a requirement for biometric systems to accurately determine the identity
of a person who has an identical twin. The discriminability of some of the identical twin biometric traits,
such as fingerprints, iris, and palmprints, is supported by the anatomy and formation process of each biometric
characteristic, which indicate that these traits differ even in identical twins due to a number of random factors during
the gestation period. For the first time, we collected multiple biometric traits (fingerprint, face, and iris) of
66 families of twins, and we performed unimodal and multimodal matching experiments to assess the ability
of biometric systems in distinguishing identical twins. Our experiments show that unimodal finger biometric
systems can distinguish two different persons who are not identical twins better than they can distinguish identical
twins; this difference is much larger in the face biometric system and it is not significant in the iris biometric
system. Multimodal biometric systems that combine different units of the same biometric modality (e.g., multiple
fingerprints, or left and right irises) show the best performance among all the unimodal and multimodal biometric
systems, achieving an almost perfect separation between genuine and impostor distributions.
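Score-level fusion of multiple units of one modality can be sketched as min-max normalization followed by the sum rule; the scores and subjects below are invented for illustration:

```python
def fuse_scores(score_sets):
    """Sum-rule fusion of match scores from several biometric units
    (e.g. two fingers, or left and right irises). Scores are min-max
    normalized per unit before summing."""
    fused = None
    for scores in score_sets:
        lo, hi = min(scores), max(scores)
        norm = [(s - lo) / (hi - lo) for s in scores]
        fused = norm if fused is None else [a + b for a, b in zip(fused, norm)]
    return fused

# scores of one probe against three gallery subjects, per unit
left_iris  = [0.91, 0.40, 0.35]
right_iris = [0.88, 0.42, 0.30]
fused = fuse_scores([left_iris, right_iris])
print(fused.index(max(fused)))  # 0: subject 0 is the best fused match
```

Combining units sharpens the separation between genuine and impostor score distributions, which is the effect reported above.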
In this work we present a multibiometric face recognition framework based on combining information from 2D with 3D
facial features. The 3D biometrics channel is protected by a privacy-enhancing technology, which uses error-correcting
codes and cryptographic primitives to safeguard the privacy of the users of the biometric system while
enabling accurate matching through fusion with 2D. Experiments are conducted to compare the matching performance of
such multibiometric systems with the individual biometric channels working alone and with unprotected multibiometric
systems. The results show that the proposed hybrid system incorporating template protection matches, and in some cases
exceeds, the performance of the corresponding unprotected equivalents, in addition to offering privacy
protection.
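One well-known way to protect a biometric template with error-correcting codes is the fuzzy commitment scheme; the sketch below uses a toy repetition code and a 9-bit template (the paper does not state which ECC or template size it uses):

```python
import hashlib
import secrets

def rep_encode(bits, n=3):
    # repetition code: each key bit is repeated n times
    return [b for b in bits for _ in range(n)]

def rep_decode(bits, n=3):
    # majority vote within each n-bit block corrects up to (n-1)//2 errors
    return [int(sum(bits[i:i + n]) > n // 2) for i in range(0, len(bits), n)]

def commit(template_bits):
    """Fuzzy-commitment sketch: bind a random key to a binary template.
    Only a hash of the key and the XOR offset are stored, not the template."""
    key = [secrets.randbelow(2) for _ in range(len(template_bits) // 3)]
    offset = [c ^ t for c, t in zip(rep_encode(key), template_bits)]
    return hashlib.sha256(bytes(key)).hexdigest(), offset

def verify(stored, probe_bits):
    key_hash, offset = stored
    key = rep_decode([o ^ p for o, p in zip(offset, probe_bits)])
    return hashlib.sha256(bytes(key)).hexdigest() == key_hash

enrolled = [1, 0, 1, 1, 0, 1, 0, 0, 1]   # toy 9-bit template
noisy    = [1, 0, 1, 1, 1, 1, 0, 0, 1]   # one bit flipped by sensor noise
stored = commit(enrolled)
print(verify(stored, noisy))  # True: one error per 3-bit block is corrected
```

Real deployments use much stronger codes (e.g. BCH) over long feature vectors; the point is that matching tolerates biometric noise while the stored data reveals neither the key nor the template.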
While fusion can be accomplished at multiple levels in a multibiometric system, score level fusion is commonly used as it
offers a good trade-off between fusion complexity and data availability. However, missing scores affect the implementation
of several biometric fusion rules. While there are several techniques for handling missing data, the imputation scheme,
which replaces missing values with predicted values, is preferred since this scheme can be followed by a standard fusion
scheme designed for complete data. This paper compares the performance of three imputation methods: Imputation
via Maximum Likelihood Estimation (MLE), Multiple Imputation (MI) and Random Draw Imputation through Gaussian
Mixture Model estimation (RD GMM). A novel method called Hot-deck GMM is also introduced and exhibits markedly
better performance than the other methods because of its ability to preserve the local structure of the score distribution.
Experiments on the MSU dataset indicate the robustness of the schemes in handling missing scores at various missing data
rates.
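The "impute, then fuse as complete data" pipeline can be sketched with simple mean imputation, a much cruder baseline than the MLE/MI/GMM schemes compared above; the score values are illustrative:

```python
def impute_and_fuse(score_rows):
    """Baseline sketch: replace a missing matcher score (None) with that
    matcher's mean over observed rows, then apply sum-rule fusion as if
    the data were complete. The paper's MLE/MI/GMM and Hot-deck GMM
    schemes model the score distribution far more faithfully."""
    n_matchers = len(score_rows[0])
    means = []
    for j in range(n_matchers):
        obs = [r[j] for r in score_rows if r[j] is not None]
        means.append(sum(obs) / len(obs))
    completed = [[means[j] if r[j] is None else r[j] for j in range(n_matchers)]
                 for r in score_rows]
    return [sum(r) for r in completed]  # sum-rule fused score per row

rows = [[0.9, 0.8], [0.2, None], [0.4, 0.6]]  # matcher 2 score missing
print(impute_and_fuse(rows))
```

Hot-deck-style methods improve on this by drawing the replacement from scores that are locally similar, preserving the local structure of the score distribution rather than collapsing it to a mean.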
Active learning methods have gained popularity to reduce human effort in annotating examples in order to train a classifier. When faced with large amounts of data, the active learning algorithm automatically selects appropriate data samples that are most relevant to train the classifier. Typical active learning approaches select one data instance (one face image, for example) in one iteration of the algorithm, and the classifier is trained with the selected data instances, one-by-one. Instead, there have been very recent efforts in active learning to select a batch of examples for labeling at each instant rather than selecting a single example and updating the hypothesis. In this work, a novel batch mode active learning scheme based on numerical optimization of an appropriate function has been applied to the biometric recognition problem. In problems such as face recognition, real-world data is often generated in batches, such as frames of video in a capture session. In such scenarios, selecting the most appropriate data instances from these batches (which usually have a high redundancy) to train a classifier is a significant challenge. In this work, the instance selection is formulated as a mathematical optimization problem and the framework is extended to handle learning from multiple sources of information. The results obtained on the widely used NIST Multiple Biometric Grand Challenge (MBGC) and VidTIMIT biometric datasets corroborate the potential of this method in being used for real-world biometric recognition problems, when there are large amounts of data.
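Batch selection that balances informativeness against redundancy can be sketched with a greedy heuristic; this is a simple stand-in for the paper's numerical-optimization formulation, with invented features and uncertainty scores:

```python
import numpy as np

def select_batch(X, uncertainty, k, lam=1.0):
    """Greedy batch-mode active learning sketch: pick k samples that are
    uncertain yet mutually diverse, so redundant frames from one capture
    session are not all selected.

    X: (n, d) candidate features; uncertainty: (n,) classifier uncertainty."""
    chosen = []
    for _ in range(k):
        best, best_val = None, -np.inf
        for i in range(len(X)):
            if i in chosen:
                continue
            # reward uncertainty, penalize similarity to already-chosen items
            redundancy = max((float(X[i] @ X[j]) for j in chosen), default=0.0)
            val = uncertainty[i] - lam * redundancy
            if val > best_val:
                best, best_val = i, val
        chosen.append(best)
    return chosen

# two near-duplicate frames (0, 1) and one distinct frame (2)
X = np.array([[1.0, 0.0], [0.99, 0.01], [0.0, 1.0]])
u = np.array([0.9, 0.85, 0.5])
print(select_batch(X, u, k=2))  # [0, 2]: skips the redundant frame 1
```

This captures why batch-mode selection matters for video: consecutive frames are highly redundant, so selecting by uncertainty alone would waste labeling effort on near-duplicates.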
We assess the impact of the H.264 video codec on the match performance of automated face recognition in
surveillance and mobile video applications. A set of two hundred access control (90 pixel inter-pupilary distance)
and distance surveillance (45 pixel inter-pupilary distance) videos taken under non-ideal imaging and
facial recognition (e.g., pose, illumination, and expression) conditions were matched using two commercial face
recognition engines in the studies. The first study evaluated automated face recognition performance on access
control and distance surveillance videos at CIF and VGA resolutions using the H.264 baseline profile at nine
bitrates ranging from 8 kbps to 2048 kbps. In our experiments, video signals could be compressed down to
128 kbps before a significant drop in face recognition performance occurred. The second study evaluated automated
face recognition on mobile devices at QCIF, iPhone, and Android resolutions for each of the H.264 PDA profiles.
Rank one match performance, cumulative match scores, and failure to enroll rates are reported.
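The rank-one and cumulative match scores reported in such studies can be computed as follows; the score matrix here is invented, not the study's data:

```python
def cmc(score_matrix, true_ids, gallery_ids):
    """Cumulative match characteristic sketch: for each probe, rank the
    gallery subjects by match score and record where the true identity
    lands; entry r of the result is the fraction of probes whose true
    identity appears at rank <= r+1."""
    n = len(score_matrix)
    hits = [0] * len(gallery_ids)
    for scores, truth in zip(score_matrix, true_ids):
        ranked = [g for _, g in sorted(zip(scores, gallery_ids), reverse=True)]
        hits[ranked.index(truth)] += 1
    out, total = [], 0
    for h in hits:
        total += h
        out.append(total / n)
    return out  # out[0] is the rank-one match rate

scores = [[0.9, 0.2, 0.1],   # probe of subject 'a': matched at rank 1
          [0.3, 0.4, 0.8],   # probe of subject 'c': matched at rank 1
          [0.5, 0.6, 0.4]]   # probe of subject 'a': matched at rank 2
print(cmc(scores, ["a", "c", "a"], ["a", "b", "c"]))
```

Tracking how this curve degrades as the bitrate drops is exactly the kind of comparison the two studies above report.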
As a new biometric authentication technology, ear recognition still faces many unresolved problems, one of which is
occlusion. This paper deals with ear recognition from partially occluded ear images. First, the whole 2D image
is divided into sub-windows. Then, Neighborhood Preserving Embedding is used for feature extraction on each
sub-window, and we select the most discriminative sub-windows according to their recognition rates. Third, a multi-matcher
fusion approach is used for recognition with partially occluded images. Experiments on the USTB ear image database
illustrate that only a few sub-windows are needed to represent the most meaningful regions of the ear, and that the
multi-matcher model achieves a higher recognition rate than using the whole image for recognition.
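The sub-window decomposition and score fusion can be sketched as below; negative Euclidean distance stands in for the NPE-based matcher, and the grid size and images are toy values:

```python
import numpy as np

def subwindow_scores(img, probe, grid=(2, 2)):
    """Split gallery and probe images into a grid and score each
    sub-window independently. Negative L2 distance is a stand-in for
    the NPE-based per-window matcher described above."""
    h, w = img.shape
    gh, gw = h // grid[0], w // grid[1]
    scores = []
    for r in range(grid[0]):
        for c in range(grid[1]):
            a = img[r * gh:(r + 1) * gh, c * gw:(c + 1) * gw]
            b = probe[r * gh:(r + 1) * gh, c * gw:(c + 1) * gw]
            scores.append(-float(np.linalg.norm(a - b)))
    return scores

gallery = np.ones((4, 4))
probe = gallery.copy()
probe[2:, 2:] = 0.0               # occlude the bottom-right window
s = subwindow_scores(gallery, probe)
unoccluded = s[:3]                # drop the occluded window before fusion
print(sum(unoccluded) / len(unoccluded))  # 0.0: perfect match elsewhere
```

This is why the multi-matcher model outperforms whole-image matching under occlusion: the corrupted window can be excluded (or down-weighted) instead of contaminating a single global score.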