This PDF file contains the front matter associated with SPIE
Proceedings Volume 6982, including the Title Page, Copyright
information, Table of Contents, Introduction, and the Conference Committee listing.
Developments in communication systems have led to the emergence of new wireless technologies such as WiMAX, 3G and 4G. These expansions can provide new opportunities for further advances and exciting applications, in particular if we can integrate different technology standards into heterogeneous wireless networks. WiMAX and WiFi
wireless networks are two examples of different standard technologies that cannot communicate with each other using
existing protocols. These two standards differ in frequency, protocol and management mechanisms, and hence to
construct a heterogeneous network using WiFi and WiMAX devices these differences need to be harmonised and
resolved. Synchronization is the first step in such a process. In this paper we propose a private synchronization technique that enables WiFi and WiMAX devices to communicate with each other. Precise time synchronization at microsecond resolution is required. The CPU clock is used as a reference for this private synchronization.
Our private synchronization solution is based on interposing an extra thin layer between MAC and PHY layers in both
WiFi and WiMAX. This extra thin layer will assign alternate synchronization and other duties to the two systems.
Existing ad hoc routing protocols are mostly efficiency-driven. Malicious nodes can easily impair the performance of wireless ad hoc networks through actions such as packet dropping or black hole attacks without being detected. It is virtually impossible to identify such malicious nodes before they attack; it is therefore sensible to base their detection on the post-route-discovery stage, i.e. when packets are transmitted
on discovered routes. In this paper we shall review existing techniques for secure routing and propose to use credibility-based route-finding protocols. Each node would monitor its neighbors' pattern of delivering packets and regularly update their "credibility" according to certain criteria. The level of trust in any route will be based on the credits associated with the neighbors belonging to the discovered route. We shall evaluate the performance of the proposed scheme by modifying
our simulation system so that each node has a dynamic changing "credit list" for its neighbors' behavior. We shall
conduct a series of simulations with and without the proposed scheme and compare the results. We will demonstrate that
the proposed mechanism is capable of isolating malicious nodes and thereby counteracting black hole attacks. We will
discuss problems we encountered and our solutions. We shall also further develop the protocol to investigate the possibility of using the unique prime factorization theorem to enable nodes to acquire trust knowledge beyond their immediate neighborhood. Such an approach helps to further secure route-finding procedures.
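A minimal sketch of the per-node "credit list" idea follows: each node counts how many packets a neighbour was asked to forward and how many it actually delivered, and derives a credibility score used to rank discovered routes. The update rule, threshold and class names are illustrative assumptions, not the paper's exact protocol.

```python
# Sketch of a dynamic credit list for neighbour behaviour (assumed design).
from collections import defaultdict

class CreditList:
    def __init__(self, threshold=0.5):
        self.sent = defaultdict(int)       # packets handed to each neighbour
        self.delivered = defaultdict(int)  # packets confirmed forwarded
        self.threshold = threshold

    def record(self, neighbour, forwarded: bool):
        self.sent[neighbour] += 1
        if forwarded:
            self.delivered[neighbour] += 1

    def credibility(self, neighbour) -> float:
        if self.sent[neighbour] == 0:
            return 1.0                     # no evidence yet: assume trustworthy
        return self.delivered[neighbour] / self.sent[neighbour]

    def route_trust(self, route) -> float:
        # Trust in a route = credibility of the weakest monitored neighbour on it.
        return min(self.credibility(n) for n in route)

    def is_suspect(self, neighbour) -> bool:
        return self.credibility(neighbour) < self.threshold

# Example: a neighbour that silently drops packets ends up below the threshold.
credits = CreditList()
for ok in [True, False, False, False, True]:
    credits.record("node_B", forwarded=ok)
print(credits.credibility("node_B"), credits.is_suspect("node_B"))
```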
WLAN networks are widely deployed and can be used for testbed and application development in academic environments. This paper presents a wireless positioning testbed and a related application implementation methodology as a case study. State-of-the-art WLAN positioning systems nowadays achieve high location estimation accuracy. In designated areas a signal profile map can be built and used for such positioning. The coverage of a WLAN network is typically wider than the authorized area, and there may be network intrusion attempts from nearby areas such as parking lots, cafeterias, etc. In addition to conventional verification and authorization methods, the network can locate the user, verify whether the user's location is within the authorized area, and apply additional checks to find violators.
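A hedged sketch of the signal-profile (fingerprint) positioning idea is shown below: offline, RSSI vectors are recorded at known points; online, an observed RSSI vector is matched to the nearest profile and the estimate is checked against the authorised area. The radio-map values and the rectangular authorised zone are made up for illustration.

```python
# Nearest-neighbour fingerprint positioning with an authorised-area check.
import numpy as np

# Offline radio map: location -> mean RSSI (dBm) from three access points.
radio_map = {
    (0.0, 0.0): np.array([-40.0, -65.0, -70.0]),
    (5.0, 0.0): np.array([-55.0, -50.0, -72.0]),
    (0.0, 5.0): np.array([-60.0, -70.0, -45.0]),
}
AUTHORISED_X = (0.0, 6.0)   # assumed authorised rectangle
AUTHORISED_Y = (0.0, 6.0)

def locate(rssi_observed: np.ndarray):
    """Nearest-neighbour match in signal space."""
    return min(radio_map, key=lambda p: np.linalg.norm(radio_map[p] - rssi_observed))

def is_authorised(point) -> bool:
    x, y = point
    return AUTHORISED_X[0] <= x <= AUTHORISED_X[1] and AUTHORISED_Y[0] <= y <= AUTHORISED_Y[1]

estimate = locate(np.array([-58.0, -52.0, -71.0]))
print(estimate, "authorised" if is_authorised(estimate) else "possible intruder")
```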
This paper describes the concept of an intelligent high speed wireless ad-hoc network, which is currently being
developed. The technology does not aim to replace any of the existing standards but to complement them in urban, military and hazardous environments. Known as Rhino, the technology is a platform-independent, IP-based network
which will provide adequate bandwidth for real time video, audio and data traffic. The technology and specifications
described in this paper are based on initial development of the technology.
An RFID-based mobile handheld inventory management system is proposed in this paper. Differing from the manual
inventory management method, the proposed system works on the personal digital assistant (PDA) with an RFID reader.
The system identifies electronic tags on the properties and checks the property information in the back-end database
server through a ubiquitous wireless network. The system also provides a set of functions to manage the back-end
inventory database and assigns different levels of access privilege according to various user categories. In the back-end
database server, to prevent improper or illegal access, the server not only stores the inventory database and user privilege information, but also keeps track of user activities in the server, including login and logout times and locations, records of database access, and every modification of the tables. Some experimental results are presented
to verify the applicability of the integrated RFID-based mobile handheld inventory management system.
Malicious nodes are mounting increasingly sophisticated attacking operations on the Mobile Ad Hoc Networks
(MANETs). This is mainly because the IP-based MANETs are vulnerable to attacks by various malicious nodes.
However, the defense against malicious attacks can be improved when a new layer of network architecture is developed to prevent true IP addresses from being disclosed to the malicious nodes. In this paper, we propose a new algorithm
to improve the defense against malicious attack (IDMA) that is based on a recently developed Assignment Router
Identify Protocol (ARIP) for the clustering-based MANET management. In the ARIP protocol, we design the ARIP
architecture based on the new Identity instead of the vulnerable IP addresses to provide the required security that is
embedded seamlessly into the overall network architecture. We make full use of ARIP's special property to monitor how gateways forward packets via Route Reply (RREP) packets, without an additional intrusion detection layer. We name this new algorithm IDMA because of its inherent capability to improve the defense against malicious attacks. Through IDMA, a watching algorithm can be established so as to counterattack a malicious node in the routing path when it abnormally drops packets.
We provide analysis examples for IDMA in defending against a malicious node that disrupts route discovery by impersonating the destination, responding with corrupted routing information, or disseminating forged control traffic. The IDMA algorithm is also able to counterattack a malicious node that launches a DoS attack by broadcasting a large number of route requests, congests target traffic by delivering huge amounts of data, or spoofs IP addresses and sends forged packets with a fake ID to the same target, causing traffic congestion at that destination. We have implemented the IDMA algorithm using the GloMoSim simulator and have demonstrated its
performance under a variety of operational conditions.
This paper aims to analyze the QoS performance of two main technology mechanisms, Multi-Protocol
Label Switching (MPLS) and Differentiated Services (DS). The introduction of both mechanisms to
support throughput and delay sensitive real-time media traffic will have an impact on critical applications
with respect to QoS and traffic engineering. MPLS is a traffic forwarding mechanism that allows traffic to
use multiple paths and DS is a mechanism that provides for aggregate traffic to be classified and
conditioned at the edge of the network routers.
The two modeled techniques and their performance will be evaluated with respect to their end-to-end delay.
QoS will be incorporated in both the MPLS and DS mechanisms while preserving the efficiencies of the backbone structure of the Internet, and their performance is compared with existing IP routing algorithms.
An advanced approach for adaptive nonlinear digital data processing is described in this article. Three primal
computational structures referred to as Q-Measures, Q-Metrics, and Q-Aggregates are introduced and utilized in unison
as highly adaptive data analysis handlers. The proposed approach relies on universal functionals using few parameters to
characterize dynamic system behaviors in broad ranges of unconventional measure, metric, and aggregation spaces. We
present this unique approach in application to real-valued signal processing tasks, with suitable optimization algorithms,
so that the parameters of the proposed models can be tuned automatically. The new approach is tested on real data sets to
enable applications in mobile communication systems and the experiments show promising results.
Steganography is the art of hiding information in a cover image so it is not readily apparent to a third party observer.
There exist a variety of fractal steganographic methods for embedding information in an image. The main contribution of
the previous work was to embed data by modifying the original, pre-existing image or embedding an encrypted
datastream in an image. In this paper we propose a new fractal steganographic method. The fractal parameters of the
image are altered by the steganographic data while the image is generated. This results in an image that is generated
with the data already hidden in it and not added after the fact. We show that the input parameters of the algorithm, such
as the type of fractal and number of iterations, will serve as a simple secret key for extracting the hidden information.
We explain how the capacity of the image is affected by the variation of the parameters. We also demonstrate how
standard steganographic detection algorithms perform against the generated images and analyze their capability to detect information hidden with this new technique.
This paper presents a novel spatial data hiding scheme based on the Least Significant Bit insertion. The bitplane
decomposition is obtained by using the (p, r) Fibonacci sequences. This decomposition depends on two
parameters, p and r. Those values increase the security of the whole system; without their knowledge it is
not possible to perform the same decomposition used in the embedding process and to extract the embedded
information. Experimental results show the effectiveness of the proposed method.
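The sketch below illustrates the general Fibonacci-plane style of LSB embedding. The paper uses the generalized (p, r)-Fibonacci decomposition, whose recurrence and key role are defined there; for clarity this sketch uses the classical Fibonacci sequence (the p = 1 special case) with a greedy decomposition, and the function names and embedding rule are assumptions.

```python
# Illustrative Fibonacci-plane LSB embedding (classical Fibonacci basis).
def fib_numbers(limit=255):
    seq = [1, 2]
    while seq[-1] <= limit:
        seq.append(seq[-1] + seq[-2])
    return seq[:-1]                      # keep terms <= limit

def decompose(value, basis):
    """Greedy (Zeckendorf-style) decomposition of a pixel value over the basis."""
    coeffs = []
    for f in reversed(basis):
        if f <= value:
            coeffs.append(1)
            value -= f
        else:
            coeffs.append(0)
    return list(reversed(coeffs))        # coeffs[i] multiplies basis[i]

def recompose(coeffs, basis):
    return sum(c * f for c, f in zip(coeffs, basis))

def embed_bit(pixel, bit, basis):
    """Force the least significant Fibonacci coefficient to carry the bit."""
    coeffs = decompose(pixel, basis)
    coeffs[0] = bit                      # basis[0] == 1, so the pixel changes by at most 1
    return recompose(coeffs, basis)

basis = fib_numbers()
stego = embed_bit(200, 1, basis)
print(stego, decompose(stego, basis)[0])  # the recovered bit is the first coefficient
```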
For any given digital host image or audio file (or group of hosts) and any (block) transform domain of interest, we find an orthogonal set of signatures that achieves maximum sum-signal-to-interference-plus-noise ratio (sum-SINR) spread-spectrum message embedding for any fixed embedding amplitude values. We also find the sum-capacity optimal amplitude allocation scheme for any given total distortion budget under the assumption of (colored) Gaussian transform-domain host data. The practical implication of the results is sum-SINR, sum-capacity optimal multiuser/multisignature spread-spectrum data hiding in the same medium. Theoretically, the findings establish optimality of the recently presented Gkizeli-Pados-Medley multisignature eigen-design algorithm.
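A hedged numerical sketch of multi-signature spread-spectrum embedding in a transform-domain host, and of the per-signature SINR seen by a correlator receiver, is given below. It does not implement the Gkizeli-Pados-Medley eigen-design; random orthonormal signatures and white Gaussian host data are stand-in assumptions.

```python
# Multi-signature spread-spectrum embedding and empirical per-signature SINR.
import numpy as np

rng = np.random.default_rng(0)
L, K, N = 64, 4, 2000              # transform block length, messages, blocks

# Orthonormal signatures (columns of Q from a QR factorisation).
S, _ = np.linalg.qr(rng.standard_normal((L, K)))
A = np.full(K, 2.0)                # fixed embedding amplitudes
x = rng.standard_normal((L, N))    # Gaussian host blocks (white here, for simplicity)
b = rng.choice([-1.0, 1.0], size=(K, N))   # message bits per block

# Embed: y = x + sum_k A_k * b_k * s_k
y = x + S @ (A[:, None] * b)

# Correlator receiver and empirical per-signature SINR.
z = S.T @ y                        # K x N correlator outputs
signal_power = A**2                # |A_k|^2, since signatures are unit norm
interference = np.var(z - A[:, None] * b, axis=1)   # the host acts as noise
sinr = signal_power / interference
print("per-signature SINR:", np.round(sinr, 3), " sum-SINR:", round(sinr.sum(), 3))
```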
In this paper, we investigate the use of a family of transform domains for embedding hidden data. A new data hiding scheme that is based on a pair of unmatched orthogonal transforms has been developed. The hidden message signal is transmitted via the residual channel of two different parameterized slant transforms. The properties of the residual channel of parameterized slant transforms are characterized, and the performance of the proposed scheme has been analyzed and simulated. We also discuss the performance of the scheme and the basis selection.
The ever-increasing Internet distribution of video content is echoed in ever-increasing efforts to devise systems balancing
copyright protection and user rights. Watermarking is such an example: by persistently and imperceptibly associating some
data with the host video, it offers at the same time a reliable and user-friendly solution for copyright infringement tracking.
This paper takes a closer look at the apparent contradiction between watermarking (using the visual redundancy of the video
to embed the data) and compression (eliminating the visual redundancy in order to speed up distribution and to alleviate
storage requirements). In this respect, the viability of compressed domain watermarking is evaluated by analysing the visual
effects of the MPEG-4 AVC stream alteration. The corpus consists of 10 video sequences of about 25 minutes each, coded at 256 kbps and 64 kbps.
Full and partial encryption methods are important for subscription based content providers, such as internet and cable TV
pay channels. Providers need to be able to protect their products while at the same time being able to provide
demonstrations to attract new customers without giving away the full value of the content. If an algorithm were
introduced which could provide any level of full or partial encryption in a fast and cost effective manner, the applications
to real-time commercial implementation would be numerous. In this paper, we present a novel application of alpha
rooting, using it to achieve fast and straightforward scalable encryption with a single algorithm. We further present use
of the measure of enhancement, the Logarithmic AME, to select optimal parameters for the partial encryption. When
parameters are selected using the measure, the output image achieves a balance between protecting the important data in
the image while still containing a good overall representation of the image. We will show results for this encryption
method on a number of images, using histograms to evaluate the effectiveness of the encryption.
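The sketch below shows alpha rooting used as a scalable scrambling operator: the image's Fourier magnitudes are raised to a power alpha while the phase is kept, and decryption applies the reciprocal exponent. The alpha value and the use of a global FFT (rather than blocks) are illustrative assumptions; the paper's Logarithmic AME-based parameter selection is not reproduced here.

```python
# Alpha rooting in the Fourier domain as a simple, invertible scrambler.
import numpy as np

def alpha_root(image: np.ndarray, alpha: float) -> np.ndarray:
    spectrum = np.fft.fft2(image.astype(float))
    magnitude, phase = np.abs(spectrum), np.angle(spectrum)
    scrambled = (magnitude ** alpha) * np.exp(1j * phase)
    return np.real(np.fft.ifft2(scrambled))

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
enc = alpha_root(img, alpha=0.3)          # partial encryption, alpha < 1
dec = alpha_root(enc, alpha=1.0 / 0.3)    # reciprocal exponent restores the magnitudes
print("max reconstruction error:", np.max(np.abs(dec - img)))
```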
Multimedia scrambling technologies ensure that multimedia content is only used by authorized users by transforming
multimedia data into an unintelligible format. This paper introduces a new P-recursive sequence and two multimedia
scrambling algorithms based on the P-recursive sequence. The P-recursive sequence is a more generalized sequence
which can derive many well-known sequences such as the P-Fibonacci sequence, the P-Lucas sequence and P-Gray
code. The algorithms can be used to scramble two or three dimensional multimedia data in one step. Electronic
signatures, grayscale images and three-color-component images are all examples of 2-D and 3-D multimedia data which
can utilize these algorithms. Furthermore, the security key parameter p may be chosen with different or identical values for each dimensional component of the multimedia data. Experiments show that the presented algorithms can scramble multimedia data at different levels of security by partially or fully encrypting the data. The experiments also demonstrate good performance against known-plaintext attacks and common image attacks such as data loss, Gaussian noise, and salt-and-pepper noise. The scrambled multimedia data can be completely reconstructed
only by using the correct security keys.
Digital forensics investigators face the challenge of ensuring the reliability of forensic conclusions. Formal automatic analysis methods are helpful in dealing with this challenge. The finite state machine analysis method tries to determine all possible sequences of events that could have happened in a digital system during an incident. Its basic idea is to model the target system as a finite state machine and then explore all its possible states conditioned on the available evidence. A timed Mealy finite state machine is introduced to model the target system, and a formalization of the system's running process and of the evidence is presented so that system runs can be matched with possible source evidence automatically. Based on Gladyshev's basic reasoning method, general reasoning algorithms with multiple strategies are developed to find the possible real scenarios.
Case study and experimental results show that our method is feasible and adaptable to possible cases and takes a further
step to practical formal reasoning for digital forensics.
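A toy sketch of the state-machine reasoning step follows: the target system is modelled as a Mealy-style transition table and all event sequences of a given length whose outputs are consistent with the recovered evidence are enumerated. The example machine, events and evidence are invented for illustration only and are not the paper's formalization.

```python
# Enumerate event sequences consistent with observed evidence on a toy Mealy machine.
from itertools import product

# transition[state][event] = (next_state, observable_output)
transition = {
    "idle":   {"login": ("active", "log_login"), "probe": ("idle", "log_probe")},
    "active": {"write": ("active", "log_write"), "logout": ("idle", "log_logout")},
}

def run(start, events):
    """Return the output trace, or None if some event is impossible in a state."""
    state, trace = start, []
    for e in events:
        if e not in transition[state]:
            return None
        state, out = transition[state][e]
        trace.append(out)
    return trace

def consistent_scenarios(start, evidence, length):
    """All event sequences whose output trace contains the evidence in order."""
    alphabet = {e for row in transition.values() for e in row}
    def contains(trace, ev):
        it = iter(trace)
        return all(item in it for item in ev)
    return [seq for seq in product(sorted(alphabet), repeat=length)
            if (t := run(start, seq)) is not None and contains(t, evidence)]

# Evidence recovered from logs: a login followed (eventually) by a write.
print(consistent_scenarios("idle", ["log_login", "log_write"], length=3))
```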
This paper presents a novel model for the restoration of semitransparent blotches. It is based on two perception
measures that describe a complicated object, like a semi-transparent blotch, on a complicated background, like
textures in real-world images. Experimental results on archive photographs show that the proposed approach is
able to achieve good results with a low computational effort and in a completely automatic way.
CCTV is used for an increasing number of purposes, and the new generation of digital systems can be tailored to serve a
wide range of security requirements. However, configuration decisions are often made without considering specific task
requirements, e.g. the video quality needed for reliable person identification. Our study investigated the relationship
between video quality and the ability of untrained viewers to identify faces from digital CCTV images. The task
required 80 participants to identify 64 faces belonging to 4 different ethnicities. Participants compared face images taken from high-quality photographs and low-quality CCTV stills, which were recorded at 4 different video quality bit rates
(32, 52, 72 and 92 Kbps). We found that the number of correct identifications decreased by 12 (~18%) as MPEG-4
quality decreased from 92 to 32 Kbps, and by 4 (~6%) as Wavelet video quality decreased from 92 to 32 Kbps. To
achieve reliable and effective face identification, we recommend that MPEG-4 CCTV systems should be used over
Wavelet, and video quality should not be lowered below 52 Kbps during video compression. We discuss the practical
implications of these results for security, and contribute a contextual methodology for assessing CCTV video quality.
Among all existing biometric techniques, fingerprint-based identification is the oldest method, which has been
successfully used in numerous applications. Fingerprint-based identification is the most recognized tool in biometrics
because of its reliability and accuracy. Fingerprint identification is done by matching questioned and known friction skin
ridge impressions from fingers, palms, and toes to determine if the impressions are from the same finger (or palm, toe,
etc.). There are many fingerprint matching algorithms which automate and facilitate the job of fingerprint matching, but
for any of these algorithms matching can be difficult if the fingerprints are overlapped or mixed. In this paper, we have
proposed a new algorithm for separating overlapped or mixed fingerprints so that the performance of the matching
algorithms will improve when they are fed with these inputs. Independent Component Analysis (ICA) has been used as a
tool to separate the overlapped or mixed fingerprints.
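A hedged sketch of the separation step is shown below: two overlapped (linearly mixed) fingerprint images are treated as observed mixtures and unmixed with FastICA. The synthetic "fingerprints" and the mixing matrix are placeholders; the paper works with real fingerprint images.

```python
# Blind separation of two mixed images with FastICA (stand-in data).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
h, w = 64, 64
src1 = (np.sin(np.linspace(0, 20, h * w)) > 0).astype(float)   # stand-in ridge pattern
src2 = rng.random(h * w)                                        # stand-in second print
S = np.column_stack([src1, src2])           # sources as columns (pixels x 2)

A = np.array([[0.7, 0.3],
              [0.4, 0.6]])                  # unknown mixing of the two prints
X = S @ A.T                                 # two observed overlapped images

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(X)            # pixels x 2 estimated source images
print(recovered.shape)                      # each column reshapes back to (h, w)
```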
This paper is concerned with face localization for a visual speech recognition (VSR) system. Face detection and localization have received a great deal of attention in the last few years because they are an essential pre-processing step in many techniques that deal with faces (e.g. age, face, gender, race and visual speech recognition). We shall present an efficient method for localizing human faces in video images captured on constrained mobile devices, under a wide
variation in lighting conditions. We use a multiphase method that may include all or some of the following steps starting
with image pre-processing, followed by a special purpose edge detection, then an image refinement step. The output
image will be passed through a discrete wavelet decomposition procedure, and the computed LL sub-band at a certain
level will be transformed into a binary image that will be scanned by using a special template to select a number of
possible candidate locations. Finally, we fuse the scores from the wavelet step with scores determined by color
information for the candidate location and employ a form of fuzzy logic to distinguish face from non-face locations. We
shall present the results of a large number of experiments to demonstrate that the proposed face localization method is efficient, achieves a high level of accuracy, and outperforms existing general-purpose face detection methods.
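A rough sketch of the wavelet stage described above is given below: the (pre-processed) frame is decomposed with a 2-D DWT, the LL sub-band at the chosen level is binarised, and a sliding window scores candidate face locations. The Haar wavelet, threshold rule and window size are assumptions, and the later colour/fuzzy fusion step is not reproduced here.

```python
# Candidate face locations from a binarised LL sub-band (illustrative only).
import numpy as np
import pywt

def candidate_locations(gray, level=2, window=8, top_k=3):
    coeffs = pywt.wavedec2(gray.astype(float), "haar", level=level)
    ll = coeffs[0]                                   # LL sub-band at `level`
    binary = (ll > ll.mean()).astype(np.uint8)       # simple adaptive threshold
    scores = []
    for r in range(0, binary.shape[0] - window + 1):
        for c in range(0, binary.shape[1] - window + 1):
            scores.append((binary[r:r + window, c:c + window].sum(), (r, c)))
    return [pos for _, pos in sorted(scores, reverse=True)[:top_k]]

frame = np.random.default_rng(2).integers(0, 256, size=(128, 128))
print(candidate_locations(frame))
```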
Facial expressions are undoubtedly the most effective nonverbal communication. The face has always been the equation
of a person's identity. The face draws the demarcation line between identity and extinction. Each line on the face adds an
attribute to the identity. These lines become prominent when we experience an emotion and these lines do not change
completely with age. In this paper we have proposed a new technique for face recognition which focuses on the facial
expressions of the subject to identify his face. This is a grey area on which not much light has been thrown earlier.
According to earlier research it is difficult to alter the natural expression, so our technique will be beneficial for
identifying occluded or intentionally disguised faces. The test results of the experiments conducted prove that this
technique will give a new direction in the field of face recognition. This technique will provide a strong base to the area
of face recognition and will be used as the core method for critical defense security related issues.
Poor quality can affect iris recognition accuracy. Feature information is an objective measure to evaluate the iris image
quality. By combining Feature Information Measure (FIM), an occlusion measure and a dilation measure, a quality score
is obtained that is well correlated with recognition accuracy. FIM is calculated as the distance between the distribution
of iris features and a uniform distribution. Images of low contrast can appear to lack information from manual inspection,
but actually perform well in iris recognition due to the presence of feature information. However, the FIM score for a low-contrast image could be low. To adjust for this effect, this paper develops an information-based, contrast-invariant iris quality measure. For exhaustive comparison, the CASIA 1.0, CASIA 2.0, ICE and WVU databases are used. In addition,
the proposed method is compared to the convolution matrix, spectrum energy and Mexican hat wavelet approaches
which represent a variety of approaches to iris quality measure. The experimental results show that the proposed quality
measure is capable of predicting matching performance.
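A hedged sketch of a feature-information style score follows: the distribution of a filtered iris texture response is compared with a uniform distribution via a histogram distance. The Sobel response, histogram size and total-variation distance are illustrative assumptions, not the paper's exact FIM definition, and how the score maps to quality follows the paper.

```python
# Histogram-to-uniform distance on a texture response (illustrative FIM-like score).
import numpy as np
from scipy import ndimage

def feature_information(iris_region: np.ndarray, bins: int = 32) -> float:
    response = ndimage.sobel(iris_region.astype(float))        # texture/feature response
    hist, _ = np.histogram(response, bins=bins)
    p = hist / hist.sum()
    uniform = np.full(bins, 1.0 / bins)
    return 0.5 * np.abs(p - uniform).sum()    # total-variation distance in [0, 1]

rng = np.random.default_rng(3)
textured = rng.integers(0, 256, size=(64, 64))    # feature-rich stand-in region
flat = np.full((64, 64), 128)                      # defocused / featureless stand-in
print(feature_information(textured), feature_information(flat))
```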
Iris recognition is an important method for identifying a person. Currently, most iris recognition methods are based on
individual images. For non-cooperative user identification, video image based methods can provide more information.
However, the iris image quality may vary from frame to frame. In this paper, we propose a real-time video based iris
image processing method to eliminate the bad quality video image frames. It takes advantage of the correlations among
video frames.
Iris segmentation is one of the most important steps in an iris recognition system. Its accuracy can directly affect the
recognition accuracy. For non-cooperative users, the obtained images often do not have good quality. Under such
conditions, the iris may be deformed, out-of-focus, or motion blurred. Sometimes, images do not have a valid iris. It is
very challenging to segment the iris efficiently and accurately under a non-cooperative scenario. In this paper, we propose a novel segmentation method that uses a coarse-to-fine approach to extract the iris region. The preliminary
result shows the proposed method is efficient and accurate.
In this paper an adaptive feature-based approach to on-line signature verification is presented. Cryptographic techniques are employed to protect the extracted templates, making it impossible to derive the original biometric data from the stored information, as well as to generate multiple templates from the same original biometrics. In addition to protection, our approach provides template cancelability, thus guaranteeing the user's privacy. The proposed authentication scheme is able to automatically adjust its parameters to the variability of each user's signature, thus obtaining a user-adaptive system with enhanced performance with respect to a non-adaptive one. Experimental results show the effectiveness of our approach. The effects of using pen inclination features on recognition performance are also investigated.
In mobile applications, computational complexity is an issue that limits sophisticated algorithms from being
implemented on these devices. This paper provides an initial solution to applying pattern recognition systems on mobile
devices by combining existing preprocessing algorithms for recognition. In pattern recognition systems, it is essential to
properly apply feature preprocessing tools prior to training classification models in an attempt to reduce computational
complexity and improve the overall classification accuracy. The feature preprocessing tools extended for the mobile
environment are feature ranking, feature extraction, data preparation and outlier removal. Most desktop systems today are capable of running a majority of the available classification algorithms without concern for processing cost, while the same is not true on mobile platforms. As an application of pattern recognition for mobile devices, the recognition system
targets the problem of steganalysis, determining if an image contains hidden information. The measure of performance
shows that feature preprocessing increases the overall steganalysis classification accuracy by an average of 22%. The
methods in this paper are tested on a workstation and a Nokia 6620 (Symbian operating system) camera phone with similar results.
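A hedged sketch of the preprocessing chain named above (data preparation, outlier removal, feature ranking, feature extraction) in front of a lightweight classifier is shown below, using synthetic features in place of real steganalysis features. The concrete components (z-score outlier rule, ANOVA ranking, PCA, linear SVM) are common stand-ins, not necessarily the paper's exact choices.

```python
# Feature preprocessing pipeline before classification (illustrative components).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.standard_normal((400, 60))              # 60 raw features per image
y = rng.integers(0, 2, size=400)                # 0 = clean cover, 1 = stego
X[y == 1, :5] += 0.8                            # weak, informative first features

# Outlier removal: drop samples with any |z-score| above 4 (data preparation step).
z = np.abs((X - X.mean(axis=0)) / X.std(axis=0))
keep = (z < 4).all(axis=1)
X, y = X[keep], y[keep]

model = make_pipeline(
    StandardScaler(),                           # data preparation / normalisation
    SelectKBest(f_classif, k=20),               # feature ranking
    PCA(n_components=10),                       # feature extraction
    LinearSVC(),                                # lightweight classifier for a phone
)
model.fit(X, y)
print("training accuracy:", round(model.score(X, y), 3))
```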
Vast libraries of historic photographs are currently degrading due to the effects of semi-transparent water blotches. Most
current restoration techniques involve heavy user interaction and are therefore too expensive to use for large quantities of
images. This paper introduces the Localized Logarithmic Image Restoration Algorithm, which provides an automated
system to restore the water blotches found in old photographs. The Algorithm utilizes a new localized image processing
framework that allows it to improve upon existing restoration methods. This framework can be used for a variety of
image processing applications. In the presented application of blotch removal, new logarithmic and statistical equations
are introduced and used within the localized framework to complete the restoration. As shown by intensive computer
simulations, the Algorithm produces enhanced results when compared to the existing automated algorithm.
Improvements include superior edge removal around the blotch, better local contrast preservation, and expanded
saturation reduction capabilities. In addition to the enhanced restoration quality of the results, the simulations are also
promising with respect to computational time.
We introduce a novel method of splitting up color spaces into different components and then performing edge detection
on individual color planes. The two general approaches taken for this are monochromatic and vector-based. A new color space, derived from an improved version of the PCA algorithm, is also introduced in this paper. By analyzing the results we are able to determine which color space and edge detector are best suited to each approach. We test these methods using a number of well-known edge detectors and color spaces. All the algorithms are
tested on 17 different color images (12 natural, 5 synthetic). To analyze the results we use Pratt's Figure of Merit and
Bovik's SSIM measures.
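A small sketch of the "monochromatic" strategy described above follows: an edge detector is run on each colour plane separately and the responses are fused. The Sobel operator, RGB planes and maximum-fusion rule are illustrative assumptions; the paper compares several detectors, colour spaces and a vector-based alternative.

```python
# Per-plane edge detection with a simple per-pixel maximum fusion.
import numpy as np
from scipy import ndimage

def per_plane_edges(rgb: np.ndarray) -> np.ndarray:
    """Run Sobel on each colour plane and fuse with a per-pixel maximum."""
    responses = []
    for c in range(rgb.shape[2]):
        plane = rgb[:, :, c].astype(float)
        gx = ndimage.sobel(plane, axis=0)
        gy = ndimage.sobel(plane, axis=1)
        responses.append(np.hypot(gx, gy))
    return np.max(responses, axis=0)

rng = np.random.default_rng(4)
image = rng.integers(0, 256, size=(64, 64, 3))
edges = per_plane_edges(image)
print(edges.shape, edges.max() > 0)
```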
The technological efficiency and growing research in digital networks, devices, and transmission has made digital
multi-media an increasingly popular alternative to conventional analog media. With the recent advancements in internet
and multi-media technologies, the need for secured multimedia has increased exponentially. In this paper, we present a novel secured multimedia system for digital images, based on logical subbands, that can be used for both data hiding and cryptographic applications. It decomposes the cover into various integer-valued sub-images that also provide the
location map of flippable subband coefficients. Moreover, this approach enhances the capacity of data hiding system by
a significant amount of data and simultaneously reduces the visible distortions that could occur in the image. In addition,
this technique could also be employed for cryptographic applications, as this framework offers lossless recovery of the
scrambled data and effective scrambling. Simulation results show that the proposed technique limits the changes to
boundary regions of the image. Further, this approach can retrieve the embedded information without prior knowledge
of the original unmarked image.
We present a fast massive information communication system for data collection from distributed sources such as cell phone users. As a very important application one can mention preventive notification systems, where timely notification and evidence communication may help to improve safety and security through wide public involvement by ensuring easy-to-access and easy-to-communicate information systems. The technology significantly simplifies the response to
the events and will help e.g. special agencies to gather crucial information in time and respond as quickly as possible.
Cellular phones are nowadays affordable for most residents and have become a common personal accessory. The paper describes several ways to design such systems, including using the existing internet access capabilities of cell phones or downloadable specialized software. We provide examples of such designs. The main idea is to structure information in a predetermined way and communicate data through a centralized gate-server, which will automatically process the information and forward it to the proper destination. The gate-server eliminates the need to know contact data and the specific local community infrastructure. All cell phones will have self-localizing capability according to the FCC E911
mandate, thus the communicated information can be further tagged automatically by location and time information.
Automated Explosive Detection Systems (EDS) utilizing Computed Tomography (CT) perform a series of X-ray scans of the luggage being checked; various 2D projection images of the luggage are then generated from the collected data set, and sometimes 3D volumetric images are generated in addition. The presence of an explosive in the luggage is determined automatically through extensive data manipulation of the 2D and 3D image sets, and the results are then forwarded to a human interface for final review.
The final determination as to whether the luggage contains an explosive and needs to be searched manually is
performed by trained TSA (Transportation Security Administration) screeners following an approved TSA protocol. The
TSA protocol has the screeners visually inspect the projection images and the renderings of the automated explosive detection results to determine whether the luggage should be flagged as suspect and consequently searched. Unlike conventional X-ray systems, the user interface for EDS systems is usually designed to display one bag at a time. However, in airport
environments, there is usually more than one bag being processed. Therefore, segmentation is a crucial part of higher
quality screening. If the screeners have to manually manipulate (zoom, pan, separate) the image, this increases overall
screening time and decreases screener efficiency.
This paper presents a novel image segmentation technique that is geared towards, though not exclusive to, automated
explosive detection systems. The goal of this algorithm is to correctly separate each bag image to provide a higher
quality screening process while reducing the overall screening time and luggage search rates.
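A simplified sketch of the bag-separation idea is given below: the projection image is thresholded, connected components are labelled, and each sufficiently large component is cropped as an individual bag for display. The threshold, minimum size and synthetic "projection" are assumptions; the paper's algorithm is more elaborate and tuned to EDS imagery.

```python
# Separate individual bags from a projection image via connected components.
import numpy as np
from scipy import ndimage

def split_bags(projection: np.ndarray, threshold: float, min_pixels: int = 50):
    mask = projection > threshold
    labels, count = ndimage.label(mask)                 # connected components
    bags = []
    for region in ndimage.find_objects(labels):
        if region is not None and mask[region].sum() >= min_pixels:
            bags.append(projection[region])             # cropped single-bag image
    return bags

# Two synthetic "bags" on an empty belt.
belt = np.zeros((100, 200))
belt[20:60, 10:70] = 1.0
belt[30:80, 120:180] = 1.0
print([b.shape for b in split_bags(belt, threshold=0.5)])
```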
As part of research conducted on the design of an efficient clutter covariance processor for DARPA's knowledge aided sensor signal processing expert reasoning (KASSPER) program, a time dual of information theory was discovered and named latency theory. This theory is discussed in this first paper of a multi-paper series.
While information theory addresses the design of communication systems, latency theory does the same for
recognition systems. Recognition system is the name given to the time dual of a communication system. A
recognition system uses prior-knowledge about a signal-processor's input to enable the sensing of its output
by a processing-time limited sensor when the fastest possible signal-processor replacement cannot achieve
this task. A processor-coder is the time dual of a source coder. While a source coder replaces a signal-source to yield a smaller sourced-space in binary digit (bit) units, a processor coder replaces a signal-processor to yield a smaller processing-time in binary operator (bor) units. A sensor coder is the time dual of a channel coder. While a channel coder identifies the necessary overhead-knowledge for accurate communications, a sensor coder identifies the necessary prior-knowledge for accurate recognitions. In the second paper of this multi-paper series latency theory is successfully illustrated with real-world knowledge aided radar.
In this second of a multi-paper series latency-information theory (LIT), the integration of information theory
with its time dual, i.e., latency theory, is successfully applied to DARPA's knowledge aided sensor signal
processing expert reasoning (KASSPER) program. LIT encapsulates the concept of the time dual of a lossy
source coder, i.e., a lossy processor coder. A lossy processor coder is a replacement for a signal-processor.
This lossy processor coder is faster, simpler to implement, and yields a better performance than the original
signal-processor when the processor input appears in a highly compressed-decompressed lossy fashion. In
particular, a lossy clutter covariance processor (CCP) architecture is investigated that has successfully
replaced KASSPER's originally advanced lossless CCP and enabled its SAR imagery prior knowledge to be
highly compressed-decompressed. This result is illustrated with a typical SAR image which is compressed-decompressed by a factor of 8,172. Using this image and under severely taxing environmental disturbances
outstanding detections are achieved with the lossy CCP. Furthermore, this result is derived with a lossy CCP
that is at least five orders of magnitude faster and significantly simpler to implement than the corresponding
lossless CCP whose SINR detection performance is nevertheless unsatisfactory. As a final comment it is also
observed that LIT illuminates biological system studies since it provides a lossy mechanism that explains how
outstanding detections may be arrived at by biological systems that use highly lossy compressed prior
knowledge, e.g., when a human expertly detects a face seen only once before even though that face cannot be
accurately described prior to such new viewing.
In this third paper of a multi-paper series the discovery of a space dual for the laws of motion is reported and named the laws of retention. This space-time duality in physics is found to inherently surface from a latency-information theory (LIT) that
is treated in the first two papers of this multi-paper series. A motion-coder and a retention-coder are fundamental
elements of a LIT's recognition-communication system. While a LIT's motion-coder addresses motion-time issues of
knowledge motion, a LIT's retention-coder addresses retention-space issues of knowledge retention. For the design of a
motion-coder, such as a modulation-antenna system, the laws of motion in physics are used while for the design of a
retention-coder, such as a write/read memory, the newly advanced laws of retention can be used. Furthermore, while the
laws of motion reflect a configuration of space certainty, the laws of retention reflect a passing of time uncertainty.
Since the retention duals of motion concepts are too many to cover in a single publication, the discussion will be
centered on the retention duals for Newton's Principia and the gravitational law, Coulomb's electrical law, Maxwell's
equations, Einstein's relativity theory, quantum mechanics, and the uncertainty principle. Furthermore the retention
duals will be illustrated with an uncharged and non-rotating black hole (UNBH). A UNBH is the retention dual of a
vacuum since the UNBH and vacuum offer, from a theoretical perspective, the least resistance to knowledge retention
and motion, respectively. Using this space-time duality insight it will be shown that the speed of light in a vacuum, c_M = 2.9979 x 10^8 meters/sec, has a retention dual, herein called the pace of dark in a UNBH, c_R = 6.1123 x 10^63 sec/m^3, where 'pace' refers to the expected retention-time per retention-space for the 'dark' knowledge residing in a black hole.