It has been shown that Sensor Pattern Noise (SPN) can serve as an imaging-device fingerprint for source camera identification. Reference SPN estimation is a crucial step in this application. Most previous works build the reference SPN by averaging the SPNs extracted from about 50 blue-sky images. However, this approach can be problematic. First, in practice we may face source camera identification without access to the imaging cameras or their reference SPNs, which means only natural images with scene details, rather than blue-sky images, are available for reference SPN estimation. This is challenging because the reference SPN can be severely contaminated by image content. Second, the number of available reference images is sometimes too small for existing methods to estimate a reliable reference SPN. In fact, existing methods do not take the number of available reference images into account, as they were designed for datasets with abundant reference images. To address these problems, a novel reference SPN estimator is proposed in this work. Experimental results show that the proposed method outperforms methods based on the averaged reference SPN, especially when few reference images are available.
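For context, below is a minimal sketch of the conventional averaging baseline that the proposed estimator improves upon; it is not the paper's method. A simple Gaussian smoother stands in for the denoising filter (practical systems typically use a wavelet-based denoiser), grayscale images of identical size are assumed, and the function names are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def extract_spn(image, sigma=1.0):
    """Estimate one image's sensor pattern noise as its denoising
    residual: the image minus a smoothed version of itself.
    (Gaussian smoothing is a stand-in for a wavelet denoiser.)"""
    img = image.astype(np.float64)
    return img - gaussian_filter(img, sigma=sigma)

def averaged_reference_spn(images):
    """Conventional reference SPN: the pixel-wise mean of the residuals
    extracted from a set of (ideally flat, e.g. blue-sky) images."""
    return np.mean([extract_spn(img) for img in images], axis=0)

def correlate(spn_a, spn_b):
    """Normalized correlation used to match a query SPN against a
    reference fingerprint for source camera identification."""
    a = spn_a - spn_a.mean()
    b = spn_b - spn_b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

When the reference images contain scene details, the residuals fed into the average are contaminated by image content, which is exactly the failure mode the abstract describes.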
Contrast enhancements, such as histogram equalization and gamma correction, are widely used by malicious attackers to conceal cut-and-paste trails in doctored images. Detecting the traces left by contrast enhancement is therefore an effective way of exposing cut-and-paste image forgery. In this work, two improved forensic methods for detecting contrast enhancement in digital images are put forward. The first method uses a quadratic weighting function, rather than a simple cut-off frequency, to measure the histogram distortion introduced by contrast enhancement, and replaces the averaged high-frequency energy measure of the histogram with the ratio taken up by the high-frequency components in the histogram spectrum. The second improvement applies a linear-threshold strategy to circumvent the sensitivity of threshold selection. Compared with their original counterparts, both methods achieve better performance in terms of ROC curves and on real-world cut-and-paste image forgeries. The effectiveness of the two proposed algorithms is experimentally validated on natural color images captured by commercial cameras.
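A sketch of the histogram-spectrum measure is given below. Contrast enhancement introduces peak/gap artifacts into the gray-level histogram, which appear as extra high-frequency energy in its DFT. The specific quadratic weight w(k) = k² and the cutoff index are assumptions for illustration; the paper's exact weighting function may differ.

```python
import numpy as np

def hf_ratio(image_u8, cutoff=32):
    """Ratio taken up by the high-frequency components in the quadratically
    weighted histogram spectrum; larger values suggest contrast enhancement.
    `cutoff` and the k**2 weight are illustrative choices."""
    hist, _ = np.histogram(image_u8.ravel(), bins=256, range=(0, 256))
    spectrum = np.abs(np.fft.fft(hist))[:128]   # one-sided magnitude spectrum
    k = np.arange(128)
    weighted = (k ** 2) * spectrum              # assumed quadratic weighting
    return weighted[cutoff:].sum() / (weighted.sum() + 1e-12)
```

A detector would compare this ratio against a threshold; the paper's linear-threshold strategy addresses how sensitive that comparison is to the threshold choice.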
When an individual carries an object, such as a briefcase, conventional gait recognition algorithms based on the averaged silhouette/Gait Energy Image (GEI) do not always perform well, as the carried object may be mistakenly regarded as part of the human body. To solve this problem, instead of directly applying the GEI to represent the gait information, we propose a novel dynamic feature template for classification. Based on this extracted dynamic information and some static feature templates (i.e., the head and trunk parts), we perform gait recognition on the large USF (University of South Florida) database by adopting a static/dynamic fusion strategy. For the experiments involving the carrying-condition covariate, significant improvements are achieved compared with other classic algorithms.
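For reference, the GEI baseline that the proposed templates build on is simply the pixel-wise average of aligned binary silhouettes over a gait cycle. The sketch below also shows an illustrative horizontal split into head and trunk bands; the split fractions are assumptions, not the paper's exact segmentation.

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Gait Energy Image (GEI): pixel-wise average of size-normalized,
    centre-aligned binary silhouettes over one gait cycle."""
    stack = np.stack([s.astype(np.float64) for s in silhouettes])
    return stack.mean(axis=0)

def static_parts(gei, head_frac=0.2, trunk_frac=0.5):
    """Illustrative split of a GEI into head and trunk bands for the
    static feature templates; the fractions are assumed values."""
    h = gei.shape[0]
    head = gei[: int(h * head_frac)]
    trunk = gei[int(h * head_frac): int(h * trunk_frac)]
    return head, trunk
```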
A removable visible watermarking scheme, operating in the discrete cosine transform (DCT) domain, is proposed for combating copyright piracy. First, the original watermark image is divided into 16×16 blocks, and the preprocessed watermark to be embedded is generated by element-by-element matrix multiplication of the DCT coefficient matrix of each block with a key-based matrix. This preprocessing guarantees that illegal removal of the embedded watermark by unauthorized users is infeasible. Then, adaptive scaling and embedding factors are computed for each block of the host image and the preprocessed watermark according to the features of the corresponding blocks, so as to better match the characteristics of the human visual system. Finally, the significant DCT coefficients of the preprocessed watermark are adaptively added to those of the host image to yield the watermarked image. The watermarking system is robust against compression to some extent. Test results show that the proposed scheme succeeds in preventing illegal removal of the embedded watermark. Moreover, experimental results demonstrate that legally recovered images achieve superior visual quality, with peak signal-to-noise ratio values above 50 dB.
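The preprocessing step can be sketched as follows. Dimensions divisible by 16 are assumed, and the construction of the key-based matrix (uniform random factors seeded by the key) is an assumption for illustration; the paper does not fix it here.

```python
import numpy as np
from scipy.fft import dct

def dct2(block):
    """2-D type-II DCT with orthonormal scaling."""
    return dct(dct(block, norm='ortho', axis=0), norm='ortho', axis=1)

def preprocess_watermark(watermark, key, block=16):
    """Element-by-element multiplication of each 16x16 DCT block of the
    watermark with a key-derived matrix, so that removal without the key
    is infeasible. Returns DCT-domain coefficients; the key-matrix
    construction below is an illustrative assumption."""
    rng = np.random.default_rng(key)
    h, w = watermark.shape                      # assumed divisible by 16
    out = np.empty((h, w), dtype=np.float64)
    for i in range(0, h, block):
        for j in range(0, w, block):
            key_matrix = rng.uniform(0.5, 1.5, size=(block, block))
            coeffs = dct2(watermark[i:i+block, j:j+block].astype(np.float64))
            out[i:i+block, j:j+block] = coeffs * key_matrix
    return out
```

Embedding then adds these preprocessed coefficients, scaled by the per-block adaptive factors, to the corresponding DCT coefficients of the host image.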
We propose a fragile watermarking scheme in the wavelet transform domain that is sensitive to all kinds of manipulation and can localize the tampered regions. To achieve high transparency (i.e., low embedding distortion) while protecting all coefficients, the embedder involves all the coefficients within a hierarchical neighborhood of each sparsely selected watermarkable coefficient during embedding. The way the non-watermarkable coefficients are involved is content-dependent and nondeterministic, which allows the proposed scheme to resist the so-called vector quantization (Holliman-Memon) attack, the collage attack, and the transplantation attack.
KEYWORDS: Digital watermarking, Distortion, Information security, Transplantation, Digital imaging, Digital image processing, Optical engineering, Sensors, Binary data
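The content-dependent, nondeterministic dependence described above can be illustrated with a toy embedding primitive: the watermark bit is mixed with a hash of the neighborhood coefficients before being carried by the parity of a quantized coefficient. Quantization-index modulation and the SHA-256 mixing are assumptions for illustration, not the paper's construction.

```python
import hashlib
import numpy as np

def embed_bit(coeff, neighborhood, bit, delta=4.0):
    """Toy sketch: mix the watermark bit with a hash of the (rounded)
    neighborhood coefficients, then quantize `coeff` so its parity carries
    the mixed bit. Because the payload depends on the surrounding content,
    copying blocks between images (VQ/collage/transplantation) breaks it."""
    digest = hashlib.sha256(np.round(neighborhood).tobytes()).digest()
    mixed = bit ^ (digest[0] & 1)        # content-dependent bit
    q = np.floor(coeff / delta)
    if int(q) % 2 != mixed:              # force parity to match
        q += 1
    return q * delta
```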
Watermarking schemes for authentication purposes are characterized by three factors: security, tamper-localization resolution, and embedding distortion. Since high security, high localization resolution, and low distortion cannot be achieved simultaneously, the relative importance of each factor is application-dependent. Moreover, blockwise dependence is recognized as a key requirement for fragile watermarking schemes to thwart the Holliman-Memon counterfeiting attack. However, it has also been observed that deterministic dependence is still susceptible to the transplantation attack or even a simple cover-up attack. This work proposes a fragile watermarking scheme for image authentication that exploits nondeterministic dependence and gives users the freedom to trade off the three factors according to the needs of their applications.
In the field of medical imaging, content-based image retrieval (CBIR) techniques are employed to aid radiologists in retrieving images with similar content. However, CBIR methods are usually developed around the specific features of one kind of image, so they are not readily applicable across different kinds of medical images. This work proposes a general CBIR framework in an attempt to alleviate this limitation. The framework consists of two parts: image analysis and image retrieval. In the image analysis part, normal and abnormal regions of interest (ROIs) are selected from a number of images to form an ROI dataset. These two groups of ROIs are used to analyze 11 textural features based on gray-level co-occurrence matrices. The multivariate T-test is then applied to identify the features with significant discriminating power for inclusion in a feature descriptor. In the image retrieval part, each feature of the descriptor is normalized by clipping the largest 5% of the values of that feature component and then projecting each normalized feature vector onto the unit sphere. The L2 norm is then employed to determine the similarity between the query image and each ROI in the dataset. The system works in a query-by-example (QBE) manner. With query images selected from different classes of abnormal ROIs, a maximum precision of 51% and a maximum recall of 19% were obtained; the average precision and recall were 49% and 18%, respectively.
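A sketch of the retrieval pipeline is given below. The paper uses 11 GLCM-based features; scikit-image exposes six of them directly, so only those are computed here, and a single distance/angle pair is an assumed simplification. The clip-then-project normalization and L2 ranking follow the description above.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

PROPS = ['contrast', 'dissimilarity', 'homogeneity',
         'energy', 'correlation', 'ASM']   # subset of the 11 features

def glcm_features(roi_u8):
    """GLCM texture features for one ROI (8-bit grayscale)."""
    glcm = graycomatrix(roi_u8, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return np.array([graycoprops(glcm, p)[0, 0] for p in PROPS])

def normalize(features):
    """Clip each feature component at its 95th percentile over the ROI
    dataset, then project each feature vector onto the unit sphere."""
    caps = np.percentile(features, 95, axis=0)
    clipped = np.minimum(features, caps)
    norms = np.linalg.norm(clipped, axis=1, keepdims=True) + 1e-12
    return clipped / norms

def retrieve(query_vec, dataset_vecs, k=5):
    """Query-by-example: rank dataset ROIs by L2 distance to the query."""
    dists = np.linalg.norm(dataset_vecs - query_vec, axis=1)
    return np.argsort(dists)[:k]
```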
As images are commonly transmitted and stored in compressed form such as JPEG, a new scheme for embedding watermarks in the compressed domain, without resorting to cryptography, is proposed to extend the applicability of our previous work. In this scheme, the target image is first DCT-transformed and quantised. Then all the coefficients are implicitly watermarked in order to minimize the risk of attacks on the unwatermarked coefficients. The watermarking is done by registering/blending the zero-valued coefficients with a binary sequence to create the watermark, and by involving the unembedded coefficients when embedding the selected coefficients. The second-order neighbours and the block itself are considered during watermark embedding in order to thwart attacks such as cover-up, vector quantisation, and transplantation. The experiments demonstrate the capability of the proposed scheme in thwarting local tampering, geometric transformations such as cropping, and common signal operations such as lowpass filtering.
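The idea of implicitly protecting every quantized coefficient can be sketched on a single 8×8 block as follows. The specific blending rule (turning selected zeros into ±`strength` and forcing the parity of nonzero coefficients) is an assumption for illustration; the paper's rule additionally involves second-order neighbours and neighbouring blocks.

```python
import numpy as np

def watermark_block(qcoeffs, key, strength=1):
    """Toy sketch of compressed-domain embedding on one 8x8 block of
    quantized DCT coefficients: zero-valued coefficients are registered
    against a key-driven binary sequence, and nonzero coefficients have
    their parity forced to match it, so all coefficients carry payload."""
    rng = np.random.default_rng(key)
    bits = rng.integers(0, 2, size=qcoeffs.shape)
    zero_map = (qcoeffs == 0)
    marked = qcoeffs.copy()
    # register the zeros: a zero carrying bit 1 becomes +strength
    marked[zero_map & (bits == 1)] = strength
    # involve the nonzero coefficients: force |c| parity to match the bit
    nz = ~zero_map
    parity = (np.abs(marked) % 2).astype(int)
    flip = nz & (parity != bits)
    marked[flip] += np.sign(marked[flip])
    return marked
```

Verification recomputes the key-driven sequence and checks the zero map and parities; any local tampering breaks the registration in the affected blocks.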