KEYWORDS: Digital watermarking, Databases, Algorithm development, Human-machine interfaces, Digital imaging, Image sensors, Image quality, Sensors, Control systems, Data processing
Robust still image watermarks are evaluated in terms of image fidelity and robustness. We extend this framework and apply reliability testing to robust still image watermark evaluation. Reliability is the probability that a watermarking algorithm will correctly detect or decode a watermark for a specified fidelity requirement under a given set of attacks and images. In reliability testing, a system is evaluated in terms of quality, load, capacity, and performance. To measure quality, which corresponds to image fidelity, we compensate for attacks before measuring the fidelity of attacked watermarked images. We use a conditional mean of pixel values to compensate for valumetric attacks such as gamma correction and histogram equalization. To compensate for geometric attacks, we use error concealment and a perfect motion estimation assumption. We define capacity to be the minimum embedding strength parameter and the maximum data payload that meet a specified error criterion. Load is then defined to be the actual embedding strength and data payload of a watermark. To measure performance, we use the bit error rate (BER) and receiver operating characteristics (ROCs) of a watermarking algorithm for different attacks and images. We evaluate robust watermarks for various loads, attacks, and images.
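The conditional-mean compensation for valumetric attacks can be sketched as follows. This is an illustrative Python sketch, not the paper's implementation: the function names, the gamma-correction toy attack, and the use of PSNR as the fidelity measure are our own assumptions.

```python
import numpy as np

def conditional_mean_compensation(watermarked, attacked):
    """Replace each gray level v in the attacked image with the conditional
    mean of the watermarked pixels that were mapped to v (hypothetical
    helper; sketches the conditional-mean idea for valumetric attacks)."""
    compensated = attacked.astype(np.float64).copy()
    for v in np.unique(attacked):
        mask = attacked == v
        compensated[mask] = watermarked[mask].mean()
    return compensated

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# toy example: gamma correction as a valumetric attack on a random image
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
attacked = (255.0 * (img / 255.0) ** 0.6).astype(np.uint8)

print(psnr(img, attacked))                                      # fidelity before compensation
print(psnr(img, conditional_mean_compensation(img, attacked)))  # fidelity after compensation
```

Because gamma correction is monotonic, each attacked gray level corresponds to a narrow range of original levels, so the conditional mean largely undoes the attack and the measured fidelity rises accordingly.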
KEYWORDS: Digital watermarking, Databases, Digital imaging, Human-machine interfaces, Algorithm development, Image sensors, Data storage, Control systems, Software development, Steganography
While digital watermarking has received much attention within the academic community and private sector in recent years, it is still a relatively young technology. As such, there are few accepted tools and metrics that can be used to validate the performance claims asserted by members of the research community and to evaluate the suitability of a watermarking technique for specific applications. This lack of a universally adopted set of metrics and methods has motivated us to develop a web-based digital watermark evaluation system known as the Watermark Evaluation Testbed, or WET. This system has undergone several improvements since its inception. The ultimate goal of this work has been to develop a platform where any watermarking researcher can test not only the performance of known techniques, but also their own. This goal has been reached by the latest version of the system. New tools and concepts have been designed to achieve the desired objectives. This paper describes the new features of WET. We also summarize the development process of the entire project and introduce new directions for future work.
Robust watermarks are evaluated in terms of image fidelity and robustness. We extend this framework and apply reliability testing to robust watermark evaluation. Reliability is the probability that a watermarking algorithm will correctly detect or decode a watermark for a specified fidelity requirement under a given set of attacks and images. In reliability testing, a system is evaluated in terms of quality, load, capacity, and performance. To measure quality, which corresponds to image fidelity, we compensate for attacks before measuring the fidelity of attacked watermarked images. We use the conditional mean of pixel values to compensate for valumetric attacks such as gamma correction and histogram equalization. To compensate for geometric attacks, we use error concealment and a perfect motion estimation assumption. We define capacity to be the maximum embedding strength parameter and the maximum data payload. Load is then defined to be the actual embedding strength and data payload of a watermark. To measure performance, we use the bit error rate (BER), the receiver operating characteristic (ROC), and the area under the ROC curve (AUC) of a watermarking algorithm for different attacks and images. We evaluate robust watermarks for various qualities, loads, attacks, and images.
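The performance measures named above (BER, ROC, AUC) can be illustrated with a short sketch. The detector scores below are synthetic stand-ins; the actual watermarking algorithm and detector are not reproduced here.

```python
import numpy as np

def bit_error_rate(sent, received):
    """Fraction of payload bits decoded incorrectly."""
    return float(np.mean(np.asarray(sent) != np.asarray(received)))

def roc_auc(scores_wm, scores_unwm):
    """Empirical ROC and AUC (trapezoidal rule) from detector scores
    on watermarked vs. unwatermarked images."""
    thresholds = np.sort(np.concatenate([scores_wm, scores_unwm]))[::-1]
    tpr = np.array([(scores_wm >= t).mean() for t in thresholds])
    fpr = np.array([(scores_unwm >= t).mean() for t in thresholds])
    auc = float(np.sum(np.diff(fpr) * (tpr[:-1] + tpr[1:]) / 2.0))
    return fpr, tpr, auc

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, 64)
received = bits.copy()
received[:4] ^= 1                      # 4 of 64 payload bits flipped
print(bit_error_rate(bits, received))  # 0.0625

# synthetic scores: watermarked images score higher on average
wm = rng.normal(2.0, 1.0, 500)
unwm = rng.normal(0.0, 1.0, 500)
_, _, auc = roc_auc(wm, unwm)
print(auc > 0.8)                       # True: the detector separates the classes
```

Sweeping the threshold over all observed scores traces the ROC; the AUC then summarizes detection performance in a single number, which is what allows comparison across attacks and images.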
While digital watermarking has received much attention in recent years, it is still a relatively young technology. There are few accepted tools and metrics that can be used to evaluate the suitability of a watermarking technique for a specific application. This lack of a universally adopted set of metrics and methods has motivated us to develop a web-based digital watermark evaluation system called the Watermark Evaluation Testbed, or WET. Several improvements have been made since the first version of WET. We implemented a batch mode with a queue that accepts user-submitted jobs. In addition to StirMark 3.1 as an attack module, we added attack modules based on StirMark 4.0. As a new image fidelity measure, we evaluate conditional entropy for different watermarking algorithms and different attacks. We also show the results of fitting the receiver operating characteristic (ROC) analysis data using Parzen window density estimation. The curve fits the data closely while requiring only two parameters to be estimated.
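A smooth ROC derived from Parzen-estimated score distributions can be sketched as below. This is our own illustrative sketch, assuming a Gaussian kernel with a single shared bandwidth h; it is not the paper's two-parameter fitting procedure, and all names are hypothetical.

```python
import numpy as np
from math import erf

_erf = np.vectorize(erf)  # element-wise Gaussian error function

def parzen_cdf(samples, x, h):
    """CDF implied by a Parzen (Gaussian-kernel) density estimate
    with bandwidth h, evaluated at the points in x."""
    z = (x[:, None] - samples[None, :]) / (h * np.sqrt(2.0))
    return (0.5 * (1.0 + _erf(z))).mean(axis=1)

def smooth_roc(scores_wm, scores_unwm, h=0.3, n=200):
    """Smooth ROC traced from Parzen-estimated detector score
    distributions (h and n are assumed values, not from the paper)."""
    lo = min(scores_wm.min(), scores_unwm.min()) - 3 * h
    hi = max(scores_wm.max(), scores_unwm.max()) + 3 * h
    t = np.linspace(lo, hi, n)
    tpr = 1.0 - parzen_cdf(scores_wm, t, h)    # P(score >= t | watermarked)
    fpr = 1.0 - parzen_cdf(scores_unwm, t, h)  # P(score >= t | unwatermarked)
    return fpr, tpr

rng = np.random.default_rng(2)
fpr, tpr = smooth_roc(rng.normal(2.0, 1.0, 300), rng.normal(0.0, 1.0, 300))
print(fpr[0], fpr[-1])  # sweeps from ~1 down to ~0 as the threshold rises
```

Because the kernel estimate is a sum of smooth CDFs, the resulting ROC is a smooth monotone curve rather than the staircase produced by the raw empirical scores, which is what makes a low-parameter fit to the ROC data feasible.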
While digital watermarking has received much attention within the academic community and private sector in recent years, it is still a relatively young technology. As such, there are few widely accepted benchmarks that can be used to validate the performance claims asserted by members of the research community. This lack of a universally adopted benchmark has hindered research and created confusion within the general public. To facilitate the development of a universally adopted benchmark, we are developing at Purdue University a web-based system that will allow users to evaluate the performance of watermarking techniques. This system consists of reference software that includes watermark embedders and detectors, attack scenarios, evaluation modules, and a large image database. The ultimate goal of the current work is to develop a platform that can be used to test the performance of watermarking methods and to obtain fair, reproducible comparisons of the results. We believe this work will greatly stimulate new research in watermarking and data hiding by allowing researchers to demonstrate how new techniques advance the state of the art. We will refer to this system as the Watermark Evaluation Testbed, or WET.