The classical approach to converting colour to greyscale is to code the luminance signal as a grey value image.
However, the problem with this approach is that the detail at equiluminant edges vanishes, and in the worst case
the greyscale reproduction of an equiluminant image is a single uniform grey value. The solution to this problem,
adopted by all algorithms in the field, is to try to code colour difference (or contrast) in the greyscale image. In this
paper we reconsider the Socolinsky and Wolff algorithm for colour to greyscale conversion. This algorithm, which
is the most mathematically elegant, often scores well in preference experiments but can introduce artefacts which
spoil the appearance of the final image. These artefacts are intrinsic to the method and stem from the underlying
approach which computes a greyscale image by a) calculating approximate luminance-type derivatives for the
colour image and b) re-integrating these to obtain a greyscale image. Unfortunately, the sign of the derivative
vector is sometimes unknown on an equiluminant edge and, in the current theory, is set arbitrarily. However,
choosing the wrong sign can lead to unnatural contrast gradients (not apparent in the colour original). Our
contribution is to show how this sign problem can be ameliorated using a generalised definition of luminance and
a Markov relaxation.
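The derivative-and-reintegrate pipeline, and the sign ambiguity at equiluminant edges, can be illustrated with a minimal 1-D sketch. This is illustrative only: the function names, the Rec. 601 luminance weights used as a sign heuristic, and the cumulative-sum reintegration are our assumptions, not the authors' implementation.

```python
import numpy as np

# Rec. 601 luma weights, used here only as a sign heuristic (an assumption).
LUMA = np.array([0.299, 0.587, 0.114])

def colour_contrast_1d(row):
    """Signed contrast of a 1-D colour signal, shape (N, 3).

    The magnitude of the colour-derivative vector gives the size of the
    greyscale derivative, but not its sign. On an equiluminant edge the
    luminance derivative is ~0, so the sign is ambiguous and is set
    arbitrarily to +1 (the problem the paper addresses).
    """
    d = np.diff(row, axis=0)               # colour derivatives, (N-1, 3)
    mag = np.linalg.norm(d, axis=1)        # derivative magnitudes
    dlum = np.diff(row @ LUMA)             # luminance derivatives
    sign = np.where(np.abs(dlum) < 1e-6, 1.0, np.sign(dlum))
    return sign * mag

def reintegrate(deriv, start=0.0):
    """Re-integrate signed derivatives into a greyscale signal."""
    return np.concatenate([[start], start + np.cumsum(deriv)])
```

On an achromatic ramp the reintegrated signal increases monotonically, as expected; for an equiluminant colour pair the contrast magnitude is non-zero but the (arbitrary) sign choice is what can produce the unnatural gradients discussed above.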
In the real world we encounter large intensity ranges: the ratio of the brightest to the darkest part of a scene can be of the order of 10000:1. Since most electronic displays have a limited range of around 100:1, the last 20 years have seen much work on algorithms that compress the dynamic range of an image to that available on the display device. These algorithms, known as tone mappers, attempt to preserve as many of the image's characteristics as possible [1]. An increasing amount of research has also been devoted to evaluating the 'best' tone mapper. Approaches have included pairwise comparisons of tone-mapped images [2], comparison with real scenes [3], and the use of images displayed on a High Dynamic Range (HDR) monitor [4]. None of these approaches is entirely satisfactory, and all suffer from potential confounds arising from participants' interpretation of instructions and from their biases.
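The range compression described above can be sketched with a minimal global log-based operator. This is purely illustrative, not any specific published tone mapper: it maps scene luminances spanning, say, 10000:1 onto a display range of at most 100:1, normalised to [0, 1].

```python
import numpy as np

def log_tonemap(lum, display_ratio=100.0):
    """Compress an HDR luminance map to a target display contrast ratio.

    Assumes lum > 0 everywhere and lum.max() > lum.min(). Log-luminance is
    mapped linearly onto [0, log(display_ratio)], then exponentiated, so
    the output contrast ratio is exactly display_ratio:1.
    """
    lmin, lmax = lum.min(), lum.max()
    t = (np.log(lum) - np.log(lmin)) / (np.log(lmax) - np.log(lmin))
    out = np.exp(t * np.log(display_ratio))  # values in [1, display_ratio]
    return out / display_ratio               # values in [1/display_ratio, 1]
```

For example, scene luminances of 1, 100 and 10000 (a 10000:1 range) map to display values of 0.01, 0.1 and 1.0 (a 100:1 range). A global operator like this preserves luminance ordering but not local contrast, which is exactly why the more sophisticated tone mappers evaluated in [2]-[4] exist.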
There is evidence that the spatial and chronological path of fixations made by observers when viewing an image (i.e. the scanpath) is repeated to some extent when the same image is presented to the observer again (e.g. [5]). In this paper we are the first to investigate the potential of eye movement recordings, particularly scanpaths, as a discriminatory tool. We propose that if a tone-mapped image gives rise to scanpaths that differ from those obtained when viewing the original image, this may indicate a poor-quality tone mapper, since it elicits eye movements different from those observed for the original image.
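One common way to quantify scanpath similarity, which could operationalise the comparison proposed above, is the string-edit approach: quantise fixations onto a coarse grid, encode each scanpath as a string of cell labels, and compute the edit distance between the strings. The sketch below implements that standard technique under our own assumptions (function names, a 4x4 grid, unit image coordinates); it is not the metric used in the paper.

```python
def scanpath_string(fixations, grid=4, width=1.0, height=1.0):
    """Encode (x, y) fixations as a string of grid-cell labels."""
    labels = []
    for x, y in fixations:
        col = min(int(x / width * grid), grid - 1)
        row = min(int(y / height * grid), grid - 1)
        labels.append(chr(ord('A') + row * grid + col))
    return ''.join(labels)

def edit_distance(a, b):
    """Levenshtein distance between two scanpath strings."""
    m, n = len(a), len(b)
    # d[i][j] = cost of transforming a[:i] into b[:j]
    d = [[i + j if i * j == 0 else 0 for j in range(n + 1)]
         for i in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1,                    # deletion
                          d[i][j - 1] + 1,                    # insertion
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))  # substitution
    return d[m][n]
```

Under this scheme, a large edit distance between the scanpath recorded on the tone-mapped image and the one recorded on the original would flag the tone mapper as altering viewing behaviour.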