Zhai & Shah [7]
The model determines saliency maps using the color statistics of images. The algorithm has linear computational complexity with respect to the number of image pixels and is therefore well suited for real-time image processing. The saliency map of an image is built upon the color contrast between image pixels. The saliency value of a single color component (cc) for an image pixel I_k is defined as:

S_{cc}(I_k) = \sum_{i=0}^{255} f_{cc}(i) \cdot D(i, I_k)    (1)
where f_{cc}(i) is the histogram for a given color component (red, green or blue) and D(i, I_k) is a color distance map. The saliency value for the image pixel I_k is calculated as a sum of the saliency values for the red, green and blue color components:

S(I_k) = S_R(I_k) + S_G(I_k) + S_B(I_k)    (2)
The saliency value is higher for pixels whose color is rare.
A more detailed description of this model and of its application
(a spatio-temporal model of attention) is available in [7].
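A minimal sketch of this computation in Python, assuming the color distance map D(i, I_k) is the absolute difference between intensity values and using a 256-entry lookup table so that the per-pixel cost stays linear in the number of pixels (function names are illustrative, not from [7]):

```python
import numpy as np

def channel_saliency(channel):
    """Per-channel saliency following equation (1).

    Assumes D(i, I_k) = |i - I_k|, i.e. the absolute difference between
    intensity values; see [7] for the exact color distance map.
    """
    hist = np.bincount(channel.ravel(), minlength=256).astype(np.float64)
    values = np.arange(256, dtype=np.float64)
    # Pre-compute S_cc(v) for every possible intensity v as a 256-entry
    # lookup table, which keeps the per-pixel cost linear in the image size.
    lut = np.abs(values[:, None] - values[None, :]) @ hist
    return lut[channel]

def zhai_shah_saliency(image_rgb):
    """Equation (2): sum of the red, green and blue channel saliencies."""
    sal = sum(channel_saliency(image_rgb[..., c]) for c in range(3))
    return sal / sal.max()  # normalize to [0, 1] for display

# Usage with a random 8-bit RGB image
img = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)
saliency_map = zhai_shah_saliency(img)
```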
SUN ICA & SUN DOG [3]
The SUN (Saliency Using Natural statistics) model is based on a Bayesian probabilistic framework. Zhang et al. assume that the visual system must actively estimate the probability of a target at every location given the visual features observed. Let z denote a point in the visual field (in this application z corresponds to a single image pixel), let the binary random variable C denote whether or not a point belongs to a target class, let the random variable L denote the location (the pixel coordinates) and let the random variable F denote the visual features of a point. The saliency of a point z is then defined as:

s_z = p(C = 1 \mid F = f_z, L = l_z)    (3)
where f_z represents the feature values observed at z and l_z represents the location of z. This probability can be calculated using Bayes' rule:

s_z = \frac{p(F = f_z, L = l_z \mid C = 1)\, p(C = 1)}{p(F = f_z, L = l_z)}    (4)

When some assumptions are made (all details in [3]), equation (4) reduces to:
\log s_z = -\log p(F = f_z)    (5)

which is the definition of bottom-up saliency used in the SUN model.
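One way to see this reduction, following [3]: if features and location are assumed independent (and conditionally independent given C = 1), taking the logarithm of equation (4) splits the saliency into three terms,

\log s_z = \underbrace{-\log p(F = f_z)}_{\text{self-information}} + \underbrace{\log p(F = f_z \mid C = 1)}_{\text{log-likelihood}} + \underbrace{\log p(C = 1 \mid L = l_z)}_{\text{location prior}}

When no particular target is sought (free viewing), only the first, target-independent term is retained, which gives equation (5).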
There are two key factors that affect the final result of a saliency model when operating on an image: the feature space and the probability distribution of the features. In SUN the features are calculated as responses of biologically plausible DoG (Difference of Gaussians) linear filters and responses to filters learned from natural images using independent component analysis (ICA).
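As an illustrative sketch (not the exact filter bank of [3]), DoG feature responses can be obtained by subtracting Gaussian-blurred copies of the image at pairs of scales; the sigma values below are placeholders:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_responses(gray, sigma_pairs=((1, 2), (2, 4), (4, 8))):
    """Difference-of-Gaussians feature maps at a few illustrative scales.

    The sigma pairs are placeholders; the SUN model uses its own DoG filter
    bank and, in the ICA variant, filters learned from natural images.
    """
    gray = gray.astype(np.float64)
    return np.stack(
        [gaussian_filter(gray, s1) - gaussian_filter(gray, s2)
         for s1, s2 in sigma_pairs],
        axis=-1,
    )

# Usage: three DoG feature maps for a random grayscale image
features = dog_responses(np.random.rand(120, 160))  # shape (120, 160, 3)
```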
The probability distribution is calculated in two steps. First, for a series of natural images the filter responses (features) are calculated and an estimate of the probability distribution is obtained. Then the distribution is parameterized by a zero-mean generalized Gaussian distribution. This can be viewed as a learning mechanism and a use of prior knowledge.
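A minimal sketch of this learning step, assuming SciPy's gennorm as the zero-mean generalized Gaussian; random images stand in here for the natural-image training set used in [3]:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import gennorm

# Collect filter responses over a set of training images (random images are
# placeholders; [3] uses a large collection of natural images).
responses = []
for _ in range(20):
    img = np.random.rand(120, 160)
    responses.append((gaussian_filter(img, 1) - gaussian_filter(img, 2)).ravel())
responses = np.concatenate(responses)

# Fit a zero-mean generalized Gaussian: shape (beta) and scale, location fixed at 0.
beta, loc, scale = gennorm.fit(responses, floc=0)
```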
Computing the saliency map for a given image consists of two steps. First, the filter responses are calculated. Then these values are evaluated against the estimated probability distribution; following equation (5), the less probable the observed feature values, the higher the saliency value for the image pixel.
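Putting the steps together, a sketch of the per-pixel saliency as the self-information of equation (5), using a single DoG feature and placeholder parameters for the previously learned generalized Gaussian:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import gennorm

def sun_dog_saliency(gray, beta, scale):
    """Bottom-up saliency as -log p(F = f_z), equation (5), for one DoG feature.

    `beta` and `scale` are the parameters of the zero-mean generalized Gaussian
    learned beforehand from natural-image filter responses; with several
    features assumed independent, their self-information values are summed.
    """
    gray = gray.astype(np.float64)
    response = gaussian_filter(gray, 1) - gaussian_filter(gray, 2)
    return -gennorm.logpdf(response, beta, loc=0, scale=scale)

# Usage with placeholder parameters (a trained model would use learned values)
saliency_map = sun_dog_saliency(np.random.rand(120, 160), beta=1.2, scale=0.05)
```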