Salience in imaging is defined as the extent to which an object in an image catches the viewer's eye. Many software packages currently exist that compute salience using a wide range of models and implementations. Here we examine four types of salience programs: feature-based programs, convolutional neural networks, principal component analysis (PCA) programs, and background-subtraction programs. Feature-based programs create a series of maps for individual salience features (e.g., orientation and intensity) and then combine those feature maps into an overall salience map for the entire picture [1, 2, 3]. Convolutional neural networks act as a series of layers, each of which transforms the data and locates the most salient points in an image [6, 9, 10]. PCA programs use the components corresponding to the largest eigenvalues to separate the background from the salient objects. Lastly, background subtraction finds salient areas by comparing an object's intensity distribution to that of the background. In total, this paper compares 19 models, including our own algorithm, on a general database of images to determine each model's accuracy in detecting salience. Additionally, because previous work has shown a correlation between salient points in a mammogram and the presence of a mass, we apply each of these state-of-the-art software packages to a database of mammograms to determine each program's accuracy in detecting abnormalities.
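The background-subtraction idea above can be illustrated with a minimal sketch. This is not any of the surveyed packages' implementations; it simply approximates the background intensity distribution by the image's global mean and standard deviation (an assumption for illustration) and scores each pixel by its deviation from that distribution:

```python
import numpy as np

def background_subtraction_salience(img):
    """Score each pixel by how far its intensity deviates from the
    background intensity distribution, approximated here by the
    image's global mean and standard deviation."""
    mean, std = img.mean(), img.std()
    # Z-score magnitude: pixels far from the background distribution
    # are treated as salient.
    salience = np.abs(img - mean) / (std + 1e-8)
    # Normalize to [0, 1] so the map can be thresholded or displayed.
    return salience / salience.max()

# Toy example: a bright square (the "object") on a dark background
# receives high salience, while the background receives low salience.
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0
sal = background_subtraction_salience(img)
```

A real system would typically estimate the background distribution locally (e.g., per region or per histogram bin) rather than globally, but the comparison of object intensity against a background model is the same.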