From the image datasets described in Section 3.1, we extracted color-, texture- and shape-based features, detailed in Table 4, using different feature extractors. Each type of feature was compared with several distance functions, in order to obtain the best descriptor (a feature extractor combined with a distance function) for each image dataset.
Fig. 6. P&R curves obtained by each approach over the I4 dataset, considering the (a) first, (b) third, (c) fifth and (d) eighth learning iterations.
To do so, seven distance functions were considered: L1, L2, L∞, χ², Canberra, Jeffrey Divergence (JD), and dLog [21].
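For concreteness, these distances can be sketched as below. This is a minimal NumPy sketch under common formulations; the exact definitions used in [21] (e.g., the smoothing constants and the dLog scaling function f) may differ, and all function names are ours.

import numpy as np

def d_l1(a, b):
    # L1 (city-block) distance
    return np.sum(np.abs(a - b))

def d_l2(a, b):
    # L2 (Euclidean) distance
    return np.sqrt(np.sum((a - b) ** 2))

def d_linf(a, b):
    # L-infinity (Chebyshev) distance
    return np.max(np.abs(a - b))

def d_chi2(a, b, eps=1e-12):
    # Chi-squared distance for non-negative features (e.g., histograms)
    return np.sum((a - b) ** 2 / (a + b + eps))

def d_canberra(a, b, eps=1e-12):
    # Canberra distance
    return np.sum(np.abs(a - b) / (np.abs(a) + np.abs(b) + eps))

def d_jeffrey(a, b, eps=1e-12):
    # Jeffrey divergence: a symmetrized, smoothed KL divergence
    m = (a + b) / 2.0
    return np.sum(a * np.log((a + eps) / (m + eps)) +
                  b * np.log((b + eps) / (m + eps)))

def d_dlog(a, b):
    # dLog: compares feature values on a logarithmic scale
    def f(x):
        x = np.asarray(x, dtype=float)
        return np.where(x == 0, 0.0,
               np.where(x <= 1, 1.0,
                        np.ceil(np.log2(np.maximum(x, 1.0))) + 1.0))
    return np.sum(np.abs(f(a) - f(b)))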
Learning strategies with RF in CBIR have been extensively studied to mitigate the semantic gap issue [11]. In order to show the efficacy of our approach, besides the Traditional CBIR (CBIR-T) without relevance feedback strategies, we presented comparisons with well-known RF techniques: the Query Point Movement strategy (QPM) [16] and Query Expansion (QEX) [15]. QPM moves the query center, throughout the iterations, toward denser and more relevant regions of the query space, according to the expert's intention. QEX dilates the query by aggregating new query centers to it.
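A minimal sketch of both strategies, assuming NumPy feature vectors; the Rocchio-style weights in qpm_update and the use of k-means inside qex_expand are illustrative choices, not the exact formulations of [16] and [15].

import numpy as np
from sklearn.cluster import KMeans

def qpm_update(query, relevant, irrelevant, alpha=1.0, beta=0.75, gamma=0.25):
    # Query point movement: shift the query vector toward the centroid
    # of the relevant images and away from the irrelevant ones.
    q = alpha * np.asarray(query, dtype=float)
    if len(relevant) > 0:
        q += beta * np.mean(relevant, axis=0)
    if len(irrelevant) > 0:
        q -= gamma * np.mean(irrelevant, axis=0)
    return q

def qex_expand(relevant, n_centers=3):
    # Query expansion: derive multiple query centers from the relevant
    # images (here via k-means, as one possible instantiation).
    k = min(n_centers, len(relevant))
    return KMeans(n_clusters=k, n_init=10).fit(np.asarray(relevant)).cluster_centers_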
Moreover, in order to improve the learning efficiency of the relevance feedback, active learning has been explored. Many active learning methods have been developed considering different selection criteria [18,19], and they have been applied to different classification tasks and domains. For instance, it is possible to choose samples near the decision boundary of a classifier [18,19]. The insight is to select the most diverse and uncertain samples close to the decision boundary of the classifier, since they are the most difficult samples and, consequently, provide the greatest benefit to the model. In [18], an active learning method with support vector machines (SVM-AL) was proposed for retrieval tasks. The method selects the samples that are closest to the classification boundary of the SVM classifier. There have also been later research efforts [18,30,31]. However, they require the optimization of an objective function, resulting in high computational complexity. Therefore, in the present paper, we also presented comparisons with the well-known and pioneering SVM-AL method proposed by Kremer et al. [18], which is the closest to our proposed approach, since it fuses the active learning paradigm into the CBIR process.
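The boundary-based selection underlying SVM-AL can be sketched as follows; the kernel and regularization settings are illustrative, not those of [18].

import numpy as np
from sklearn.svm import SVC

def svm_al_select(X_labeled, y_labeled, X_pool, n_select=5):
    # Fit an SVM on the images labeled so far (relevant vs. irrelevant)
    # and return the indices of the pool samples closest to its decision
    # boundary, i.e., the most uncertain ones to show to the expert.
    clf = SVC(kernel="rbf", C=1.0).fit(X_labeled, y_labeled)
    margin = np.abs(clf.decision_function(X_pool))
    return np.argsort(margin)[:n_select]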
Our approach can be instantiated with any supervised classifier or clustering technique. However, the analysis of different classifiers and clustering techniques was not the main scope of the present work. Thus, to generate the learning model in our experiments, we used the k-Nearest Neighbor (k-NN) classifier. For the clustering process, we considered the k-means technique.
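As an illustration, this instantiation could look as follows (the function name and parameter values are hypothetical).

from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

def build_model(X_labeled, y_labeled, X_all, k=5, n_clusters=10):
    # k-NN classifier trained on the expert-labeled images, plus a
    # k-means clustering of the whole collection.
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_labeled, y_labeled)
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(X_all)
    return knn, km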
Fig. 7. P&R curves obtained by each approach over the I5 dataset, considering the (a) first, (b) third, (c) fifth and (d) eighth learning iterations.
To evaluate our proposed approach, we generated Precision and Recall (P&R) graphs [32]. As a rule of thumb, the closer the P&R curve is to the top of the graph, the better the technique. To build the P&R graphs, we performed several similarity queries based on the k-nearest neighbor operator, randomly choosing the query images from the image datasets. The number of images retrieved by each similarity search was set to 30 (based on daily medical practice routine). When a given image class contains fewer than 30 samples, the value of k is set to the number of samples of that class. To summarize the results, we employed the mean average precision (MAP), as defined in [32].
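The measures behind the P&R graphs and MAP can be sketched as follows, assuming a 0/1 relevance vector for each ranked result list (e.g., the 30 retrieved images) and the total number of relevant images for the query.

import numpy as np

def precision_recall(ranked_relevance, n_relevant_total):
    # Precision and recall after each retrieved image, given a 0/1
    # relevance vector for a ranked result list.
    rel = np.asarray(ranked_relevance, dtype=float)
    hits = np.cumsum(rel)
    precision = hits / np.arange(1, len(rel) + 1)
    recall = hits / max(n_relevant_total, 1)
    return precision, recall

def average_precision(ranked_relevance, n_relevant_total):
    # Average precision of one query: mean precision at the ranks where
    # relevant images appear. MAP is its mean over all queries.
    precision, _ = precision_recall(ranked_relevance, n_relevant_total)
    rel = np.asarray(ranked_relevance, dtype=bool)
    return precision[rel].sum() / max(n_relevant_total, 1)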
3.3. Results and discussion
Table 5
Best distance function for each feature extractor over the datasets I1–I5. Entries marked with * form the best descriptor for each dataset.

Feature extractor   I1         I2         I3         I4         I5
BIC                 L∞         L2         Canberra   Canberra   Canberra
Edge Histogram      JD         L2         Canberra   Canberra   JD
Norm. Histogram     L∞         L2         Canberra   Canberra   Canberra
Haralick            JD         Canberra   L2         Canberra   Canberra
LBP                 JD*        L2*        —          —          —
Texture Spectrum    Canberra   Canberra   —          —          —
Daubechies          JD         L2         L∞         L∞         L∞*
Haar                L1         Canberra   Canberra   dLog       Canberra
Zernike             L∞         L1         χ²*        χ²*        L1
Initially, we performed an analysis of the best descriptors for the mammographic image datasets. Table 5 presents the best distance function for each feature extractor, with the best descriptor marked for each image dataset. The best descriptors were LBP-JD, LBP-L2, Zernike-χ², Zernike-χ² and Daubechies-L∞ for the datasets I1–I5, respectively. These descriptors were then considered in the experiments comparing our proposed approach against the state-of-the-art ones.