

    Fig. 2. Example of an Inception module.
    Fig. 3. The whole training process of our proposed algorithm.
    problem in medical image analysis research. Moreover, combining the Inception architecture with residual connections can lead to dramatically improved training speeds, and can significantly improve the recognition performance, as explained in [39].
    Among the Inception networks, the Inception-ResNet-v2 network [39] outperforms similarly expensive networks without residual connections. This is because the residual learning framework plays a key role in the improvement of training speed for the Inception architecture. Furthermore, an Inception network using residual connections has deeper convolution layers for effectively extracting high-level features from images.
    In this study, we used Inception networks at different input scales to capture multi-level features of breast tissues. These networks capture detailed information about breast cell types, indicating how similar breast cancer tissue is to normal breast tissue. In addition, the Inception network can extract high-level features from breast cancer images to evaluate the growth rate of breast cancer cells by estimating the density of breast cells in the image. Moreover, since breast cancers can grow, spread, and invade the surrounding breast tissue, we employed Inception networks with different input scales to extract multi-scale features of different breast cancer types.
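The multi-scale idea above can be sketched independently of any particular deep learning framework: each training image is resized to the input resolution expected by each Inception network. The following minimal NumPy sketch uses nearest-neighbour resizing, and the scale values are illustrative placeholders rather than the paper's actual settings:

```python
import numpy as np

def resize_nn(image, size):
    """Nearest-neighbour resize of an (H, W, C) array to (size, size, C)."""
    h, w = image.shape[:2]
    rows = np.arange(size) * h // size   # source row index for each output row
    cols = np.arange(size) * w // size   # source column index for each output column
    return image[rows][:, cols]

def multi_scale_inputs(image, scales=(299, 450, 600)):
    """Produce one input per Inception network, so each network sees the
    same tissue at a different resolution. The scale set is an assumption."""
    return {s: resize_nn(image, s) for s in scales}
```

In practice one would use a proper interpolation routine (e.g. bilinear) before feeding the crops to the networks; the sketch only illustrates how one image fans out into several fixed-size inputs.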
    5. Proposed approach
    In this section, we describe our approach to breast cancer detection in detail. The entire training process of our proposed algorithm is illustrated as a flow chart in Fig. 3. Our approach includes five basic steps. First, we apply a preprocessing method of stain normalization to the H&E stained histology images to transform them into a common space and reduce their variance; this essential step improves the detection performance. In the second step, we present novel augmentation methods that effectively increase the amount of training images based on our original limited training dataset. In the third step, we employ this augmented training dataset to train a set of Inception networks with multi-scale input images. After these training processes, the most discriminative deep features of breast tissues can be extracted from
    Fig. 4. Preprocessing steps of our proposed algorithm.
    these Inception models. Similar to the third step, in the fourth step, the discriminative deep features extracted from the training dataset are used to train a set of gradient boosting trees classifiers to improve the classification performance. In the last step, we employ a novel strategy for combining the gradient boosting trees classifiers into a stronger boosting classifier that can precisely detect breast cancer clues in histology images.
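The final combination step can be sketched generically. Assuming each gradient boosting trees classifier outputs a class-probability matrix for the test images, one simple combination strategy is weighted soft voting; the weighting scheme below is an assumption for illustration, not necessarily the paper's exact strategy:

```python
import numpy as np

def combine_boosted(prob_list, weights=None):
    """Combine per-classifier probability matrices (each (n_samples, n_classes))
    into one stronger classifier by weighted averaging, then predict the
    arg-max class. Equal weights are assumed when none are given."""
    probs = np.stack(prob_list)                 # (n_clf, n_samples, n_classes)
    if weights is None:
        weights = np.ones(len(prob_list)) / len(prob_list)
    avg = np.tensordot(weights, probs, axes=1)  # weighted mean over classifiers
    return avg.argmax(axis=1)                   # final class per sample
```

In a full pipeline, the weights could instead be derived from each classifier's validation accuracy, giving more influence to the stronger members of the ensemble.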
    5.1. Preprocessing
    For an automated breast cancer classifier using deep learning networks, stain normalization is an essential step in improving the detection performance. Because the procedures of tissue staining, fixation, and cutting are not consistent, the appearance of H&E stained histology images varies significantly across laboratories. The preprocessing step of stain normalization transforms the histology images into a common space and reduces their variance. In this study, we used the stain normalization method proposed in [28]. This method uses a logarithmic transformation to compute the optical density of each histology image. The singular value decomposition is applied to this optical density image to estimate the relevant degrees of freedom and construct a 2D projection matrix with a higher variance. We then calculated the intensity histograms for all pixels, whereby the dynamic range of elicited intensities covered the lower 90% of the data.
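The log-transform and SVD steps described above can be sketched as follows. This is a minimal, numpy-only sketch of the relevant part of [28]: the background intensity `io` and transparency threshold `beta` are illustrative defaults, and the later steps (stain-vector angle selection and histogram scaling) are omitted:

```python
import numpy as np

def stain_plane(rgb, io=255.0, beta=0.15):
    """Sketch of the optical-density/SVD step in stain normalization.
    RGB pixels are mapped to optical density by a log transform,
    near-transparent pixels (OD below beta) are discarded, and the two
    singular vectors spanning the plane of highest variance are returned;
    the H&E stain vectors are subsequently estimated within this plane."""
    od = -np.log((rgb.reshape(-1, 3).astype(float) + 1.0) / io)  # optical density
    od = od[(od > beta).any(axis=1)]                             # drop background pixels
    # SVD of the centered OD values yields the 2D projection plane
    _, _, vt = np.linalg.svd(od - od.mean(axis=0), full_matrices=False)
    return vt[:2]                                                # top-2 variance directions
```

The two returned 3-vectors define the projection plane onto which pixel optical densities are mapped before the stain concentrations are normalized.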
    Since access to data is limited owing to privacy concerns, breast cancer detectors are often trained with insufficient training datasets, and the cancer classification performance is hindered by this lack of training data. Recent work has demonstrated the effectiveness of data augmentation in increasing the amount of training data derived from an original, limited training dataset. Augmenting the training data also reduces overfitting of the training models. In this study, we mainly performed geometric augmentations, including reflecting, randomly cropping, rotating, and translating the images. Since the color of H&E stained histology images varies significantly across laboratories owing to differing technical skills, we applied an effective color constancy method, namely gray world, which assumes that the scene in an image is, on average, a neutral gray and that the average reflected color corresponds to the color of the light. Under this assumption, the illuminant color cast can be estimated by computing the average color and comparing it to gray; in this algorithm, the illuminant colors are computed as the mean of each channel of the image. Fig. 4 shows all the basic preprocessing steps of our proposed algorithm.
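The gray-world correction described above can be sketched in a few lines. This is a minimal numpy sketch of the standard gray-world algorithm, not necessarily the exact implementation used in the paper:

```python
import numpy as np

def gray_world(image):
    """Gray-world color constancy: assume the average scene color is a
    neutral gray, so each channel is rescaled by a gain derived from its
    own mean relative to the global mean intensity.
    `image` is an (H, W, 3) float array with values in [0, 255]."""
    channel_means = image.reshape(-1, 3).mean(axis=0)  # per-channel illuminant estimate
    gray = channel_means.mean()                        # target neutral-gray level
    gains = gray / channel_means                       # per-channel correction gains
    return np.clip(image * gains, 0.0, 255.0)
```

After correction, the three channel means coincide (up to clipping of very bright pixels), which removes the laboratory-dependent color cast before the geometric augmentations are applied.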