Breast cancer is a major global health concern, as it is one of the leading causes of cancer mortality worldwide. Breast ultrasonography images are therefore frequently used by doctors to diagnose breast cancer at an early stage. However, complex artifacts and heavy noise in breast ultrasonography images make diagnosis a great challenge. Furthermore, the ever-increasing number of patients screened for breast cancer necessitates automated end-to-end technology for highly accurate diagnosis at low cost and in a short time. To this end, in developing an end-to-end integrated pipeline for breast ultrasonography image classification, we conducted an exhaustive analysis of image preprocessing methods (K-Means++ and SLIC) and four transfer learning models (VGG16, VGG19, DenseNet121, and ResNet50). With a Dice coefficient of 63.4 in the segmentation stage, and an accuracy of 73.72 percent and an F1-score (benign) of 78.92 percent in the classification stage, the combination of SLIC, U-Net, and VGG16 outperformed all other integrated combinations. Finally, we propose an end-to-end integrated automated pipeline framework that preprocesses images with SLIC to capture superpixel features from the complex artifacts of ultrasonography images, performs semantic segmentation with a modified U-Net, and classifies breast tumors using a transfer learning approach with a pre-trained VGG16 and a densely connected neural network. The proposed automated pipeline can be effectively deployed to assist medical practitioners in making more accurate and timely diagnoses of breast cancer.
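As a rough illustration of such a pipeline (a sketch, not the authors' implementation), the snippet below assumes scikit-image for the SLIC superpixel preprocessing and Keras for the pre-trained VGG16 backbone with a densely connected head; the modified U-Net segmentation stage is only indicated in a comment, and names such as `preprocess_slic` and `build_classifier` are hypothetical.

```python
# Minimal sketch of the SLIC -> (U-Net) -> VGG16 pipeline; illustrative only.
import numpy as np
from skimage.segmentation import slic
from skimage.color import label2rgb
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

def preprocess_slic(image, n_segments=200):
    """Replace each SLIC superpixel with its mean colour to suppress speckle noise."""
    segments = slic(image, n_segments=n_segments, start_label=1)
    return label2rgb(segments, image, kind='avg')

def build_classifier(input_shape=(224, 224, 3)):
    """Pre-trained VGG16 frozen as a feature extractor, followed by a densely
    connected head for benign/malignant classification. In the full pipeline,
    a modified U-Net would first segment the tumour region."""
    base = VGG16(weights='imagenet', include_top=False, input_shape=input_shape)
    base.trainable = False  # transfer learning: keep ImageNet features fixed
    x = layers.Flatten()(base.output)
    x = layers.Dense(256, activation='relu')(x)
    out = layers.Dense(1, activation='sigmoid')(x)
    model = models.Model(base.input, out)
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    return model
```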
Image classification is one of the critical tasks in hyperspectral remote sensing image processing. Recently, the convolutional neural network (CNN) has established itself as a powerful classification model by demonstrating excellent performance. The use of a graphical model such as a conditional random field (CRF) further helps capture contextual information and thus improves classification performance. In this paper, we propose a method to classify hyperspectral images by considering both spectral and spatial information via a combined framework consisting of a CNN and a CRF. We use multiple spectral band groups to learn deep features with a CNN, and then formulate a deep CRF with CNN-based unary and pairwise potential functions to effectively extract the semantic correlations between patches consisting of 3-D data cubes. Furthermore, we introduce a deep deconvolution network that improves the final classification performance. We also introduce a new data set and evaluate our proposed method on it, along with several widely adopted benchmark data sets, to verify the effectiveness of our method. By comparing our results with those from several state-of-the-art models, we show the promising potential of our method.
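The paper's deep CRF is learned jointly with the CNN; as a rough illustration only, the sketch below pairs a small 3-D CNN (whose softmax outputs serve as unary potentials over patches from one spectral band group) with a hand-rolled mean-field-style smoothing step standing in for pairwise CRF inference. All names, layer sizes, and hyperparameters here are assumptions, not the authors' configuration.

```python
import numpy as np
from tensorflow.keras import layers, models

def build_patch_cnn(patch_shape=(9, 9, 30), n_classes=16):
    """Small 3-D CNN over a spectral band group; softmax scores act as unary potentials."""
    inp = layers.Input(shape=patch_shape + (1,))
    x = layers.Conv3D(16, (3, 3, 7), activation='relu')(inp)
    x = layers.Conv3D(32, (3, 3, 7), activation='relu')(x)
    x = layers.GlobalAveragePooling3D()(x)
    out = layers.Dense(n_classes, activation='softmax')(x)
    return models.Model(inp, out)

def mean_field_smooth(unary, n_iters=5, w_pair=0.5):
    """Crude mean-field-style update over a grid of per-patch class distributions.
    unary: (H, W, C) softmax scores; the pairwise term rewards agreement with
    the four spatial neighbours, a stand-in for learned pairwise potentials."""
    q = unary.copy()
    for _ in range(n_iters):
        neigh = np.zeros_like(q)
        neigh[1:, :] += q[:-1, :]
        neigh[:-1, :] += q[1:, :]
        neigh[:, 1:] += q[:, :-1]
        neigh[:, :-1] += q[:, 1:]
        logits = np.log(unary + 1e-8) + w_pair * neigh
        q = np.exp(logits - logits.max(axis=-1, keepdims=True))
        q /= q.sum(axis=-1, keepdims=True)
    return q
```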
In computer vision tasks such as object recognition, semantically accurate segmentation of a particular object of interest (OOI) is a critical step. Because the OOI consists of visually distinct fragments, traditional segmentation algorithms based on the identification of homogeneous regions usually do not perform well. To narrow this gap between low-level visual features and high-level semantics, some recent methods employ machine learning to generate more accurate models of the OOI. The main contribution of this paper is the inclusion of spatial relationships among the OOI fragments in the model. For this purpose, we employ Bayesian networks as a probabilistic approach to learning the spatial relationships, which in turn become evidence used when segmenting future instances of the OOI. The algorithm presented in this paper also uses multiple instance learning to obtain prototypical descriptions of each OOI fragment based on low-level visual features. Experimental results on both artificial and real image datasets indicate that the addition of spatial relationships improves segmentation performance.
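To make the spatial-relationship idea concrete, here is a toy NumPy sketch (not the paper's Bayesian network implementation): it estimates categorical distributions over discretised pairwise relations between fragment centroids from training instances, then uses them as evidence to score how typical the layout of a candidate segmentation is. The multiple-instance-learning appearance stage is omitted, and all names are hypothetical.

```python
import numpy as np
from collections import defaultdict

RELATIONS = ('above', 'below', 'left', 'right')

def spatial_relation(c_a, c_b):
    """Discretise the position of fragment centroid c_b relative to c_a."""
    dy, dx = c_b[0] - c_a[0], c_b[1] - c_a[1]
    if abs(dy) >= abs(dx):
        return 'below' if dy > 0 else 'above'
    return 'right' if dx > 0 else 'left'

class SpatialModel:
    """Toy stand-in for a Bayesian network over fragment layout: for each
    fragment pair, a categorical distribution over discretised relations."""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(float))

    def fit(self, training_instances):
        # training_instances: list of dicts {fragment_label: centroid (y, x)}
        for inst in training_instances:
            labels = sorted(inst)
            for i, a in enumerate(labels):
                for b in labels[i + 1:]:
                    self.counts[(a, b)][spatial_relation(inst[a], inst[b])] += 1

    def log_likelihood(self, inst):
        """Score a candidate segmentation by how typical its fragment layout is."""
        ll, labels = 0.0, sorted(inst)
        for i, a in enumerate(labels):
            for b in labels[i + 1:]:
                c = self.counts[(a, b)]
                total = sum(c.values()) + len(RELATIONS)  # Laplace smoothing
                ll += np.log((c[spatial_relation(inst[a], inst[b])] + 1) / total)
        return ll
```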
Semantically accurate segmentation of a particular Object Of Interest (OOI) in an image is an important but challenging step in computer vision tasks. Our recently proposed object-specific segmentation algorithm learns a model of the OOI that includes information on both the visual appearance of the OOI components and the spatial relationships among them. However, its performance depends heavily on the assumption that the visual appearance variability among OOI instances is low. We present an extension to our algorithm that relaxes this assumption by incorporating shape information into the OOI model. Experimental results and an ANOVA-based statistical test confirm that the incorporation of shape has a highly significant positive effect on segmentation performance.
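The abstract does not specify the shape representation, so the snippet below is only one plausible illustration: it computes translation-, scale-, and rotation-invariant Hu moments of a fragment mask with scikit-image, features that could be appended to the appearance description of each OOI component. The function name and the choice of Hu moments are assumptions, not the authors' method.

```python
import numpy as np
from skimage.measure import label, regionprops

def shape_descriptor(binary_mask):
    """Hu-moment shape features of the largest connected component in a
    fragment mask (assumed non-empty). Because Hu moments are invariant to
    translation, scale, and rotation, visually dissimilar instances of the
    same OOI component can still match on shape."""
    regions = regionprops(label(binary_mask.astype(int)))
    largest = max(regions, key=lambda r: r.area)
    hu = largest.moments_hu
    # Log-scale for numerical stability; sign preserved.
    return np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
```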