
Texture features in the Shearlet domain for histopathological image classification

Abstract

Background

Various imaging modalities are available (e.g., magnetic resonance, x-ray, ultrasound, and biopsy), and each modality can reveal different structural aspects of tissues. However, the analysis of histological slide images obtained via biopsy is considered the gold standard for determining whether cancer exists, and it can also reveal the stage of the cancer. Supervised machine learning can therefore be used to classify histopathological tissues. Several computational techniques have been proposed to study histopathological images with varying levels of success. Handcrafted techniques based on texture analysis are often proposed to classify histopathological tissues in combination with supervised machine learning.

Methods

In this paper, we construct a novel feature space to automate the classification of tissues in histology images. Our feature representation integrates various feature sets into a new texture representation. All of our descriptors are computed in the complex Shearlet domain. With complex coefficients, we investigate not only the use of the magnitude coefficients, but also the effectiveness of incorporating the relative phase (RP) coefficients into the input feature vector. Four texture-based descriptors are extracted from the Shearlet coefficients: co-occurrence texture features, Local Binary Patterns, the Local Oriented Statistics Information Booster, and Segmentation-based Fractal Texture Analysis. Each set of these attributes captures significant local and global statistics. Therefore, we study them individually, and additionally integrate them to boost the accuracy of classifying histopathological tissues when fed to classical classifiers. To tackle the problem of high dimensionality, our proposed feature space is reduced using principal component analysis. We use two classifiers to demonstrate the effectiveness of our proposed feature representation: a Support Vector Machine (SVM) and a Decision Tree Bagger (DTB).

Results

Our feature representation delivered high performance on four public datasets. The best achieved accuracies were 92.56% on the multi-class Kather dataset, 91.73% on BreakHis, 98.04% on Epistroma, and 96.29% on Warwick-QU.

Conclusions

Our proposed method in the Shearlet domain for the classification of histopathological images proved effective when evaluated on four datasets that exhibit different levels of complexity.

Background

In medical imaging, the study of histology images is considered a significant task [1]. Advances in technology allow histological slides to be digitized and stored in digital form [2]. The manual inspection of histological slides by a histopathologist remains indispensable. However, computational techniques from image processing and machine learning can be a great asset in histopathology, assisting with the pre-screening/classification of easy cases so that more time can be devoted to studying the challenging slides. More importantly, computer-assisted diagnosis in histopathology can play a significant role in minimizing (and ultimately eradicating) human error, e.g., by the pathologist [3].

Early identification of cancer is crucial so that the pathologist can propose an appropriate treatment for the patient. The process of histopathological tissue classification has been tackled in different ways. We divide our review of such techniques into three groups: texture-based, Shearlet-based, and deep feature-based methods.

Texture-based techniques are frequently investigated for the analysis and classification of histopathological tissues. For instance, Kather et al. [4] proposed computing various texture features and classifying colorectal cancer histology into eight classes using an SVM (with 10-fold cross-validation). The fusion of different texture features delivered an accuracy of 87.4%.

Similarly, Linder et al. [5] investigated a number of descriptors (LBP, Haralick texture attributes, and Gabor filters) to classify their newly introduced dataset, Epistroma. The extracted descriptors are fed to an SVM model to distinguish between epithelium and stroma tissues. Comparably, Spanhol et al. [3] established a new dataset, the breast cancer histopathology dataset (BreakHis), which consists of benign and malignant tissues. Spanhol et al. used different techniques to classify BreakHis tissues as benign or malignant.

Bruno et al. [6] proposed applying LBP to the curvelet coefficients of the transformed image (the authors used the Digital Database for Screening Mammography, the Breast Cancer Digital Repository, and the UCSB biosegmentation benchmark). To reduce the number of descriptors, the authors used statistical analysis of variance (ANOVA). Like Bruno et al., we use descriptors computed from a directional wavelet transform, but we demonstrate that it is advantageous to integrate various descriptors computed in the Shearlet domain to create the feature space. An idea more closely related to our technique was proposed by Ribeiro et al. [7], who computed descriptors from both spatial images and curvelet coefficients to classify colorectal histology tissues. In contrast to Ribeiro et al., we utilize both the magnitude and phase coefficients of the complex Shearlet domain. A similar approach to ours was proposed by Vo et al. [8], who extracted both phase and magnitude descriptors for textured image retrieval, but applied them to non-medical texture image samples.

The Shearlet transform has been used in several previous studies. It has the advantage of constructing an anisotropic wavelet system. However, typically only the magnitude coefficients are utilized for the classification of textured images. For instance, He et al. [9] classified textured images by proposing Shearlet-based descriptors: local energy descriptors computed from the Shearlet coefficients are quantized and encoded, and the energy histograms of all levels are accumulated to form the image characteristics. Zhou et al. [10] instead utilized only specific levels of the Shearlet decomposition for breast tumor ultrasound image classification.

Dong et al. [11] suggested a technique for textured image classification and retrieval in which the dependencies of adjacent Shearlet subbands are modeled using linear regression. To represent the Shearlet subbands for classification, energy descriptors are computed; for retrieval, statistics from both the contourlet and Shearlet domains are combined.

Meshkini and Ghassemian [12] proposed classifying textured images using the inner product of the co-occurrence matrix and the magnitude coefficients of the Shearlet transform. Unlike many published studies, we use not only the magnitude coefficients but also the phase coefficients of the Shearlet transform in our work.

Deep feature descriptors are extracted from a pre-trained deep learning model, in particular a convolutional neural network (CNN), typically trained on non-medical images. These models are either used without fine-tuning (i.e., as unsupervised feature extractors) or fully/partially retrained on biomedical images. For example, Song et al. [13] proposed classifying the BreakHis dataset using feature vectors extracted from a CNN, with the extracted descriptors encoded using the Fisher Vector method. Similarly, Gupta et al. [14] extracted deep features from a fine-tuned DenseNet, but from multiple layers, to classify the BreakHis dataset. Differently, Wang et al. [15] utilized color deconvolution to obtain the hematoxylin and eosin channels separately; one CNN model is trained using the hematoxylin components and another using the eosin components, and the outputs of the two CNNs are fused to obtain the final prediction.

Fig. 1 Summary of our work approach

As discussed above, computational techniques have previously been applied to predict the class of a tissue type in histological images; both conventional and deep learning (DL) techniques have been developed [16]. However, given the scarcity of well-curated histopathological datasets for training/testing deep neural networks [17, 18], training a deep learning model can be challenging. Therefore, in our study, we present novel Shearlet-based hand-engineered texture descriptors to classify tissue types. Namely, co-occurrence texture descriptors [19, 20], Local Binary Patterns (LBP) [21], the Local Oriented Statistics Information Booster (LOSIB) [22], and Segmentation-based Fractal Texture Analysis (SFTA) [23] are used in our study, where each of these descriptor sets is computed in the Shearlet domain [24]. These features are then used to train/test two classifiers: a Support Vector Machine (SVM) [25] and a Decision Tree Bagger (DTB) [26].

Notably, our modeling technique takes advantage of the directionality in the complex Shearlet transform, where we utilize both the magnitude and phase coefficients. Those coefficients are summarized using various textural methods that can capture local and global attributes [19, 23] of histopathological tissues. Most interestingly, computing such statistics from the directional sub-bands can capture significant information that may be missed in the spatial domain because of the complexity of histopathological tissues.

In our research, we investigate both parametric and non-parametric (i.e., robust in classification [27]) classification models. We include DTB because it is an ML method that is considered to lead to explainable decisions, unlike SVM, which is considered a black box classifier [28]. We show that the fusion of some sets of descriptors can result in a robust feature representation. Thereafter, we employ principal component analysis to further enhance the classification results while using a reduced set of features.

Fig. 2 Samples of each dataset that we have used in our study

This paper is an extension of work previously presented at the Biomedical and Health Informatics (BHI) Workshop 2019 [29]. Our main contributions in this extension are summarized below:

  • We propose our feature space, namely Shearlet-based texture descriptors for histopathological image classification, and present a comprehensive justification for it.

  • We demonstrate that when these attributes are used to train a conventional ML model (i.e., SVM and DTB in this extended version), they provide better classification performance than several existing methods on the four standard datasets used in this research.

  • We present an extended study of our feature representations expressed in principal components that decreases computational cost without significantly compromising accuracy.

Methods

Figure 1 provides an overview of our proposed system for the classification of histopathological images. A detailed description of each component of our method is provided in the following sections. MATLAB® 2017b is utilized for the implementation of our techniques.

Our proposed method consists of three steps:

  • Step 1: For a given histopathological image, we apply the complex Shearlet transform. With the complex coefficients, we calculate the magnitude and relative phase (RP).

  • Step 2: We then extract four sets of features from the directional sub-bands of the RP and magnitude: co-occurrence based texture features, local binary patterns, the local oriented statistics information booster, and segmentation-based fractal texture analysis.

  • Step 3: We then apply one of two different classifiers: DTB or SVM.

Datasets

Our proposed technique has been evaluated on four different histopathological datasets that exhibit different levels of complexity:

  1. Multi-class Kather’s dataset [4] consists of a total of 5000 images, each of size \(150 \times 150\) pixels. This dataset provides tissue types belonging to eight classes (tumor epithelium, simple stroma, complex stroma, immune cells, debris, normal mucosal glands, adipose tissue, and background (no tissue)), with 625 tissue samples per class. An example of each tissue type is shown in Fig. 2a.

  2. The Breast Cancer Histopathological (BreakHis) dataset [3] contains tissue from two categories: benign and malignant breast tumors (examples are shown in Fig. 2b). The total number of samples from the two tissue types is 7909 images, each of size \(700 \times 460\) pixels. It is worth noting that each histological slide is captured and stored at various magnification factors: \(40\times\), \(100\times\), \(200\times\), and \(400\times\). The distribution of benign/malignant samples at each magnification factor is as follows: \(625/1370\) (\(40\times\)), \(644/1437\) (\(100\times\)), \(623/1390\) (\(200\times\)), and \(588/1232\) (\(400\times\)). Spanhol et al. [3] proposed using each magnification factor as a separate dataset. However, in our study, we combine all magnification factors into one dataset, as did Jonnalagedda et al. [30]. This is motivated by the fact that each magnification factor captures different information [31].

  3. The Epistroma dataset contains variable-size histopathology images that belong to two tissue types (as shown in Fig. 2c): stroma (551 samples) and epithelium (825 samples) [5].

  4. The Warwick-QU dataset was obtained at a magnification factor of \(20\times\) from colon histology sections. It is a binary dataset of benign (74 samples) and malignant (91 samples) tissues [32]. Examples of the tissue types are shown in Fig. 2d.

Complex Shearlet transform

In this study, we present a new perspective for computing attributes that summarize the statistical information/distribution of the Shearlet magnitude/phase coefficients for every scale and orientation [24]. Given a complex coefficient \(C = x + iy\), where the first term expresses the real part and the second term the imaginary part, we can compute the magnitude \(\rho = \sqrt{x^2 + y^2}\) and the phase \(\theta = \tan^{-1}(y/x)\). In contrast to existing studies [9, 10], we do not only use the energy of the complex Shearlet transform, but also examine the usefulness of the phase components and their potential for providing a more robust characterization for medical image classification. We experimentally verify that using the phase alongside the magnitude coefficients can, in fact, boost the classification performance (see the Results and discussion section).
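
To make this decomposition concrete, the following minimal Python/NumPy sketch splits a complex sub-band into its magnitude and phase. It is an illustration only: the paper's implementation is in MATLAB with ShearLab, and `coeffs` is assumed to be the complex coefficient array of one directional sub-band from whatever Shearlet toolbox is at hand.

```python
import numpy as np

def magnitude_and_phase(coeffs: np.ndarray):
    """Split complex Shearlet coefficients C = x + iy into magnitude/phase.

    `coeffs` is a 2-D complex array for one directional sub-band (assumed
    to come from a ShearLab-style toolbox; the paper used MATLAB).
    """
    rho = np.abs(coeffs)      # magnitude: sqrt(x^2 + y^2)
    theta = np.angle(coeffs)  # phase: atan2(y, x), in (-pi, pi]
    return rho, theta
```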

Our work is motivated by research completed by Vo et al. [8], who built a feature space (consisting of magnitude and relative phase (RP)) computed from a complex directional filter bank for textured image retrieval. In our study, we adopt this idea but instead compute the relative Shearlet phase components. The complex Shearlet transform is applied to a histopathological image of size \(M \times M\), transforming it into S scales where every scale consists of K directions. Let \(\theta_{sk}(i, j)\) represent the phase angle component at location (i, j), scale s, and direction k, where \(s = 1, 2, \ldots, S\) and \(k = 1, 2, \ldots, K\). In our study, we use \(S = 4\) and \(K = 8\) per scale.

Now, we can compute the relative Shearlet phase for a phase component at position (ij) of a directional sub-band in the following manner:

$$\begin{aligned} RP_{sk}(i, j) = {\left\{ \begin{array}{ll} \theta_{sk}(i, j) - \theta_{sk}(i, j + 1), & \text{if } 1 \leqslant k \leqslant \frac{K}{2},\\ \theta_{sk}(i, j) - \theta_{sk}(i + 1, j), & \text{if } \frac{K}{2} < k \leqslant K. \end{array}\right. } \end{aligned}$$
(1)

We choose horizontal and vertical differences because of the orientation of shearing in the Shearlet transform. The transform has the advantage of being multi-scale and multi-directional, which makes it a significant tool for multi-resolution analysis of histopathological tissues. Therefore, we utilize each directional sub-band (from both the magnitude and the RP) to calculate statistical attributes.
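
As a concrete reading of Eq. (1), the sketch below computes the relative phase of one directional sub-band in Python/NumPy. `theta` is assumed to be the phase array of sub-band k (of K = 8 directions per scale); the finite difference drops the last column or row, a boundary choice the paper does not specify.

```python
import numpy as np

def relative_phase(theta: np.ndarray, k: int, K: int = 8) -> np.ndarray:
    """Relative Shearlet phase of Eq. (1) for direction index k in 1..K.

    The first half of the directions difference along columns (horizontal
    neighbor); the rest difference along rows (vertical neighbor).
    """
    if 1 <= k <= K // 2:
        return theta[:, :-1] - theta[:, 1:]  # theta(i, j) - theta(i, j+1)
    return theta[:-1, :] - theta[1:, :]      # theta(i, j) - theta(i+1, j)
```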

The Shearlet coefficients localize spatially distributed discontinuities and are contrast invariant [33]. Curvelets [34] and contourlets [35] have similar properties to Shearlets and have been used for image classification [6, 7]; however, both have certain limitations [24].

In this study, we use a publicly available implementation of the complex Shearlet transform, ShearLab [36]. After each histology image is transformed, we summarize the magnitude and RP of the Shearlet components using four different techniques, each briefly detailed below.

Co-occurrence matrix (CM)

The CM was introduced by Haralick et al. [20]; later studies presented further statistics that can be computed from the CM [19, 37]. Given two pixels i and j separated by a pixel distance (PD), the content of a gray-level image can be formulated as a matrix of relative frequencies \(F_{ij}\). A further hyper-parameter is the orientation at which the relative frequencies are computed. As a result, we have a CM of relative frequencies for a quantized orientation and distance between neighboring pixels.

In our application of the CM to the directional sub-bands of the Shearlet coefficients, we calculate the CMs using a constant distance of \(PD = 1\) but varying orientations (\(0^{\circ}\), \(45^{\circ}\), \(90^{\circ}\), and \(135^{\circ}\)); hence, we have four CMs. To obtain rotation-invariant statistics, we compute the mean of those four CMs [4]. From this averaged CM, we extract twenty textural features (contrast, correlation, energy, autocorrelation, cluster prominence, cluster shade, dissimilarity, entropy, homogeneity, maximum probability, sum of squares, variance, sum average, sum variance, sum entropy, difference variance, difference entropy, information measure of correlation, inverse difference normalized, and inverse difference moment normalized). Computing these statistics from the CM of each magnitude/RP sub-band yields a total of \(20 \times 32\) (the number of directional sub-bands) \(= 640\) attributes for each of the magnitude and RP.

In our study, we obtain the CM for the Shearlet magnitude and RP coefficients of each directional sub-band instead of the spatial domain. In the spatial domain, the CM analyzes the statistical information of the gray-level image; in our application, we analyze the crucial directionality characteristics in the Shearlet domain, potentially leading to a robust feature space for histopathological image classification.
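
The sketch below illustrates this step with scikit-image, which covers only a subset of the twenty statistics listed above (the remainder would need custom code). The quantization of the real-valued sub-band to 32 gray levels is our assumption; the paper does not state its quantization.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
# (spelled greycomatrix/greycoprops before scikit-image 0.19)

def cm_features(subband: np.ndarray, levels: int = 32) -> np.ndarray:
    """Rotation-averaged co-occurrence features of one directional sub-band.

    The real-valued magnitude/RP sub-band is quantized to `levels` gray
    levels, four CMs (PD = 1 at 0, 45, 90, 135 degrees) are averaged, and
    a subset of the Haralick-style statistics is read off.
    """
    lo, hi = subband.min(), subband.max()
    q = np.round((subband - lo) / (hi - lo + 1e-12) * (levels - 1)).astype(np.uint8)
    cm = graycomatrix(q, distances=[1],
                      angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                      levels=levels, symmetric=True, normed=True)
    cm = cm.mean(axis=3, keepdims=True)  # rotation-invariant average [4]
    props = ["contrast", "correlation", "energy",
             "dissimilarity", "homogeneity"]
    return np.array([graycoprops(cm, p)[0, 0] for p in props])
```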

Local binary pattern (LBP)

The LBP [21] texture attributes provide a rotation-invariant encoding of the local gray-level occurrences of an image, naturally classified as 'uniform' patterns. Again, we are concerned with the Shearlet magnitude and RP coefficients of a histopathological image (which can exhibit complex patterns). Utilizing the Shearlet coefficients, which encapsulate the field-dominant direction characteristics [38, 39], can potentially lead to a robust summarization of such directionalities (representing crucial structural details, e.g., step edges [40]).

Hence, we apply the LBP to the magnitude and RP of each directional sub-band of the Shearlet coefficients to encode the field-dominant directions. For a given Shearlet (SH) coefficient (representing the magnitude or RP) at location (i, j), the LBP of the SH coefficient is computed as follows:

$$\begin{aligned} LBP_{P,R} = \sum_{p=0}^{P - 1} s(SH_p - SH_c)\,2^{p}, \quad s(x) = {\left\{ \begin{array}{ll} 1, & x \geqslant 0,\\ 0, & x < 0. \end{array}\right. } \end{aligned}$$
(2)

where \(SH_c\) is the central RP (or magnitude) value, \(SH_p\) are the surrounding RP (or magnitude) values in the circular neighborhood, and P is the number of neighbors.

In our application of LBP, we compute a feature vector for every directional sub-band using a radius of \(R = 2\) and a neighborhood of \(P = 8\), with the window size set equal to the directional sub-band size. Therefore, we obtain a feature vector of length \(P + 2\) for each sub-band [21]. After concatenating the feature vectors of all directional sub-bands, we obtain a feature vector of length \(10 \times 32 = 320\) attributes for each of the magnitude and RP.
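
A possible realization with scikit-image is sketched below; its 'uniform' mapping is the rotation-invariant uniform LBP of [21] and yields exactly the \(P + 2\)-bin histogram described above.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_features(subband: np.ndarray, P: int = 8, R: int = 2) -> np.ndarray:
    """Rotation-invariant uniform LBP histogram of one sub-band.

    `subband` holds the Shearlet magnitude or RP coefficients of one
    directional sub-band; codes take values 0..P+1, hence P + 2 bins.
    """
    codes = local_binary_pattern(subband, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=np.arange(P + 3), density=True)
    return hist  # length P + 2 = 10 for P = 8
```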

Local oriented statistic information booster (LOSIB)

The LOSIB [22] technique first computes the absolute difference \(d_p\) between a central Shearlet coefficient and each of its P neighboring coefficients (for all central magnitude (or RP) coefficients c of a directional sub-band) as follows: \(d_p(x_c, y_c) = |SH_c - SH_p|\), where \(p \in \{0, 1, \ldots, P - 1\}\).

Then, the mean of the differences across the same directionality is computed, as follows:

$$\begin{aligned} \mu_p = \frac{\sum_{x_c = 1}^{M}\sum_{y_c = 1}^{N} d_p(x_c, y_c)}{M \cdot N} \end{aligned}$$
(3)

where N and M represent the height and width of the image, respectively.

In our application of LOSIB, we set the radius \(R = 1\) and the neighborhood \(P = 8\). Therefore, we obtain a feature vector of length P for every directional sub-band, for a total of \(8 \times 32\) (the number of directional sub-bands) \(= 256\) descriptors for each of the magnitude and RP.
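
The following Python sketch computes the LOSIB vector of one sub-band. It uses the eight integer-pixel neighbors at radius 1 and wrap-around borders via np.roll; both are simplifications we assume here rather than details taken from the paper.

```python
import numpy as np

def losib_features(subband: np.ndarray, P: int = 8) -> np.ndarray:
    """LOSIB descriptor (Eq. 3): mean absolute difference between each
    coefficient and its p-th neighbor at radius R = 1, one mean per
    direction, giving a length-P vector per sub-band."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]  # the 8-neighborhood
    feats = []
    for dy, dx in offsets[:P]:
        # wrap-around shift (a border simplification for this sketch)
        shifted = np.roll(np.roll(subband, dy, axis=0), dx, axis=1)
        feats.append(np.mean(np.abs(subband - shifted)))
    return np.array(feats)
```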

Segmentation-based fractal texture analysis (SFTA)

Our application of SFTA to the directional sub-bands follows Costa et al. [23]. SFTA first processes the input directional sub-band using a two-threshold binary decomposition (TTBD). As a result, various binary images are generated, from which the following attributes are computed: the fractal dimension of the region boundaries, the average gray level, and the count of pixels belonging to the region.

Using the technique of Costa et al. [23], we set the number of thresholds to \(n_t = 4\). As a result, we obtain 21 attributes for every directional sub-band; hence, in total, we have \(21 \times 32\) (the number of directional sub-bands) \(= 672\) attributes for each of the magnitude and RP.
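
The sketch below gives a simplified flavor of SFTA in Python: multi-Otsu thresholds stand in for the recursive TTBD of Costa et al., box counting is done over the thresholded regions rather than their borders, and only the threshold-pair images are used, so the per-sub-band feature count differs from the paper's 21. All of these are assumptions made for brevity.

```python
import numpy as np
from skimage.filters import threshold_multiotsu

def box_counting_dimension(binary: np.ndarray) -> float:
    """Box-counting estimate of the fractal dimension of a binary mask.

    (The original SFTA applies this to region borders; we use the region
    mask itself as a simplification.)
    """
    sizes, counts = [2, 4, 8, 16, 32], []
    h, w = binary.shape
    for s in sizes:
        boxed = binary[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(boxed.any(axis=(1, 3))))
    counts = np.maximum(counts, 1)
    # log N(s) = D * log(1/s) + c  =>  slope is the dimension D
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

def sfta_features(subband: np.ndarray, n_t: int = 4) -> np.ndarray:
    """Per binary image: (fractal dimension, mean gray level, pixel count)."""
    lo, hi = subband.min(), subband.max()
    img = (subband - lo) / (hi - lo + 1e-12)
    ts = threshold_multiotsu(img, classes=n_t + 1)  # n_t thresholds
    feats = []
    for t_low, t_high in zip(ts, list(ts[1:]) + [img.max()]):
        binary = (img > t_low) & (img <= t_high)
        feats += [box_counting_dimension(binary),
                  img[binary].mean() if binary.any() else 0.0,
                  float(binary.sum())]
    return np.array(feats)
```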

Fusion of feature sets

The aforementioned sets of descriptors are first examined individually for their robustness when used to train a classifier. It is worth noting that each set of descriptors captures different intrinsic statistical information. Therefore, we examine combinations of our descriptors that we expect to improve the classification performance. We investigate the following combinations:

  • Fusion #1: (CM + LBP + LOSIB + SFTA) descriptors of Shearlet RP and/or magnitude.

  • Fusion #2: (CM + LOSIB) descriptors of Shearlet RP and/or magnitude.

  • Fusion #3: (CM + LBP + LOSIB) of Shearlet RP and (CM + LBP + SFTA + CM Dot Shearlet coefficients [12]) of Shearlet magnitude.

The aforementioned combinations are selected based on the merit of the individual feature sets that led to good classification results.
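
The fusion itself is plain concatenation. The sketch below shows how a fused per-image vector might be assembled from the per-sub-band extractors sketched earlier; the function names and the ordering of sub-bands are our own conventions, not the paper's.

```python
import numpy as np

def image_descriptor(mag_subbands, rp_subbands, extractors):
    """Concatenate one descriptor set per sub-band for magnitude and RP.

    `extractors` is a list of per-sub-band feature functions, e.g.
    [cm_features, losib_features] for Fusion #2 (names from the sketches
    above); `mag_subbands`/`rp_subbands` are the 32 directional sub-bands.
    """
    parts = [extract(sb)
             for extract in extractors
             for sb in list(mag_subbands) + list(rp_subbands)]
    return np.concatenate(parts)
```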

Principal component analysis (PCA) for feature reduction

In the previous section, we investigated feature fusion. Evidently, combining different descriptors increases the number of features. Therefore, the best-performing fusion strategy is processed with PCA to find a reduced subspace. The PCA coefficients are rotated to maximize the orthomax criterion [41] and to obtain a final basis with simple structure [42]. Once the principal components (PCs) are rotated to maximize the varimax criterion, they are used to project the combined descriptors into a decorrelated space.
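
A loose Python rendering of this reduction step is sketched below: PCA is fit on the training fold, the component loadings are varimax-rotated (the standard orthomax iteration with \(\gamma = 1\)), and both folds are projected onto the rotated basis. This mirrors, but does not reproduce, the MATLAB pcacov/rotatefactors workflow.

```python
import numpy as np
from sklearn.decomposition import PCA

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Orthomax rotation of a (features x components) loadings matrix;
    gamma = 1 gives the varimax criterion used in the paper."""
    p, k = loadings.shape
    R, d = np.eye(k), 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vh = np.linalg.svd(
            loadings.T @ (L ** 3 - (gamma / p) * L @ np.diag(np.sum(L ** 2, axis=0))))
        R, d_old, d = u @ vh, d, s.sum()
        if d_old != 0 and d / d_old < 1 + tol:
            break
    return loadings @ R

def reduce_features(X_train, X_test, n_components):
    """Fit PCA on the training fold, rotate the loadings, project both folds."""
    pca = PCA(n_components=n_components).fit(X_train)
    W = varimax(pca.components_.T)  # (features x components)
    return (X_train - pca.mean_) @ W, (X_test - pca.mean_) @ W
```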

Classification algorithms

In this work, we introduce a new enhanced technique for analysis and feature extraction from the Shearlet transform. We also analyze the performance of two classifiers on different tissue types and show that the choice of classifier has a minimal effect once robust features are extracted. In particular, we consider one of the classifiers most widely used in biomedical research: the Support Vector Machine (SVM). In addition, we consider a tree-based ensemble, the Decision Tree Bagger (DTB) [26], which has the trait of being non-parametric: no distance between feature vectors is computed when constructing decision trees. In contrast, SVM performance depends on the kernel, which can influence the classification performance [43]. Both SVM and DTB are briefly described below:

Support vector machine (SVM)

In our study, we utilize an SVM with pairwise classification (i.e., one-versus-one class decisions) [44]. Prior to training, we normalize all feature vectors to have equal mean and variance. The kernel function transforms the input data into a higher-dimensional feature representation, from which a hyperplane is formulated for classifying the input tissue dataset.

SVM training requires the solution of a quadratic programming (QP) problem. To simplify this, a sequential minimal optimization (SMO) solver is used in our study [45]; it decomposes the QP into a series of smaller QP problems during SVM training.

SVM training is impacted by the choice of the kernel function. In our study, we choose a radial basis function kernel, as it approximates the training dataset well. The regularization parameter C is then the only remaining parameter to optimize. To choose the best value of C, we conducted trials with values in [1, 5] in increments of 1. We found that, in general, \(C = 5\) delivered the best classification performance on the testing dataset; therefore, \(C = 5\) is used in all of our experiments. More details about the SVM can be found in [46].
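
In scikit-learn terms, the setup described above might look as follows; libsvm's SMO-style solver and internal one-vs-one multi-class handling correspond to the choices described, while the pipeline itself is an illustrative stand-in for the MATLAB implementation.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# RBF-kernel SVM on standardized features with the paper's small grid
# over C in {1, ..., 5}; SVC solves the QP with an SMO-style solver
# (libsvm) and uses one-vs-one multi-class decisions internally.
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
search = GridSearchCV(svm, {"svc__C": [1, 2, 3, 4, 5]}, cv=10)
# search.fit(X_train, y_train)  # the paper found C = 5 best overall
```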

Decision tree bagger (DTB)

DTB is an ensemble classifier in which each member is a classic decision tree generated using a random selection of attributes at each node to determine the split, as in a random forest [47]. DTB generates multiple bootstrap replicas of the training set (sampling with replacement) and trains a decision tree on each. A bagging technique then combines the results of the decision trees, which has the advantage of minimizing susceptibility to over-fitting.
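
A rough scikit-learn equivalent is sketched below. The paper used MATLAB's TreeBagger; the tree count and the per-split feature sampling via max_features are assumptions for this sketch.

```python
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

# Bagged decision trees with random per-split feature selection, as in
# a random forest. (Use base_estimator= on scikit-learn < 1.2.)
dtb = BaggingClassifier(
    estimator=DecisionTreeClassifier(max_features="sqrt"),
    n_estimators=100,   # assumed; the paper does not state a tree count
    bootstrap=True)     # bootstrap replicas of the training set
```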

Fig. 3 Classification performance on Kather's dataset

Fig. 4 Classification performance on BreakHis dataset

Cross-validation (CV)

We utilize cross-validation (CV) to obtain robust statistical results and to generalize the classification results of each model. This approach is commonly used in the literature to examine a proposed technique, so our choice of the number of folds for each dataset follows previous studies. The following datasets are divided using 10-fold CV: multi-class Kather's (as in Kather et al. [4]), Epistroma (as in Ramalho et al. [48]), and Warwick-QU (as in Ribeiro et al. [7]).

The datasets are divided into mutually exclusive folds of approximately equal size (some of the datasets are imbalanced): nine folds are used to build a model, and one fold is used to test it. The process is repeated 10 times, such that the test set is different each time. Finally, the overall performance metrics are estimated by taking the mean over the 10 independently built models. The BreakHis dataset, however, is split into 7 folds for training and testing, as in a previous study [30].
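
The following sketch shows the CV protocol in scikit-learn form; stratified folds are an assumption we add to keep class proportions stable in the imbalanced datasets.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

def cv_accuracy(model, X, y, n_folds=10, seed=0):
    """Mean/std accuracy over mutually exclusive folds.

    Use n_folds=7 for BreakHis and 10 for the other datasets.
    """
    skf = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=seed)
    accs = [model.fit(X[tr], y[tr]).score(X[te], y[te])
            for tr, te in skf.split(X, y)]
    return float(np.mean(accs)), float(np.std(accs))
```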

Fig. 5 Classification performance on Epistroma dataset

Fig. 6 Classification performance on Warwick-QU dataset

Performance metric

To evaluate the performance of each built model, we compute commonly used measures: accuracy (ACC), AUC (area under the receiver operating characteristic (ROC) curve), sensitivity (Sen), and precision (Prec). Each metric is defined below; a short illustrative sketch follows the list.

  • \(ACC = (TP + TN)/N\). This metric represents the proportion of test examples that are correctly classified. True positives (TP) and true negatives (TN) are the correctly classified examples, and \(N = (TP + TN) + (FP + FN)\), where false positives and false negatives are denoted FP and FN, respectively.

  • AUC: This metric relies on the ROC plot (false-positive rate \(= FP/(TN+FP)\) on the x-axis against sensitivity on the y-axis). The accumulated area under the ROC curve yields the AUC value (between 0 and 1, where a value near 1 indicates a well-performing model).

  • \(Sen = TP/(TP+FN)\) measures the ability of a model to correctly identify examples of the positive class.

  • \(Prec = TP/(TP+FP)\) measures the proportion of positive predictions that are correct.
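
The sketch below computes the four metrics for one binary test fold with scikit-learn; for the multi-class Kather dataset an averaging scheme (e.g., macro) would be needed, which the paper does not spell out.

```python
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, roc_auc_score)

def evaluate(y_true, y_pred, y_score):
    """The four reported metrics for a binary fold; `y_score` is a
    continuous score for the positive class (needed for AUC)."""
    return {"ACC": accuracy_score(y_true, y_pred),
            "AUC": roc_auc_score(y_true, y_score),
            "Sen": recall_score(y_true, y_pred),      # TP / (TP + FN)
            "Prec": precision_score(y_true, y_pred)}  # TP / (TP + FP)
```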

Results and discussion

We first apply the Shearlet transform as detailed previously to four different datasets. We compute the RP and magnitude from the complex Shearlet coefficients. Then, to evaluate the strength of our proposed techniques, we utilize four commonly used measures for classification: ACC, AUC, Sen, and Prec.

In the literature, there exist descriptors extracted from multi-directional and multi-resolution wavelets (e.g., Shearlets, contourlets, and others). To the best of our knowledge, we have implemented the descriptors of the previously published studies listed below and applied them to the histopathological datasets transformed with the complex Shearlet transform:

  • Vo et al. [8]: proposed computing the circular mean and circular variance from the RP, but only the mean from the magnitude, of a textured image decomposed with a complex directional filter bank. We adopt the same descriptors but compute them from each Shearlet directional sub-band and concatenate them to form the feature vector.

  • Meshkini and Ghassemian [12]: proposed first computing the magnitude of the Shearlet coefficients and the gray-level co-occurrence matrix; the inner product of both is then used as the feature vector.

  • Zhou et al. [10]: proposed computing three sets of descriptors from the magnitude coefficients only: (1) the co-occurrence matrix, computed from the first layer of the horizontal cone of the Shearlet transform only (from which entropy, correlation, contrast, and energy are obtained); (2) the mean, variance, and energy, calculated from the first and third layers of the horizontal and vertical cones only; and (3) the maximal value of each column of the high-frequency Shearlet coefficients. These three sets of descriptors are concatenated to form the feature vector.

  • Dong et al. [11]: proposed calculating the mean and standard deviation from the Shearlet magnitude coefficients. These statistics are computed from each directional sub-band and concatenated to form the feature vector of an image.

Part (1) of Figs. 3, 4, 5, and 6 presents the classification performance of the techniques detailed above on the Kather, BreakHis, Epistroma, and Warwick-QU datasets, respectively. We also provide the classification results as tables in the appendix (the variants shown in italics in Part (1) of Tables 1, 2, 3, and 4 were not examined in the original research papers but are included for comparison). We observe that utilizing both the magnitude and the RP of the Shearlet transform enhances the classification performance irrespective of the classifier (SVM or DTB) and improves the performance of the other techniques. In particular, the attributes proposed by Vo et al., using both magnitude and RP, outperform the other baseline techniques.

On the multi-class Kather's dataset, our proposed descriptors computed from the Shearlet coefficients of both magnitude and RP achieve reasonable accuracies between 82% and 86% with an SVM model, whereas DTB yields accuracies only between 79% and 81%, as presented in Part (2) of Fig. 3 (see Table 1). The highest AUC value of 0.9773 on Kather's dataset is obtained using LOSIB attributes coupled with the SVM model, which also achieves the highest Sen and Prec (0.8632 and 0.8664, respectively). However, integrating various descriptors helps: for instance, Fusion #3 improves the accuracy by about 6.22 percentage points with an SVM model, as presented in Part (3) of Fig. 3 and Table 1.

We outline the classification performance on the BreakHis dataset, treating all four magnification factors as one dataset (as seen in Fig. 4). We observe that SFTA attains the highest accuracy of 89.72% on the validation split (with a corresponding AUC = 0.9527, Sen = 0.8040, and Prec = 0.8593) when utilizing both magnitude and RP. As presented in Part (3) of Fig. 4 (see also Table 2), Fusion #1 leads to a higher accuracy of 91.28% (with AUC = 0.9650, Sen = 0.8391, and Prec = 0.8775). Evidently, due to the high skewness of the BreakHis dataset, the classifier is biased toward the majority class (malignant cases), as reflected in the lower sensitivity and precision values.

Similarly, we apply our technique to the Epistroma dataset (results reported in Fig. 5 and Table 3). We observe that descriptors extracted from both magnitude and RP using the CM lead to an accuracy of 97.24% with AUC = 0.9917, Sen = 0.9733, and Prec = 0.9807. In Part (3), Fusion #3 raises the accuracy to 97.46% with AUC = 0.9925, Sen = 0.9769, and Prec = 0.9809.

Moreover, the classification performance on Warwick-QU is presented in Fig. 6 (see also Table 4). Once more, CM textural features of both magnitude and RP lead to the best accuracy among our individual feature representations, 95.70% (with corresponding AUC = 0.9860, Sen = 0.9446, and Prec = 0.9589). Part (3) shows that Fusion #2 leads to the highest accuracy of 96.29% with AUC = 0.9860, Sen = 0.9571, and Prec = 0.9589. It is worth noting that the standard deviations of the classification performance on the Warwick-QU dataset are higher than on the other datasets because the number of validation examples is significantly smaller.

Further, Fig. 7 shows the classification performance when seeking a reduced version of our feature space for each dataset. Having identified the best-performing fusion descriptors in the previous sections, we further process them with PCA, aiming to retain the PCs that maintain or improve the classifier performance compared to using all attributes. Tables 1, 2, 3 and 4 establish that the SVM model has strong capabilities for classifying histopathological tissues, so it is coupled with the best fusion of each dataset to find a reduced feature set. We are able to classify the multi-class Kather's dataset with an accuracy of \(92.56\% (\pm 1.29\%)\) (with corresponding AUC = \(0.9905 (\pm 0.0020)\), Sen = \(0.9256 (\pm 0.0130)\), and Prec = \(0.9270 (\pm 0.0125)\)) while using only 7500 descriptors (out of 7648) from Fusion #3. For the BreakHis dataset, a drastic pruning of descriptors (only 1750 out of 3776 are needed) slightly improves the classification accuracy to \(91.73\% (\pm 0.59\%)\) with AUC = \(0.9654 (\pm 0.0057)\), Sen = \(0.8363 (\pm 0.0188)\), and Prec = \(0.8936 (\pm 0.0169)\). Likewise, the classification accuracy on Epistroma improves to \(98.04\% (\pm 1.03\%)\) with AUC = \(0.9960 (\pm 0.0043)\), Sen = \(0.9867 (\pm 0.0120)\), and Prec = \(0.9809 (\pm 0.0136)\) when reducing the number of descriptors from 7648 to 6400. Furthermore, to classify the Warwick-QU dataset, only 900 (out of 1792) attributes are needed to achieve the highest accuracy of \(96.29\% (\pm 7.89\%)\) with AUC = \(0.9860 (\pm 0.0353)\), Sen = \(0.9571 (\pm 0.0964)\), and Prec = \(0.9589 (\pm 0.0945)\). To complete the feature-reduction analysis, we provide the confusion matrices of the best-performing SVM model with the reduced feature set in Fig. 8.

Fig. 7 Classification performance of SVM with an increment of 50 principal components (PCs) of the best fusion until all PCs are used

Fig. 8 Confusion matrices of our best performing proposed methods while using a reduced set of features (see Fig. 7)

Fig. 9 Classification performance of SVM with an increment of 50 PCs of Fusion #3 until all PCs are used

Furthermore, the complexity of histopathological images differs from one dataset to another. Although our individual Shearlet-based descriptors achieve reasonable accuracy across the four histopathological image datasets, the fusion that achieves the highest accuracy is not the same for every dataset (as shown in Fig. 7). Considering Fusion #3 of our Shearlet-based descriptors expressed in principal components, we observe different patterns of classification performance as PCs are added in increments of 50 in Fig. 9. Therefore, our Shearlet-based descriptors appear general enough for histological image analysis, and it suffices to carefully choose the type of fusion that maximizes the recognition of the tissue types.

State-of-the-art techniques in the literature use the same datasets as our study. We report their results in this section to show that our proposed techniques are on par with them and achieve excellent performance in terms of accuracy and AUC. Our Shearlet-based descriptors attain robust classification performance in different scenarios; hence, they could be an appealing system for medical image classification in clinical settings. In comparison, Wang et al. [15] proposed a bilinear CNN model for classifying the multi-class Kather's dataset, reporting an accuracy of 92.6% (and AUC = 0.985). Our proposed technique achieves similar accuracy but a better AUC of 0.9905.

On the BreakHis dataset, Jonnalagedda et al. [30] trained a CNN model with tumor nuclei information while utilizing data augmentation; their approach led to an accuracy of 92.2% (and AUC = 0.92). Our approach on BreakHis achieves a higher AUC, but a somewhat lower accuracy.

For Epistroma classification, Ramalho et al. [48] proposed a structural approach using co-occurrence statistics, which led to an accuracy of approximately 95%; our approach attains better classification accuracy.

For the Warwick-QU dataset, Ribeiro et al. [7] proposed extracting descriptors from the spatial and curvelet domains (fractal measures and Haralick features), for which they report a higher AUC of 0.994; however, they did not report accuracy.

Evidently, our proposed descriptors compete with the state of the art, including CNN-based methods, in terms of classification accuracy and AUC. In contrast to resource-intensive CNN-based techniques, we hand-engineer our descriptors, yet achieve similar results. Our approach is based on different descriptors in the Shearlet domain, which makes it capable of handling the scarcity of biomedical datasets. The highest accuracies in our study are obtained using an SVM model, which is considered a black box whose classifications are difficult to interpret and explain [28]. In addition to SVM, we explored DTB, which achieves reasonable results and may be more favorable in certain applications due to its higher interpretability [49]. However, navigating through a rule to understand a decision made by a DTB remains a challenge, since our main descriptors are computed from Shearlet coefficients; linking the computed statistical information from the Shearlet transform to a particular region of interest in a tissue type can be difficult. It is worth noting that our main concern in this study is to establish a robust approach for classifying histological images.

Conclusion and future work

In this study, we have constructed a novel feature representation for classifying histology tissues. Our technique integrates various sets of textural descriptors obtained in the complex Shearlet domain instead of computing the descriptors directly from the gray-level images. The computed descriptor sets are based on methods that capture local and global statistics: descriptors from the co-occurrence matrix, local binary patterns, the local oriented statistics information booster, and segmentation-based fractal texture analysis. We thus exploit the multi-directionality and multi-resolution of the complex Shearlet transform and investigate the benefits of using not only the magnitude but also the relative phase (RP) components of the complex Shearlet coefficients. We conclude that, in general, using both the magnitude and RP leads to rigorous and effective classification results on histopathological image datasets. We utilize PCA to obtain a reduced version of our proposed integrated feature representation in the Shearlet domain, which on some datasets reduces the feature set size while maintaining or increasing the classification performance with traditional machine learning. We also show that the choice of machine learning method has only a limited influence on the results; hence, it is possible to use the DTB if desired, because its decisions are considered interpretable. Our proposed method attains state-of-the-art classification results on the four histopathological datasets utilized in this research, and we expect our technique to generalize to other histopathological datasets.

In the future, we plan to further benchmark our proposed techniques with different classification models (e.g., a multilayer perceptron, a Random Under-Sampling Boost decision tree, and others). Additionally, we plan to investigate other feature reduction methodologies to aggressively reduce the number of descriptors without compromising the classification performance. Finally, the problem of highly imbalanced datasets (observed in the BreakHis dataset) could be mitigated by applying sampling methods (e.g., the Synthetic Minority Oversampling Technique (SMOTE)) to the training dataset.

Availability of data and materials

Four publicly available histopathological datasets have been used in our study. The multi-class Kather's dataset can be accessed via https://zenodo.org/record/53169/. The BreakHis dataset can be accessed via https://web.inf.ufpr.br/vri/databases/breast-cancer-histopathological-database-breakhis/. The Epistroma dataset can be accessed via http://fimm.webmicroscope.net/Research/Supplements/epistroma/. The Warwick-QU dataset can be accessed via https://warwick.ac.uk/fac/sci/dcs/research/tia/glascontest/download/.

Abbreviations

RP:

Relative phase

SVM:

Support vector machine

DTB:

Decision tree bagger

ANOVA:

Analysis of variance

CNN:

Convolutional neural network

DL:

Deep learning

CM:

Co-occurrence matrix

LBP:

Local binary patterns

LOSIB:

Local oriented statistics information booster

SFTA:

Segmentation-based fractal texture analysis

BreakHis:

Breast cancer histopathological dataset

SH:

Shearlet

PD:

Pixel distance

PCA:

Principal component analysis

QP:

Quadratic programming

CV:

Cross-validation

Perf:

Performance

ACC:

Accuracy

AUC:

Area under the receiver operating characteristic curve

Sen:

Sensitivity

Prec:

Precision

References

  1. Irshad H, Veillard A, Roux L, Racoceanu D. Methods for nuclei detection, segmentation, and classification in digital histopathology: a review–current status and future potential. IEEE Rev Biomed Eng. 2014;7:97–114.


  2. Xu J, Xiang L, Liu Q, Gilmore H, Wu J, Tang J, Madabhushi A. Stacked sparse autoencoder (SSAE) for nuclei detection on breast cancer histopathology images. IEEE Trans Med Imaging. 2016;35(1):119–30.


  3. Spanhol FA, Oliveira LS, Petitjean C, Heutte L. A dataset for breast cancer histopathological image classification. IEEE Trans Biomed Eng. 2016;63(7):1455–62.


  4. Kather JN, Weis C-A, Bianconi F, Melchers SM, Schad LR, Gaiser T, Marx A, Zöllner FG. Multi-class texture analysis in colorectal cancer histology. Sci Reports. 2016;6:27988.


  5. Linder N, Konsti J, Turkki R, Rahtu E, Lundin M, Nordling S, Haglund C, Ahonen T, Pietikäinen M, Lundin J. Identification of tumor epithelium and stroma in tissue microarrays using texture analysis. Diagn Pathol. 2012;7(1):22.


  6. Bruno DOT, do Nascimento MZ, Ramos RP, Batista VR, Neves LA, Martins AS. LBP operators on curvelet coefficients as an algorithm to describe texture in breast cancer tissues. Expert Syst Appl. 2016;55:329–40.

  7. Ribeiro MG, Neves LA, do Nascimento MZ, Roberto GF, Martins AS, Tosta TAA. Classification of colorectal cancer based on the association of multidimensional and multiresolution features. Expert Syst Appl. 2019;120:262–78.


  8. Vo AP, Oraintara S, Nguyen TT. Using phase and magnitude information of the complex directional filter bank for texture image retrieval. In: 2007 IEEE International Conference on Image Processing, 2007;4, p. 61. IEEE.

  9. He J, Ji H, Yang X. Rotation invariant texture descriptor using local shearlet-based energy histograms. IEEE Signal Process Lett. 2013;20(9):905–8.


  10. Zhou S, Shi J, Zhu J, Cai Y, Wang R. Shearlet-based texture feature extraction for classification of breast tumor in ultrasound image. Biomed Signal Process Control. 2013;8(6):688–96.


  11. Dong Y, Tao D, Li X, Ma J, Pu J. Texture classification and retrieval using shearlets and linear regression. IEEE Trans Cybernet. 2015;45(3):358–69.


  12. Meshkini K, Ghassemian H. Texture classification using Shearlet transform and GLCM. In: 2017 Iranian Conference on Electrical Engineering (ICEE), 2017;1845–1850. IEEE.

  13. Song Y, Chang H, Gao Y, Liu S, Zhang D, Yao J, Chrzanowski W, Cai W. Feature learning with component selective encoding for histopathology image classification. In: IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), 2018;257–260. IEEE.

  14. Gupta V, Bhavsar A. Sequential modeling of deep features for breast cancer histopathological image classification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2018;2254–2261.

  15. Wang C, Shi J, Zhang Q, Ying S. Histopathological image classification with bilinear convolutional neural networks. In: 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2017;4050–4053. IEEE.

  16. Alinsaif S, Lang J. Histological image classification using deep features and transfer learning. In: 2020 17th Conference on Computer and Robot Vision (CRV), 2020;101–108. IEEE.

  17. Qu J, Hiruta N, Terai K, Nosato H, Murakawa M, Sakanashi H. Gastric pathology image classification using stepwise fine-tuning for deep neural networks. J Healthc Eng. 2018;2018.

  18. Therrien R, Doyle S. Role of training data variability on classifier performance and generalizability. In: Medical Imaging 2018: Digital Pathology, vol. 10581. International Society for Optics and Photonics; 2018. p. 1058109.

  19. Soh L-K, Tsatsoulis C. Texture analysis of SAR sea ice imagery using gray level co-occurrence matrices. IEEE Trans Geosci Remote Sens. 1999;37(2):780–95.


  20. Haralick RM, Shanmugam K. Others: textural features for image classification. IEEE Trans Syst Man Cybern. 1973;6:610–21.


  21. Ojala T, Pietikäinen M, Mäenpää T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans Pattern Anal Mach Intell. 2002;7:971–87.


  22. García-Olalla O, Alegre E, Fernández-Robles L, González-Castro V. Local oriented statistics information booster (losib) for texture classification. In: 2014 22nd International Conference on Pattern Recognition, 2014;1114–1119. IEEE.

  23. Costa AF, Humpire-Mamani G, Traina AJM. An efficient algorithm for fractal analysis of textures. In: 2012 25th SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI), 2012;39–46. IEEE.

  24. Kutyniok G, Labate D. Introduction to shearlets. In: Shearlets: multiscale analysis for multidimensional data. Birkhäuser; 2012. p. 1–38.

  25. Cristianini N, Shawe-Taylor J, et al. An introduction to support vector machines and other kernel-based learning methods. Cambridge: Cambridge University Press; 2000.


  26. Breiman L. Bagging predictors. Mach Learn. 1996;24(2):123–40.


  27. Breiman L, Friedman J, Stone CJ, Olshen RA. Classification and regression trees. Boca Raton: CRC Press; 1984.


  28. Došilović FK, Brčić M, Hlupić N. Explainable artificial intelligence: a survey. In: 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2018;0210–0215. IEEE.

  29. Alinsaif S, Lang J. Shearlet-based techniques for histological image classification. In: 2019 IEEE International Conference on Bioinformatics and Biomedicine (BIBM); 2019.

  30. Jonnalagedda P, Schmolze D, Bhanu B. MVPNets: multi-viewing path deep learning neural networks for magnification invariant diagnosis in breast cancer. In: 2018 IEEE 18th International Conference on Bioinformatics and Bioengineering (BIBE), 2018;189–194. IEEE.

  31. Sellaro TL, Filkins R, Hoffman C, Fine JL, Ho J, Parwani AV, Pantanowitz L, Montalto M. Relationship between magnification and resolution in digital pathology systems. J Pathol Inf. 2013;4:21.


  32. Sirinukunwattana K, Pluim JP, Chen H, Qi X, Heng P-A, Guo YB, Wang LY, Matuszewski BJ, Bruni E, Sanchez U, et al. Gland segmentation in colon histology images: the glas challenge contest. Med Image Anal. 2017;35:489–502.


  33. Reisenhofer R. The complex shearlet transform and applications to image quality assessment. Master’s thesis, Technische Universität Berlin 2014.

  34. Candès EJ, Donoho DL. New tight frames of curvelets and optimal representations of objects with piecewise c2 singularities. Commun Pure Appl Math J Issued Courant Inst Math Sci. 2004;57(2):219–66.


  35. Do MN, Vetterli M. The contourlet transform: an efficient directional multiresolution image representation. IEEE Trans Image Process. 2005;14(12):2091–106.


  36. Kutyniok G, Shahram M, Zhuang X. Shearlab: a rational design of a digital parabolic scaling algorithm. SIAM J Imaging Sci. 2012;5(4):1291–332.


  37. Clausi DA. An analysis of co-occurrence texture statistics as a function of grey level quantization. Can J Remote Sens. 2002;28(1):45–62.


  38. Pan Y, Liu L, Yang L, Wang Y. Texture feature extracting method based on local relative phase binary pattern. In: 2016 5th International Conference on Computer Science and Network Technology (ICCSNT), 2016;749–753. IEEE.

  39. Cai L, Wang X, Wang Y, Guo Y, Yu J, Wang Y. Robust phase-based texture descriptor for classification of breast ultrasound images. Biomed Eng Online. 2015;14(1):26.


  40. Oppenheim AV, Lim JS. The importance of phase in signals. Proc IEEE. 1981;69(5):529–41.


  41. Harman HH. Modern factor analysis. Chicago: University of Chicago press; 1976.


  42. Stegmann MB, Sjöstrand K, Larsen R. Sparse modeling of landmark and texture variability using the orthomax criterion. In: Medical Imaging 2006: Image Processing, 2006;6144: 61441. International Society for Optics and Photonics.

  43. Zanaty E. Support vector machines (svms) versus multilayer perception (mlp) in data classification. Egyptian Inf J. 2012;13(3):177–83.


  44. Hastie T, Tibshirani R. Classification by pairwise coupling. In: Advances in Neural Information Processing Systems, 1998;507–513.

  45. Platt J. Sequential minimal optimization: a fast algorithm for training support vector machines; 1998.

  46. Friedman J, Hastie T, Tibshirani R. The elements of statistical learning. Berlin: Springer; 2001.


  47. Breiman L. Random forests. Mach Learn. 2001;45(1):5–32.


  48. Ramalho GLB, Ferreira DS, Rebouças Filho PP, de Medeiros FNS. Rotation-invariant feature extraction using a structural co-occurrence matrix. Measurement. 2016;94:406–15.


  49. Gunning D. Explainable artificial intelligence (XAI). Defense Advanced Research Projects Agency (DARPA); 2017.


Acknowledgements

We would very much like to thank the reviewers of the Biomedical and Health Informatics Workshop at the IEEE International Conference on Bioinformatics and Biomedicine (BIBM) 2019 for their insightful comments, which allowed us to improve the current manuscript.

About this supplement

This article has been published as part of BMC Medical Informatics and Decision Making Volume 20 Supplement 14, 2020: Special Issue on Biomedical and Health Informatics. The full contents of the supplement are available online at https://0-bmcmedinformdecismak-biomedcentral-com.brum.beds.ac.uk/articles/supplements/volume-20-supplement-14.

Funding

This research (including publication costs) is funded by the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Saudi Cultural Bureau in Canada. Neither organization had a role in the design of the study, the collection, analysis, and interpretation of data, or the writing of the manuscript.

Author information


Corresponding author

Correspondence to Sadiq Alinsaif.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix


See Tables 1, 2, 3 and 4.

Table 1 Classification performance on Kather dataset
Table 2 Classification performance on BreakHis dataset
Table 3 Classification performance on Epistroma dataset
Table 4 Classification performance on Warwick-QU dataset

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Alinsaif, S., Lang, J. Texture features in the Shearlet domain for histopathological image classification. BMC Med Inform Decis Mak 20 (Suppl 14), 312 (2020). https://0-doi-org.brum.beds.ac.uk/10.1186/s12911-020-01327-3

