Table 6 Performance comparison for breast cancer detection on the UCI Wisconsin Breast Cancer dataset, using each of the three AEs (Basic AE, Denoising AE, and Sparse AE)

From: Using autoencoders as a weight initialization method on deep neural networks for disease detection
| Approach | Top Layers (AEs) | Accuracy (%) | MCC | Precision (%) | Recall (%) | F1 score (%) |
| --- | --- | --- | --- | --- | --- | --- |
| A | AE: Encoding Layers | 97.54 ± 2.06 | 0.95 ± 0.04 | 98.67 ± 2.92 | 94.81 ± 5.20 | 96.60 ± 2.90 |
| A | AE: Complete Autoencoder | 96.49 ± 2.62 | 0.93 ± 0.05 | 96.83 ± 3.71 | 93.83 ± 7.12 | 95.11 ± 3.93 |
| A | DAE: Encoding Layers | 95.43 ± 3.81 | 0.90 ± 0.08 | 98.38 ± 3.48 | 89.13 ± 8.73 | 93.36 ± 6.05 |
| A | DAE: Complete Autoencoder | 93.32 ± 3.78 | 0.86 ± 0.08 | 98.19 ± 2.93 | 83.46 ± 8.52 | 90.09 ± 5.98 |
| A | SAE: Encoding Layers | 97.19 ± 2.22 | 0.94 ± 0.05 | 97.69 ± 3.15 | 94.81 ± 5.20 | 96.14 ± 3.13 |
| A | SAE: Complete Autoencoder | 97.02 ± 2.35 | 0.94 ± 0.05 | 97.70 ± 2.42 | 94.31 ± 7.03 | 95.80 ± 3.64 |
| B | AE: Encoding Layers | **99.12 ± 1.24** | **0.98 ± 0.03** | **98.71 ± 2.86** | **99.05 ± 2.01** | **98.84 ± 1.59** |
| B | AE: Complete Autoencoder | 98.60 ± 1.38 | 0.97 ± 0.03 | 97.75 ± 2.38 | 98.57 ± 3.21 | 98.11 ± 1.91 |
| B | DAE: Encoding Layers | 97.72 ± 2.62 | 0.95 ± 0.06 | 98.08 ± 2.50 | 95.74 ± 6.13 | 96.81 ± 3.83 |
| B | DAE: Complete Autoencoder | 97.19 ± 2.64 | 0.94 ± 0.06 | 96.39 ± 4.57 | 96.23 ± 4.91 | 96.22 ± 3.62 |
| B | SAE: Encoding Layers | 97.19 ± 2.22 | 0.94 ± 0.04 | 96.15 ± 5.24 | 96.71 ± 3.19 | 96.31 ± 2.78 |
| B | SAE: Complete Autoencoder | 96.66 ± 2.10 | 0.93 ± 0.04 | 97.66 ± 3.28 | 93.44 ± 5.47 | 95.39 ± 2.96 |

  1. All presented results are 10-fold cross-validation mean values on the validation set, obtained by selecting the best-performing model according to its F1 score. The Approach A rows report the results when the weights resulting from AE pre-training are kept fixed; the Approach B rows report the results when subsequent fine-tuning of all the model's weights is allowed. The values in bold correspond to the combination that led to the overall best result: importing only the encoding layers of a Basic AE into the classification network and allowing subsequent fine-tuning when training for the classification task.
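
For reference (these standard definitions are not restated in the source), MCC is the Matthews correlation coefficient computed from the binary confusion matrix, and the F1 score is the harmonic mean of precision and recall:

$$\mathrm{MCC} = \frac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP+FP)\,(TP+FN)\,(TN+FP)\,(TN+FN)}}, \qquad F_1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$$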
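
To make the two compared strategies concrete, below is a minimal Keras sketch (not the authors' code) of pre-training a basic AE and importing only its encoding layers into a classifier, as in the "AE: Encoding Layers" rows. Layer sizes, optimizer, and training settings are illustrative assumptions; the `FINE_TUNE` flag switches between Approach A (frozen imported weights) and Approach B (fine-tuned imported weights).

```python
from tensorflow import keras
from tensorflow.keras import layers

n_features = 30  # assumption: feature count of the Wisconsin (Diagnostic) dataset

# 1) Unsupervised pre-training: a basic autoencoder.
inputs = keras.Input(shape=(n_features,))
encoded = layers.Dense(16, activation="relu", name="enc1")(inputs)
encoded = layers.Dense(8, activation="relu", name="enc2")(encoded)
decoded = layers.Dense(16, activation="relu")(encoded)
decoded = layers.Dense(n_features, activation="linear")(decoded)
autoencoder = keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit(X_train, X_train, epochs=100, batch_size=32)

# 2) Classifier whose first layers mirror the AE's encoding layers.
clf_in = keras.Input(shape=(n_features,))
h = layers.Dense(16, activation="relu", name="enc1")(clf_in)
h = layers.Dense(8, activation="relu", name="enc2")(h)
out = layers.Dense(1, activation="sigmoid")(h)
classifier = keras.Model(clf_in, out)

# Copy the pre-trained encoder weights into the classifier, layer by layer.
for name in ("enc1", "enc2"):
    classifier.get_layer(name).set_weights(
        autoencoder.get_layer(name).get_weights())

# Approach A: freeze the imported layers; Approach B: leave them trainable.
FINE_TUNE = True  # True -> Approach B, False -> Approach A
for name in ("enc1", "enc2"):
    classifier.get_layer(name).trainable = FINE_TUNE

classifier.compile(optimizer="adam", loss="binary_crossentropy",
                   metrics=["accuracy"])
# classifier.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=100)
```

Importing the complete autoencoder instead (the "Complete Autoencoder" rows) would reuse the decoding layers as well before attaching the output layer; the freeze/fine-tune switch applies in the same way.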