Table 2 Details of the 3-layer network architecture for the PICO recognition model

From: Improving reference prioritisation with PICO recognition

| Layer | Size | Source |
| --- | --- | --- |
| 1a Word embedding | 200 | [21], not updated |
| 1b Character embedding | 28 | trained from random initialisation |
| 1c Character-based word representation | 2 × 28 | biLSTM applied to 1b |
| 1d Combined embedding | 256 | concatenation of 1a and 1c |
| 2 Recurrent layer | 2 × 128 | biLSTM over 1d |
| 3 Linear layer | 41 | affine projection of 2 |
| CRF output | 1 | most likely sequence of tags |
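
To make the layer sizes in Table 2 concrete, here is a minimal PyTorch sketch of the architecture. It is an illustration rather than the authors' implementation: the vocabulary sizes (`word_vocab`, `char_vocab`) are placeholders, and the final CRF layer, which decodes the most likely tag sequence from the per-token scores, is omitted.

```python
import torch
import torch.nn as nn

class PicoTagger(nn.Module):
    """Sketch of the 3-layer tagger in Table 2 (sizes from the table;
    vocabulary sizes are placeholders)."""

    def __init__(self, word_vocab=50000, char_vocab=100, n_tags=41):
        super().__init__()
        # 1a: pre-trained 200-dim word embeddings, frozen ("not updated")
        self.word_emb = nn.Embedding(word_vocab, 200)
        self.word_emb.weight.requires_grad = False
        # 1b: 28-dim character embeddings, trained from random initialisation
        self.char_emb = nn.Embedding(char_vocab, 28)
        # 1c: character-level biLSTM -> 2 x 28 = 56-dim word representation
        self.char_lstm = nn.LSTM(28, 28, bidirectional=True, batch_first=True)
        # 2: word-level biLSTM over the 256-dim combined embedding (1d)
        self.word_lstm = nn.LSTM(200 + 2 * 28, 128, bidirectional=True, batch_first=True)
        # 3: affine projection to the 41 tag scores (a CRF would decode these)
        self.proj = nn.Linear(2 * 128, n_tags)

    def forward(self, word_ids, char_ids):
        # word_ids: (batch, seq_len); char_ids: (batch, seq_len, word_len)
        b, t, w = char_ids.shape
        chars = self.char_emb(char_ids.view(b * t, w))              # (b*t, w, 28)
        _, (h, _) = self.char_lstm(chars)                           # h: (2, b*t, 28)
        char_repr = torch.cat([h[0], h[1]], dim=-1).view(b, t, -1)  # 1c: (b, t, 56)
        combined = torch.cat([self.word_emb(word_ids), char_repr], dim=-1)  # 1d: (b, t, 256)
        out, _ = self.word_lstm(combined)                           # 2: (b, t, 256)
        return self.proj(out)                                       # 3: per-token tag scores
```

In this sketch the character biLSTM's final hidden states in both directions are concatenated to form the character-based word representation (1c), which is then concatenated with the frozen word embedding to give the 256-dim combined input (1d) to the word-level biLSTM.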