Table 1 Design options and rationales for main factors to consider for explanation presentation

From: A qualitative research framework for the design of user-centered displays of explanations for machine learning model predictions in healthcare

Factor: Unit of explanation
Design options: Individual features; Feature groups
Rationale: Lower information granularity can reduce cognitive load and processing time. Evidence supports the use of lower information granularity for non-AI/ML experts via feature groupings or extractions [28]. Features can be grouped by laboratory test or vital sign (a grouping sketch follows the table).

Factor: Explanation unit organization
Design options: None; Influence groups; Assessment groups
Rationale: Explanations that include causes that are abnormal or controllable (i.e., modifiable) might be preferred [11]. Feature influence on risk might differ in abnormality (e.g., a feature that increases risk might be considered abnormal), and assessment-type groups might differ in controllability (i.e., laboratory tests are modifiable, whereas demographics are not). An influence-grouping sketch follows the table.

Factor: Dimensionality (size & granularity)
Design options: Static; Interactive
Rationale: Dimensionality can be reduced through information removal (e.g., reducing explanation size) or aggregation (e.g., reducing explanation granularity). The desired dimensionality of an explanation may vary by individual and prediction [29, 30], suggesting that interactive control over dimensionality could be beneficial. Examples include control over the granularity of explanation units and over explanation size (e.g., the number of explanation units shown); a size-control sketch follows the table.

Factor: Risk representation
Design options: Probability; Odds
Rationale: Critical care providers should be comfortable with the risk representation format. Risk information in feature influence explanations has previously been reported in terms of odds and probability [31, 32], but provider preferences between these representations are unknown. A worked probability-to-odds conversion follows the table.

Factor: Explanation display format
Design options: Force plot; Tornado plot
Rationale: Visual representations of risk information may facilitate comprehension of risk [33]. Tornado plots and custom visualizations called force plots have been used for feature influence explanations [31, 32], but the effectiveness of these visualizations has not been validated in user studies. A tornado-plot sketch follows the table.
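
The "Unit of explanation" row describes lowering granularity by grouping individual feature attributions, e.g., by laboratory test or vital sign. Below is a minimal sketch of that aggregation, assuming additive per-feature attributions in the style of SHAP-like feature influence explanations; the feature names, group labels, and values are hypothetical, not from the study.

```python
# Hypothetical per-feature contributions to a patient's predicted risk
# (positive values increase risk, negative values decrease it).
feature_attributions = {
    "lactate": 0.12,
    "creatinine": 0.05,
    "heart_rate": 0.08,
    "respiratory_rate": -0.02,
}

# Hypothetical mapping from individual features to coarser explanation units.
feature_groups = {
    "lactate": "laboratory tests",
    "creatinine": "laboratory tests",
    "heart_rate": "vital signs",
    "respiratory_rate": "vital signs",
}

def aggregate_by_group(attributions, groups):
    """Sum additive per-feature attributions within each group, yielding a
    lower-granularity explanation with fewer units."""
    grouped = {}
    for feature, value in attributions.items():
        group = groups[feature]
        grouped[group] = grouped.get(group, 0.0) + value
    return grouped

print(aggregate_by_group(feature_attributions, feature_groups))
# Roughly {'laboratory tests': 0.17, 'vital signs': 0.06}, up to floating-point rounding.
```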
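
The "Explanation unit organization" row contrasts organizing explanation units by influence (risk-increasing vs. risk-decreasing) with organizing them by assessment type. A minimal sketch of the influence-based split, reusing the hypothetical attribution format above:

```python
# Hypothetical attributions for explanation units (individual features or feature groups).
attributions = {"lactate": 0.12, "heart_rate": 0.08, "respiratory_rate": -0.02}

def split_by_influence(attributions):
    """Partition explanation units into risk-increasing and risk-decreasing groups,
    so that units likely to be read as abnormal are presented together."""
    increases = {k: v for k, v in attributions.items() if v > 0}
    decreases = {k: v for k, v in attributions.items() if v < 0}
    return {"increases risk": increases, "decreases risk": decreases}

print(split_by_influence(attributions))
```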
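
The "Dimensionality" row suggests interactive control over explanation size and granularity. One simple form of size control is keeping only the most influential units; the function and the top_k parameter below are illustrative assumptions, not part of the study's interface.

```python
def top_k_units(attributions, top_k):
    """Keep the top_k explanation units by absolute influence, reducing
    explanation size through information removal."""
    ranked = sorted(attributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return dict(ranked[:top_k])

attributions = {"lactate": 0.12, "heart_rate": 0.08, "creatinine": 0.05, "age": 0.03}
print(top_k_units(attributions, top_k=2))  # the two most influential units only
```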
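
The "Risk representation" row contrasts probability and odds. The two are related by odds = p / (1 - p); for example, a predicted risk (probability) of 0.20 corresponds to odds of 0.25, i.e., 1 to 4. A minimal sketch of the conversion:

```python
def probability_to_odds(p: float) -> float:
    """Convert a predicted probability of the outcome into odds."""
    return p / (1.0 - p)

def odds_to_probability(odds: float) -> float:
    """Convert odds back into a probability."""
    return odds / (1.0 + odds)

print(probability_to_odds(0.20))   # 0.25, i.e., odds of 1 to 4
print(odds_to_probability(0.25))   # 0.2
```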
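
The "Explanation display format" row mentions tornado plots and force plots. A tornado plot is essentially a horizontal bar chart of feature influences ordered by absolute magnitude; the sketch below uses matplotlib with hypothetical values and illustrates the general plot style only, not the displays evaluated in the study.

```python
import matplotlib.pyplot as plt

# Hypothetical per-feature contributions to predicted risk.
attributions = {"lactate": 0.12, "heart_rate": 0.08, "creatinine": 0.05, "age": -0.02}

# Order by absolute magnitude so the largest influences sit at the top,
# giving the characteristic tornado shape.
ordered = sorted(attributions.items(), key=lambda kv: abs(kv[1]))
labels = [name for name, _ in ordered]
values = [value for _, value in ordered]

plt.barh(labels, values,
         color=["tab:red" if v > 0 else "tab:blue" for v in values])
plt.axvline(0, color="black", linewidth=0.8)
plt.xlabel("Contribution to predicted risk")
plt.title("Tornado-style feature influence display (hypothetical values)")
plt.tight_layout()
plt.show()
```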