Chart Review

Measure agreement between chart reviewers.

Whether your chart annotations come from humans, machine learning, or coded data like ICD-10, Chart Review can compare them to reveal agreement statistics such as:

• Accuracy

• Confusion Matrix

  • TP = True Positive
  • TN = True Negative
  • FP = False Positive (type I error)
  • FN = False Negative (type II error)

• Power Calculations for sample size estimation (see the sketch below)

  • Power = 1 - FNR
  • FNR = FN / (FN + TP)
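
To make the relationship between these numbers concrete, here is a minimal plain-Python sketch (not Chart Review’s own code) that scores one annotator against another treated as the truth for a single label. The chart IDs and yes/no annotations are made up for illustration.

# Minimal sketch, not Chart Review's implementation: derive accuracy,
# FNR, and power from confusion-matrix counts for a single label.
truth = {1: True, 2: False, 3: True, 4: False}   # annotator treated as the truth
other = {1: True, 2: True, 3: False, 4: False}   # annotator being scored

tp = sum(truth[c] and other[c] for c in truth)          # true positives
tn = sum(not truth[c] and not other[c] for c in truth)  # true negatives
fp = sum(other[c] and not truth[c] for c in truth)      # false positives (type I errors)
fn = sum(truth[c] and not other[c] for c in truth)      # false negatives (type II errors)

accuracy = (tp + tn) / (tp + tn + fp + fn)
fnr = fn / (fn + tp)   # false negative rate (miss rate)
power = 1 - fnr        # power = 1 - FNR (i.e. sensitivity/recall)

print(f"accuracy={accuracy:.2f}  FNR={fnr:.2f}  power={power:.2f}")

In practice these counts are tallied per label and per annotator pair, which is the kind of bookkeeping Chart Review automates.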

Is This Part of Cumulus?

Chart Review is developed by the same team as Cumulus and is designed to work with the Cumulus project, but it is useful even outside of Cumulus.

Some features (notably those dealing with external annotations) require Label Studio metadata that Cumulus ETL creates when it pushes notes to Label Studio using its upload-notes feature.

But calculating accuracy between human annotators can be done entirely without Cumulus.

Installing & Using

pip install chart-review
chart-review --help

Read the first-time setup docs for more.

Example

$ chart-review
╭───────────┬─────────────┬───────────╮
│ Annotator │ Chart Count │ Chart IDs │
├───────────┼─────────────┼───────────┤
│ jane      │ 3           │ 1, 3–4    │
│ jill      │ 4           │ 1–4       │
│ john      │ 3           │ 1–2, 4    │
╰───────────┴─────────────┴───────────╯

Pass --help to see more options.

Source Code

Chart Review is open source. If you’d like to browse its code or contribute changes yourself, the code is on GitHub.

