The accuracy of prediction models used in clinical decision-making deteriorates over time as new practice patterns emerge and the patient mix changes, a phenomenon known as calibration drift. Research has shown that common strategies for countering this drift, which rely on predefined schedules for model recalibration or refitting, are inefficient and sometimes detrimental.
In a paper in the Journal of the American Medical Informatics Association, Sharon Davis, PhD, MS, Michael Matheny, MD, MS, MPH, and colleagues at Vanderbilt University Medical Center report a system they devised to alert health IT teams to deteriorating performance in clinical prediction models.
In a computationally efficient way, the system continuously compares predictions with observed outcomes as they become available, watching for any slippage between real-world event rates and the outcome probabilities the models produce.
What’s more, the system identifies data that can be used to update models. The general-purpose, customizable system proved highly accurate in a variety of tests using simulated data.
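The article does not describe the team's specific drift statistic, but as a rough sketch of the underlying idea, the Python snippet below monitors a stream of prediction/outcome pairs over a sliding window and flags when the mean predicted risk and the observed event rate pull apart (a simple calibration-in-the-large check). The class name, window size, and alert threshold are illustrative assumptions, not details of the published system.

```python
import numpy as np
from collections import deque

class CalibrationDriftMonitor:
    """Sliding-window check of calibration-in-the-large: the absolute gap
    between mean predicted probability and the observed event rate."""

    def __init__(self, window_size=500, threshold=0.05):
        self.pairs = deque(maxlen=window_size)  # recent (prediction, outcome)
        self.threshold = threshold

    def observe(self, predicted_prob, outcome):
        """Add one prediction/outcome pair; return True if predicted risk
        and observed rate in the current window have drifted apart."""
        self.pairs.append((predicted_prob, outcome))
        preds, outcomes = zip(*self.pairs)
        return abs(np.mean(preds) - np.mean(outcomes)) > self.threshold

# Simulated stream: the model keeps predicting ~20% risk, but the true
# event rate shifts to ~35% halfway through, mimicking a change in
# patient mix.
rng = np.random.default_rng(0)
monitor = CalibrationDriftMonitor(window_size=200, threshold=0.05)
for i in range(2000):
    pred = rng.uniform(0.15, 0.25)
    true_rate = 0.20 if i < 1000 else 0.35
    outcome = rng.random() < true_rate
    if monitor.observe(pred, outcome):
        print(f"Drift detected at observation {i}")
        break
```

In a sketch like this, the window of recent prediction/outcome pairs that triggers the alert is also a natural candidate for the data used to update the model, paralleling the system's second function described above.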
Other members of the project team include Robert Greevy Jr., PhD, MA, Thomas Lasko, MD, PhD, and Colin Walsh, MD, MA. The project was funded in part by the National Institutes of Health (BCHI130828).