There is increasing interest in incorporating prediction models into clinical practice. Because these models use information about the patient population and practice patterns to inform their predictions, changes in any of these aspects of care resulting from prior or concurrent implementation of prediction models could potentially lead to decreased prediction performance. The objective of this simulation study was to assess how common approaches to implementing clinical prediction models could influence the predictive performance of current or future models.

Background: Substantial effort has been directed toward demonstrating uses of predictive models in health care. However, implementation of these models into clinical practice may influence patient outcomes, which in turn are captured in electronic health record data. As a result, deployed models may affect the predictive ability of current and future models.

Objective: To estimate changes in predictive model performance with use through 3 common scenarios: model retraining, sequentially implementing 1 model after another, and intervening in response to a model when 2 are simultaneously implemented.

Design: Simulation of model implementation and use in critical care settings at various levels of intervention effectiveness and clinician adherence.
Models were either trained or retrained after simulated implementation.

Setting: Admissions to the intensive care unit (ICU) at Mount Sinai Health System (New York, New York) and Beth Israel Deaconess Medical Center (Boston, Massachusetts).

Patients: 130 000 critical care admissions across both health systems.

Intervention: Across 3 scenarios, interventions were simulated at varying levels of clinician adherence and effectiveness.

Measurements: Statistical measures of performance, including threshold-independent measures (area under the curve) and threshold-dependent measures.

Results: At a fixed 90% sensitivity, a mortality prediction model lost 9% to 39% specificity after one retraining (scenario 1) and lost 8% to 15% specificity when it was created after implementation of an acute kidney injury (AKI) prediction model (scenario 2); when AKI and mortality prediction models were implemented simultaneously (scenario 3), each reduced the effective accuracy of the other by 1% to 28%.

Limitations: In real-world practice, the effectiveness of and adherence to model-based recommendations are rarely known in advance. Only binary classifiers for tabular ICU admissions data were simulated.

Conclusion: In simulated ICU settings, no universally effective model-updating approach for maintaining model performance appears to exist. Model use may have to be recorded to maintain the viability of predictive modeling.

Primary Funding Source: National Center for Advancing Translational Sciences.
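The threshold-dependent comparison in the Results fixes sensitivity at 90% and reports the specificity that remains. A minimal sketch of that metric, using synthetic scores and labels (not the study's data, code, or models; the function name and example values are illustrative assumptions):

```python
# Illustrative only: specificity at a fixed target sensitivity, the
# threshold-dependent measure the abstract reports. Not the authors' code.

def specificity_at_sensitivity(y_true, y_score, target_sensitivity=0.90):
    """Return specificity at the highest score threshold whose
    sensitivity meets or exceeds the target."""
    thresholds = sorted(set(y_score), reverse=True)  # candidate cutoffs, high to low
    n_pos = sum(y_true)
    n_neg = len(y_true) - n_pos
    for t in thresholds:
        pred = [s >= t for s in y_score]               # classify at this cutoff
        tp = sum(p and y for p, y in zip(pred, y_true))
        tn = sum((not p) and (not y) for p, y in zip(pred, y_true))
        if tp / n_pos >= target_sensitivity:           # first cutoff hitting target
            return tn / n_neg
    return 0.0  # no cutoff reaches the target sensitivity

# Synthetic example: 10 admissions, scores loosely separating outcomes.
labels = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.5, 0.3, 0.2, 0.1, 0.05]
spec = specificity_at_sensitivity(labels, scores, 0.90)  # 0.8 here
```

Under a fixed-sensitivity policy like this, any post-deployment shift in the score distribution shows up directly as lost specificity, which is how the scenario results above are expressed.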