Dynamic treatment regimes (DTRs) formalize medical decision-making as a sequence of stage-specific decision rules that map patient-level information to recommended treatments. In practice, estimating an optimal DTR from observational data in electronic medical record (EMR) databases can be complicated by covariates that are missing not at random (MNAR) due to informative monitoring of patients. Since complete case analysis yields consistent estimation of outcome model parameters under the assumption of outcome-independent missingness \citep{Yang_Wang_Ding_2019}, Q-learning is a natural approach for accommodating MNAR covariates. However, the backward induction algorithm used in Q-learning introduces a further complication: MNAR covariates at later stages induce MNAR pseudo-outcomes at earlier stages, leading to suboptimal DTRs even when the outcome variables themselves are fully observed. To address this missing data problem, unique to the DTR setting, we propose two weighted Q-learning approaches in which inverse probability weights for the missingness of the pseudo-outcomes are obtained through estimating equations with valid nonresponse instrumental variables or through sensitivity analysis. We derive the asymptotic properties of the weighted Q-learning estimators and evaluate their finite-sample performance against alternative methods in extensive simulation studies. Using EMR data from the Medical Information Mart for Intensive Care database, we apply the proposed methods to investigate the optimal fluid strategy for sepsis patients in intensive care units.
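The backward-induction mechanism described above can be illustrated with a minimal two-stage simulation. The sketch below assumes linear Q-functions, binary treatments, and, for simplicity, a known response probability for the pseudo-outcome; in the proposed methods this probability would instead be estimated via estimating equations with a nonresponse instrument or fixed through sensitivity analysis. All model forms and parameter values here are illustrative assumptions, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Stage 1: baseline covariate and randomized treatment
X1 = rng.normal(size=n)
A1 = rng.integers(0, 2, size=n)

# Stage 2: covariate depends on history; treatment randomized
X2 = 0.5 * X1 + 0.3 * A1 + rng.normal(size=n)
A2 = rng.integers(0, 2, size=n)

# Final outcome with a stage-2 treatment-covariate interaction
Y = X1 + X2 + A2 * (1.0 - X2) + rng.normal(size=n)

# --- Stage 2 Q-learning: OLS of Y on the full history ---
D2 = np.column_stack([np.ones(n), X1, X2, A2, A2 * X2])
beta2, *_ = np.linalg.lstsq(D2, Y, rcond=None)

# Pseudo-outcome: predicted outcome under the best stage-2 action
q2_a0 = np.column_stack([np.ones(n), X1, X2, np.zeros(n), np.zeros(n)]) @ beta2
q2_a1 = np.column_stack([np.ones(n), X1, X2, np.ones(n), X2]) @ beta2
V = np.maximum(q2_a0, q2_a1)

# If X2 is MNAR, the pseudo-outcome V is missing for the same
# patients. The response probability pi is treated as known here
# purely for illustration (assumed logistic form in V).
pi = 1.0 / (1.0 + np.exp(-(1.0 + 0.5 * V)))
R = rng.random(n) < pi  # R = True: pseudo-outcome observed

# --- Stage 1 weighted Q-learning: IPW least squares on complete cases ---
D1 = np.column_stack([np.ones(n), X1, A1, A1 * X1])
w = np.sqrt(1.0 / pi[R])  # sqrt-weights turn weighted LS into plain OLS
beta1, *_ = np.linalg.lstsq(D1[R] * w[:, None], V[R] * w, rcond=None)

# Estimated stage-1 rule: treat when the A1 contrast is positive
rule1 = (beta1[2] + beta1[3] * X1) > 0
```

The key point the sketch makes concrete is that the stage-1 regression uses the pseudo-outcome V, so missingness in a stage-2 covariate propagates backward as missingness in the stage-1 response, which the inverse probability weights correct.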