Covariate Adjustment and Balancing under Interference
Speaker: Prof. Shuangning Li
Time: 16:00 (GMT), Feb 25, 2026.
Abstract
Covariates are routinely used to improve precision in randomized experiments, yet their role becomes subtle when interference is present, that is, when the outcome of one unit may depend on the treatment assignments of other units. This talk will study how covariate information can be used, both at the analysis stage and the design stage, to improve inference under interference.
In the first part, I will discuss recent work on covariate adjustment for estimating global treatment effects under network interference. Unlike in the classical no-interference setting, direct regression adjustment can increase the asymptotic variance of estimators when interference is present. Building on a low-order interaction outcome model, we construct covariate-adjusted estimators that remain asymptotically unbiased and achieve variance no larger than their unadjusted counterparts under sparsity conditions on the interference network.
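For intuition, the classical benchmark the abstract contrasts against is interaction-style regression adjustment (à la Lin), which under no interference never inflates asymptotic variance relative to the plain difference in means. The sketch below illustrates that benchmark only; it is not the talk's estimator, which additionally accounts for network interference. All function names here are illustrative.

```python
import numpy as np

def adjusted_dim(y, z, X):
    """Covariate-adjusted difference in means (interaction-style
    adjustment): regress outcomes on centered covariates separately
    within each arm and compare the fitted values at the overall
    covariate mean (the intercepts, since X is centered).

    This is the classical no-interference benchmark; the estimators
    discussed in the talk modify it to handle network interference.
    """
    Xc = X - X.mean(axis=0)  # center covariates over all units

    def arm_intercept(mask):
        A = np.column_stack([np.ones(mask.sum()), Xc[mask]])
        beta, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
        return beta[0]  # fitted mean outcome at the centered point

    return arm_intercept(z == 1) - arm_intercept(z == 0)
```

When outcomes depend linearly on covariates, this removes covariate-driven noise from the treatment–control comparison; the subtlety highlighted in the talk is that applying such adjustment naively under interference can instead increase variance.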
In the second part, I will turn to covariate balancing through rerandomization as a design-stage tool for experiments with interference. I will discuss how rerandomization can be used to enforce balance on pre-treatment covariates or on constructed exposure-related features. We show that, under mild assumptions, rerandomization yields asymptotic variance reductions for standard estimators in a model-agnostic manner, without requiring correct specification or even knowledge of the underlying interference structure.
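As background, a standard rerandomization scheme accepts a random assignment only if a balance criterion, commonly the Mahalanobis distance between treated and control covariate means, falls below a threshold. The sketch below shows that generic scheme under the usual Mahalanobis criterion; the exposure-related features and theoretical guarantees from the talk are not reflected here, and the function name and threshold are illustrative.

```python
import numpy as np

def rerandomize(X, n_treat, threshold=2.0, max_draws=10_000, seed=None):
    """Redraw complete randomizations until covariate balance is met.

    Balance criterion: Mahalanobis distance between treated and control
    covariate means (the classical rerandomization statistic). The talk
    also considers balancing constructed exposure-related features,
    which would replace X here.
    """
    rng = np.random.default_rng(seed)
    n, _ = X.shape
    Sinv = np.linalg.pinv(np.cov(X, rowvar=False))  # inverse covariance
    p1 = n_treat / n
    for _ in range(max_draws):
        z = np.zeros(n, dtype=bool)
        z[rng.choice(n, size=n_treat, replace=False)] = True
        diff = X[z].mean(axis=0) - X[~z].mean(axis=0)
        m = n * p1 * (1 - p1) * diff @ Sinv @ diff  # balance statistic
        if m <= threshold:
            return z  # accept this assignment
    raise RuntimeError("no assignment met the balance threshold")
```

Accepting only well-balanced assignments shrinks the design-induced variation in covariate imbalance, which is the mechanism behind the variance reductions discussed in the talk.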
Our Speaker
Shuangning Li is an Assistant Professor of Econometrics and Statistics at the University of Chicago Booth School of Business. Before joining Booth, she was a postdoctoral fellow in the Department of Statistics at Harvard University. She received her Ph.D. in Statistics from Stanford University, where she was advised by Professors Emmanuel Candès and Stefan Wager. Prior to her doctoral studies, she earned a Bachelor of Science degree from the University of Hong Kong. Her research interests include causal inference, selective inference, and reinforcement learning.