Using Auxiliary Data to Boost Precision in the Analysis of A/B Tests on an Online Educational Platform: New Data and New Results


Published Jun 21, 2023
Adam C. Sales, Ethan B. Prihar, Johann A. Gagnon-Bartsch, Neil T. Heffernan

Abstract

Randomized A/B tests within online learning platforms represent an exciting direction in the learning sciences. With minimal assumptions, they allow causal effect estimation without confounding bias and exact statistical inference even in small samples. However, experimental samples and/or treatment effects are often small, so A/B tests are underpowered and effect estimates are overly imprecise. Recent methodological advances have shown that power and statistical precision can be substantially boosted by coupling design-based causal estimation with machine-learning models of rich log data from historical users who were not in the experiment (the "remnant"). Estimates using these techniques remain unbiased, and inference remains exact, without any additional assumptions. This paper reviews those methods and applies them to a new dataset including over 250 randomized A/B comparisons conducted within ASSISTments, an online learning platform. We compare results across experiments using four novel deep-learning models of auxiliary data and show that incorporating auxiliary data into causal estimates is roughly equivalent to increasing the sample size by 20% on average, or by as much as 50-80% in some cases, relative to t-tests, and by about 10% on average, or as much as 30-50%, compared to state-of-the-art unbiased machine-learning estimates that use only data from the experiments. We show that the gains can be even larger for estimating subgroup effects, hold even when the remnant is unrepresentative of the A/B test sample, and extend to post-stratified estimators of population effects.
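The basic intuition can be illustrated with a small simulation. The sketch below is not the paper's actual estimator (which uses leave-one-out machine-learning adjustment) or its deep-learning models; it is a minimal stand-in, assuming a toy linear data-generating process and using only numpy, that shows how predictions trained solely on remnant users can be subtracted from experimental outcomes before computing a difference in means. Because the predictions depend only on pre-treatment data, the adjusted estimator stays (approximately) unbiased while its variance shrinks, and the variance ratio translates into the "equivalent sample size increase" quoted above.

```python
import numpy as np

rng = np.random.default_rng(0)

# ----- Remnant: historical platform users who were never in the experiment -----
# Train a predictive model of the outcome from (toy) log-data features.
# In the paper this role is played by deep-learning models; here a simple
# least-squares fit stands in for them.
n_remnant = 5000
x_rem = rng.normal(size=n_remnant)                 # stand-in for rich log data
y_rem = 2.0 * x_rem + rng.normal(size=n_remnant)   # historical outcomes
beta = np.polyfit(x_rem, y_rem, deg=1)             # remnant-trained model

# ----- A small randomized A/B test -----
n, tau = 200, 0.3                                  # sample size, true effect

def diff_in_means(outcome, treat):
    return outcome[treat == 1].mean() - outcome[treat == 0].mean()

def run_once(rng):
    x = rng.normal(size=n)                         # pre-treatment features
    t = rng.binomial(1, 0.5, size=n)               # random assignment
    y = 2.0 * x + tau * t + rng.normal(size=n)     # observed outcomes
    yhat = np.polyval(beta, x)                     # remnant-model predictions
    simple = diff_in_means(y, t)                   # ordinary difference in means
    adjusted = diff_in_means(y - yhat, t)          # remnant-adjusted estimate
    return simple, adjusted

sims = np.array([run_once(rng) for _ in range(2000)])
print("mean estimates (both close to tau = 0.3):", sims.mean(axis=0))
var_simple, var_adjusted = sims.var(axis=0)
# If Var(adjusted) = r * Var(simple), the adjustment behaves roughly like
# running the unadjusted analysis with n / r participants.
print("implied sample-size multiplier:", var_simple / var_adjusted)
```

In this toy setting the remnant model explains most of the outcome variance, so the implied sample-size multiplier will be far larger than the 10-20% average gains reported in the paper; the example only illustrates why folding an externally trained prediction into a design-based estimator improves precision without introducing bias.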

How to Cite

Sales, A. C., Prihar, E. B., Gagnon-Bartsch, J. A., & Heffernan, N. T. (2023). Using Auxiliary Data to Boost Precision in the Analysis of A/B Tests on an Online Educational Platform: New Data and New Results. Journal of Educational Data Mining, 15(2), 53–85. https://doi.org/10.5281/zenodo.8016854


Keywords

A/B tests, deep learning, evaluation

Section
EDM 2023 Journal Track