When Probabilities Are Not Enough - A Framework for Causal Explanations of Student Success Models


Published Dec 18, 2022
Lea Cohausz

Abstract

Student success and dropout prediction have gained increased attention in recent years, driven by the hope that identifying struggling students makes it possible to intervene with early help and to design programs based on patterns the models discover. Although many models now achieve remarkable accuracy, models that output simple probabilities are not enough to reach these ambitious goals. In this paper, we argue that they can serve as the first, exploratory step of a pipeline designed to reach them. Using Explainable Artificial Intelligence (XAI) methods such as SHAP and LIME, we can understand which features matter to the model and assume that features important to successful models are also important in real life. By additionally connecting this with an analysis of counterfactuals and a theory-driven causal analysis, we can begin to reasonably understand not just whether a student will struggle but why, and provide fitting help. We evaluate the pipeline on an artificial dataset to show that it can indeed recover complex causal mechanisms, and on a real-life dataset to demonstrate the method's applicability. We further argue that collaborations with social scientists are mutually beneficial in this area, but we also discuss the potential negative effects of personal intervention systems and call for careful designs.
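The first pipeline step rests on Shapley-value feature attributions, which SHAP approximates efficiently. As a minimal illustration of that underlying idea (not the paper's actual models or data), the sketch below computes exact Shapley values by enumerating feature coalitions for a toy dropout-risk function with hypothetical features and weights; "absent" features are filled in from a baseline, here a cohort average.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values: phi[i] averages feature i's marginal
    contribution to f over all coalitions of the other features,
    using `baseline` values for features outside the coalition."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                w = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += w * (f(with_i) - f(without_i))
    return phi

# Toy risk model (hypothetical): higher GPA lowers dropout risk, missed
# sessions raise it, and an interaction term adds extra risk on top.
def risk(v):
    gpa, missed = v
    return 0.5 - 0.2 * gpa + 0.1 * missed + 0.05 * gpa * missed

student = [1.0, 4.0]   # low GPA, many missed sessions
avg = [2.5, 1.0]       # cohort average as baseline

phi = shapley_values(risk, student, avg)
# Efficiency property: attributions sum to f(student) - f(baseline)
assert abs(sum(phi) - (risk(student) - risk(avg))) < 1e-9
```

Exact enumeration is exponential in the number of features; SHAP's contribution is making such attributions tractable for realistic models, after which the pipeline's counterfactual and causal steps interrogate whether the highlighted features are actionable.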

How to Cite

Cohausz, L. (2022). When Probabilities Are Not Enough - A Framework for Causal Explanations of Student Success Models. Journal of Educational Data Mining, 14(3), 52–75. https://doi.org/10.5281/zenodo.7304800


Keywords

dropout prediction, XAI, explainability, interpretability

References
ADADI, A. AND BERRADA, M. 2018. Peeking inside the black-box: a survey on explainable artificial intelligence (xai). IEEE Access 6, 52138–52160.

ALAMRI, R. AND ALHARBI, B. 2021. Explainable student performance prediction models: a systematic review. IEEE Access 9, 33132–33143.

BARANYI, M., NAGY, M., AND MOLONTAY, R. 2020. Interpretable deep learning for university dropout prediction. In Proceedings of the 21st Annual Conference on Information Technology Education. Association for Computing Machinery, 13–19.

CHEN, R. 2012. Institutional characteristics and college student dropout risks: A multilevel event history analysis. Research in Higher Education 53, 5, 487–505.

CHITTI, M., CHITTI, P., AND JAYABALAN, M. 2020. Need for interpretable student performance prediction. In 2020 13th International Conference on Developments in eSystems Engineering (DeSE). IEEE, 269–272.

COHAUSZ, L. 2022. Towards real interpretability of student success prediction combining methods of XAI and social science. In Proceedings of the 15th International Conference on Educational Data Mining, A. Mitrovic and N. Bosch, Eds. International Educational Data Mining Society, 361–367.

CONATI, C., BARRAL, O., PUTNAM, V., AND RIEGER, L. 2021. Toward personalized xai: A case study in intelligent tutoring systems. Artificial Intelligence 298, 103503.

DEL BONIFRO, F., GABBRIELLI, M., LISANTI, G., AND ZINGARO, S. P. 2020. Student dropout prediction. In International Conference on Artificial Intelligence in Education, I. I. Bittencourt, M. Cukurova, K. Muldner, R. Luckin, and E. Millán, Eds. Springer, 129–140.

EMMERT-STREIB, F., YLI-HARJA, O., AND DEHMER, M. 2020. Explainable artificial intelligence and machine learning: A reality rooted perspective. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 10, 6, e1368.

EVANS, S. AND MORRISON, B. 2011. Meeting the challenges of english-medium higher education: The first-year experience in hong kong. English for Specific Purposes 30, 3, 198–208.

HUR, P., LEE, H., BHAT, S., AND BOSCH, N. 2022. Using machine learning explainability methods to personalize interventions for students. In Proceedings of the 15th International Conference on Educational Data Mining, A. Mitrovic and N. Bosch, Eds. International Educational Data Mining Society, 438–445.

KEANE, M. T., KENNY, E. M., DELANEY, E., AND SMYTH, B. 2021. If only we had better counterfactual explanations: Five key deficits to rectify in the evaluation of counterfactual xai techniques. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-21, Z.-H. Zhou, Ed. International Joint Conferences on Artificial Intelligence Organization, 4466–4474. Survey Track.

KEANE, M. T. AND SMYTH, B. 2020. Good counterfactuals and where to find them: A case-based technique for generating counterfactuals for explainable ai (xai). In International Conference on Case-Based Reasoning, I. Watson and R. Weber, Eds. Springer, 163–178.

KHOSRAVI, H., SHUM, S. B., CHEN, G., CONATI, C., TSAI, Y.-S., KAY, J., KNIGHT, S., MARTINEZ-MALDONADO, R., SADIQ, S., AND GAŠEVIĆ, D. 2022. Explainable artificial intelligence in education. Computers and Education: Artificial Intelligence 3, 100074.

LANGER, M., OSTER, D., SPEITH, T., HERMANNS, H., KÄSTNER, L., SCHMIDT, E., SESING, A., AND BAUM, K. 2021. What do we want from explainable artificial intelligence (xai)? A stakeholder perspective on xai and a conceptual model guiding interdisciplinary xai research. Artificial Intelligence 296, 103473.

LUNDBERG, S. M. AND LEE, S.-I. 2017. A unified approach to interpreting model predictions. In Proceedings of the 31st International Conference on Neural Information Processing Systems, U. von Luxburg, I. Guyon, S. Bengio, H. Wallach, and R. Fergus, Eds. Vol. 30. Curran Associates Inc., 4768–4777.

MANRIQUE, R., NUNES, B. P., MARINO, O., CASANOVA, M. A., AND NURMIKKO-FULLER, T. 2019. An analysis of student representation, representative features and classification algorithms to predict degree dropout. In Proceedings of the 9th International Conference on Learning Analytics & Knowledge. Association for Computing Machinery, 401–410.

MESKE, C., BUNDE, E., SCHNEIDER, J., AND GERSCH, M. 2022. Explainable artificial intelligence: objectives, stakeholders, and future research opportunities. Information Systems Management 39, 1, 53–63.

MILLER, T. 2019. Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence 267, 1–38.

MORRICE, L. 2013. Refugees in higher education: Boundaries of belonging and recognition, stigma and exclusion. International Journal of Lifelong Education 32, 5, 652–668.

MU, T., JETTEN, A., AND BRUNSKILL, E. 2020. Towards suggesting actionable interventions for wheel-spinning students. In Proceedings of the 13th International Conference on Educational Data Mining, A. N. Rafferty, J. Whitehill, C. Romero, and V. Cavalli-Sforza, Eds. International Educational Data Mining Society, 183–193.

OSWALD, M. E. AND GROSJEAN, S. 2012. Confirmation bias. In Cognitive Illusions, R. Pohl, Ed. Psychology Press, 91–108.

PRENKAJ, B., VELARDI, P., STILO, G., DISTANTE, D., AND FARALLI, S. 2020. A survey of machine learning approaches for student dropout prediction in online courses. ACM Computing Surveys (CSUR) 53, 3, 1–34.

QIU, L., LIU, Y., HU, Q., AND LIU, Y. 2019. Student dropout prediction in massive open online courses by convolutional neural networks. Soft Computing 23, 20, 10287–10301.

RIBEIRO, M. T., SINGH, S., AND GUESTRIN, C. 2016. Model-agnostic interpretability of machine learning. In Proceedings of the 2016 ICML Workshop on Human Interpretability in Machine Learning (WHI 2016), B. Kim, D. M. Malioutov, and K. R. Varshney, Eds. arXiv:1606.05386.

SPITZER, T. M. 2000. Predictors of college success: A comparison of traditional and nontraditional age students. Journal of Student Affairs Research and Practice 38, 1, 99–115.

TEXTOR, J., VAN DER ZANDER, B., GILTHORPE, M. S., LIŚKIEWICZ, M., AND ELLISON, G. T. 2016. Robust causal inference using directed acyclic graphs: the r package ‘dagitty’. International Journal of Epidemiology 45, 6, 1887–1894.

XING, W. AND DU, D. 2019. Dropout prediction in moocs: Using deep learning for personalized intervention. Journal of Educational Computing Research 57, 3, 547–570.

YASMIN, D. 2013. Application of the classification tree model in predicting learner dropout behaviour in open and distance learning. Distance Education 34, 2, 218–231.

ZEINEDDINE, H., BRAENDLE, U., AND FARAH, A. 2021. Enhancing prediction of student success: Automated machine learning approach. Computers & Electrical Engineering 89, 106903.
Section

Extended Articles from the EDM 2022 Conference
