When Probabilities Are Not Enough - A Framework for Causal Explanations of Student Success Models
Abstract
Student success and drop-out prediction has gained increasing attention in recent years, driven by the hope that identifying struggling students makes it possible to intervene with early help and to design programs based on the patterns the models discover. Although many models now achieve remarkable accuracy, models that output simple probabilities are not enough to reach these ambitious goals. In this paper, we argue that such models can serve as the first, exploratory step of a pipeline designed to reach them. Using Explainable Artificial Intelligence (XAI) methods such as SHAP and LIME, we can understand which features matter to the model and assume that features important to well-performing models are also important in real life. By additionally connecting this with an analysis of counterfactuals and a theory-driven causal analysis, we can begin to reasonably understand not just whether a student will struggle but why, and provide fitting help. We evaluate the pipeline on an artificial dataset to show that it can indeed recover complex causal mechanisms, and on a real-life dataset to demonstrate the method's applicability. We further argue that collaborations with social scientists are mutually beneficial in this area, but we also discuss the potential negative effects of personalized intervention systems and call for careful designs.
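As a rough illustration of the pipeline's first two steps described above (training a predictive model and inspecting it with SHAP feature attributions), a minimal Python sketch might look like the following. The synthetic student features, the toy "success" mechanism, and the choice of a random forest are assumptions made for illustration only and do not reflect the paper's actual datasets or models.

```python
import numpy as np
import pandas as pd
import shap  # pip install shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical student data; the feature names and the generating
# mechanism are illustrative assumptions, not the paper's datasets.
rng = np.random.default_rng(0)
n = 1000
X = pd.DataFrame({
    "prior_gpa": rng.normal(2.8, 0.5, n),
    "first_semester_credits": rng.integers(6, 18, n),
    "works_part_time": rng.integers(0, 2, n),
})
# Toy causal mechanism: success is driven mostly by prior GPA.
score = (X["prior_gpa"] + 0.05 * X["first_semester_credits"]
         - 0.3 * X["works_part_time"] + rng.normal(0, 0.3, n))
y = (score > score.median()).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# SHAP attributions: which features drive the model's predictions?
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
# Older shap versions return one array per class, newer ones a 3D array;
# keep the attributions for the positive ("success") class either way.
pos = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]
importance = np.abs(pos).mean(axis=0)
for name, value in sorted(zip(X.columns, importance), key=lambda t: -t[1]):
    print(f"{name}: {value:.3f}")
```

In the full pipeline, the features ranked highly here would be treated as candidates rather than causes: they would still need to be checked against counterfactual analyses and domain theory before informing any intervention.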
Keywords: dropout prediction, XAI, explainability, interpretability

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.