Collaboration-Type Identification in Educational Datasets


Published Jun 29, 2014
Andrew Waters, Christoph Studer, Richard Baraniuk

Abstract

Identifying collaboration between learners in a course is an important challenge in education for two reasons: First, depending on the course's rules, collaboration can be considered a form of cheating. Second, it enables a more accurate evaluation of each learner's competence. While such collaboration identification is already challenging in traditional classroom settings consisting of a small number of learners, the problem is greatly exacerbated in online courses and massive open online courses (MOOCs), where potentially thousands of learners have little or no contact with the course instructor. In this work, we propose a novel methodology for collaboration-type identification, which both identifies learners who are likely collaborating and classifies the type of collaboration employed. Under a fully Bayesian setting, we infer the probability of learners succeeding on a series of test items solely from graded response data. We then use this information to jointly compute the likelihood that two learners were collaborating and which collaboration model (or type) was used. We demonstrate the efficacy of the proposed methods on both synthetic and real-world educational data; for the latter, the proposed methods find strong evidence of collaboration among learners in two non-collaborative take-home exams.
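The abstract describes a two-stage approach: infer each learner's probability of success on each item under a Bayesian Rasch model, then compare the likelihood of a pair's joint responses under an independence hypothesis versus a collaboration model. The sketch below illustrates that kind of comparison in Python; it is not the authors' implementation, and the function names, the simple answer-copying model, and the copying probability `c` are illustrative assumptions.

```python
import math

def rasch_prob(ability, difficulty):
    # Rasch model: probability of a correct response as a logistic
    # function of learner ability minus item difficulty.
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def loglik_independent(p_a, p_b, y_a, y_b):
    # Joint log-likelihood of two learners' graded (0/1) responses,
    # assuming they answered every item independently.
    ll = 0.0
    for pa, pb, ya, yb in zip(p_a, p_b, y_a, y_b):
        ll += math.log(pa if ya else 1.0 - pa)
        ll += math.log(pb if yb else 1.0 - pb)
    return ll

def loglik_copying(p_a, p_b, y_a, y_b, c=0.9):
    # One illustrative collaboration model: learner B copies learner A's
    # answer with probability c and otherwise answers independently.
    ll = 0.0
    for pa, pb, ya, yb in zip(p_a, p_b, y_a, y_b):
        ll += math.log(pa if ya else 1.0 - pa)
        own = pb if yb else 1.0 - pb
        ll += math.log(c * (1.0 if ya == yb else 0.0) + (1.0 - c) * own)
    return ll

# Two low-ability learners giving identical correct answers on hard items:
# the log-likelihood ratio favors the copying model over independence.
difficulties = [0.0, 1.0, 2.0, 3.0]
p_a = [rasch_prob(-1.0, d) for d in difficulties]
p_b = [rasch_prob(-1.0, d) for d in difficulties]
y_a = [1, 1, 1, 1]
y_b = [1, 1, 1, 1]
llr = loglik_copying(p_a, p_b, y_a, y_b) - loglik_independent(p_a, p_b, y_a, y_b)
```

A positive `llr` indicates that the observed agreement is better explained by copying than by two independently strong (or lucky) learners; the paper's method extends this idea to multiple collaboration types under full Bayesian inference.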

How to Cite

Waters, A., Studer, C., & Baraniuk, R. (2014). Collaboration-Type Identification in Educational Datasets. JEDM | Journal of Educational Data Mining, 6(1), 28-52. https://doi.org/10.5281/zenodo.3554681


Keywords

collaboration-type identification, Bayesian Rasch, sparse factor analysis, collaboration
