WordBytes: Exploring an Intermediate Constraint Format for Rapid Classification of Student Answers on Constructed Response Assessments

Published Dec 23, 2017
Kerry J. Kim, Denise S. Pope, Daniel Wendel, Eli Meir

Abstract

Computerized classification of student answers offers the possibility of instant feedback and improved learning. Open response (OR) questions provide greater insight into student thinking and understanding than more constrained multiple choice (MC) questions, but developing automated classifiers for them is harder, often requiring a machine learning system to be trained on many human-classified answers. Here we explore a novel intermediate constraint question format called WordBytes (WB), in which students assemble one-sentence answers to two different college evolutionary biology questions by choosing, then ordering, fixed tiles containing words and phrases. We found that WB allowed students to construct hundreds to thousands of different answers from a pool of ≤20 tiles, with multiple ways to express correct answers and incorrect answers reflecting different misconceptions. Humans could specify rules for an automated WB grader that accurately classified answers as correct/incorrect with Cohen's kappa ≥ 0.88, near the measured intra-rater reliability of two human graders and the performance of machine classification of OR answers (Nehm et al., 2012). Finer-grained classification identifying the specific misconception was less accurate (Cohen's kappa < 0.75); accuracy could be improved either by using a machine learner or by revising the rules, but both would require considerably more development effort. Our results indicate that WB may allow rapid development of automated correct/incorrect answer classification without collecting and hand-grading hundreds of student answers.
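
To make the two measurement ideas in the abstract concrete, the sketch below illustrates (1) a hand-written rule set that classifies an ordered sequence of WordBytes tiles as correct/incorrect and (2) Cohen's (1960) kappa, κ = (p_o − p_e)/(1 − p_e), comparing machine grades with human grades. This is a minimal illustration, not the authors' grader: all tile strings, rule keywords, and grades are hypothetical.

```python
# A minimal sketch, not the authors' implementation. All tile strings,
# rules, and human grades below are hypothetical illustrations.

from collections import Counter

def rule_based_grade(answer_tiles):
    """Grade one answer, given as an ordered list of tile strings."""
    text = " ".join(answer_tiles)
    # Rule 1: a correct answer must mention heritable variation...
    has_variation = "vary" in text or "variation" in text
    # Rule 2: ...and differential survival or reproduction.
    has_selection = "survive" in text or "reproduce" in text
    # Rule 3: tiles voicing a need-driven-change misconception make the
    # answer incorrect regardless of what else it contains.
    has_misconception = "needed to" in text or "wanted to" in text
    if has_variation and has_selection and not has_misconception:
        return "correct"
    return "incorrect"

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa = (p_o - p_e) / (1 - p_e) for two label sequences."""
    n = len(labels_a)
    # Observed agreement: fraction of answers graded identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: expected overlap given each grader's label rates.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    # Degenerate case: both graders used one identical label throughout.
    return 1.0 if p_e == 1 else (p_o - p_e) / (1 - p_e)

if __name__ == "__main__":
    answers = [
        ["snails", "vary in shell thickness", "so thick-shelled snails",
         "survive crab attacks", "more often"],
        ["snails", "grew thicker shells", "because they needed to",
         "survive crab attacks"],
    ]
    machine = [rule_based_grade(a) for a in answers]
    human = ["correct", "incorrect"]  # hypothetical human grades
    print(machine, round(cohens_kappa(machine, human), 2))
```

On these two toy answers the rules agree with the human grades, so kappa is 1.0; a real WB rule set would key on specific tile identities and their order rather than substring matches, and agreement at the level the paper reports (kappa ≥ 0.88) would be measured over many student answers.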

How to Cite

Kim, K. J., Pope, D. S., Wendel, D., & Meir, E. (2017). WordBytes: Exploring an Intermediate Constraint Format for Rapid Classification of Student Answers on Constructed Response Assessments. Journal of Educational Data Mining, 9(2), 45–71. https://doi.org/10.5281/zenodo.3554721

Keywords

student answers, assessment, constructed response, reliability

References
AMERICAN ASSOCIATION FOR THE ADVANCEMENT OF SCIENCE (AAAS). 2011. Vision and change in undergraduate biology education. AAAS, Washington, DC.

BEGGROW E. P., HA M., NEHM R. H., PEARL D., AND BOONE W. J. 2014. Assessing scientific practices using machine-learning methods: How closely do they match clinical interview performance? Journal of Science Education and Technology, 23, 160-182.

BEJAR I. I. 1991. A methodology for scoring open-ended architectural design problems. Journal of Applied Psychology, 76, 4, 522-532.

BENNETT R. E. 1993. On the meaning of constructed response. In Bennett R. E. and Ward W. C. (Eds.), Construction versus choice in cognitive measurement: Issues in constructed response, performance testing, and portfolio assessment. Lawrence Erlbaum Associates, Hillsdale, NJ. 1-27.

BLACK P. AND WILIAM D. 1998. Assessment and classroom learning. Assessment in Education: Principles, Policy & Practice, 5, 1, 7-74.

CHANG C. C. AND LIN C. J. 2011. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2, 3, 27.

COHEN J. 1960. A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20, 1, 37-46.

HA M., NEHM R. H., URBAN-LURAIN M., AND MERRILL J. E. 2011. Applying computerized scoring models of written biological explanations across courses and colleges: Prospects and limitations. CBE Life Sciences Education, 10, 379.

HA M. AND NEHM R. H. 2016. The impact of misspelled words on automated computer scoring: a case study of scientific explanations. Journal of Science Education and Technology, 25, 3, 358.

HERRON J., ABRAHAM J., AND MEIR E. 2014. Mendelian Pigs. Simbio.com.

HERRON J. AND MEIR E. 2014. Darwinian Snails. Simbio.com.

HOFMANN M. AND KLINKENBERG R. (eds) 2013. RapidMiner: Data mining use cases and business analytics applications (Chapman & Hall/CRC Data Mining and Knowledge Discovery Series), CRC Press.

HSU C. W., CHANG C. C., AND LIN C. J. 2003. A practical guide to support vector classification. https://www.cs.sfu.ca/people/Faculty/teaching/726/spring11/svmguide.pdf

KLEIN S. P. 2008. Characteristics of hand and machine-assigned scores to college students’ answers to open-ended tasks. In Nolan D. and Speed T. (Eds.), Probability and statistics: Essays in honor of David A. Freedman. Institute of Mathematical Statistics, Beachwood, OH. 76-89.

KLOPFER E. 2008. Augmented learning: Research and design of mobile educational games. MIT Press, Cambridge, MA.

KRIPPENDORFF K. 1980. Content analysis: An introduction to its methodology. Sage Publications.

LANDIS J. R. AND KOCH G. G. 1977. The measurement of observer agreement for categorical data. Biometrics, 33, 159-174.

LEELAWONG K. AND BISWAS G. 2008. Designing learning by teaching agents: The Betty’s Brain system. International Journal of Artificial Intelligence in Education, 18, 3, 181-208.

LUCKIE D. B., HARRISON S. H., WALLACE J. L., AND EBERT-MAY D. 2008. Studying C-TOOLS: Automated grading for online concept maps. Conference Proceedings from Conceptual Assessment in Biology II, 2, 1, 1-13.

LUKHOFF B. 2010. The design and validation of an automatically-scored constructed-response item type for measuring graphical representation skill. Doctoral dissertation, Stanford University, Stanford, CA.

MAYFIELD E., ADAMSON D., AND ROSE C. P. 2014. LightSide researcher’s workbench user manual.

MOHARRERI K., HA M., AND NEHM R. H. 2014. EvoGrader: an online formative assessment tool for automatically evaluating written evolutionary explanations. Evolution: Education and Outreach, 7, 15.

NATIONAL RESEARCH COUNCIL. 2001. Knowing what students know: The science and design of educational assessment. National Academies Press, Washington, DC.

NEHM R. H., HA M., AND MAYFIELD E. 2012. Transforming biology assessment with machine learning: Automated scoring of written evolutionary explanations. Journal of Science Education and Technology, 21, 183.

NEHM R. H. AND HAERTIG H. 2012. Human vs. computer diagnosis of students’ natural selection knowledge: testing the efficacy of text analytic software. Journal of Science Education and Technology, 21, 1, 56-73.

QUINLAN R. 1993. C4.5: Programs for Machine Learning. Morgan Kaufmann Publishers, San Mateo, CA.

RITTHOFF O., KLINKENBERG R., MIERSWA I., AND FELSKE S. 2001. YALE: Yet Another Learning Environment. In LLWA ’01 – Tagungsband der GI-Workshop-Woche Lernen-Lehren-Wissen-Adaptivität. University of Dortmund, Dortmund, Germany. Technical Report 763, 84-92.

ROMERO C., VENTURA S., PECHENIZKIY M., AND BAKER R. S. 2010. Handbook of Educational Data Mining. CRC Press.

SCALISE K. AND GIFFORD B. 2006. Computer based assessment in E-Learning: A framework for constructing “intermediate constraint” questions and tasks for technology platforms. Journal of Technology, Learning, and Assessment, 4, 6, 4-44.

SHUTE V. J. 2008. Focus on formative feedback. Review of Educational Research, 78, 1, 153-189.

SMITH M. K., WOOD W. B., AND KNIGHT J. K. 2008. The genetics concept assessment: A new concept inventory for gauging student understanding of genetics. CBE Life Sciences Education, 7, 4, 422-430.

SPSS INC. 2006. SPSS text analysis for surveys™ 2.0 user’s guide. SPSS Inc, Chicago, IL.

THE CARNEGIE CLASSIFICATION OF INSTITUTIONS OF HIGHER EDUCATION. n.d. About Carnegie Classification. Retrieved Dec 15, 2016 from http://carnegieclassifications.iu.edu/.

VOSNIADOU S. 2008. Conceptual change research: An introduction. In Vosniadou S. (Ed.), International Handbook of Research on Conceptual Change (1st ed.). Routledge, New York/Abingdon. xiii-xxviii.

YANG Y., BUCKENDAHL C. W., JUSZKIEWICZ P. J., AND BHOLA D. S. 2002. A review of strategies for validating computer automated scoring. Applied Measurement in Education, 15, 4, 391-412.