Unraveling Students’ Interaction Around a Tangible Interface using Multimodal Learning Analytics



Published Oct 18, 2015
Bertrand Schneider, Paulo Blikstein


In this paper, we describe multimodal learning analytics (MMLA) techniques for analyzing data collected around an interactive learning environment. In a previous study (Schneider & Blikstein, submitted), we designed and evaluated a Tangible User Interface (TUI) where dyads of students were asked to learn about the human hearing system by reconstructing it. In the current study, we present an analysis of the log data collected, both from students’ interaction with the tangible interface and from their gestures, and we describe how we extracted meaningful predictors of student learning from these two datasets. First, we show how Natural Language Processing (NLP) techniques can be applied to the tangible interface logs to predict learning gains. Second, we explore how Kinect™ data can inform “in-situ” interactions around a tabletop, using clustering algorithms to find prototypical body positions. Finally, we performed a median split on students’ learning scores to divide them into two groups, and fed the extracted features to a machine-learning classifier (a Support Vector Machine). We found that we were able to predict students’ learning gains (i.e., being above or below the median split) with very high accuracy. We discuss the implications of these results for analyzing rich data from multimodal learning environments.
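The analysis pipeline summarized above (cluster body-tracking frames into prototypical postures, build per-student posture features, median-split learning scores, then classify with an SVM) can be sketched as follows. This is a minimal illustration, not the paper's actual code: it assumes scikit-learn and NumPy, and the "skeleton" frames, cluster count, sample sizes, and learning gains are all synthetic stand-ins for the real Kinect and interface-log data.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# 1. Toy "skeleton" frames: each row stands in for flattened joint
#    coordinates from a body tracker. Two synthetic clusters play the
#    role of two prototypical body positions.
frames_a = rng.normal(loc=0.0, scale=0.1, size=(200, 6))
frames_b = rng.normal(loc=1.0, scale=0.1, size=(200, 6))
frames = np.vstack([frames_a, frames_b])

# 2. Cluster the frames to recover prototypical postures.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(frames)

# 3. Per-student features: the fraction of time spent in each posture.
n_students = 20
features = np.zeros((n_students, 2))
for s in range(n_students):
    idx = rng.choice(len(frames), size=40, replace=False)
    counts = np.bincount(kmeans.labels_[idx], minlength=2)
    features[s] = counts / counts.sum()

# 4. Median split on (synthetic) learning gains -> binary labels.
gains = features[:, 0] + rng.normal(scale=0.05, size=n_students)
labels = (gains > np.median(gains)).astype(int)

# 5. SVM predicting above/below-median learning from posture features.
scores = cross_val_score(SVC(kernel="linear"), features, labels, cv=5)
print(round(float(scores.mean()), 2))
```

In the actual study the features fed to the classifier came from both the interface logs and the gesture data; the synthetic gains here are deliberately correlated with the posture features so that the classifier has signal to find.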

How to Cite

Schneider, B., & Blikstein, P. (2015). Unraveling Students’ Interaction Around a Tangible Interface using Multimodal Learning Analytics. JEDM | Journal of Educational Data Mining, 7(3), 89-116. Retrieved from https://jedm.educationaldatamining.org/index.php/JEDM/article/view/JEDM102


References

Abrahamson, D., Trninic, D., Gutiérrez, J.F., Huth, J., and Lee, R.G. Hooks and Shifts: A Dialectical Study of Mediated Discovery. Technology, Knowledge and Learning 16, 1 (2011), 55–85.

Anderson, M.L. Embodied cognition: A field guide. Artificial intelligence 149, 1 (2003), 91–130.

Bachour, K., Kaplan, F., and Dillenbourg, P. Reflect: An Interactive Table for Regulating Face-to-Face Collaborative Learning. In Times of Convergence: Technologies Across Learning Contexts. Springer, Berlin, Heidelberg, 2008, 39–48.

Blikstein, P. Multimodal Learning Analytics. Proceedings of the Third International Conference on Learning Analytics and Knowledge, ACM (2013), 102–106.

Blikstein, P., Worsley, M., Piech, C., Sahami, M., Cooper, S., and Koller, D. Programming Pluralism: Using Learning Analytics to Detect Patterns in the Learning of Computer Programming. Journal of the Learning Sciences (2014), in press.

Bransford, J. and Schwartz, D. Rethinking Transfer: A Simple Proposal with Multiple Implications. Review of Research in Education 24 (1999).

Chartrand, T.L. and Bargh, J.A. The chameleon effect: The perception–behavior link and social interaction. Journal of Personality and Social Psychology 76, 6 (1999), 893–910.

Church, R.B. Using Gesture and Speech to Capture Transitions in Learning. Cognitive Development 14, 2 (1999), 313–342.

Escalera, S., Gonzàlez, J., Baró, X., Reyes, M., Lopes, O., Guyon, I., and Escalante, H. Multi-modal Gesture Recognition Challenge 2013: Dataset and Results. Proceedings of the 15th ACM International Conference on Multimodal Interaction, ACM (2013), 445–452.

Hall, E.T. The hidden dimension (1st ed.). Doubleday & Co, New York, NY, US, 1966.

Hatano, G. and Inagaki, K. Two courses of expertise. In H.W. Stevenson, H. Azuma and K. Hakuta, eds., Child development and education in Japan. W H Freeman/Times Books/ Henry Holt & Co, New York, NY, US, 1986, 262–272.

Manning, C.D., Raghavan, P., and Schütze, H. Introduction to Information Retrieval. Cambridge University Press, New York, NY, USA, 2008.

Richardson, D.C. and Dale, R. Looking To Understand: The Coupling Between Speakers’ and Listeners’ Eye Movements and Its Relationship to Discourse Comprehension. Cognitive Science 29, 6 (2005), 1045–1060.

Roth, W.-M. Gestures: Their Role in Teaching and Learning. Review of Educational Research 71, 3 (2001), 365–392.

Rothman, K.J. No Adjustments Are Needed for Multiple Comparisons. Epidemiology 1, 1 (1990), 43–46.

Schneider, B. and Blikstein, P. Discovery Versus Direct Instruction: Learning Outcomes of Two Pedagogical Models Using Tangible Interfaces. Proceedings of the International Conference on Computer-Supported Collaborative Learning, CSCL (2015).

Schneider, B. and Pea, R. Real-time mutual gaze perception enhances collaborative learning and collaboration quality. International Journal of Computer-Supported Collaborative Learning 8, 4 (2013), 375–397.

Schwartz, D.L. and Arena, D. Measuring What Matters Most: Choice-Based Assessments for the Digital Age. MIT Press, 2013.

Shaer, O., Strait, M., Valdes, C., Feng, T., Lintz, M., and Wang, H. Enhancing genomic learning through tabletop interaction. Proceedings of the 2011 annual conference on Human factors in computing systems, ACM (2011), 2817–2826.

Sherin, B. Using Computational Methods to Discover Student Science Conceptions in Interview Data. Proceedings of the 2Nd International Conference on Learning Analytics and Knowledge, ACM (2012), 188–197.

Siegler, R.S. and Crowley, K. The Microgenetic Method: A Direct Means for Studying Cognitive Development. American Psychologist 46, 6 (1991), 606.

Tschan, F. Ideal Cycles of Communication (or Cognitions) in Triads, Dyads, and Individuals. Small Group Research 33, 6 (2002), 615 –643.

Worsley, M. and Blikstein, P. Towards the Development of Multimodal Action Based Assessment. Proceedings of the Third International Conference on Learning Analytics and Knowledge, ACM (2013), 94–101.