SIGCSE '24 Poster

Using Notional Machines to Automatically Assess Students' Comprehension of Their Own Code

Portland, Oregon, USA
Fri, Mar 22, 2024


Code comprehension has been shown to be both challenging for students and important for positive learning outcomes. Students do not always understand the code they write, and the problem has been exacerbated by the advent of large language models that automatically generate code that may or may not be correct. Students must now not only understand their own code, but also be able to critically analyze automatically generated code.

To help students with code comprehension, instructors often use notional machines. Notional machines are used not only by instructors to explain code, but also in activities and exam questions given to students. Traditionally, such questions involve code that the students did not write themselves. However, asking students questions about their own code (Questions on Learners' Code, QLCs) has been shown to strengthen their code comprehension.

This poster presents an approach that combines notional machines and QLCs to automatically generate personalized, notional machine-based questions about learners' code. Our aim is to understand whether notional machine-based QLCs are effective. We conducted a pilot study with 67 students to test our approach, and we plan a comprehensive empirical evaluation to study its effectiveness.
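
As a minimal sketch of what automatic question generation over a learner's code could look like, the Python snippet below parses a hypothetical student submission with the standard ast module and produces notional machine-style questions about variable state and loop execution. The function name generate_qlc and the sample submission are illustrative assumptions, not the system described in the poster.

```python
# Hypothetical sketch: generating notional machine-style QLCs from a
# student's own Python code. Assumes small, self-contained submissions.
import ast

STUDENT_CODE = """
def count_evens(numbers):
    count = 0
    for n in numbers:
        if n % 2 == 0:
            count = count + 1
    return count
"""

def generate_qlc(source: str) -> list[str]:
    """Walk the AST of a student's code and emit personalized questions."""
    tree = ast.parse(source)
    questions = []
    for node in ast.walk(tree):
        # Assignments: ask about the state of the notional machine.
        if isinstance(node, ast.Assign) and isinstance(node.targets[0], ast.Name):
            name = node.targets[0].id
            questions.append(
                f"Line {node.lineno}: what value is stored in '{name}' "
                f"immediately after this statement runs for the first time?"
            )
        # Loops: ask how many times the body executes for a concrete input.
        elif isinstance(node, ast.For):
            questions.append(
                f"Line {node.lineno}: how many times does the body of this "
                f"loop execute when the function is called with [1, 2, 3, 4]?"
            )
    return questions

if __name__ == "__main__":
    for question in generate_qlc(STUDENT_CODE):
        print(question)
```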
