PhD Dissertation Defence of Andrea Adamoli

An Agile Concept Inventory Methodology to Accurately and Efficiently Measure Student Programming Language Misconceptions

USI, Lugano, Switzerland
Tue, Nov 14, 2023


Abstract

While learning new subjects, students often develop misconceptions that negatively affect their results and even their academic success. Among the methodologies suitable for diagnosing such difficulties, concept inventories (CIs) – collections of multiple-choice questions – are the most widely used tools for spotting misconceptions and allowing instructors to correct them promptly. While the main advantage of CIs is the responsiveness with which answers can be analyzed, their measurement speed (the number of misconceptions that can be tested in a given time frame) and reliability remain poor, which often hinders the practical application of misconception diagnostic tools in classrooms. This thesis introduces an improved CI methodology that enables efficient and accurate detection of large sets of misconceptions in classes and thus yields a very detailed picture of student difficulties. The methodology, called AMiCI (Agile Misconception Concept Inventory), is based on observing the answers to the individual options of multiple-choice questions rather than to each question as a whole. We are therefore able to extract more information from each item and thus improve the measurement speed. To ensure the accuracy of the methodology, we introduced a collaborative validation phase based on expert and student feedback, analyzed with a rigorous statistical approach. We integrated AMiCI into several CS1/2 and advanced Java programming courses and tested up to 89 distinct misconceptions in weekly 30-minute sessions within a semester. Our methodology showed a 4x increase in measurement speed compared to state-of-the-art CIs while preserving satisfactory accuracy. Thanks to this extensive coverage of misconceptions, we built a new metric, “knowledge fitness”, to objectively assess student difficulties in programming and better support their learning process. We were thus able to reliably measure the dissemination of misconceptions, discuss them in class, and hypothesize about their origins.
In our investigation, we further analyzed the sources of question inaccuracy in detail, finding that inaccuracy depends not only on a question’s content but also on its “type”. Questions based on code interpretation surprisingly showed lower accuracy than questions based on textual descriptions or other abstract representations (notional machines). “Response (or measurement) bias” also played an important role in the overall accuracy: some student answers to quiz items proved so contradictory as to call their coherence into question. To limit these sources of inaccuracy, we developed a web platform for the management and administration of multiple-choice questions that implements our methodology: the AMiCI-Platform. Although its use reduced response bias compared to generic (online and paper) forms, overall accuracy did not improve significantly. Nevertheless, the platform proved to be a very useful tool for managing the CI and analyzing results quickly, significantly reducing the cost (in time and resources) of validation. In conclusion, AMiCI proved to be an efficient and reliable methodology for building CIs that detect programming language misconceptions and provide a better understanding of individual and class-wide difficulties. It thereby offers a means to improve communication between instructors and students, to the benefit of both. Thanks to AMiCI, new research possibilities also open up, for example into the origins of misconceptions or the effectiveness of various didactic approaches, measured through the knowledge fitness. AMiCI also constitutes an important building block for an online tutoring system capable of verifying the quality of students’ knowledge and automatically directing them toward targeted activities.

External Link

PhD Dissertation