Recent Publications

See Google Scholar for a complete publication list.

Understanding program execution is a challenging task for novice programmers. The semantic rules which determine how execution affects the program state are numerous and complex, and students frequently hold fundamental misconceptions about these rules. If students do not build a correct mental model of program execution early on, they will face substantial hurdles as they try to develop and debug their code. This paper presents VizQuiz, a tool for auto-generating multiple choice quizzes designed to help students gain insight into the semantic rules which govern program execution. VizQuiz provides students with an initial state and a piece of code, and tasks them with mentally tracing the execution of that code and selecting the correct final state. Reference diagrams are used to depict the initial and final states, and as feedback to help students visualize the correct behavior if they select a wrong answer. Feedback is auto-generated, so students can immediately correct their misconceptions and re-attempt.
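
As a purely illustrative example (not taken from the paper), the kind of question VizQuiz poses might resemble the following, where the answer hinges on whether the student's mental model distinguishes mutating a shared list from rebinding a name:

```python
# Hypothetical VizQuiz-style tracing question (illustrative only).
# Initial state shown to the student:
a = [1, 2, 3]
b = a

# Code to trace mentally:
b.append(4)
a = a + [5]

# Candidate final states:
#   (A) a -> [1, 2, 3, 4, 5], b -> the same list as a
#   (B) a -> [1, 2, 3, 4, 5], b -> [1, 2, 3, 4]
#   (C) a -> [1, 2, 3, 5],    b -> [1, 2, 3, 4]
# (B) is correct: append mutates the list both names share, but `a + [5]`
# builds a new list and rebinds only a, so b still refers to the old list.
assert a == [1, 2, 3, 4, 5] and b == [1, 2, 3, 4]
```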

The software development process often follows a circuitous path, littered with mistakes and backtracks. This is particularly true for novice programmers, who typically navigate through a variety of errors en route to their final solution. This paper presents a quantitative analysis of a large dataset of Python programs written by novice students. The analysis paints a multifaceted picture of the errors that students encounter, providing insight into the distribution, duration, and evolution of these errors. Ultimately, this paper aims to spark further conversation on the mistakes made by novice programmers, and to inform the decisions instructors make as they help students overcome these mistakes.
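
As a rough sketch of how such an error distribution could be tallied (an assumption about tooling, not the analysis pipeline used in the paper; the `submissions/` directory and the five-second timeout are hypothetical):

```python
# Hedged sketch: run each student submission and count the exception type
# reported on the last line of its traceback, building an error distribution.
import collections
import pathlib
import subprocess
import sys

def error_kind(path: pathlib.Path) -> str:
    """Execute one submission in a subprocess and name the error it produced, if any."""
    try:
        result = subprocess.run([sys.executable, str(path)],
                                capture_output=True, text=True, timeout=5)
    except subprocess.TimeoutExpired:
        return "timeout"
    if result.returncode == 0:
        return "no error"
    lines = result.stderr.strip().splitlines()
    # CPython ends a traceback with e.g. "NameError: name 'x' is not defined".
    return lines[-1].split(":")[0] if lines else "non-zero exit"

counts = collections.Counter(error_kind(p)
                             for p in sorted(pathlib.Path("submissions").glob("*.py")))
for kind, n in counts.most_common():
    print(f"{kind:25s} {n}")
```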

To maximize the effectiveness of modern virtualization systems, resources must be allocated fairly and efficiently amongst virtual machines (VMs). However, current policies for allocating memory are relatively static. As a result, system-wide memory utilization is often sub-optimal, leading to unnecessary paging and performance degradation. To better utilize the large-scale memory resources of modern machines, the virtualization system must allow virtual machines to expand beyond their initial memory reservations, while still fairly supporting concurrent virtual machines. This paper presents a system for dynamically allocating memory amongst virtual machines at runtime, as well as an evaluation of six allocation policies implemented within the system. The system allows guest VMs to expand and contract according to their changing demands by uniquely improving and integrating mechanisms such as memory ballooning, memory hotplug, and hypervisor paging. Furthermore, the system provides fairness by guaranteeing each guest a minimum reservation, charging for rentals beyond this minimum, and enforcing timely reclamation of memory.
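
As a simplified illustration of one possible policy in this space (a sketch under assumed names and units, not one of the six policies evaluated in the paper), surplus host memory could be split in proportion to each guest's demand above its guaranteed minimum:

```python
# Illustrative sketch of a proportional-share policy: every VM keeps its
# guaranteed minimum, and any surplus host memory is divided among VMs in
# proportion to their demand above that minimum, capped at their demand.
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    minimum: int   # guaranteed reservation (MB)
    demand: int    # current working-set estimate (MB)

def allocate(vms: list[VM], host_memory: int) -> dict[str, int]:
    alloc = {vm.name: vm.minimum for vm in vms}
    surplus = host_memory - sum(alloc.values())
    extra_demand = {vm.name: max(0, vm.demand - vm.minimum) for vm in vms}
    total_extra = sum(extra_demand.values())
    if surplus > 0 and total_extra > 0:
        for vm in vms:
            share = surplus * extra_demand[vm.name] // total_extra
            alloc[vm.name] += min(share, extra_demand[vm.name])
    return alloc

# One guest wants to expand well beyond its minimum; the other is idle.
print(allocate([VM("a", 1024, 3072), VM("b", 1024, 1024)], host_memory=4096))
# -> {'a': 3072, 'b': 1024}
```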

Testing is an important, time-consuming, and often difficult part of the software development process. It is therefore critical to introduce testing early in the computer science curriculum, and to provide students with frequent opportunities for practice and feedback. This paper presents an automated system to help introductory students learn how to test software. Students submit test cases to the system, which uses a large corpus of buggy programs to evaluate these test cases. In addition to gauging the quality of the test cases, the system immediately presents students with feedback in the form of buggy programs that nonetheless pass their tests. This enables students to understand why their test cases are deficient and gives them a starting point for improvement. The system has proven effective in an introductory class: students who trained using the system were later able to write better test cases, even without any feedback, than those who did not. Further, students reported additional benefits such as improved ability to read code written by others and to understand multiple approaches to the same problem.
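
A minimal sketch of the underlying idea, under the assumption that each test case is an (input, expected output) pair and each buggy program exposes a single function; the names below are hypothetical and not the system's actual API:

```python
# Hedged sketch: a buggy program "survives" if it passes every student test;
# survivors are returned as feedback, alongside a simple catch rate.
from typing import Callable, Iterable

def evaluate_tests(tests: Iterable[tuple[object, object]],
                   buggy_programs: dict[str, Callable]) -> tuple[float, list[str]]:
    tests = list(tests)
    survivors = []
    for name, solve in buggy_programs.items():
        try:
            passed_all = all(solve(arg) == expected for arg, expected in tests)
        except Exception:
            passed_all = False   # crashing on a test counts as being caught
        if passed_all:
            survivors.append(name)
    caught = len(buggy_programs) - len(survivors)
    return caught / len(buggy_programs), survivors

# Example: student tests for an absolute-value function that miss negative inputs.
tests = [(0, 0), (3, 3)]
buggy = {"identity": lambda x: x, "always_zero": lambda x: 0}
print(evaluate_tests(tests, buggy))   # identity survives; always_zero is caught
```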

Building high-quality test cases for programming problems is an important part of any well-built Automated Programming Assessment System. Traditionally, test cases are created by human experts or using machine auto-generation methods based on the problem definition and sample solutions. Unfortunately, the human approach cannot anticipate the numerous ways that programmers can construct erroneous solutions, while the machine auto-generation methods are complex, problem-specific, and time-consuming. This paper proposes a fast, simple method for generating high-quality test sets for a programming problem from an existing collection of student solutions for that problem. This paper demonstrates the effectiveness of the proposed method in online programming course assessments. The experiments showed that, when applied to large collections of such programs, the method produces concise, human-understandable test sets that provide better coverage than test sets built by experts with rich teaching experience.
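
One way to make the idea concrete is a greedy selection over candidate inputs. The sketch below assumes a known-correct reference solution and a pool of candidate inputs; both are assumptions layered on top of the abstract rather than the paper's actual algorithm:

```python
# Hedged sketch: greedily pick the input that exposes the most not-yet-caught
# incorrect solutions, stopping when no remaining input separates any of them
# from the reference.
from typing import Callable, Iterable

def safe_call(fn: Callable, arg):
    try:
        return fn(arg)
    except Exception:
        return ("<error>",)   # any crash differs from the reference output

def build_test_set(reference: Callable,
                   candidate_inputs: Iterable,
                   solutions: list[Callable]) -> list:
    uncaught = set(range(len(solutions)))
    inputs = list(candidate_inputs)
    chosen = []
    while uncaught:
        def newly_caught(arg):
            expected = reference(arg)
            return {i for i in uncaught if safe_call(solutions[i], arg) != expected}
        best = max(inputs, key=lambda arg: len(newly_caught(arg)))
        caught = newly_caught(best)
        if not caught:
            break   # remaining solutions agree with the reference on every input
        chosen.append(best)
        uncaught -= caught
    return chosen

# Example: pick inputs that separate buggy absolute-value attempts from abs().
print(build_test_set(abs, [-2, 0, 5], [lambda x: x, lambda x: 0, abs]))   # -> [-2]
```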

Theses