See Google Scholar for a complete publication list.
Advances in PCI-Express and optical interconnects are making rack-scale computers possible, but these computers will undoubtedly exhibit Non-Uniform Memory Access (NUMA) latencies. Ideally, a hypervisor for rack-scale computers should be able to dynamically reconfigure a virtual machine’s processing and memory resources, i.e., its NUMA topology, to satisfy each application’s evolving demands. Unfortunately, current hypervisors lack support for such dynamic reconfiguration. To address this limitation, this paper introduces Virtflex, a multilayered system for enabling unmodified OpenMP applications to adapt automatically to NUMA topology changes. Virtflex provides a novel NUMA page placement reset mechanism within the guest OS and a novel NUMA-aware superpage ballooning mechanism that spans the guest OS–hypervisor boundary. The evaluation shows that Virtflex enables applications to adapt efficiently to NUMA topology changes. For example, adding resources incurs an average runtime overhead of only 7.27%.
Superpages (2MB pages) can reduce the address translation overhead for large-memory workloads in modern computer systems. This paper outlines the sequence of events in the life of a superpage and explores the design space of when and how to trigger and respond to those events. This provides a framework that enables better understanding of superpage management and the trade-offs involved in different design decisions. Under this framework, this paper discusses why state-of-the-art designs exhibit different performance characteristics in terms of runtime, latency, and memory consumption. This paper illuminates the root causes of latency spikes and memory bloat and introduces Quicksilver, a novel superpage management design that addresses these issues while maintaining address translation performance.
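To illustrate why 2MB superpages reduce translation overhead, here is a back-of-the-envelope TLB-reach calculation; the TLB entry count is illustrative, not taken from the paper:

```python
# Back-of-the-envelope TLB reach: how much memory the TLB can map
# without incurring a miss. The entry count is a hypothetical value
# for illustration only.

TLB_ENTRIES = 1536  # hypothetical last-level TLB size

def tlb_reach(page_size_bytes, entries=TLB_ENTRIES):
    """Total memory mappable by the TLB at a given page size."""
    return entries * page_size_bytes

base_reach = tlb_reach(4 * 1024)          # 4KB base pages
super_reach = tlb_reach(2 * 1024 * 1024)  # 2MB superpages

print(f"4KB pages: {base_reach // 2**20} MB of reach")   # 6 MB
print(f"2MB pages: {super_reach // 2**30} GB of reach")  # 3 GB
```

At the same TLB size, 2MB pages multiply reach by 512, which is why large-memory workloads see far fewer TLB misses once superpages are in place.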
It is essential that students learn to write code that is not only correct, but also efficient. To that end, algorithmic complexity analysis techniques, such as Big-O analysis, are typically an important part of courses on algorithm design. However, students often hold fundamental misconceptions about how Big-O analysis works. This paper presents Compigorithm, an interactive tool for helping students practice Big-O analysis. Compigorithm scaffolds student learning by breaking down the analysis process into five concrete steps and walking students through each of these steps. When students make mistakes, they are provided with automated hints and allowed to re-attempt until they get the correct answer. Compigorithm was piloted in an introductory algorithms course and evaluated using a controlled experiment. The experimental group trained by analyzing algorithms using Compigorithm, while the control group analyzed the same algorithms by hand. On the subsequent post-test, the experimental group outperformed the control group by a significant margin (p < 0.00001; Cohen’s d = 0.84).
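The kind of reasoning Compigorithm drills can be checked empirically. The following sketch (my own illustration, not part of the tool) counts the operations performed by a quadratic algorithm and confirms that doubling the input size roughly quadruples the work:

```python
# Illustrative operation-counting sketch (not Compigorithm itself):
# empirically check that doubling n roughly quadruples the work of a
# nested-loop, O(n^2) algorithm.

def count_pair_comparisons(n):
    """Count comparisons made by a naive all-pairs scan: n*(n-1)/2."""
    ops = 0
    for i in range(n):
        for j in range(i + 1, n):
            ops += 1  # one comparison per pair
    return ops

small = count_pair_comparisons(100)  # 4950 comparisons
large = count_pair_comparisons(200)  # 19900 comparisons

print(large / small)  # ratio near 4, consistent with O(n^2)
```

Comparing measured growth ratios against predicted ones (2x for O(n), ~4x for O(n^2)) is one concrete way students can validate a Big-O claim.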
For online courses to be an effective alternative to face-to-face courses, they must provide equivalent levels of interaction, engagement, supervision, and support. This paper analyzes and compares the experiences of students in face-to-face and online sections of the same introductory course taught by the same instructor. The course heavily utilizes team-based active learning, and effort was put into maintaining comparable group experiences in the online section. An analysis of student opinions and objective outcomes revealed only minor differences between the two sections. Notably, there were no statistically significant variations in students’ overall grades, self-reported confidence, or course evaluation ratings, indicating that high-quality student experiences and outcomes can be achieved when migrating active learning to an online format.
Understanding program execution is a challenging task for novice programmers. The semantic rules that determine how execution affects the program state are numerous and complex, and students frequently hold fundamental misconceptions about these rules. If students do not build a correct mental model of program execution early on, they will face substantial hurdles as they try to develop and debug their code. This paper presents VizQuiz, a tool for auto-generating multiple-choice quizzes designed to help students gain insight into the semantic rules that govern program execution. VizQuiz provides students with an initial state and a piece of code, and tasks them with mentally tracing the execution of that code and selecting the correct final state. Reference diagrams are used to depict the initial and final states, and as feedback to help students visualize the correct behavior if they select a wrong answer. Feedback is auto-generated, so students can immediately correct their misconceptions and re-attempt.
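A hand-made example in the spirit of such a tracing quiz (my own illustration, not generated by VizQuiz) shows the kind of misconception these questions target, here the aliasing of mutable lists:

```python
# Hand-made tracing question (not produced by VizQuiz): predict the
# final state after this snippet runs. Aliasing of mutable objects is
# a common source of novice misconceptions.

a = [1, 2, 3]
b = a          # b aliases the SAME list object; no copy is made
b.append(4)    # the mutation is visible through both names

# Which final state is correct?
#   (A) a == [1, 2, 3]    and b == [1, 2, 3, 4]
#   (B) a == [1, 2, 3, 4] and b == [1, 2, 3, 4]   <- correct: alias
print(a, b)
```

Students who believe assignment copies the list pick (A); diagramming both names pointing at one list object makes the correct answer, (B), visually obvious.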