Modeling human learning on visuomotor tasks


This project is funded by the ONR.

Introduction

This project addresses the problem of characterizing human learning on tasks with significant strategic and visuomotor components. Our goal is to track the evolution of learning over time in order to design custom training protocols for humans. Examples of such tasks include submarine navigation and flight control. They are difficult for humans to learn because they require the coordinated acquisition of a task strategy (e.g., an evasive maneuver) and the skills to implement that strategy (e.g., a visuomotor servo loop).

The affordability of computing power and the ready availability of non-invasive recordings of visuomotor activity during task performance have brought a new paradigm for task training within our reach. We have developed algorithms that allow machines to learn models of humans acquiring a task, based on real-time analysis of their visuomotor activity. For example, our models track shifts in strategy based purely on visuomotor performance data. Our larger goal is to provide a computational microscope for human learning by extracting and interpreting objective sources of performance data.
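The project's actual models are not specified here, but the idea of detecting a strategy shift from performance data alone can be illustrated with a minimal sketch. The example below assumes, purely for illustration, that a shift appears as an abrupt change in the running mean of per-trial performance scores; the function name, window size, and threshold are all hypothetical.

```python
# Hypothetical sketch: flag a strategy shift in a stream of per-trial
# performance scores by comparing the mean of the window before each
# trial against the mean of the window after it (a simple mean-change
# detector, not the project's actual algorithm).

def detect_shift(scores, window=5, threshold=1.0):
    """Return the first trial index at which the mean of the following
    `window` trials differs from the mean of the preceding `window`
    trials by more than `threshold`, or None if no shift is found."""
    for i in range(window, len(scores) - window + 1):
        before = sum(scores[i - window:i]) / window
        after = sum(scores[i:i + window]) / window
        if abs(after - before) > threshold:
            return i
    return None

# Simulated learner: low scores under an early strategy, higher scores
# after adopting a better one at trial 5.
trial_scores = [2.1, 1.9, 2.0, 2.2, 1.8, 5.0, 5.2, 4.9, 5.1, 5.0]
print(detect_shift(trial_scores))  # → 5
```

In practice, richer models (e.g., over gaze and motor traces rather than a single score) would be needed, but the sliding-window comparison captures the core idea of inferring a shift from the data stream alone.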

Our approach offers a scalable solution that harnesses the power of computing to fundamentally change engineering practice in training, and to increase our scientific understanding of human learning.