Parallel computing has historically played a vital role in meeting the performance demands of high-end engineering and scientific applications. It has now moved to center stage, however, in light of current hardware trends and device power-efficiency limits. All computer systems --- embedded devices, game consoles, laptops, desktops, high-end supercomputers, and large-scale data center clusters --- are being built from chips with an increasing number of processor cores but little or no increase in clock speed per core. Unlike previous generations of hardware evolution, this shift will affect all segments of the IT industry and all areas of Computer Science.
The goal of COMP 422 is to introduce you to the foundations of parallel computing, including the principles of parallel algorithm design, analytical modeling of parallel programs, the OpenMP and MPI programming models for shared- and distributed-memory systems, parallel computer architectures, and numerical and non-numerical algorithms for parallel systems. This year's course will also include new material on homogeneous & heterogeneous multicore hardware, theoretical foundations of task scheduling, newer shared-memory programming models (Java Concurrency Utilities, Intel Threading Building Blocks, .NET Task Parallel Library & PLINQ, Cilk, X10), programming models for GPUs, and problem solving on large-scale clusters using Google's MapReduce. A key aim of the course is for you to gain hands-on knowledge of the fundamentals of parallel programming by writing efficient parallel programs in some of the programming models that you learn in class.
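To give a flavor of the kind of shared-memory programming the course covers, here is a minimal sketch using the Java Concurrency Utilities named above: a fork-join-style parallel sum in which an `ExecutorService` splits the range 1..n across a fixed pool of worker threads. The class and method names are illustrative, not part of any course material.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.LongStream;

// Illustrative example: parallel sum of 1..n using java.util.concurrent.
public class ParallelSum {

    // Splits [1, n] into 'workers' contiguous chunks, sums each chunk in
    // its own task, then combines the partial sums.
    static long parallelSum(long n, int workers) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        long chunk = n / workers;
        List<Future<Long>> parts = new ArrayList<>();
        for (int w = 0; w < workers; w++) {
            final long lo = w * chunk + 1;
            // Last worker absorbs any remainder when n % workers != 0.
            final long hi = (w == workers - 1) ? n : (w + 1) * chunk;
            parts.add(pool.submit(() -> LongStream.rangeClosed(lo, hi).sum()));
        }
        long total = 0;
        for (Future<Long> f : parts) {
            total += f.get(); // blocks until that chunk's sum is ready
        }
        pool.shutdown();
        return total;
    }

    public static void main(String[] args) throws Exception {
        long n = 1_000_000;
        // Closed form n*(n+1)/2 gives 500000500000 for n = 1,000,000.
        System.out.println(parallelSum(n, 4));
    }
}
```

The same decomposition pattern --- partition the data, compute partial results concurrently, combine them --- recurs in OpenMP reductions, MPI collectives, and MapReduce, each at a different scale of parallelism.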