Description
The primary goal of COMP 322 is to introduce you to the fundamentals of parallel programming and parallel algorithms, by following a pedagogic approach that exposes you to the intellectual challenges in parallel software without enmeshing you in the jargon and lower-level details of today's parallel systems. A strong grasp of the course fundamentals will enable you to quickly pick up any specific parallel programming system that you may encounter in the future, and also prepare you for studying advanced topics related to parallelism and concurrency in courses such as COMP 422. The desired learning outcomes fall into three major areas (course modules):
1) Fundamentals of Parallelism: creation and coordination of parallelism (async, finish), abstract performance metrics (work, critical paths), Amdahl's Law, weak vs. strong scaling, data races and determinism, data race avoidance (immutability, futures, accumulators, dataflow), deadlock avoidance, abstract vs. real performance (granularity, scalability), collective & point-to-point synchronization (phasers, barriers), parallel algorithms, systolic algorithms.
2) Fundamentals of Concurrency: critical sections, atomicity, isolation, high-level data races, nondeterminism, linearizability, liveness/progress guarantees, actors, request-response parallelism, Java Concurrency, locks, condition variables, semaphores, memory consistency models.
3) Fundamentals of Distributed-Memory Parallelism: memory hierarchies, locality, cache affinity, data movement, message-passing (MPI), communication overheads (bandwidth, latency), MapReduce, accelerators, GPGPUs, CUDA, OpenCL, energy efficiency, resilience.
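To give a flavor of module 1, the async/finish pattern can be approximated in plain Java (the Habanero-Java constructs themselves are not shown here): each task submitted to an `ExecutorService` plays the role of an `async`, and waiting on the futures plays the role of the enclosing `finish`. This is an illustrative sketch, not course-provided code; the class and method names are made up for the example.

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class AsyncFinishSketch {
    // Sum an array by splitting it into two halves that run as
    // separate tasks ("async"), then waiting for both ("finish").
    static long parallelSum(int[] a) {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        try {
            int mid = a.length / 2;
            Future<Long> lo = pool.submit(() -> sum(a, 0, mid));        // async: first half
            Future<Long> hi = pool.submit(() -> sum(a, mid, a.length)); // async: second half
            return lo.get() + hi.get();  // "finish": block until both tasks complete
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    static long sum(int[] a, int from, int to) {
        long s = 0;
        for (int i = from; i < to; i++) s += a[i];
        return s;
    }
}
```

Because the two halves touch disjoint ranges of the array and the result is combined only after both futures complete, this computation is deterministic and free of data races, which is exactly the property the module 1 constructs are designed to guarantee.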
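For module 2, a critical section protecting a shared counter illustrates why atomicity matters: two threads incrementing an unprotected `long` can interleave their read-modify-write steps and lose updates. A minimal sketch using `java.util.concurrent.locks.ReentrantLock` (again, an illustrative example rather than course material):

```java
import java.util.concurrent.locks.ReentrantLock;

public class CounterSketch {
    private long count = 0;
    private final ReentrantLock lock = new ReentrantLock();

    // Without the lock, concurrent calls could race on count
    // (count++ is a read, an add, and a write, not one atomic step).
    public void increment() {
        lock.lock();          // enter the critical section
        try {
            count++;          // now atomic with respect to other increments
        } finally {
            lock.unlock();    // always release, even if an exception is thrown
        }
    }

    public long get() { return count; }
}
```

With the lock in place, N threads each performing M increments always leave the counter at exactly N×M, regardless of how the threads interleave.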
General Information
Course web site
Name | Office Hours
---|---
Vivek Sarkar | When? Where?
Max Grossman | When? Where?
Bing Xue | When? Where?
Nicholas Hanson-Holtry | When? Where?
Shams Imam | When? Where?
Ayush Narayan | When? Where?
Prudhvi Boyapalli | When? Where?
Thomas Roush | When? Where?
Hunter Tidwell | When? Where?
Alitha Partono | When? Where?
Jonathan Sharman | When? Where?
Arghya (Ronnie) Chatterjee | When? Where?
Yuhan Peng | When? Where?
Prasanth Chatarasi | When? Where?
Peter Elmers | When? Where?