CS 631

      Advanced Parallel Computing

      Winter 2024


Information

Class meeting (lecture)

Monday/Wednesday, 16:00-17:20, January 8 - March 15
200 Deschutes Hall

Final period

Monday, 14:45-16:45, March 18
200 Deschutes Hall

Instructor

Dr. Allen D. Malony
Office: 300 Deschutes

Overview

Parallel computing is a broad field of computer science concerned with the architectures, hardware/software systems, languages, programming paradigms, algorithms, and theoretical models that make it possible to compute in parallel. Parallel computing is also an old field, present at the very invention of computing. While parallel programming and execution can be studied in the abstract, performance is parallelism's raison d'être. Charles Babbage once said of the Difference Engine design, "The most constant difficulty in contriving the engine has arisen from the desire to reduce the time in which the calculations were executed to the shortest which is possible." He used parallel ideas to address this. Parallelism continues to be the path to performance in present-day and future computing. Indeed, parallel computing is the name of the game in high-performance computing (HPC). Large-scale parallelism (> 100,000 processors) lies at the heart of the fastest, most powerful computing systems in the world today.

At the same time, multicore technology is everywhere, which means that parallelism is ubiquitous. Small-scale, low-end parallelism is the driving force behind affordable scientific computing and the growing success of computational science. With the advent of computational accelerators, most notably GPGPUs, things are getting a lot more interesting. Parallelism is also in cell phones, PDAs, tablets, and laptops. Parallel computing is everywhere!

This course will be directed towards advanced topics in parallel computing. It is assumed that graduate students have some academic experience in the subject and, ideally, some experience in parallel programming. An important part of the course will be training in modern parallel programming languages, taught at an advanced pace. In addition, students will learn about parallel hardware (including multicore and manycore processors and accelerators), systems software, runtime systems, performance tools, and much more.

Announcements

This is an announcement.