Goals
Describe what an optimizing compiler can optimize for you
Describe what an optimizing compiler can't optimize for you
Understand how linear algebra libraries achieve peak performance through memory access patterns
Describe the interaction between global variables and type stability of functions
Explain why parallelism is increasingly important
Describe function overloading & multiple dispatch
Describe benefits of using abstract types and containers of abstract types
Describe benefits of an Application Programming Interface (API)
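A minimal Julia sketch (function and variable names are hypothetical, chosen for illustration) of two of the goals above: multiple dispatch, and how non-`const` global variables break type stability:

```julia
# Multiple dispatch: the method invoked depends on the runtime types
# of *all* arguments, not just the first.
area(r::Real) = pi * r^2            # circle of radius r
area(w::Real, h::Real) = w * h      # rectangle of width w, height h

# A non-const global is type-unstable: the compiler cannot assume
# `scale` will stay a Float64, so code referencing it is compiled
# with boxed values and runs slowly.
scale = 2.0

# Declaring the global `const` (or passing the value as a function
# argument) restores type stability and lets the compiler specialize.
const SCALE = 2.0
scaled_area(r) = SCALE * area(r)
```

Calling `area(1.0)` and `area(2.0, 3.0)` selects different methods from the same generic function; `@code_warntype` on a function that reads `scale` versus `SCALE` shows the difference in inferred types.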
Lab
Lab 7: Parallel Programming II: Batch Jobs & Distributed Memory Systems (due Oct 31)
Exercise 1: Submitting Batch Jobs to Lynx Cluster
Exercise 2: Parallelization for Distributed-memory Systems (e.g., Clusters, Cloud)
Exercise 3: Run code using a container
Exercise 4: Run your project code as a batch job on Lynx
Readings
Distributed Processing with Julia (stop after Parallel Map and Loops)
Additional Resources
Week 9 Class Discussion: Using the Lynx & Roar Collab Clusters