OpenMP*: The Once and Future API
Overview
October 2018 marked the 21st birthday of OpenMP*—an API for writing multithreaded applications that has evolved into a preeminent parallel programming model.
According to Tim Mattson, one of the OpenMP founders, the reason is simple. “[OpenMP] is a safe and gentle way to get into parallel computing. Developers can quickly go from ground zero to writing parallel algorithms.”
Additionally, as an open standard, OpenMP levels the playing field among competing hardware vendors. Software applications always outlive any particular hardware product, so developers can write portable code that withstands the continuous evolution of hardware.
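To give a sense of that gentle on-ramp, here is a minimal sketch in C (illustrative only, not taken from the interview): a single OpenMP pragma turns a serial loop into a parallel one, and the source remains valid C if OpenMP support is switched off.

#include <stdio.h>

int main(void) {
    const int n = 1000000;
    double sum = 0.0;

    /* Split the loop iterations across threads; the reduction clause
       safely combines each thread's partial sum. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; i++) {
        sum += 1.0 / (i + 1);
    }

    printf("harmonic sum = %f\n", sum);
    return 0;
}

Compiled without OpenMP, the pragma is simply ignored and the loop runs serially, which is part of what makes the code portable across compilers and hardware generations.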
So where is OpenMP going next? Are there another 21 years in its future? Can it remain relevant in a future where the pace of hardware complexity and heterogeneity goes off the charts?
Tune in to hear Tim discuss these very issues with Tech.Decoded, including his prediction about parallel programming’s future.
Get the Software
OpenMP 5.0 support is available in Intel® oneAPI DPC++/C++ Compiler, Intel® C++ Compiler Classic, and Intel® Fortran Compiler. Get all three in the Intel® oneAPI HPC Toolkit.
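As a rough usage sketch (flag and driver names per Intel's compiler documentation; check the release notes for your version, and note the file names below are just placeholders), OpenMP is enabled with a compiler option such as -qopenmp:

icx -qopenmp harmonic.c -o harmonic        # Intel oneAPI DPC++/C++ Compiler
ifx -qopenmp my_solver.f90 -o my_solver    # Intel Fortran Compiler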
Tim Mattson
Senior principal engineer, Intel Corporation
Tim Mattson is a parallel programmer whose 24/7 obsession is science. He has been with Intel since 1993, and his contributions span a brilliant array of globe-changing efforts. These include (and this is the short list) the first TeraFLOP (TFLOP) computer, the OpenMP and OpenCL™ APIs, the first TFLOP chip from Intel, the 48-core SCC, Polystore data management systems (in collaboration with the Massachusetts Institute of Technology [MIT]), and the GraphBLAS API for expressing graph algorithms as sparse linear algebra.
Tim leads a programming systems research group and collaborates with researchers at MIT on the intersection of AI and data systems (dsail.csail.mit.edu). Tim earned a bachelor's degree in chemistry from the University of California, Riverside, and a master's degree in chemistry and a PhD in quantum scattering theory from the University of California, Santa Cruz.
Henry A. Gabb, PhD
Senior principal engineer in the Intel® Software and Services Group, Developer Products Division, and editor of The Parallel Universe, Intel’s quarterly magazine for software innovation
Henry joined Intel in 2000 to help drive parallel computing inside and outside the company. He transferred to Intel Labs in 2010 to become the program manager for various research programs in academia, including the Universal Parallel Computing Research Centers at the University of California at Berkeley and the University of Illinois at Urbana-Champaign. Prior to joining Intel, Henry was director of scientific computing at the US Army Engineer Research and Development Center MSRC, a Department of Defense high-performance computing facility.

Henry holds a bachelor's degree in biochemistry from Louisiana State University, a master of science in medical informatics from the Northwestern Feinberg School of Medicine, and a PhD in molecular genetics from the University of Alabama at Birmingham School of Medicine. He has published extensively in computational life science and high-performance computing (HPC). Henry rejoined Intel after spending four years working on a second PhD in information science at the University of Illinois at Urbana-Champaign, where he established his expertise in applied informatics and machine learning for problems in healthcare and chemical exposure.
Deliver fast applications that scale across clusters with tools and libraries for vectorization, multi-node parallelization, memory optimization, and more.