Data Parallel C++: An Open Alternative for Cross-Architecture Development
Overview
Data parallelism—also known as data parallel compute—is no longer a new thing. It is the programming model for most compute-intensive applications and solutions running on multicore systems, including those that power AI, machine learning, and video processing.
And according to Intel senior fellow Geoff Lowney, it will likely remain the dominant compute pattern for the next ten years.
The challenge, then, is helping developers express parallelism more easily across the expanse of hardware architectures—CPUs for sure, but also GPUs, FPGAs, VPUs, IPUs, and more. You get the picture. To do this, a new language is needed.
That language is Data Parallel C++ (DPC++), a key part of the oneAPI initiative by Intel and an extension of familiar C++ that enables new ways to express parallelism for cross-architecture development.
In this 12-minute video, Geoff discusses DPC++ and what you need to know, including:
- Does DPC++ require separate host and kernel code?
- Why use DPC++ for heterogeneous parallelism versus adopting OpenCL™ code or CUDA*?
- Do my legacy C++ programs need updating to take advantage of DPC++? If so, how much?
- Can I combine DPC++, Threading Building Blocks, Parallel STL, and OpenMP* in the same program?
- Will DPC++ features eventually become part of the C++ standard?
Get the Software
- Explore this initiative led by Intel, including DPC++, free software toolkits to download, a cloud-based development sandbox, training, industry partners, and more.
- Sign up for an Intel® Developer Cloud account—a free development sandbox with access to the latest Intel® hardware and oneAPI software. No downloads. No configuration steps. No installations.
Geoff Lowney
Intel senior fellow and chief technology officer for Compute Performance and Developer Products at Intel Corporation
As CTO, P. Geoffrey Lowney directs the development of compilers, runtime systems, and programming tools for Intel® platforms. Before joining Intel in 2001, Geoff was a fellow in microprocessor engineering and design at Compaq Computer Corporation, including serving as the company’s director of compiler and architecture advanced development. Additionally, his career includes being a member of the Alpha microprocessor group at Digital Equipment Corporation (DEC), a consulting engineer at HP, leader of the compiler team at Multiflow Computer, and assistant professor at the Courant Institute of Mathematical Sciences at New York University.
Lowney earned his bachelor’s degree in mathematics from Yale University and his master’s degree and PhD in computer science, also from Yale. He has been granted nearly 20 patents in computer architecture and compiler technology, with additional patents pending.
Henry Gabb
PhD, senior principal engineer, Intel Corporation
Henry is part of the Intel Software and Services Group, Developer Products Division, and is the editor of The Parallel Universe, the quarterly magazine for software innovation from Intel. He first joined Intel in 2000 to help drive parallel computing inside and outside the company. He transferred to Intel Labs in 2010 to become the program manager for various research programs in academia, including the Universal Parallel Computing Research Centers at the University of California at Berkeley and the University of Illinois at Urbana-Champaign. Before joining Intel, Henry was director of scientific computing at the U.S. Army Engineer Research and Development Center MSRC, a Department of Defense high-performance computing (HPC) facility. Henry holds a bachelor of science degree in biochemistry from Louisiana State University, a master of science degree in medical informatics from the Northwestern Feinberg School of Medicine, and a PhD in molecular genetics from the University of Alabama at Birmingham School of Medicine. He has published extensively in computational life science and HPC. Henry recently rejoined Intel after spending four years working on a second PhD in information science at the University of Illinois at Urbana-Champaign, where he established an expertise in applied informatics and machine learning for problems in healthcare and chemical exposure.
Develop high-performance, data-centric applications for CPUs, GPUs, and FPGAs with this core set of tools, libraries, and frameworks, including LLVM*-based compilers.