Get Started with Intel® Integrated Performance Primitives for macOS*

ID 772300
Date 11/07/2023

Get Started with Intel® Integrated Performance Primitives for Intel® oneAPI Base Toolkit for macOS*

Intel® Integrated Performance Primitives (Intel® IPP) is a software library that provides a broad range of functionality, including general signal and image processing, computer vision, data compression, and string manipulation.

The library is delivered as part of the Intel® oneAPI Base Toolkit. You can also install a specific version of the library on its own. This Get Started guide assumes that you have installed the Intel IPP library as part of the toolkit.

Prerequisites (macOS*)

Starting with the 2024.0 release, macOS is no longer supported in Intel® oneAPI Toolkits and components. Several Intel-led open source developer tool projects will continue supporting macOS on Apple Silicon, including oneAPI Threading Building Blocks (oneTBB) and the Intel® Implicit SPMD Program Compiler, and we welcome the opportunity to work with contributors to expand support to additional tools in the future. All macOS content will be removed from the technical documentation in the 2024.1 release. If you need a copy of the documentation, click the Download button in the upper right or download it from the Downloadable Documentation site.

Set Environment Variables

After installing Intel IPP, set the IPPROOT and LD_LIBRARY_PATH environment variables by running the script appropriate to your target platform architecture. The scripts are located in <install dir>/env.

By default, <install dir> is /opt/intel/oneapi. See the Intel IPP high-level directory structure.
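
For example, assuming the default installation path and a bash or zsh shell, you can source either the top-level oneAPI script or the Intel IPP component script. The paths below reflect the typical oneAPI directory layout and may differ on your system:

 source /opt/intel/oneapi/setvars.sh               # sets variables for all installed oneAPI components
 # or, for Intel IPP only:
 source /opt/intel/oneapi/ipp/latest/env/vars.sh
 echo $IPPROOT                                     # verify that IPPROOT points to the Intel IPP installation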

Build and Run Your First Intel® IPP Application (macOS*)

The following code example is a short application that helps you get started with Intel IPP:

#include <stdio.h>
#include "ipp.h"

#define PRINT_INFO(feature, text) printf("  %-30s= ", #feature); \
      printf("%c\t%c\t", (cpuFeatures & feature) ? 'Y' : 'N', (enabledFeatures & feature) ? 'Y' : 'N'); \
      printf( #text "\n")

int main(int argc, char* argv[])
{
      const       IppLibraryVersion *libVersion;
      IppStatus   status;
      Ipp64u      cpuFeatures, enabledFeatures;

      ippInit();                      /* Initialize Intel® IPP library */
      libVersion = ippGetLibVersion();/* Get Intel® IPP library version info */
      printf("%s %s\n", libVersion->Name, libVersion->Version);

      status = ippGetCpuFeatures(&cpuFeatures, 0);/* Get CPU features and features enabled with selected library level */
      if (ippStsNoErr != status) return status;
      enabledFeatures = ippGetEnabledCpuFeatures();
      printf("Features supported: by CPU\tby Intel® IPP\n");
      printf("------------------------------------------------\n");
      PRINT_INFO(ippCPUID_MMX,        Intel® Architecture MMX technology supported);
      PRINT_INFO(ippCPUID_SSE,        Intel® Streaming SIMD Extensions);
      PRINT_INFO(ippCPUID_SSE2,       Intel® Streaming SIMD Extensions 2);
      PRINT_INFO(ippCPUID_SSE3,       Intel® Streaming SIMD Extensions 3);
      PRINT_INFO(ippCPUID_SSSE3,      Supplemental Streaming SIMD Extensions 3);
      PRINT_INFO(ippCPUID_MOVBE,      Intel® MOVBE instruction);
      PRINT_INFO(ippCPUID_SSE41,      Intel® Streaming SIMD Extensions 4.1);
      PRINT_INFO(ippCPUID_SSE42,      Intel® Streaming SIMD Extensions 4.2);
      PRINT_INFO(ippCPUID_AVX,        Intel® Advanced Vector Extensions instruction set);
      PRINT_INFO(ippAVX_ENABLEDBYOS,  Intel® Advanced Vector Extensions instruction set is supported by OS);
      PRINT_INFO(ippCPUID_AES,        Intel® AES New Instructions);
      PRINT_INFO(ippCPUID_CLMUL,      Intel® CLMUL instruction);
      PRINT_INFO(ippCPUID_RDRAND,     Intel® RDRAND instruction);
      PRINT_INFO(ippCPUID_F16C,       Intel® F16C new instructions);
      PRINT_INFO(ippCPUID_AVX2,       Intel® Advanced Vector Extensions 2 instruction set);
      PRINT_INFO(ippCPUID_ADCOX,      Intel® ADOX/ADCX new instructions);
      PRINT_INFO(ippCPUID_RDSEED,     Intel® RDSEED instruction);
      PRINT_INFO(ippCPUID_PREFETCHW,  Intel® PREFETCHW instruction);
      PRINT_INFO(ippCPUID_SHA,        Intel® SHA new instructions);
      PRINT_INFO(ippCPUID_AVX512F,    Intel® Advanced Vector Extensions 512 Foundation instruction set);
      PRINT_INFO(ippCPUID_AVX512CD,   Intel® Advanced Vector Extensions 512 CD instruction set);
      PRINT_INFO(ippCPUID_AVX512ER,   Intel® Advanced Vector Extensions 512 ER instruction set);
      PRINT_INFO(ippCPUID_AVX512PF,   Intel® Advanced Vector Extensions 512 PF instruction set);
      PRINT_INFO(ippCPUID_AVX512BW,   Intel® Advanced Vector Extensions 512 BW instruction set);
      PRINT_INFO(ippCPUID_AVX512VL,   Intel® Advanced Vector Extensions 512 VL instruction set);
      PRINT_INFO(ippCPUID_AVX512VBMI, Intel® Advanced Vector Extensions 512 Bit Manipulation instructions);
      PRINT_INFO(ippCPUID_MPX,        Intel® Memory Protection Extensions);
      PRINT_INFO(ippCPUID_AVX512_4FMADDPS,    Intel® Advanced Vector Extensions 512 DL floating-point single precision);
      PRINT_INFO(ippCPUID_AVX512_4VNNIW,      Intel® Advanced Vector Extensions 512 DL enhanced word variable precision);
      PRINT_INFO(ippCPUID_KNC,        Intel® Xeon Phi™ Coprocessor);
      PRINT_INFO(ippCPUID_AVX512IFMA, Intel® Advanced Vector Extensions 512 IFMA (PMADD52) instruction set);
      PRINT_INFO(ippAVX512_ENABLEDBYOS,       Intel® Advanced Vector Extensions 512 is supported by OS);
      return 0;
}

This application consists of three sections:

  1. Initialize the Intel IPP library. This step is required to take full advantage of Intel IPP optimizations. With ippInit(), the best implementation layer is dispatched at run time; otherwise, the least optimized implementation is chosen. If the application runs without ippInit(), the Intel IPP library is auto-initialized on the first call to an Intel IPP function from any domain other than ippCore. In certain debugging scenarios, it is helpful to force a specific implementation layer with ippSetCpuFeatures() instead of the best one chosen by the dispatcher (see the sketch after this list).

  2. Get the library layer name and version. You can also get the version information using the ippversion.h file located in the <install_dir>/include directory.

  3. Show the hardware optimizations used by the selected library layer and supported by the CPU.
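
For example, here is a minimal sketch of forcing a specific implementation layer with ippSetCpuFeatures(), as mentioned in step 1. The feature mask is illustrative and assumes the target CPU supports Intel® SSE4.2; see the Intel IPP Developer Guide and Reference for the exact mask corresponding to each library level.

#include "ipp.h"

int main(void)
{
      /* Illustrative mask only: request an SSE4.2-level code path instead of the best available one. */
      Ipp64u mask = ippCPUID_MMX   | ippCPUID_SSE   | ippCPUID_SSE2  | ippCPUID_SSE3 |
                    ippCPUID_SSSE3 | ippCPUID_SSE41 | ippCPUID_SSE42;
      IppStatus status = ippSetCpuFeatures(mask);
      if (ippStsNoErr != status) return status;
      /* All subsequent Intel IPP calls dispatch to the forced layer. */
      return 0;
}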

To build the first code example above, follow these steps:

  1. Paste the code into the editor of your choice.

  2. Make sure the compiler and Intel IPP variables are set in your shell.

  3. Compile with the following command:

 icc ipptest.cpp -o ipptest -I$IPPROOT/include -L$IPPROOT/lib/<arch> -lippcore

For more information about which Intel IPP libraries you need to link to, see the Intel IPP Developer Guide and Reference at https://www.intel.com/content/www/us/en/docs/ipp/developer-guide-reference/current/overview.html. A linking example is shown after these steps.

  4. Run the application.
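
As a sketch of the linking guidance above, assuming the same IPPROOT setup and a hypothetical source file myapp.cpp that calls image processing (ippi) functions, the image domain typically also requires the signal processing, vector math, and core libraries:

 icc myapp.cpp -o myapp -I$IPPROOT/include -L$IPPROOT/lib/<arch> -lippi -lipps -lippvm -lippcore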

Training and Documentation

Online Training: Intel® IPP training resources.

Intel® IPP Developer Guide and Reference: Provides detailed guidance on Intel IPP library configuration, the development environment, linkage modes, and use of the Custom Library Tool, as well as detailed descriptions of the Intel IPP functions and interfaces for signal processing, image processing, and computer vision.

Tutorial: Image Blurring and Rotation with Intel® IPP: Demonstrates how to implement box blurring of an image using Intel IPP image processing functions.

Integration Wrappers for Intel® IPP: Contains detailed descriptions of the Intel IPP Integration Wrappers C and C++ application programming interfaces and provides guidance on how to use them in your code.

Intel® IPP Examples: A collection of example programs that demonstrate the various features of the Intel IPP library. The programs are located in the components_and_examples_<os>_<target>.zip archive in the <install_dir>/components subdirectory. The archive also includes the ipp-examples.html documentation file in the documentation subdirectory.

Intel® Integrated Performance Primitives: The Intel® IPP product page. See this page for support and online documentation.

Layers for Yocto Project: Add oneAPI components to a Yocto* project build using the meta-intel layers.

Notices and Disclaimers

Intel, the Intel logo, Intel Atom, Intel Core, Intel Xeon Phi, VTune and Xeon are trademarks of Intel Corporation in the U.S. and/or other countries.

*Other names and brands may be claimed as the property of others.

© Intel Corporation.

This software and the related documents are Intel copyrighted materials, and your use of them is governed by the express license under which they were provided to you (License). Unless the License provides otherwise, you may not use, modify, copy, publish, distribute, disclose or transmit this software or the related documents without Intel’s prior written permission.

This software and the related documents are provided as is, with no express or implied warranties, other than those that are expressly stated in the License.

Product and Performance Information

Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.

Notice revision #20201201