Driving PyTorch* & AI Everywhere: Intel Joins the PyTorch Foundation

Intel is honored to join the PyTorch* Foundation as a premier member. We look forward to engaging with other industry leaders to collaborate on the open source PyTorch framework and ecosystem. We believe that PyTorch holds a pivotal place in accelerating AI: it enables fast application development that promotes experimentation and innovation. Joining the PyTorch Foundation underscores Intel’s commitment to accelerating the machine learning framework through technical contributions and to nurturing its ecosystem.

Our contributions to PyTorch started in 2018. The vision: democratize access to AI through ubiquitous hardware and open software. In this blog, we highlight our ongoing efforts to advance PyTorch and its ecosystem, thus further enabling an AI Everywhere future that prioritizes innovation. We appreciate collaborating with our colleagues at Meta* and other contributors from the open source community.

Advancing PyTorch 2.0 Features through Intel Optimizations

PyTorch benefits from substantial Intel optimizations for x86, including acceleration through the Intel® oneAPI Deep Neural Network Library (oneDNN), optimized ATen operators, bfloat16 support, and automatic mixed precision. We also actively participated in the design and implementation of general PyTorch features such as quantization and the compiler, contributing four significant performance features to PyTorch 2.0:

  1. Optimized TorchInductor CPU FP32 inference
  2. Improved Graph Neural Network (GNN) inference and training performance in PyTorch Geometric (PyG)
  3. Optimized int8 inference with a unified quantization back end for x86 CPU platforms
  4. Accelerated inference on CPUs with the oneDNN Graph API

We are also proposing new features to include in the framework’s next release.

PyTorch Maintainers

Intel has four PyTorch maintainers (three active, one emeritus) who maintain the CPU performance modules and the compiler front end. They proactively triage issues, review pull requests (PRs) from the community, and have landed hundreds of PRs in upstream PyTorch. The maintainers include:

  • Mingfei Ma (mingfeima), deep learning software engineer
  • Jiong Gong (Jgong5), principal engineer and compiler front-end maintainer
  • Xiaobing Zhang (XiaobingSuper), deep learning software engineer
  • Jianhui Li (Jianhui-Li), senior principal engineer, now emeritus; recognized by the PyTorch community for his past contributions and expertise in AI

Collaborating with the PyTorch Community

Our maintainers actively engage with the PyTorch community to foster collaboration and innovation among AI developers, researchers, and industry experts. Key activities are described below.

Furthering a PyTorch Open Ecosystem

Intel releases its newest optimizations and features in Intel® Extension for PyTorch* before they are ready to land in upstream PyTorch, giving users early access to accelerations and other benefits. This extension is based on the oneAPI multiarchitecture programming model. With a few lines of code, you can take advantage of the most up-to-date Intel software and hardware optimizations for PyTorch.

In addition, Intel Extension for PyTorch for GPU extends PyTorch with up-to-date features and optimizations for an extra performance boost on Intel graphics cards. It is released as an open source project on the xpu-master branch on GitHub. For further details, see the release notes.

Intel also contributes technical improvements to libraries in the PyTorch ecosystem such as TorchServe, PyTorch Geometric, and DeepSpeed, as well as Hugging Face* libraries such as Transformers, Accelerate, and Optimum.

Intel Joins The Linux Foundation* AI & Data Foundation

Earlier this month, Intel also joined The Linux Foundation* AI & Data Foundation as a premier member. By joining the governing board, Intel can contribute its rich experience in leading open innovation and nurturing developer communities, helping shape the strategic direction of the foundation’s AI and data work and accelerating the development of open source AI projects and technologies.

Get the Software

An open ecosystem drives industry innovation and acceleration, and Intel provides an expansive portfolio of AI-optimized hardware and software to empower AI Everywhere. We look forward to continued collaborations with partners to advance the PyTorch community and ecosystem.

Try PyTorch 2.0 and realize the performance benefits for yourself.

Check out GitHub for tutorials and the latest Intel Extension for PyTorch release.

PyTorch Resources