Write Training Scripts to Run on GPU, CPU, or Intel® Gaudi® AI Accelerators

Optimize with Intel® Gaudi® AI Accelerators

  • Create new deep learning models or migrate existing code in minutes.

  • Deliver generative AI performance with simplified development and increased productivity.


Sometimes we want to run the same model code on different types of AI accelerators. For example, this is necessary when your development laptop has a GPU but your training server uses an Intel Gaudi AI accelerator. Another situation is dynamically choosing between a local GPU server and an Amazon EC2* DL1 instance powered by an Intel Gaudi AI accelerator. While writing code for each type of AI accelerator is easy, we need to pay attention to the details when enabling multiple hardware platforms from a common code base.

We will use the Getting Started with Training on Intel Gaudi AI Accelerator MNIST example and show how to write cross-hardware code with a few tweaks.

If you run the code in an environment without SynapseAI and Intel Gaudi software, the following error occurs:

ModuleNotFoundError: No module named 'habana_frameworks'

We get this error because the habana_frameworks module is not installed on this machine. To overcome this obstacle, we wrap the imports in a try/except block as follows:

try:
    import habana_frameworks.torch.core as htcore
    import habana_frameworks.torch.hpu as hthpu
except ImportError:
    # The Habana modules are not installed; fall back to None sentinels.
    htcore = None
    hthpu = None

If we catch the exception, we assign None to both htcore and hthpu. This lets us determine later that SynapseAI is not installed on the machine.

Run the code again.

The original code tries to move the model to the Intel Gaudi accelerator back end, which fails without SynapseAI and Intel Gaudi software. Replace the original line

device = torch.device("hpu")

with code that dynamically selects the best available hardware:

if hthpu and hthpu.is_available():
    target = "hpu"
    print("Using HPU")
elif torch.cuda.is_available():
    target = "cuda"
    print("Using GPU")
else:
    target = "cpu"
    print("Using CPU")

device = torch.device(target)

Remember that we assigned None to hthpu when the habana_frameworks module was not installed. Now we can use it to determine whether SynapseAI is installed. If it is, we use the is_available() API to dynamically check whether the server has an Intel Gaudi accelerator. If an Intel Gaudi accelerator is not available, we use the similar CUDA* API to check whether the server has a GPU; if neither is present, we fall back to the CPU.
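To confirm the selection works end to end, here is a minimal usage sketch; the toy linear model and random batch below are stand-ins for the MNIST network and data loader, not code from the example:

import torch

# Toy model and batch, just to verify that everything lands on the chosen device.
model = torch.nn.Linear(784, 10).to(device)
batch = torch.randn(64, 784).to(device)
output = model(batch)
print(output.shape, output.device)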

Next, we make sure to invoke the mark_step() calls only when Intel Gaudi software is present.

As before, we use the is_available() API to make sure we call the SynapseAI APIs only when Intel Gaudi software is present. We will use the following code:

if hthpu and hthpu.is_available():
    htcore.mark_step()

Make sure to replace all occurrences of mark_step() in the code; the helper sketched below keeps this from getting repetitive.
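Because the same guard appears at every call site, one option (our addition, not part of the original example) is to wrap it in a small helper and call that everywhere:

def mark_step():
    # No-op on CPU/GPU; triggers graph execution in lazy mode on Gaudi.
    if hthpu and hthpu.is_available():
        htcore.mark_step()

Since hardware availability does not change at run time, caching the result of is_available() in a boolean at startup would avoid re-evaluating the check inside the training loop.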

That’s it! The code now runs unchanged on all three hardware targets: GPU, CPU, and Intel Gaudi AI accelerators.
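For reference, here is how the pieces fit together in one self-contained sketch. The training step uses a toy model and random data as illustrative stand-ins for the MNIST example:

import torch

try:
    import habana_frameworks.torch.core as htcore
    import habana_frameworks.torch.hpu as hthpu
except ImportError:
    htcore = None
    hthpu = None

# Pick the best available back end: HPU first, then CUDA, then CPU.
if hthpu and hthpu.is_available():
    target = "hpu"
elif torch.cuda.is_available():
    target = "cuda"
else:
    target = "cpu"
device = torch.device(target)

def mark_step():
    # No-op unless SynapseAI is installed and an HPU is present.
    if hthpu and hthpu.is_available():
        htcore.mark_step()

# Illustrative training step (stand-in for the MNIST training loop).
model = torch.nn.Linear(784, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
batch = torch.randn(64, 784, device=device)
labels = torch.randint(0, 10, (64,), device=device)

optimizer.zero_grad()
loss = torch.nn.functional.cross_entropy(model(batch), labels)
loss.backward()
mark_step()
optimizer.step()
mark_step()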
