Deploy Intel® Optimization for TensorFlow* Using AWS*

ID 729807
Updated 4/20/2022
Version Latest
Public


Intel® Optimization for TensorFlow* is a ready-to-run optimized solution that uses oneAPI Deep Neural Network Library (oneDNN) primitives. This solution works with Intel® Advanced Vector Extensions instructions on 3rd generation Intel® Xeon® Scalable processors (formerly code named Ice Lake) to improve performance. 

This quick start guide provides instructions for deploying Intel Optimization for TensorFlow to Docker* containers. The containers are packaged by Bitnami and published on AWS* for Intel® processors. 

What's Included

Intel Optimization for TensorFlow includes the following precompiled binaries: 

Intel®-Optimized Library    Minimum Version
oneDNN                      2.5

Prerequisites

  • An AWS account with Amazon EC2*. For more information, see Get Started with AWS EC2.
  • 3rd generation Intel Xeon Scalable processor (formerly code named Ice Lake) 
  • oneDNN (included in this package)

Deploy Intel Optimization for TensorFlow

  1. Sign in to the AWS console.
  2. To launch an Amazon* Machine Image (AMI) instance:
        a) Go to your EC2 Dashboard, and then select Launch Instances. 
        b) In the search box, enter Ubuntu. The search results appear.
        c) Select the appropriate Ubuntu* instance, and then select Next. The Step 2: Choose an Instance Type page appears. 
  3. Select the region and an instance type with the appropriate processor.  
    Note For the latest information on the Intel Xeon Scalable processor, see Amazon EC2 M6i Instances.
  4. Select Next: Configure Instance Details. The Step 3: Configure Instance Details page appears.  
  5. To designate where your instance launches, do the following:
    • Network: Select the VPC.
    • Subnet: Select the subnet (this determines the instance's IP address range).
  6. Select Add Storage. The Step 4: Storage page appears.
  7. Adjust the storage size as needed, and then select Add a Tag. The Step 5: Add a Tag page appears. 
  8. (Optional) To create a tag, select Add a Tag, and then select Configure Security Group. The Step 6: Configure Security Group page appears. 
  9. To create a security group: 
    1. Select Create a new security group. 
    2. In Security group name, enter the name. 
    3. In Description, enter the security group description. 
    4. In Port Range, enter 22.
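If you prefer to script this step, the same security group can be created with the AWS CLI. This is a sketch, assuming the AWS CLI is installed and configured; the group name, description, and VPC ID below are placeholders for your own values:

```shell
# Create a security group (group name and vpc-id are placeholders)
aws ec2 create-security-group \
  --group-name my-tensorflow-sg \
  --description "SSH access for TensorFlow instance" \
  --vpc-id vpc-0123456789abcdef0

# Open port 22 for SSH (in practice, restrict the CIDR to your own IP)
aws ec2 authorize-security-group-ingress \
  --group-name my-tensorflow-sg \
  --protocol tcp --port 22 --cidr 0.0.0.0/0
```

These commands require valid AWS credentials, so the console steps above remain the simplest path.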
  10. Select Review and Launch. The Review Instance Launch page appears.
  11. Review your entries. If you need to make edits, select the appropriate Edit link.
  12. When you're done, select Launch. The Select an existing key pair or create a new key pair page appears. 
  13. Select the key pair that you created, and then select Launch Instances. The Instances page appears and shows the status of your launch.
  14. To select your instance, in the left column, select the appropriate checkbox.
  15. In the upper-right corner, select Connect. The Connect to instance page appears.
  16. Select the SSH client tab, and then copy the command under Connect to your instance using its Public DNS.
  17. To connect to the instance, open a terminal window, and then enter the following:
    • SSH command
    • The path and file name of the private key (.pem)
    • The username for your instance
    • The public DNS name
      After the SSH connects to the virtual machine, you can deploy Intel Optimization for TensorFlow to a Docker container.
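As a sketch, the pieces listed above combine into a command like the following; the key file name and DNS name are placeholders for your own values, and ubuntu is the default username on Ubuntu AMIs:

```shell
# Restrict the key's permissions; ssh refuses keys that are world-readable
chmod 400 my-key-pair.pem

# Connect: -i points at the private key; the hostname is your instance's public DNS
ssh -i my-key-pair.pem ubuntu@ec2-12-34-56-78.us-east-1.compute.amazonaws.com
```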
  18. If needed, install Docker on Ubuntu.
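A minimal sketch of installing Docker from Ubuntu's own repositories (the docker.io package; Docker's upstream apt repository is an alternative):

```shell
sudo apt-get update
sudo apt-get install -y docker.io

# Optional: let the current user run docker without sudo
# (log out and back in for the group change to take effect)
sudo usermod -aG docker "$USER"

# Verify the installation
docker --version
```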
  19. To use the latest Intel Optimization for TensorFlow image, open a terminal window, and then enter this command: docker pull bitnami/tensorflow-intel:latest 
    Note Intel recommends using the latest image. If needed, you can find all versions in the Docker Hub Registry. 
  20. To test Intel Optimization for TensorFlow, start the container with this command: docker run -it --name tensorflow-intel bitnami/tensorflow-intel

    Note tensorflow-intel is the container name for the bitnami/tensorflow-intel image. 
    For more information on the docker run command (which starts a Python* session), see Docker Run.
    The container is now running.

  21. Import Intel Optimization for TensorFlow into your program. The following image demonstrates commands you can use to import and use the TensorFlow API.

For more information about using TensorFlow and its API, see TensorFlow Introduction and TensorFlow API.

Connect

To ask product experts at Intel questions, go to the Intel Collective at Stack Overflow, and then post your question with the intel-cloud tag.

For queries about the Intel Optimization for TensorFlow image on Docker Hub, see the Bitnami Community.

To file a Docker issue, see the Issues section.

For more information, see Intel Optimization for TensorFlow Packaged by Bitnami.