AN 1011: TinyML Applications in Altera FPGAs Using LiteRT for Microcontrollers
ID: 848984
Date: 4/07/2025
2. Preparing a LiteRT Inference Model
The LiteRT development workflow involves identifying a Machine Learning (ML) problem, choosing a model that solves that problem, and implementing the model on embedded devices. LiteRT is designed to run machine learning models on embedded devices with only a few kilobytes of memory. It doesn't require operating system support, any standard C or C++ libraries, or dynamic memory allocation.
The following example illustrates how to prepare a LiteRT model for digit classification. It outlines the steps needed to build and train the model in a TensorFlow Python environment before converting it to the LiteRT format.
Import the following libraries at the start of the Python script:
import matplotlib.pyplot as plt   # plotting sample digits and results
import tensorflow as tf           # model definition, training, and conversion
import numpy as np                # numerical array handling for image data
import random                     # random sampling of images for display
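The subsequent steps build and train the digit-classification model and convert it to the LiteRT flat-buffer format. The sketch below illustrates one possible flow, assuming the standard Keras MNIST dataset, a small fully connected model, and the tf.lite.TFLiteConverter API; the architecture, training settings, and the output file name digit_model.tflite are illustrative assumptions, not the app note's exact code.

# A minimal sketch of preparing a digit-classification model for LiteRT.
# Assumes the Keras MNIST dataset; the app note's model may differ.

# Load and normalize the MNIST digit dataset (28x28 grayscale images).
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.astype(np.float32) / 255.0
x_test = x_test.astype(np.float32) / 255.0

# Define a small classifier suited to a memory-constrained target.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))

# Convert the trained Keras model to a LiteRT flat buffer and save it.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open("digit_model.tflite", "wb") as f:
    f.write(tflite_model)

For deployment with LiteRT for Microcontrollers, the resulting .tflite file is typically converted to a C array (for example, with the xxd utility) so it can be compiled directly into the embedded firmware image.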