AN 1011: TinyML Applications in Altera FPGAs Using LiteRT for Microcontrollers

ID 848984
Date 4/07/2025
Public


2.4.1. Converting into a LiteRT Model

A good starting point is converting a TensorFlow model to a LiteRT model without quantization, which generates a 32-bit floating-point LiteRT model.
import tensorflow as tf

# Convert the model from TensorFlow to a LiteRT model
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

Alternatively, you can use full integer-only quantization to reduce the model size and increase inference speed. However, quantization may reduce the model's accuracy.
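As a sketch of that workflow, the following converts a Keras model with full integer-only quantization using the standard `tf.lite.TFLiteConverter` options. The small `Sequential` model and the random representative dataset are placeholders for illustration only; in practice, substitute your trained model and a sample of real input data so the converter can calibrate the quantization ranges.

```python
import numpy as np
import tensorflow as tf

# Placeholder model for illustration; substitute your trained Keras model.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1),
])

# Representative dataset: a small sample of typical inputs that the
# converter uses to calibrate the ranges of activations for quantization.
# Random data is used here only as a stand-in for real samples.
def representative_dataset():
    for _ in range(100):
        yield [np.random.rand(1, 4).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Restrict the converter to integer-only kernels and make the model's
# input and output tensors int8, so inference needs no floating point.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_quant_model = converter.convert()
```

The resulting flatbuffer bytes can then be written to a file or embedded as a C array for deployment with LiteRT for Microcontrollers.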