We optimize machine learning inference workloads for multiple applications in cloud or enterprise data centers and in edge applications. Our products, expertise and IP ensure all available compute resources are optimized for cost, throughput, latency and energy.
Offerings
VOLLO is an ML inference accelerator for the finance industry, designed to deliver leading latency, throughput, quality, and energy- and space-efficiency on the STAC-ML Markets (Inference) benchmarks. It can also accelerate a wide range of similar models developed in-house by financial firms.

VOLLO runs on an industry-standard FHFL PCIe accelerator card: the IA-840f, powered by an Intel® Agilex™ FPGA and built by BittWare, a Molex company. High accuracy is achieved by using floating-point formats in all operations; models can be trained in FP32 or bfloat16 and run on VOLLO without retraining or accuracy compromises.

Designed to be installed in a server co-located in a stock exchange, VOLLO achieves very high throughput and low energy consumption in a 1U server, significantly reducing the cost of running co-located servers.

Models can be trained in PyTorch or TensorFlow and then exported in ONNX format into the VOLLO tool suite, making it simple to program from your existing ML development environment. The flexibility of FPGA technology means that VOLLO can not only be software-configured with users’ LSTM model configurations, but that significant architectural innovations can also be adopted quickly with optimal compute structures.
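The "train in bfloat16 or FP32, run without retraining" property rests on a general fact about the formats: bfloat16 keeps FP32's full 8-bit exponent (so the dynamic range is identical) and drops only mantissa bits, making conversion a simple bit truncation. The following is a minimal, stdlib-only Python sketch of that truncation for illustration; the function names are our own and are not part of the VOLLO tool suite.

```python
import struct

def fp32_to_bfloat16_bits(x: float) -> int:
    """Truncate an FP32 value to bfloat16 by keeping its top 16 bits.

    bfloat16 = 1 sign bit + 8 exponent bits + 7 mantissa bits, i.e. the
    upper half of an IEEE 754 binary32, so truncation suffices.
    """
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return bits >> 16

def bfloat16_to_fp32(b: int) -> float:
    """Widen bfloat16 bits back to FP32 by zero-filling the low 16 bits."""
    return struct.unpack(">f", struct.pack(">I", b << 16))[0]

# Round-tripping loses precision (7 mantissa bits) but never range:
x = 3.14159265
rt = bfloat16_to_fp32(fp32_to_bfloat16_bits(x))   # 3.140625
big = bfloat16_to_fp32(fp32_to_bfloat16_bits(1e30))  # still ~1e30, no overflow
```

Because the exponent field is unchanged, weights trained in either format map onto the same numeric range at inference time, which is why no retraining pass is needed.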