Tokyo Electron Device Boosts AI Inference Speeds

Leveraging Intel's OpenVINO™ toolkit integrated into PCs, TED accelerated AI inference without external accelerators.

At a glance:

  • Tokyo Electron Device (TED), a member of the Tokyo Electron Group, specializes in semiconductor devices, electronic components, and networking devices.

  • TED delivers semiconductor solutions leveraging Intel's OpenVINO™ toolkit. When integrated into industrial PCs for medical and manufacturing applications, this developer toolset accelerates AI inference without the need for external accelerators.


Executive Summary

Recent advances in AI research using camera images have spurred rapid development of AI solutions by device manufacturers. These applications require significant computational power for AI inference, which is typically offloaded to external accelerators. Adding external GPU cards, however, introduces a series of issues: size constraints, higher costs, increased power consumption, and concerns about long-term availability.

This is where the OpenVINO™ toolkit comes in: a solution for accelerating AI inference at the edge. The OpenVINO toolkit is a developer toolset provided by Intel at no cost that optimizes inference performance on Intel CPUs, integrated GPUs (iGPUs), and NPUs.

Using the OpenVINO toolkit enables high-speed AI inference on both standard laptops and industrial PCs.

Read the white paper – Leveraging OpenVINO™ Toolkit for AI Inference in Medical and Industrial Imaging to Overcome Size and Cost Challenges