The DEEPX M.2 AI Accelerator is a compact, high-performance AI module that delivers exceptional inference accuracy, ultra-low power operation, and a minimal spatial footprint. It maintains GPU-level precision while consuming under 5 W and operates reliably across an industrial temperature range of -25°C to 85°C.
By integrating an ISP and an encoder, the module enables end-to-end vision-pipeline optimization under 7 W, processing camera streams directly and reducing CPU load in security and retail applications.

Featuring a dedicated AI accelerator integrated into the standard M.2 2280 form factor, the module delivers 25 TOPS of INT8 computing power, enabling high-performance inference in a compact design.

With a universal PCIe M.2 interface supporting plug-and-play connectivity, the accelerator can be quickly embedded into existing x86- or ARM-based devices, significantly lowering the barrier to hardware integration and the cost of retrofitting existing systems.
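
As an illustration of this plug-and-play integration, the sketch below (Python, assuming a Linux host with sysfs) scans the PCI device tree for the module by vendor ID. The `DEEPX_VENDOR_ID` value is a hypothetical placeholder, not a figure from this datasheet.

```python
from pathlib import Path

# Hypothetical placeholder; substitute the vendor ID reported by the actual module.
DEEPX_VENDOR_ID = "0x1234"

def find_accelerators(vendor_id: str = DEEPX_VENDOR_ID) -> list[str]:
    """Return PCI addresses of devices matching the given vendor ID (Linux sysfs)."""
    matches = []
    for dev in Path("/sys/bus/pci/devices").iterdir():
        vendor = (dev / "vendor").read_text().strip()
        if vendor == vendor_id:
            matches.append(dev.name)  # e.g. "0000:01:00.0"
    return matches

if __name__ == "__main__":
    print("M.2 accelerator(s) found:", find_accelerators() or "none")
```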

Designed around a high-efficiency architecture, the module sustains high inference throughput within a 2-5 W power envelope, leading in TOPS/W and making it ideal for fanless, space-constrained edge deployments.
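
For context, the efficiency claim can be checked directly from the figures in the specification table below; the following minimal sketch divides the rated 25 TOPS by the stated 2-5 W power envelope.

```python
# Back-of-the-envelope TOPS/W from the rated figures in the specification table.
INT8_TOPS = 25                        # rated INT8 performance
POWER_MIN_W, POWER_MAX_W = 2.0, 5.0   # stated power envelope for supported models

best_case = INT8_TOPS / POWER_MIN_W   # 12.5 TOPS/W at the 2 W floor
worst_case = INT8_TOPS / POWER_MAX_W  # 5.0 TOPS/W at the 5 W ceiling

print(f"Efficiency range: {worst_case:.1f}-{best_case:.1f} TOPS/W")
```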

A complete SDK and toolchain covering model compilation, optimization, and deployment is provided, with support for mainstream AI frameworks, dramatically shortening the development cycle from model to product.
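
The vendor's compiler invocation is not shown here; as a hedged sketch of the workflow, a trained PyTorch model would typically be exported to ONNX (one of the supported formats listed below) before being handed to the dataflow compiler. The model choice and output path are illustrative only.

```python
import torch
import torchvision

# Export a PyTorch model to ONNX, one of the supported interchange formats;
# the resulting file would then be passed to the vendor's dataflow compiler.
# weights=None keeps the example offline; substitute your own trained model.
model = torchvision.models.resnet50(weights=None).eval()
dummy_input = torch.randn(1, 3, 224, 224)  # NCHW example input

torch.onnx.export(
    model,
    dummy_input,
    "resnet50.onnx",        # hypothetical output path
    input_names=["input"],
    output_names=["logits"],
    opset_version=13,
)
```
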
| Category | Item | Specification |
|---|---|---|
| Processor | INT8 Performance | 25 TOPS (= 200 eTOPS / INT8) |
| Signal Interface | PCI Express | PCIe Gen.3 x4, bandwidth 4 GB/s; compatible with PCIe x1 |
| Power | Power Consumption | 2 W min., 5 W max. for DEEPX-supported models |
| Operating Environment | Temperature | -25 to 85°C (throttling); -25 to 65°C (non-throttling) |
| Operating Environment | Humidity | 40°C @ 85% relative humidity (non-condensing) |
| Thermal Solution | Cooling | Heatsink (optional) |
| Physical | Form Factor | M.2 2280 (Key M) |
| Physical | Dimensions | 22 mm x 80 mm x 4.1 mm |
| Physical | Supply Voltage | 3.3 V ± 5% |
| Software Support | Windows | Windows 11, 10 (64-bit) |
| Software Support | Linux | Ubuntu 22.04, 20.04 LTS; Yocto Project and Docker supported |
| Software Support | Framework Support | TensorFlow, TensorFlow Lite, ONNX, Keras, PyTorch (converted via the dataflow compiler) |
| System Support | CPU Platform | x86- and ARM-based architectures |