Intel Deep Learning Deployment Toolkit

Ditch the Complexity: Supercharge Inference with the Intel Deep Learning Deployment Toolkit

The easiest way to get the runtime is via pip; for the full Model Optimizer, download the complete OpenVINO toolkit.
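As a sketch, the pip route looks like this (the `openvino` and `openvino-dev` package names are the current PyPI distributions; `openvino-dev` bundles the Model Optimizer `mo` entry point):

```shell
# Runtime only: the inference engine, enough to execute IR models
pip install openvino

# Developer tools: adds the Model Optimizer (mo), benchmark_app, etc.
pip install openvino-dev
```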

To convert a trained model to OpenVINO's Intermediate Representation (IR), run the Model Optimizer:

    mo --input_model my_model.onnx --output_dir ./optimized_model

Here is a Python snippet to run your newly minted IR model:
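A minimal sketch using the OpenVINO Python API (this assumes the `openvino` package is installed; the model path and the input shape in the example call are illustrative, matching the Model Optimizer output directory above):

```python
import numpy as np

def run_inference(model_xml, input_array):
    """Load an IR model and run one synchronous inference on CPU.

    The `openvino` import is inside the function so the sketch can be
    defined even on machines where the package is not installed.
    """
    from openvino.runtime import Core

    core = Core()                                # entry point to the runtime
    model = core.read_model(model_xml)           # reads .xml (+ sibling .bin)
    compiled = core.compile_model(model, "CPU")  # compile for the CPU plugin
    result = compiled([input_array])             # single synchronous inference
    # `result` maps output nodes to numpy arrays; return the first output
    return result[compiled.output(0)]

# Example call (hypothetical path and shape):
# preds = run_inference("optimized_model/my_model.xml",
#                       np.random.rand(1, 3, 224, 224).astype(np.float32))
```

The model is compiled once and can then be called repeatedly, which keeps the per-inference overhead low on CPU targets.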

If you are deploying to CPUs (and let's be honest, the vast majority of inference still happens on CPUs), you are leaving performance on the table by not using DLDT.