ONNX, short for Open Neural Network Exchange, is an open standard for ML models that lets developers work across multiple frameworks without interoperability issues. ONNX also abstracts hardware by offering out-of-the-box support for common ML accelerators, including some of the latest ML-specific silicon entering the market.
Gradient natively integrates ONNX for deploying models and includes a pre-built, regularly updated ONNX image out of the box. Alternatively, customers can use a customized version of ONNX by supplying their own Docker image hosted on a public or private Docker registry.
When creating a Deployment, you can select the pre-built image or bring your own custom image. These options are available via the web UI, the CLI, or as a step within an automated pipeline.
When using the CLI, the command would look something like this:
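As an illustrative sketch only: the exact flag names and values below are assumptions for demonstration, not taken verbatim from the Gradient docs, so check `gradient deployments create --help` for the current options. The placeholders (model ID, deployment name, machine type, image) would be replaced with your own values:

```shell
# Hypothetical example of creating an ONNX Deployment with the Gradient CLI.
# Flag names and values are illustrative assumptions; verify against
# `gradient deployments create --help` before running.
gradient deployments create \
  --deploymentType ONNX \
  --modelId <your-model-id> \
  --name "my-onnx-deployment" \
  --machineType <machine-type> \
  --imageUrl <onnx-serving-image> \
  --instanceCount 1
```

To use the pre-built ONNX image, you would omit the custom image reference; to bring your own, you would point the image flag at your public or private registry.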
Learn more about the Gradient ONNX integration in the docs.