Deployments

Gradient Deployments makes model serving effortless.

Easily deploy your machine learning model as an API endpoint in a few simple steps. Stop worrying about Kubernetes, Docker, and framework headaches.

Gradient makes model inference simple and scalable.

Move from R&D into production with Deployments.

01

Select Model

Select an existing model or upload a new model from the interface or CLI.
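As a sketch of the kind of artifact this step expects, the snippet below exports a toy TensorFlow model as a SavedModel directory, the format TensorFlow Serving consumes. The model, signature, and output path are illustrative only, not part of the Gradient workflow itself.

```python
import tensorflow as tf


class ToyModel(tf.Module):
    """Trivial stand-in for a trained model."""

    def __init__(self):
        super().__init__()
        self.weight = tf.Variable(2.0)

    @tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.float32)])
    def __call__(self, x):
        # Dummy inference: scale the input by a learned weight.
        return self.weight * x


# Write a SavedModel directory to disk; this directory is the artifact
# you would then upload from the interface or CLI.
tf.saved_model.save(ToyModel(), "export/toy-model/1")
```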

02

Choose Runtime

Choose your preferred runtime, e.g. TensorFlow Serving or Flask.
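For the Flask option, a runtime is just a small web server that loads your model and exposes a prediction route. The sketch below is a minimal, generic example; the route name, payload schema, port, and `load_model` helper are assumptions, not part of Gradient's API.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)


def load_model():
    """Stand-in for loading your trained model from disk."""

    class EchoModel:
        def predict(self, features):
            # Dummy prediction: sum of each row's input features.
            return [sum(row) for row in features]

    return EchoModel()


model = load_model()


@app.route("/predict", methods=["POST"])
def predict():
    # Expects a JSON body like {"instances": [[...], [...]]}.
    payload = request.get_json(force=True)
    predictions = model.predict(payload["instances"])
    return jsonify({"predictions": predictions})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```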

03

Serve Model

Set instance types, autoscaling behavior, and other parameters. Click deploy!
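Once deployed, the model is reachable as an HTTP API endpoint. A minimal client call might look like the sketch below; the URL and JSON schema are placeholders for the values Gradient provides for your deployment.

```python
import requests

# Placeholder endpoint; substitute the URL of your own deployment.
ENDPOINT = "https://your-deployment-endpoint.example/predict"

response = requests.post(
    ENDPOINT,
    json={"instances": [[5.1, 3.5, 1.4, 0.2]]},
    timeout=10,
)
response.raise_for_status()
print(response.json())
```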

Gradient Deployments is in private beta. Complete the form to join the waitlist.