Workflows

Gradient Workflows provides a simple way to automate machine learning tasks.

Supercharge your workflow with a CI/CD approach for machine learning. Install Gradient on any repo and train models directly from pull requests or commits. Build reproducible, maintainable, and deterministic models without ever configuring servers.

Run a workflow when you push code.

Link your source code with Gradient and trigger training from Git commits.

01

Connect

Install the GitHub app on your repo and connect to a Gradient project.

02

Train

Every time you push code, a Gradient Workflow is triggered.

03

Repeat

Iterate quickly and in parallel. Create continuously updated ML models.

Connect your account
Install the app and create a Gradient project.

Linking your Git repo takes just a few seconds. Once connected, your ML training will be tightly coupled with your source code.

Define your pipeline
Add a simple .yaml file to define your pipeline steps.
defaults:
  env:
    apiKey: secret:api_key
  resources:
    instance-type: C3

jobs:
  CloneRepo:
    outputs:
      repo:
        type: volume
    uses: git-checkout@v1
    with:
      url: https://github.com/gradient-ai/fashionmnist.git
  TrainModel:
    env:
      MODEL_DIR: /my-trained-model
    needs:
      - CloneRepo
    inputs:
      repo: CloneRepo.outputs.repo
    outputs:
      trained-model:
        type: dataset
        with:
          ref: demo-dataset
    uses: container@v1
    with:
      args:
        - bash
        - "-c"
        - >-
          cd /inputs/repo/train && python train.py &&
          cp -R /my-trained-model /outputs/trained-model
      image: 'tensorflow/tensorflow:1.9.0'
Push to train
Make changes to your code, then push. Our built-in CI/CD system triggers on every code change.
▲ ~ fashion-app/ git push
Version your models
Use the Create Model Gradient Action to automatically capture and save models to the model repository.

The Gradient model repository is a hub for importing, managing, and deploying ML models.
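As a sketch, a workflow job that captures a trained model with the Create Model action might look like the following (the job name, model name, and model type here are illustrative assumptions; check the Gradient Actions reference for the exact inputs):

```yaml
  UploadModel:
    needs:
      - TrainModel
    inputs:
      # consume the dataset output produced by the training job
      model: TrainModel.outputs.trained-model
    uses: create-model@v1
    with:
      name: fashion-mnist      # illustrative model name
      type: Tensorflow         # framework type recorded in the model repository
```

Because the step declares `needs: TrainModel`, it only runs after training succeeds, so every green run leaves a versioned model in the repository.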

Deploy as an API endpoint
🚀 Ship
Serve your trained model 🎉

Gradient makes model inference simple and scalable. Deploy any model as a high-performance, low-latency micro-service with a RESTful API. Easily monitor, scale, and version deployments.
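A minimal deployment spec might look like this (the serving image, port, and instance type are illustrative assumptions; consult the Gradient Deployments documentation for the exact schema):

```yaml
image: tensorflow/serving:2.8.0   # illustrative serving image
port: 8501                        # port the container listens on
resources:
  replicas: 1                     # scale out by raising the replica count
  instanceType: C4                # any available Gradient instance type
```

Scaling and versioning then become spec changes: bump `replicas` or swap the `image` tag and redeploy.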

"Our partnership with Paperspace will boost our system’s advanced analytics so that we can better enable cities to remotely and continuously control their wastewater quality. Accordingly, we will begin to see greater wastewater reuse, cleaner environments, and healthier communities."

Ari Goldfarb, Kando CEO

Perfect for ML developers. A powerful no-fuss environment with loads of features that "just works."

Free signup
Easy setup
End to End

Start in seconds

Go from signup to training a model in seconds. Leverage pre-configured templates & sample projects.

Infrastructure abstraction

Job scheduling, resource provisioning, cluster management, and more without ever managing servers.

Scale instantly

Scale up training with a full range of GPU options with no runtime limits.

Full reproducibility

Automatic versioning, tagging, and life-cycle management. Develop models and compare performance over time.

Collaboration

Say goodbye to black-boxes. Gradient provides a unified platform designed for your entire team.

Insights

Improve visibility into team performance. Invite collaborators or leverage public projects.

And much more...

  • Persistent storage
  • Terminals
  • System metrics
  • Versioning
  • Dataset tracking
  • Run anywhere
  • Tag management
  • Log streaming
  • Python CLI and SDK

Run on any ML framework. Choose from a wide selection of pre-configured templates or bring your own.

Write once, run anywhere

Run Gradient as a managed service, or deploy it in your own cloud or on-premises cluster.

Managed Service

Join over 500K developers. Run notebooks, workflows, and more without any setup.

Get Started
Self-Hosted
New

On-prem, cloud, or hybrid environment support with NVIDIA DGX integration.

Download

Questions about which option is best for you?

Add speed and simplicity to your workflow today