What You’ll Need

Quick Start Steps

| Step | Action | Guide |
|------|--------|-------|
| 1 | Launch your first Job using a Recipe (easiest) | Quick Start: Job Recipe |
| 2 | Learn to authenticate API requests | Authenticate Requests |
| 3 | Create custom Jobs with the API | Create a Job |

Two Ways to Launch Jobs

1. Job Recipes (Repeatable Workloads)

Job Recipes are pre-configured templates that you can launch repeatedly with minimal effort. Use recipes for workloads you expect to run more than once.

Why use Job Recipes?
  • Easy re-launches — Run the same workload again without resubmitting the full configuration
  • Iterative development — Tweak inputs and re-launch quickly
  • Pre-tested configs — Hardware requirements and container settings already validated
  • Forkable — Fork official recipes to create your own customized versions with different hardware specs, Docker images, or pre-filled inputs
Available recipe types include:
  • Base: Linux with SSH access (ubuntu, debian)
  • Inference: GenAI model inference environments (comfyui, pytorch)
  • Training: Fine-tuning and reinforcement learning (unsloth, jupyter)
Quick Start with Job Recipes →
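Because a recipe already carries validated hardware and container settings, launching one only requires the recipe ID plus any input overrides. The sketch below illustrates that idea; the function, field names, and recipe ID are illustrative assumptions, not the documented API schema.

```python
# Hypothetical sketch of a recipe-launch payload. The field names
# ("recipe_id", "inputs") and the recipe ID are assumptions for
# illustration, not the documented schema.
import json


def build_recipe_launch_request(recipe_id, overrides=None):
    """Build the request body for launching a Job Recipe.

    The recipe already holds pre-tested hardware and container
    settings, so the payload stays small even across re-launches.
    """
    body = {"recipe_id": recipe_id}
    if overrides:
        # Iterative development: tweak only the inputs and re-launch.
        body["inputs"] = overrides
    return json.dumps(body)


# Re-launching is just rebuilding this small payload with new inputs:
print(build_recipe_launch_request("comfyui-base", {"model": "sdxl"}))
```

The point of the sketch is the contrast with a full Job payload: re-launching a recipe never requires resubmitting the complete hardware and container configuration.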

2. Direct API (One-Off Workloads)

Create Jobs directly via the API when you need full control or have a one-off workload that you don't expect to repeat.
  • Specify exact hardware requirements
  • Configure custom Docker images
  • Full control over container parameters
If you find yourself submitting the same Job payload repeatedly, consider creating a Job Recipe instead — it’s much easier to re-launch a recipe than to manage full Job payloads.
Create Jobs via API →
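A direct Job submission spells out everything a recipe would otherwise carry for you. The payload below is a minimal sketch of that shape; every field name and value in it is an illustrative assumption, not the documented schema.

```python
# Hypothetical sketch of a direct Job payload. All field names and
# values here are assumptions for illustration, not the real schema.
job_spec = {
    "type": "PERSISTENT",       # or "BATCH" for a fixed-duration run
    "hardware": {               # exact hardware requirements
        "gpu_type": "a100",
        "gpu_count": 1,
        "memory_gb": 64,
    },
    "container": {              # custom Docker image and parameters
        "image": "ubuntu:22.04",
        "env": {"MY_VAR": "value"},
        "ports": [22],          # e.g. expose SSH into the container
    },
}
```

If you find yourself resubmitting a payload like this unchanged, that is the signal to fork it into a Job Recipe instead.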

How Job Execution Works

  1. Submit a Job specifying hardware requirements and container config
  2. The network matches your Job to an available Node
  3. The Node executes your Job in an isolated Docker container
  4. Connect to the running container (e.g. via SSH)
  5. Stop the Job when finished (or let it run to completion for BATCH jobs)
Jobs are billed for compute time: PERSISTENT jobs are billed hourly until stopped, while BATCH jobs are billed for their configured execution duration.
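The billing rules above can be sketched as simple arithmetic. The rate, the function name, and the assumption that PERSISTENT time rounds up to whole hours are all illustrative; the real proration rules may differ.

```python
# Illustrative cost estimate under the billing rules above.
# Assumption: PERSISTENT time rounds up to whole hours; the real
# rounding/proration behaviour may differ.
import math


def estimate_cost(job_type, hourly_rate, hours_running=None, configured_hours=None):
    """Estimate compute cost for a job.

    PERSISTENT jobs bill hourly until stopped; BATCH jobs bill for
    their configured execution duration.
    """
    if job_type == "PERSISTENT":
        return hourly_rate * math.ceil(hours_running)
    if job_type == "BATCH":
        return hourly_rate * configured_hours
    raise ValueError(f"unknown job type: {job_type}")


print(estimate_cost("PERSISTENT", 2.50, hours_running=3.2))   # rounds up to 4 hours
print(estimate_cost("BATCH", 2.50, configured_hours=2))
```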

Monitor Your Jobs

Track job status, view logs, and manage active job runs through the Dispersed Console.