Direct Job creation is best for one-off workloads or quick experiments. For workloads you plan to run repeatedly, consider creating a Job Recipe instead — recipes save your configuration and make re-launching easier.

Understanding Job Types

Jobs can be either BATCH or PERSISTENT. BATCH jobs run until completion or timeout; PERSISTENT jobs run until you explicitly stop them.
| Property | Value |
| --- | --- |
| task | BATCH |
| max_timeout_run_ms | Required — maximum runtime in milliseconds |
| Billing | Charged for actual execution time |
| Use cases | Rendering, data processing, inference tasks |
The job automatically stops when:
  • Your container exits
  • The timeout is reached
{
  "task": "BATCH",
  "max_timeout_run_ms": 3600000,
  "title": "Render Scene"
}
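
For comparison, a PERSISTENT job omits the timeout by setting max_timeout_run_ms to null (the title below is illustrative; see the full request body in the next section):

```json
{
  "task": "PERSISTENT",
  "max_timeout_run_ms": null,
  "title": "Dev Environment"
}
```

A PERSISTENT job keeps billing until you cancel it, so stop it as soon as you are done.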

Create Job Request

Endpoint

POST /v1/jobs

Request Body

{
  "task": "PERSISTENT",
  "title": "My GPU Instance",
  "cpu_count": 1,
  "gpu_count": 1,
  "min_ram_gb": 16,
  "min_storage_gb": 32,
  "min_vram_gb": 8,
  "max_timeout_run_ms": null,
  "parameters": {
    "type": "docker",
    "parameters": {
      "image": "otoy/dispersed-base",
      "tag": "latest",
      "sshkey": "ssh-ed25519 AAAA... user@host",
      "allowed_ips": ["0.0.0.0/0"]
    }
  }
}

Hardware Requirements

| Field | Type | Description |
| --- | --- | --- |
| cpu_count | number | Number of CPU cores |
| gpu_count | number | Number of GPU devices |
| min_ram_gb | number | Minimum system RAM in GB |
| min_storage_gb | number | Minimum available storage in GB |
| min_vram_gb | number | Minimum GPU VRAM in GB |
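
A malformed request costs a round trip, so a small client-side check before submission can help. This helper is hypothetical, not part of the API:

```javascript
// Hardware fields from the table above. The API treats these as numbers;
// this client-side check is an illustrative convenience, not API validation.
const HARDWARE_FIELDS = [
  "cpu_count",
  "gpu_count",
  "min_ram_gb",
  "min_storage_gb",
  "min_vram_gb",
];

// Returns the names of hardware fields that are absent or not valid numbers.
function missingHardwareFields(jobData) {
  return HARDWARE_FIELDS.filter(
    (field) => typeof jobData[field] !== "number" || jobData[field] < 0
  );
}
```

Call it before signing the request and surface any returned field names to the user.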

Container Parameters

| Field | Type | Description |
| --- | --- | --- |
| image | string | Docker image name |
| tag | string | Docker image tag |
| sshkey | string | (Optional) SSH public key for access |
| allowed_ips | string[] | (Optional) IP addresses allowed to connect |

Example: Create a BATCH Job

All API requests require HMAC signature authentication. See Authenticate Requests for the generateAuthHeaders helper function used below.
// Using the generateAuthHeaders helper from Authenticate Requests guide
const jobData = {
  task: "BATCH",
  title: "Process Dataset",
  cpu_count: 4,
  gpu_count: 1,
  min_ram_gb: 32,
  min_storage_gb: 100,
  min_vram_gb: 16,
  max_timeout_run_ms: 7200000, // 2 hours
  parameters: {
    type: "docker",
    parameters: {
      image: "myregistry/processor",
      tag: "v1.0",
    },
  },
};

const headers = generateAuthHeaders("POST", "/v1/jobs", {}, jobData);

const response = await fetch("https://api.compute.x.io/v1/jobs", {
  method: "POST",
  headers,
  body: JSON.stringify(jobData),
});

const job = await response.json();
console.log("Job UUID:", job.uuid);
console.log("Status:", job.status);
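
max_timeout_run_ms takes raw milliseconds (the 7200000 above is 2 hours). A couple of hypothetical conversion helpers keep magic numbers out of payloads:

```javascript
// Illustrative helpers for max_timeout_run_ms, which the API expects
// in raw milliseconds.
const hoursToMs = (hours) => hours * 60 * 60 * 1000;
const minutesToMs = (minutes) => minutes * 60 * 1000;
```

With these, the payload above could set max_timeout_run_ms: hoursToMs(2).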

Example: Create a PERSISTENT Job with SSH

const jobData = {
  task: "PERSISTENT",
  title: "Development Environment",
  cpu_count: 2,
  gpu_count: 1,
  min_ram_gb: 16,
  min_storage_gb: 50,
  min_vram_gb: 8,
  max_timeout_run_ms: null, // null is required for PERSISTENT jobs
  parameters: {
    type: "docker",
    parameters: {
      image: "otoy/dispersed-base",
      tag: "latest",
      sshkey: "ssh-ed25519 AAAA... user@host",
      allowed_ips: ["0.0.0.0/0"], // allows connections from any IP; restrict in production
    },
  },
};

const headers = generateAuthHeaders("POST", "/v1/jobs", {}, jobData);

const response = await fetch("https://api.compute.x.io/v1/jobs", {
  method: "POST",
  headers,
  body: JSON.stringify(jobData),
});

const job = await response.json();
console.log("Job created:", job.uuid);

Query Job Runs

After creating a Job, query for its Job Runs to get connection information:
GET /v1/job-runs?filter[job_uuid]={job_uuid}
Response:
{
  "data": [
    {
      "uuid": "run-uuid-here",
      "status": "ASSIGNED",
      "started_at_ms": 1705123456789,
      "node_urls": [
        {
          "description": "ssh",
          "hostname": "123.45.67.89",
          "port": 22,
          "protocol": "tcp"
        }
      ]
    }
  ],
  "page": 1,
  "limit": 20,
  "total": 1
}
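
Connection details live in node_urls. A small hypothetical helper can pull the SSH endpoint out of a Job Run shaped like the response above:

```javascript
// Illustrative helper: find the SSH entry in a Job Run's node_urls and
// format it as host:port. Returns null if no SSH endpoint is published yet.
function sshEndpoint(jobRun) {
  const entry = (jobRun.node_urls || []).find((u) => u.description === "ssh");
  return entry ? `${entry.hostname}:${entry.port}` : null;
}
```

A run in PENDING may not have node_urls yet, so handle the null case when polling.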

View Job Run Logs

The list endpoint (GET /v1/job-runs) does not include logs to keep responses small. To retrieve logs via the API, request a specific Job Run:
GET /v1/job-runs/{job_run_uuid}
The response includes the logs field with container output.
You can always view Job Run logs in the Dispersed Console on the Job Run Detail page.

Job Lifecycle

PENDING → ASSIGNED → RUNNING → COMPLETED
       ↘           ↘        
    CANCELLED     FAILED   
| Status | Description |
| --- | --- |
| PENDING | Waiting for node assignment |
| ASSIGNED | Node selected, container starting |
| RUNNING | Container executing |
| COMPLETED | Finished successfully |
| FAILED | Error occurred |
| CANCELLING | Cancel requested |
| CANCELLED | Cancelled by user |
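
Polling code can use the lifecycle above to decide when to stop: COMPLETED, FAILED, and CANCELLED are terminal, so a run in one of those states will not change again. A minimal sketch:

```javascript
// Terminal statuses from the lifecycle table; once a Job Run reaches one
// of these, further polling is unnecessary.
const TERMINAL_STATUSES = new Set(["COMPLETED", "FAILED", "CANCELLED"]);

function isTerminal(status) {
  return TERMINAL_STATUSES.has(status);
}
```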

Stop a Job

To stop a RUNNING job (or cancel a PENDING job):
PUT /v1/jobs/{job_uuid}/cancel
{
  "reason": "Optional reason string"
}
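
A small hypothetical helper can assemble the method, path, and body for this call; sign and send the result with generateAuthHeaders and fetch exactly as in the earlier examples:

```javascript
// Illustrative builder for the cancel endpoint. The reason field is
// optional, so it is omitted from the body when not provided.
function buildCancelRequest(jobUuid, reason) {
  return {
    method: "PUT",
    path: `/v1/jobs/${jobUuid}/cancel`,
    body: reason ? { reason } : {},
  };
}
```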

Best Practices

  1. Use Job Recipes for repeated workloads — If you’ll run this configuration again, create a recipe instead of submitting the full payload each time
  2. Use BATCH for finite tasks — Set realistic timeouts to prevent runaway costs
  3. Use PERSISTENT for interactive work — SSH access, development, debugging
  4. Stop PERSISTENT jobs promptly — Set calendar reminders if needed

API Reference

For complete API documentation including all fields and response schemas, see the API Reference.