Problem
Today, CPU and RAM utilization data for individual jobs is only visible in the "Resources" tab of the job UI. There is no public API endpoint that exposes this data programmatically.
The closest alternative — the Usage Export API — returns a full org-level CSV dump with no way to filter by project, branch, or pipeline ID. For teams that want to automate resource analysis (e.g. comparing resource consumption between a feature branch and `main`), this means pulling and processing a large dataset for the entire organization just to get data on a small subset of jobs.
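To make the pain point concrete, here is a minimal sketch of the workaround that is available today: download the full org export and filter it client-side. The column names (`PROJECT_NAME`, `VCS_BRANCH`, etc.) are illustrative assumptions, not the real export schema.

```python
import csv
import io

# Stand-in for a full org-level usage export. The column names here are
# assumptions for illustration; the real export schema may differ.
SAMPLE_EXPORT = """\
PROJECT_NAME,VCS_BRANCH,JOB_NAME,MEDIAN_CPU_UTILIZATION_PCT,MEDIAN_RAM_UTILIZATION_PCT
api,main,build,41,62
api,feature/cache,build,38,60
web,main,deploy,12,30
"""

def filter_rows(csv_text, project, branch):
    """Keep only the rows for one project/branch pair.

    Today this filtering has to happen client-side, after downloading
    the entire org dataset.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row for row in reader
            if row["PROJECT_NAME"] == project and row["VCS_BRANCH"] == branch]

rows = filter_rows(SAMPLE_EXPORT, "api", "main")
```

Even in this toy example, two of the three downloaded rows are discarded; at org scale, that ratio is far worse.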
---
Requested changes
  1. Expose per-job CPU/RAM usage via a public API endpoint — make the data powering the job "Resources" tab accessible programmatically, scoped to a single job ID.
  2. Add filtering to the Usage Export API — allow the org-level usage export to be filtered by one or more of `project_id`, `branch`, `pipeline_id`, `workflow_id`, or `job_name`. This would make the API usable for targeted analysis without requiring consumers to download and process the full org dataset.
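For illustration, a filtered export request might be shaped like the sketch below. The base URL, path, and the idea of passing filters as query parameters are all assumptions made for the sake of the example; only the filter field names come from the proposal above.

```python
from urllib.parse import urlencode

# Hypothetical endpoint path; the real path would be defined by the API team.
BASE = "https://api.example.com/v2/organizations/{org_id}/usage_export_job"

# The filter fields requested in this proposal.
ALLOWED_FILTERS = {"project_id", "branch", "pipeline_id", "workflow_id", "job_name"}

def export_url(org_id, **filters):
    """Build a usage-export URL scoped by the proposed filter fields."""
    unknown = set(filters) - ALLOWED_FILTERS
    if unknown:
        raise ValueError(f"unsupported filters: {sorted(unknown)}")
    query = urlencode(sorted(filters.items()))
    return BASE.format(org_id=org_id) + (f"?{query}" if query else "")

url = export_url("org-123", project_id="proj-9", branch="main")
```

The key point is not the URL shape but the contract: the server does the filtering, so the consumer only receives the rows it asked for.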
---
Use case
Engineering teams want to automate resource rightsizing — comparing CPU/RAM utilization of jobs on a feature branch versus the default branch to identify over-provisioned resource classes before merging. This is currently impossible without either scraping the UI or processing a full org-level export.
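The comparison step itself is simple once the data is accessible; the blocker is getting per-job numbers programmatically. A sketch of the pre-merge check, with illustrative job names, utilization figures, and threshold:

```python
def overprovisioned(main_stats, branch_stats, threshold_pct=40):
    """Flag jobs whose peak CPU utilization stays below threshold_pct
    on both the default branch and the feature branch.

    main_stats / branch_stats map job name -> peak CPU utilization (%).
    In practice these would come from the requested per-job endpoint;
    here they are hard-coded for illustration.
    """
    flagged = []
    for job, main_cpu in main_stats.items():
        branch_cpu = branch_stats.get(job)
        if branch_cpu is not None and max(main_cpu, branch_cpu) < threshold_pct:
            flagged.append(job)
    return sorted(flagged)

main_stats = {"build": 35, "test": 80}      # peak CPU % on main
branch_stats = {"build": 30, "test": 78}    # peak CPU % on the feature branch
candidates = overprovisioned(main_stats, branch_stats)
```

A job flagged by a check like this is a candidate for a smaller resource class before the branch ever merges.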
---
Impact
  • Enables automated resource optimization workflows and cost analysis scripts
  • Reduces unnecessary data transfer for customers with large orgs who only need project- or branch-scoped data
  • Unlocks use cases like pre-merge resource regression detection and per-team cost attribution