Special "final" job to run at end of pipeline
Kyle Tryon
Issue: 1. Imagine a pipeline that spins up third-party services or containers that would normally be cleaned up at the end of the pipeline. If that pipeline fails, the final cleanup job never gets a chance to run.
2. Sending notification data to third-party services from within a running pipeline is currently difficult, both for the reason described above and because the results and data for that pipeline are not available via the API until after the pipeline has finished, limiting the amount of information that can be exported. Solution: the ability to specify that a job will execute after the pipeline has run. It should run regardless of how the pipeline ended (or be configurable), and it should have access to all of the pipeline's data (which the API currently does not expose until the pipeline has finished).
This feature would enable a number of notification, stats, and reporting orbs and greatly improve interactivity with external services used within pipelines. Related: https://ideas.circleci.com/ideas/CCI-I-1227
CCI-I-1300
Fernando Abreu
We’re planning to extend our requires syntax to support requiring any job that’s in a terminal state, which will now also include skipped (i.e., jobs that will never run because a dependency wasn’t met). This would allow queuing multiple jobs as part of the same logical group.
Example:

  jobs:
    - build
    - test:
        requires:
          - build
    - deploy:
        requires:
          - test
    - release:
        requires:
          - deploy
    - send_stats:
        requires:
          - release: terminal
In this setup, send_stats will run regardless of the outcome of release. Even if something fails upstream and release is skipped, send_stats will still execute as long as release reaches a terminal state. Would this be helpful in your workflows?
Shachar Or
Seems like a good step in the right direction
TylerBoddySparg
Upvoted. If the final job can be used to reflect the end result of the entire pipeline, it could be a very simple way to use GitHub's "Require status checks to pass before merging" with only a single "summary" check required. The final job's result would be relevant no matter which workflows and jobs were executed within the pipeline. This would be a major win for people who use CircleCI's dynamic configuration.
Shachar Or
Major upvote. Especially for dynamic configuration with a post-build step that needs to run after everything finishes; it's somewhat hard to list ALL of the previous jobs in the requires field.
It would be great to have final_jobs_always, final_jobs_success, and final_jobs_failure attributes under workflow to define post-workflow behavior.
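A rough sketch of how such workflow-level attributes might read in config. Note that final_jobs_always, final_jobs_success, and final_jobs_failure are this comment's proposal, not keys that exist in the CircleCI schema today, and the job names are made up for illustration:

```yaml
workflows:
  build-and-test:
    jobs:
      - build
      - test:
          requires:
            - build
    # Hypothetical keys suggested above -- not valid CircleCI syntax.
    final_jobs_always:
      - cleanup          # would run whether the workflow passed or failed
    final_jobs_success:
      - notify_success   # would run only if every job succeeded
    final_jobs_failure:
      - notify_failure   # would run only if any job failed
```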
zach david
Upvote - same issue, but with running different suites of tests in parallel and then wanting to aggregate the test results and send one Slack notification. Possible solution: allow passing a flag to requires, e.g. "allow_failed_jobs": true.
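A sketch of how that flag might look in a workflow definition. The allow_failed_jobs key is the commenter's suggestion and not an existing CircleCI option; the job names are placeholders:

```yaml
jobs:
  - unit_tests
  - integration_tests
  - aggregate_results:
      requires:
        - unit_tests
        - integration_tests
      # Hypothetical flag from the comment above -- not valid CircleCI syntax.
      # Intended meaning: run even if some required jobs failed.
      allow_failed_jobs: true
```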
Jones Trevor
Same, but for sending result data to SonarQube, since all inputs must be available in a single scan invocation.
Ashley Shea
This would be just great. +1 on this.
Kyle Tryon
Slack Orb request: https://github.com/CircleCI-Public/slack-orb/issues/73
This would also greatly benefit the SumoLogic orb and similar stats reporting utilities and orbs.