Option to allow failures in Fan-In/Out Workflow
in progress
Brandon Page
The requires tag in workflows really limits what is possible. Requires makes a lot of sense for the deployment scenario used to demo it, but waiting for a group of jobs to complete (pass or fail) should be an option.
Scenario: Setup -> Run multiple sets of tests in parallel -> Combine test results/coverage results/artifacts and report to PR
That scenario above isn't possible because if a single test fails in any of the jobs running in parallel, the fan-in step won't run.
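For illustration, the desired workflow config would look something like this (job names hypothetical), yet today the fan-in job is skipped as soon as any required job fails:
workflows:
  test_and_report:
    jobs:
      - setup
      - unit_tests:            # hypothetical parallel test jobs
          requires:
            - setup
      - integration_tests:
          requires:
            - setup
      - report_to_pr:          # fan-in: never runs if any required job fails
          requires:
            - unit_tests
            - integration_tests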
CCI-I-344
Sherif
Any status on this feature? Our org is checking this page daily. We need a job to run after parallelized tests are completed, regardless of their outcome.
Vikram Krishnamurthy
Hey, we could also use a cleanup step in our flow for releasing held deployment locks during "fan in", or some kind of "finally" step that runs at the end no matter what.
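For illustration, a sketch of such a "finally"-style cleanup under the requires-with-state syntax proposed later in this thread (job names and syntax hypothetical):
jobs:
  - acquire_lock
  - deploy:
      requires:
        - acquire_lock
  - release_lock:              # hypothetical: runs whether deploy passed or failed
      requires:
        - deploy: completed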
seaona
hi there! What's the status on the Requires solution proposed by Nathan?
Peter Darton
Another use case is CircleCI orb testing.
At present, it isn't possible to write a CircleCI build that fully builds and tests a CircleCI orb, because there is no way to "check that a command or job will fail in <situation>" as part of the automated test.
An orb build needs to verify that the orb code functions correctly, and that means calling the code and checking that it works as expected, which means checking not only "does this work when it should" but also "does it fail when it should".
I want to be able to "run this (downstream) job regardless of whether the previous (required) job passed" so that my orb build can check "did that work? If so, actually fail the build for real, because that job-under-test should've failed". The easier you can make that, the better.
We would also benefit from this "run this job even if the previous one failed" facility for test-result combining (which we want to do but can't as-is), but I would've thought that the inability for CircleCI to do CI build-and-test of CircleCI orbs is the bigger issue.
The difference in these two usecases is how the workflow's status is calculated though.
In the case of test-combining, we'd still want the workflow to be declared an overall failure if a fanned-out test failed even if the test-result-combining job succeeded.
In the case of orb job testing, we'd need the workflow to be declared a success if the expected-to-fail job failed and the workflow to fail if the expected-to-fail job succeeded.
TL;DR: We want to be able to say "this job failing doesn't mean the workflow ends here" somehow ... but sometimes we need the workflow to declare a failure and sometimes a success.
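For illustration, the orb-testing case might look roughly like this under the requires-with-state syntax proposed later in this thread (job names hypothetical):
jobs:
  - job-under-test             # exercises an orb job that is expected to fail
  - assert-it-failed:          # hypothetical: runs only when the job above failed
      requires:
        - job-under-test: failed
# the workflow-status question remains: here the workflow as a whole
# should report success when job-under-test fails as expected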
Nathan Fish
Interested in everyone's thoughts on the following workflow graph when using "Requires". Does it give you enough information to understand why a job ran or did not run?
Robert Peralta
Nathan Fish Wow, that is perfect! This is definitely self-explanatory imo and solves the case I'm trying to address.
Fil Maj
Nathan Fish looking pretty good! Does one get similar information if you hover over the '3 jobs not ran' branch? I assume those were not run because they require job1 to pass?
Nicholas Shaw
Nathan Fish the second image seems to show it requires the prior job to fail; is that correct?
I think the main use case is the automated-tests example given: whether the job passes or fails, I don't care, I want a test report generated.
Instead of needing to do things like the following:
- run:
    command: |
      if [ "$TESTS_PASS" = true ]; then
        exit 0
      else
        exit 1
      fi
Nathan Fish
Fil Maj yes that's the intent.
Nathan Fish
Nicholas Shaw in this example, that is the scenario: requiring failure. If you didn't care about the job passing or failing, you could certainly model that with requires and the "completed" keyword. See the example config below for more details on what that would look like.
Nathan Fish
in progress
Fil Maj
Nathan Fish yay!
Nathan Fish
We are set to start working on a solution for "more flexible requires" and are looking for some feedback from you all. The config syntax would look something like the following:
jobs:
  - a
  - b
  - c:
      requires:
        - a              # defaults to success
        - b: completed   # success or failed
  - d:
      requires:
        - a: failed
  - e:
      requires:
        - b: canceled
There is one caveat: canceling a workflow does not mean the canceled-job logic is reached. Canceling a workflow still cancels all remaining work in the workflow.
Thoughts or considerations we should take into account?
Augusto Xavier
Nathan Fish Sounds like what we are looking for!
Fil Maj
Nathan Fish looking good so far, but for clarity, could you describe the various job states that can be specified as part of requires? I see success, failed, canceled, and completed as options, but it is unclear to me what canceled actually means. Is that a job state or a workflow outcome?
Nathan Fish
Fil Maj great question! Cancelled will be a new option where an "approval job" can be cancelled, not just approved. It will be possible to cancel an approval job via the API or the UI.
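For example, under the proposed syntax a cancelled approval could route the workflow down an alternate path (job names hypothetical):
workflows:
  release:
    jobs:
      - hold:                  # approval job; can be approved or cancelled
          type: approval
      - deploy:
          requires:
            - hold             # runs when the approval succeeds
      - notify_skipped:
          requires:
            - hold: canceled   # hypothetical: runs when the approval is cancelled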
Fil Maj
Nathan Fish Thanks for clarifying, LGTM 🚀 :shipit:
Josh Empson
Numerous teams here could really use this functionality as well (even outside the fan-in/out workflow), for example the case of scaling up a platform in order to run tests, running the tests, and then scaling down again regardless of the success of the tests.
I understand why requires depending on a successful completion would originally have been sufficient, and keeping this as the default behaviour would be completely fine; however, the continued lack of a switch/parameter to simply depend on job completion via success or failure is incredibly frustrating. Just a simple:
workflows:
  deploy_and_test:
    jobs:
      - deploy
      - run_tests:
          requires:
            - deploy
      - do_thing:
          requires:
            - run_tests
          when: always
would be incalculably helpful, and would prevent the need for the ugly waiter/listener jobs that seem to have become the standard workaround. With 331 upvotes on this issue, I'm obviously not alone in this thinking.
Please sort it out, Circle!
Michael
CircleCI won't do this because they make money off people running these hacky waiter jobs.
Sam Livingston-Gray
+1. In case anyone is still paying attention to this: this seems like such an obvious feature that I had assumed it already existed.
I'm trying to do some custom instrumentation of a parallelized test suite, and while "only compile total stats if all tests pass" is... better than nothing, I guess... it's considerably less useful than "always compile stats".
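Under the proposed syntax, "always compile stats" might look something like this (job names hypothetical):
jobs:
  - parallel_tests             # e.g. a test job with parallelism set in its definition
  - compile_stats:
      requires:
        - parallel_tests: completed   # hypothetical: runs whether tests pass or fail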