Rerun failed Parallel Runs
under review
Donald Tyler
This feature is desperately needed. Your customers are wasting SOOOOO many credits by re-running unnecessary nodes. My guess is that's why this hasn't been implemented yet, as doing so would lose you money. But this is definitely an anti-pattern and is toxic to your customers. Please fix!
Akbar
+1 this feature would make your product so much more powerful
Hafizhuddin Wafi
+1 please!
bahrum saleh
+1 come on, let's get this done by tomorrow
Reydi Sutandang
+1 can we just please have it by tomorrow
Benjamin Brinckerhoff
+1 to this. We have persistent issues with networking related to downloading Google Chrome via the CircleCI orb (https://github.com/CircleCI-Public/browser-tools-orb/issues/33).
Our jobs use 150 machines in parallel and recently we've seen about 145 work, while 5 have connectivity issues and fail the entire job.
Re-running doesn't help in this case, since it's likely that at least one machine fails to connect to Google.
It would be great if a machine that fails is discarded and a new machine is booted up to retry.
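For reference, a minimal config of the kind described above might look like this (orb version, image, and test command are assumed placeholders, not the exact setup):

version: 2.1

orbs:
  browser-tools: circleci/browser-tools@1.4.8  # version is a placeholder

jobs:
  test:
    docker:
      - image: cimg/node:20.11-browsers  # image is a placeholder
    parallelism: 150  # 150 containers, each installs Chrome before running its test split
    steps:
      - checkout
      - browser-tools/install-chrome  # a transient download failure on any one container fails the whole job
      - run: npx jest $(circleci tests glob "test/**/*.test.js" | circleci tests split)

With parallelism that high, the chance that every single container downloads Chrome successfully drops fast, which is why a plain re-run of the whole job rarely helps.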
Ratko Veprek
+1 we'd appreciate this feature
Barbara Nichols
under review
Thank you for your feedback. We're exploring ways to solve this.
Reydi Sutandang
Barbara Nichols: Hi Barbara, is there any update on this?
Barbara Nichols
Reydi Sutandang: We are currently exploring what we can implement on our end and what the best approach is. It's a bit tricky and not a quick fix.
We're looking at it more from the perspective of rerunning failed tests rather than rerunning the parallel run that failed.
Nikita Butenko
- When you have 30 parallel jobs and one has a flaky test failure, it is a huge credits burn to re-run all of them.
- When re-running with SSH, all 30 jobs start again, but with, you guessed it, SSH enabled. This is extremely wasteful.
Prajno Malla
+++