On Bitrise, can I make two workflows independently update separate GitHub checks on the same PR? - bitrise

When I create a pull request on GitHub, my project kicks off a workflow in Bitrise that exists only to start two other workflows, then it finishes. What I would expect to see in the GitHub pull request checks dialog is three distinct Bitrise checks, each waiting for its workflow to finish (one for the initial short workflow, then two more for the two workflows that are started from this first one). In reality, I only see one check, for this initial workflow. This one always succeeds after about 15 seconds because, as I've stated, it does no real work of its own. Is it possible to show distinct checks for all workflows?

This one always succeeds after about 15 seconds because, as I've stated, it does no real work of its own.
Move some of the work into this main workflow, then add the Bitrise Wait for Build step to the end of the workflow ( https://devcenter.bitrise.io/builds/triggering-builds/trigger-multiple-workflows/ ). This way the main triggered workflow will report the final status back to GitHub, and it doesn't have to finish in 15 seconds; it can do real work while it's waiting on the other workflows.
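Roughly, the bitrise.yml could look like this (the step IDs, input names and workflow names below are from memory / placeholders, so check them against the linked docs and the step library):

workflows:
  pr-main:                          # the workflow triggered by the pull request
    steps:
    - build-router-start:           # "Bitrise Start Build" (step ID assumed)
        inputs:
        - access_token: $ROUTER_ACCESS_TOKEN    # personal access token stored as a secret
        - workflows: |-
            unit-tests
            ui-tests
    # ... do some of the real work here instead of in the child workflows ...
    - build-router-wait:            # "Bitrise Wait for Build" (step ID assumed)
        inputs:
        - access_token: $ROUTER_ACCESS_TOKEN
    # the wait step fails this build if any started build fails, so the single
    # GitHub check reflects the combined result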

Related

Alter 'status' request interval of CloudBuild submit

I'm trying to set up CI/CD for a mono repository using Google Cloud Build. We have a single Cloud Build trigger that starts a build on a new commit; it does some general steps and then starts a build for every (micro)service in the mono repository using gcloud builds submit.
This, however, means that if 4 or 5 people are pushing code to the repository at roughly the same time, we can have around 50-70 concurrent builds running in Cloud Build, which in itself isn't an issue for us. The only issue is that when this happens, the following error pops up:
{
  "code": 429,
  "message": "Quota exceeded for quota metric 'Build and Operation Get requests' and limit 'Build and Operation Get requests per minute' of service 'cloudbuild.googleapis.com' for consumer 'project_number:<PROJECT_NUMBER>'.",
  "status": "RESOURCE_EXHAUSTED",
  "details": [{
    "@type": "type.googleapis.com/google.rpc.ErrorInfo",
    "reason": "RATE_LIMIT_EXCEEDED",
    "domain": "googleapis.com",
    "metadata": {
      "service": "cloudbuild.googleapis.com",
      "consumer": "projects/<PROJECT_NUMBER>",
      "quota_limit": "GetRequestsPerMinutePerProject",
      "quota_metric": "cloudbuild.googleapis.com/get_requests"
    }
  }]
}
In other words: we are running into quota limits. The quota only allows us to make 900 operational requests per minute.
We already tried switching to private pools in the hope that the above quota limit was only there for when you don't use private pools, but this unfortunately still makes us hit the quota.
Now, I am trying to find out if I can decrease the number of these operational requests.
A possible solution might be related to how I am using gcloud builds submit. When you run gcloud builds submit, it starts a new build, waits for the build to finish, and shows the output of the build. To achieve this, I presume that gcloud is making requests every few seconds to find out what the status of the build is. I suspect that these 'status' requests are why my Cloud Build quota limit is reached, which is why I'm trying to see how I can lower the number of these requests per minute.
One option is to simply decrease the number of builds running in parallel, which is unfortunately not possible in my situation; executing them sequentially simply takes more time than is acceptable.
Another option would be to increase the time between such 'status' requests. However, I unfortunately did not find a CLI flag on this page to alter this.
Note: I did find the --async flag; however, that does NOT help me, since I still want the process to wait until the build has succeeded. I also found the --suppress-logs flag, which does NOT help me either, since those requests presumably don't interact with Cloud Build but with the GCS bucket where the logs are stored.
The only option left that I can think of is to start my builds with the --async flag and then manually poll whether the build has succeeded, using a longer interval. However, that feels like a lot of manual work for which I would need to write bash scripts that then have to be maintained. This isn't a path I would like to take unless really necessary.
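For context, the kind of script I would have to write and maintain would look roughly like this (the 60-second interval is just an illustration):

# start the build without waiting for it; print only the build ID
BUILD_ID=$(gcloud builds submit --async --format='value(id)' .)

# poll the status far less often than gcloud's built-in waiter does
STATUS="QUEUED"
while [ "$STATUS" = "QUEUED" ] || [ "$STATUS" = "WORKING" ]; do
  sleep 60
  STATUS=$(gcloud builds describe "$BUILD_ID" --format='value(status)')
done
echo "Build $BUILD_ID finished with status $STATUS"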
Does anyone know of another way of achieving this?
If 4 or 5 people are pushing code to the repository
This shouldn't happen. The reason is that you should use the "push" trigger on the main branch, not on a development branch.
What do I mean by this?
I mean that building should occur on the main branch, which would correspond to the joint effort of those five users and a responsible party in charge of unifying their changes.
So, really, your users should be pushing to the development branch, and pushes to main should be reserved for things that need to be built.
How can we work around this if we're only allowed one branch or are required to have updates visible on one branch?
My recommendation would be to use the tag filter, i.e. filter the pushes by tag, as mentioned in the documentation. That way only the pushes made by the person in charge of merging the changes will be built (assuming that this person pushes the tag you've set).
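For example, a trigger created along these lines (repo details and the tag pattern are placeholders; check the exact flags against the gcloud reference) only fires for pushed tags matching the pattern:

gcloud builds triggers create github \
  --repo-owner=my-org \
  --repo-name=my-monorepo \
  --tag-pattern='^build-.*$' \
  --build-config=cloudbuild.yaml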
TL;DR
Don't create push triggers for Cloud Build on a branch multiple people are working on. Either create them with a tag filter or have separate development and main branches (people work on dev, builds are only made from pushes to main).

Disable jobs marked as failed if stuck for specific Gitlab runners

We have one Gitlab runner, which is intended for bench-marking purposes.
A job can take from a few minutes to possibly a few days.
This all works fine, until there are two jobs and one takes too long to complete.
The waiting job, after some time, complains that it is stuck.
Afterwards it is marked as failed, never to be executed at all.
This is very annoying. For our usual pipeline it makes sense, because either the runner is dead or the job's .gitlab-ci.yml is not set up properly.
However, here the waiting job just has to wait longer.
Can we disable this stuck->failed feature for this specific runner?
(The timeout of the job is set up correctly, so it is able to run that long, as explained here)
This is currently an open issue (https://gitlab.com/gitlab-org/gitlab/-/issues/19294).

How long will a workflow stay in a Status Reason of "Waiting" before it times out?

I'm wondering how long a Dynamics CRM workflow will stay in a Status Reason of "Waiting" before it times out/gets cancelled automatically?
I have a workflow for "Renewal" Opportunities with the following step: "Wait until Today's date >= 3 months before Renewal Date." Then, the record is updated. I'm worried that if the workflow has a status of "Waiting" for too long, it will cancel automatically. Will this be an issue? If yes, what is a better way to handle "Renewal Opportunities", so the Opportunity Name gets updated with the word "Renewal" 3 months before the date in the "Renewal Date" field?
Thanks!
It will wait indefinitely. But... as someone who has written products that rely on waiting workflows, I can say that there can be issues. Perhaps most prominent is the risk of the workflow getting cancelled before its trigger date - not "automatically" but by a user or user-defined process.
One client has routines that cancel waiting workflows on a regular basis. That broke everything all the time until we moved their scheduling out of workflows to an online scheduler.
In general it's fine to rely on waiting workflows that are scheduled months out, but it's also prudent to have a mechanism to confirm that they're operating and recover when they're not.
Aron did a good job of tackling the first part of your question.
If yes, what is a better way to handle "Renewal Opportunities", so the Opportunity Name gets updated with the word "Renewal" 3 months before the date in the "Renewal Date" field?
1) Create a procedure:
Often a manual procedure is more cost effective and reliable than developing automation.
- Create an Opportunities Pending Renewal view which shows all opportunities where the Renewal date is within X days of today.
- Create a Renew Opportunity workflow
- Put in place a process whereby a user regularly (once a month / once a week?) opens this view and runs the Renew Opportunity workflow.
This is a good option if the renewal does not need to occur on an exact date.
2) Have an external application launch the workflows:
You could write a lightweight scheduled application to carry out this operation. If you take this route, I recommend keeping as much of the configuration in CRM as possible by having the application execute over the results of a CRM view and kick off workflows to carry out the renewal action. That way when your business decides to change their rules (e.g. different renewal period) you just update the view criteria or workflow.
This is a good option if you have in-house dev power and if there are many such workflows that you can leverage your scheduled application to handle.
3) Have a plugin launch the workflows:
This is my personal preference. Same as Option 2 except rather than using a scheduled console application you let CRM host and schedule the job. Create a custom scheduled task entity, and set up a workflow which waits for some period (e.g. 24 hours) then creates a scheduled task record. Add plugin logic which fires on-create of scheduled task records, which carries out the same actions from option 2.
This is better than #2 for several reasons:
- Does not require external hosting, no integration concerns
- The job can be triggered manually simply by creating a scheduled task record
- You can add result logging to the scheduled task record
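A rough C# sketch of the plugin side of option 3 (entity and field names, the workflow GUID and the 3-month window are placeholders; the ExecuteWorkflowRequest call is the same one a console application from option 2 would make):

using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;
using Microsoft.Crm.Sdk.Messages;

// Registered on Create of the (hypothetical) new_scheduledtask entity.
public class RenewalSchedulerPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
        var factory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
        IOrganizationService service = factory.CreateOrganizationService(context.UserId);

        // Opportunities whose renewal date falls within the next 3 months.
        // In practice you would drive this from the criteria of a CRM view rather than hard-coding it.
        var query = new QueryExpression("opportunity") { ColumnSet = new ColumnSet("name") };
        query.Criteria.AddCondition("new_renewaldate", ConditionOperator.NextXMonths, 3); // field name hypothetical

        foreach (Entity opportunity in service.RetrieveMultiple(query).Entities)
        {
            // Launch the "Renew Opportunity" workflow against each record (GUID is a placeholder).
            service.Execute(new ExecuteWorkflowRequest
            {
                WorkflowId = new Guid("00000000-0000-0000-0000-000000000000"),
                EntityId = opportunity.Id
            });
        }
    }
}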
Other thoughts:
I won't pass judgement on whether the above options are "better" than waiting workflows; they all have different strengths and weaknesses. The Async service is much more reliable than it was historically, but I personally still try to avoid using workflows which wait for extended periods of time, primarily for system complexity and performance reasons. If you need automation and don't have in-house developers, then your best option probably is to set up waiting workflows.

TeamCity - Can I cancel currently running builds of the same configuration when starting a new build?

I have some long running integration tests that are automatically run by the TeamCity server when I commit to source control.
TeamCity allows me to prevent these tasks from taking up all the build agents concurrently by limiting the number of simultaneous builds; however, I wonder if it's possible to have TeamCity cancel any currently running tasks of this configuration when a new one starts?
In this environment as soon as there is a new commit to source control, old runs of the integration tests are irrelevant, so I don't want the server to waste its time running tests for old versions.
I don't think this is possible, and I would say this is by design.
Imagine a world where this is allowed: you would never know which commit caused a test to start failing. If you had enough overlapping commits, you could go through 50 builds before you learn that the final run fails, and you would have no idea whether it was the last commit or the one 49 before it that caused the failure.
IMHO you would be better off focusing your efforts on making it so that multiple runs can happen simultaneously on different servers, to get the speed-up you want, rather than throwing the baby out with the bathwater.
UPDATE
Whilst I don't think this is supported out of the box, if I had to do this I would look at getting a notification when a build starts (it seems there are no notifications for builds being queued, so you'll have to allow multiple builds to run concurrently for this to work) and then use the API to cancel the other builds.
You can get a list of builds using the API as well, so you should be able to cancel all the ones which are not the most recent.
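For example, with the TeamCity REST API (server URL, credentials and the build configuration ID are placeholders):

# list the currently running builds of the integration-test configuration
curl -s --user "$TC_USER:$TC_PASS" \
  "https://teamcity.example.com/app/rest/builds?locator=buildType:(id:IntegrationTests),running:true"

# cancel a build by its ID
curl -s --user "$TC_USER:$TC_PASS" \
  -X POST -H "Content-Type: application/xml" \
  -d "<buildCancelRequest comment='superseded by a newer commit' readdIntoQueue='false'/>" \
  "https://teamcity.example.com/app/rest/builds/id:12345"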
No, it's not possible. You can cancel it manually. Or you can add a quiet period (the default is 60 seconds) so a build doesn't start immediately when something has been pushed. Then if some commits arrive after a few seconds or minutes, they will be included in the same TeamCity build.
My solution is similar to Sam's update but I would use a "preamble" configuration that is triggered by commits to source control. This job is solely responsible for checking to see if any of your integration test jobs are already running and stopping them with a REST API call, as needed.
The main integrations tests are run from a dedicated job configuration that uses a build finish trigger associated with the preamble configuration.
This setup makes it quite straightforward to query which jobs are running and may need to be cancelled if there is newer work to do. So the steps become:
Preamble - cancel any running integration test job, triggered by VCS
Integration test - triggered by completion of a "Preamble" build

Delayed_job going into a busy loop when lodging second task

I am running delayed_job for a few background services, all of which, until recently, ran in isolation, e.g. send an email, write a report, etc.
I now have a need for one delayed_job, as its last step, to lodge another delayed_job.
delay.deploy() - when delayed_job runs this, it triggers a deploy action, the last step of which is to ...
delay.update_status() - when delayed_job runs this job, it will check the status of the deploy we started. If the deploy is still progressing, we call delay.update_status() again; if the deploy has stopped, we write the final deploy status to a db record.
Step 1 works fine - after 5 seconds, delayed_job fires up the deploy, which starts the deployment, and then calls delay.update_status().
But here,
instead of update_status() starting up in 5 seconds, delayed_job goes into a busy loop, firing off a bunch of update_status calls and looping really hard without pause.
I can see the logs filling up with all these calls, the server slows down, until the end-condition for update_status is reached (deploy has eventually succeeded or failed), and things get quiet again.
Am I using Delayed_Job::delay() incorrectly? Am I missing a basic tenet of this use case?
OK it turns out this is "expected behaviour" - if you are already in the code running for a delayed_job, and you call .delay() again, without specifying a delay, it will run immediately. You need to add the parameter run_at:
delay(queue: :deploy, run_at: 10.seconds.from_now).check_status
See the discussion in google groups
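Put together, update_status re-enqueueing itself could look roughly like this (check_deploy_status and the final db write are hypothetical stand-ins; the 10-second interval is just an example):

def update_status
  status = check_deploy_status   # hypothetical helper that asks the deploy target for its state

  if status == :in_progress
    # without run_at, calling delay from inside a running job runs again immediately,
    # which is what caused the busy loop
    delay(queue: :deploy, run_at: 10.seconds.from_now).update_status
  else
    deploy_record.update(final_status: status)   # hypothetical db record write
  end
end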
