How can I set a timeout and run builds on multiple agents in TeamCity? - performance

In TeamCity, every push and merge request in my current queue automatically starts a build that runs all the tests and then publishes to NPM. So if there is a bug in some test, the TeamCity agent waits a long time for that test to complete and my queue does not move.
I want to set a timeout, for example: after 4 minutes, reject the branch and move on to the next one. I also want to run builds on multiple agents.

Look here, it will help you: link

Related

Re-running Cypress tests in GitHub Actions does not work

I have a Cypress workflow in GitHub Actions and it runs nicely. But when the e2e tests fail for some reason and I want to re-run them using the "Re-run all jobs" button, the following message appears:
The run you are attempting to access is already complete and will not accept new groups.
The existing run is: https://dashboard.cypress.io/projects/abcdef/runs
When a run finishes all of its groups, it waits for a configurable set of time before finally completing. You must add more groups during that time period.
The --tag flag you passed was:
The --group flag you passed was: core
What should I change in my configuration to make this possible? Sometimes the e2e tests fail because of a backend error that is fixed later.
I'd like to do this instead of making a commit just to force the e2e run.
I was facing the same issue before.
I think you can try to pass GITHUB_TOKEN or add a custom build id. That fixed my issue. Hope it helps.
https://github.com/cypress-io/github-action#custom-build-id
Check your Cypress Dashboard subscription plan. I had filled up the free plan (500 test results; I was running 57 tests in 3 different browsers, which is 171 tests in one run, so it filled up pretty quickly), and after that it did not allow me to keep running or re-running more parallel tests. Tests kept running, but only on 1 machine out of 4 in the first browser, and the stages for the other 2 browsers started failing. I was able to keep the pipeline from failing by passing continueOnError: true in the configuration.
Quick edit: I don't remember where, but I read that you could also add a delay to your pipeline and/or reduce the default run-completion delay on the Dashboard, which is 60s (https://docs.cypress.io/guides/guides/parallelization#Run-completion-delay).

How to tell Octopus Deploy to wait until another deployment finishes on the same machine?

Sometimes it is preferred and/or required to host dozens of applications on a single server. Not saying this is "right" or "wrong," I'm only saying that it happens.
A downside to this configuration is the error message Waiting for the script in task [TASK ID] to finish as this script requires that no other Octopus scripts are executing on this target at the same time appears whenever more than one deployment to the same machine is running. It seems like Octopus Deploy is fighting itself.
How can I configure Octopus Deploy to wait for one deployment to completely finish before the next one is started?
Before diving into the answer, it is important to understand why that message appears in the first place. Each time a step is run on a deployment target, the Tentacle will create a "mutex" to prevent other projects from interfering with it. An early use case for this was updating the IIS metabase during a deployment. In certain cases, concurrent updates would cause random errors.
Option 1: Disable the Mutex
We've seen cases where the mutex is the cause of the delay. The mutex is applied per step, not per deployment. It is common to see a situation where it looks like Octopus is "jumping" between deployments. Depending on the number of concurrent deployments, that can slow down the deployment. The natural thought is to disable the mutex altogether.
It is possible to disable the mutex by adding the variable OctopusBypassDeploymentMutex and setting it to True. That variable can exist in either a specific project or in a variable set.
More details on what that variable does can be found in this document. If you do disable the mutex please test it and monitor for any failures. For the most part, we don't see issues disabling the mutex, but it has happened from time to time. It depends on a host of other factors such as application type and Windows version.
Option 2: Leverage Deploy a Release Step
Another option is to coordinate the projects using the deploy a release step. Typically this works best when the projects being deployed are part of the same application suite. In my example I have five "deployment" projects:
Azure Worker IaC
Database Worker IaC
Kubernetes Worker IaC
Script Worker IaC
OctoStudy
The project Unleash the Kraken coordinates deployments for those projects.
It does this by using the Deploy a Release step. First it spins up all the infrastructure, then it deploys the application.
This won't work as well if the server is hosting 50 disparate applications.
Option 3: Leverage the API to check for running deployments
The final option is to include a step at the start of each project which hits the API to check for active deployments on the same deployment targets. If an active deployment is found, then wait until it is done.
You can do this by hitting the endpoint https://[YOUR URL]/api/[SPACE ID]/machines/[Machine Id]/tasks?skip=0&name=Deploy&states=Executing%2CCancelling&spaces=[SPACE ID]&includeSystem=false. That will tell you all the active tasks being run for a specific machine.
You can get Machine Id by pulling the value from Octopus.Deployment.Machines. You can get Space Id by pulling the value from Octopus.Space.Id.
The pseudo code for this approach could look like this (I'm not including the actual code as your requirements could be very different).
activeDeployments = true
while (activeDeployments)
{
    activeDeployments = false
    foreach (machineId in Octopus.Deployment.Machines)
    {
        activeTasks = GET https://[YOUR URL]/api/[Octopus.Space.Id]/machines/[machineId]/tasks?skip=0&name=Deploy&states=Executing%2CCancelling&spaces=[Octopus.Space.Id]&includeSystem=false
        if (activeTasks.Count > 0)
        {
            activeDeployments = true
        }
    }
    if (activeDeployments == true)
    {
        Sleep for 5 seconds
    }
}
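If it helps, here is a rough, runnable Python version of that loop. It is only a sketch: the OCTOPUS_* environment variables, the X-Octopus-ApiKey header, and the TotalResults check reflect my assumptions about how you would wire it up; inside a deployment step you would read the machine ids from the Octopus.Deployment.Machines variable instead.

import json
import os
import time
import urllib.request

server = os.environ["OCTOPUS_URL"]            # e.g. https://octopus.example.com (placeholder)
api_key = os.environ["OCTOPUS_API_KEY"]
space_id = os.environ["OCTOPUS_SPACE_ID"]
machine_ids = os.environ["OCTOPUS_MACHINE_IDS"].split(",")   # ids from Octopus.Deployment.Machines

while True:
    active = False
    for machine_id in machine_ids:
        url = (f"{server}/api/{space_id}/machines/{machine_id}/tasks"
               f"?skip=0&name=Deploy&states=Executing%2CCancelling"
               f"&spaces={space_id}&includeSystem=false")
        req = urllib.request.Request(url, headers={"X-Octopus-ApiKey": api_key})
        with urllib.request.urlopen(req) as resp:
            tasks = json.load(resp)
        # The endpoint returns a paged collection; any result means a deployment is active.
        # Depending on where this runs, you may need to exclude the current deployment's own task.
        if tasks.get("TotalResults", 0) > 0:
            active = True
    if not active:
        break
    time.sleep(5)   # wait and check again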
I had this message hit me because I had hit the task cap on the Octopus Server.
In Octopus\Configuration\Nodes, change the task cap to 1 to have one deployment at a time even with agents on different servers (the message will then display constantly), or simply increase the value to prevent the message from occurring at all.

Pausing TeamCity builds that are running

I would like to have a TeamCity build configuration with the following build steps:
Build an artifact to perform tests on & install on remote server
Kick off long running test job on remote server
Pause build awaiting external event (i.e. remote job finishing)
Retrieve results and record the report
I have had a look through the documentation and I can see how I can pause (step 3) the entire build configuration (which stops any additional builds running) ... but not just a single running build.
The step 2 script that runs the external job has the various parameters passed to it, so that it can issue a REST call back to the TeamCity server to resume the build job.
Basically I don't want to tie up a build agent waiting the entire hour the test takes to run.
I have googled and everything I can find points me at pausing the build configuration.
I am currently having to look at splitting the build configuration into two. The first will kick off the test job and finish. Then, when the external test job finishes, it will call TeamCity to start a second job to retrieve and store the reports. But that feels disconnected to me, in that I will not be able to show a single job with build/test/report.
At the moment (TeamCity v 2018.1) there is no direct way to pause the build, release the build agent, and later resume the execution.
What you described is the recommended workaround.
Also, please watch/vote for related issue: https://youtrack.jetbrains.com/issue/TW-30777
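For what it's worth, here is a minimal sketch of the callback half of that workaround: the external test job queuing the follow-up "retrieve and store the reports" build via the TeamCity REST API. The server URL, the credentials, the MyProject_RetrieveReports build configuration ID and the property name are placeholders, not something TeamCity defines for you.

import base64
import os
import urllib.request

server = os.environ["TEAMCITY_URL"]        # e.g. https://teamcity.example.com (placeholder)
user = os.environ["TEAMCITY_USER"]
password = os.environ["TEAMCITY_PASSWORD"]

# Queue a build of the second configuration; the property carries context
# (such as the external test run id) from the test job into the report build.
payload = """<build>
  <buildType id="MyProject_RetrieveReports"/>
  <properties>
    <property name="external.test.run.id" value="12345"/>
  </properties>
</build>"""

req = urllib.request.Request(
    f"{server}/httpAuth/app/rest/buildQueue",
    data=payload.encode("utf-8"),
    method="POST",
    headers={
        "Content-Type": "application/xml",
        "Authorization": "Basic " + base64.b64encode(f"{user}:{password}".encode()).decode(),
    },
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode()[:200])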

Rebuild in batch with Buildbot

I've been using Buildbot for a few months now and I'm really happy with it. It's hooked up to GitHub.
I'm working on image processing software, and processing time is really important. Until recently I was only doing automatic builds + tests, and I'm now also monitoring the time spent in the tests.
Because the monitoring of processing time was implemented only recently, I'd like to build all the previous commits from the last few months, so I can spot potential processing-time regressions.
I can trigger a manual build on a particular commit with the ForceScheduler, but is there an easy way to do that for the last 500 commits, for instance?
Assuming you have a PBChangeSource configured as a change source, you can simply write a script that calls buildbot sendchange on the command line:
buildbot sendchange --master {MASTERHOST}:{PORT} --auth {USER}:{PASS}
--who {USER} {FILENAMES..}
see: http://buildbot.readthedocs.org/en/latest/manual/cmdline.html
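A rough sketch of such a script, assuming a git checkout and driving the command shown above for the last 500 commits. The master address, credentials, who, and branch are placeholders; check the sendchange options against your Buildbot version.

import subprocess

MASTER = "MASTERHOST:9989"   # placeholder master host:port
AUTH = "USER:PASS"           # placeholder credentials
WHO = "batch-rebuild"

# Oldest first, so the builds roughly follow the original commit order.
revisions = subprocess.check_output(
    ["git", "rev-list", "--reverse", "--max-count=500", "HEAD"],
    text=True,
).split()

for rev in revisions:
    # Files touched by the commit; sendchange takes the changed filenames.
    files = subprocess.check_output(
        ["git", "diff-tree", "--no-commit-id", "--name-only", "-r", rev],
        text=True,
    ).split()
    if not files:
        continue   # e.g. merge commits; skip or handle as you prefer
    subprocess.check_call(
        ["buildbot", "sendchange",
         "--master", MASTER,
         "--auth", AUTH,
         "--who", WHO,
         "--revision", rev,
         "--branch", "master",
         *files]
    )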

Dynamically identify a Jenkins build

Currently, I am initiating a build by posting a few parameters to Jenkins from a shell script. I need to check whether the build succeeded or failed, and I want to avoid using post-build Jenkins script calls (I don't want Jenkins to initiate the running of any scripts on my server). So the idea was to post to Jenkins every 10 seconds or so (while building != false) in order to get the JSON object with the various build parameters. While this works fine if I know the build number of the build I want to check on, I can't see a good way to dynamically keep track of the current build number and make sure my script is checking on the build it just initiated and not some other build currently running.
Potentially, there could be multiple builds initiated within a short period of time, so posting to jenkins/job/my_build_job/lastBuild/api/json just after starting the build and checking the number that way doesn't seem appropriate, given the potential for race conditions.
How can I keep track of a particular build dynamically from a script on my server in order to check the build success or failure of a build initiated from a post called by cron? Is there perhaps a way to name a build so I could initiate it with BUILD_NAME and then post to jenkins/job/my_build_job/BUILD_NAME/api/json?
There are a couple of different API calls you can make:
jenkins/job/my_build_job/api/json?tree=lastBuild[number]
will give you either the last completed build or the current build in progress
jenkins/job/my_build_job/api/json?tree=nextBuildNumber
will give you the next build number - this includes builds that are queued up waiting for resources.
There is already an issue filed in Jenkins to return the build number in the Jenkins remote API call: https://issues.jenkins-ci.org/browse/JENKINS-12827. Please add comments there so it can be worked on as soon as possible.
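A minimal sketch of the polling approach using those calls. The Jenkins URL and job name are placeholders, authentication/CSRF handling is omitted, and the question's caveat still applies: if several builds can be triggered almost simultaneously, the number recorded from nextBuildNumber may not be your build.

import json
import time
import urllib.error
import urllib.request

JOB = "https://jenkins.example.com/job/my_build_job"   # placeholder

def get_json(url):
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# Build number the job will use next (this also counts builds waiting in the queue).
build_number = get_json(f"{JOB}/api/json?tree=nextBuildNumber")["nextBuildNumber"]

# Trigger the build (add authentication / parameters as your setup requires).
urllib.request.urlopen(urllib.request.Request(f"{JOB}/build", method="POST"))

# Poll every 10 seconds until that build exists and has a result.
while True:
    try:
        info = get_json(f"{JOB}/{build_number}/api/json?tree=building,result")
        if not info["building"] and info["result"] is not None:
            print(f"Build {build_number} finished: {info['result']}")
            break
    except urllib.error.HTTPError:
        pass   # the build has not started yet, keep waiting
    time.sleep(10)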
