Dynamically identify a Jenkins build - bash

Currently, I am initiating a build by posting a few parameters to Jenkins from a shell script. I need to check whether the build succeeded or failed, and I want to avoid the post-build Jenkins script calls (I don't want Jenkins to initiate the running of any scripts on my server). The idea is to post to Jenkins every 10 seconds or so (while building != false) in order to get the JSON object with the various build parameters. This works fine when I know the build number of the build I want to check on, but I can't see a good way to dynamically keep track of the current build number and make sure my script is checking on the build it just initiated and not some other build currently running.
Potentially, multiple builds could be initiated within a short period of time, so posting to jenkins/job/my_build_job/lastBuild/api/json just after starting the build and checking the number that way doesn't seem appropriate, given the obvious race conditions.
How can I keep track of a particular build dynamically from a script on my server, in order to check the success or failure of a build initiated by a POST called from cron? Is there perhaps a way to name a build, so I could initiate it with BUILD_NAME and then post to jenkins/job/my_build_job/BUILD_NAME/api/json?

There are a couple of different API calls you can make:
jenkins/job/my_build_job/api/json?tree=lastBuild[number]
will give you either the last completed build or the current build in progress
jenkins/job/my_build_job/api/json?tree=nextBuildNumber
will give you the next build number - this includes builds that are queued up waiting for resources.
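For illustration, here is a rough sketch of the polling loop in bash, assuming curl and jq are available; the host, job URL, and parameter are placeholders, and authentication/CSRF crumbs are left out. As you noted, reading nextBuildNumber just before triggering is still slightly racy if several builds can start at almost the same time:
JENKINS=http://jenkins.example.com/job/my_build_job   # placeholder job URL
# Read the number the next build will get, then trigger the build with parameters.
NUM=$(curl -s "$JENKINS/api/json?tree=nextBuildNumber" | jq -r '.nextBuildNumber')
curl -s -X POST "$JENKINS/buildWithParameters?MY_PARAM=value"
# Poll every 10 seconds until that specific build reports building=false.
# (The per-build URL may 404 for a moment while the build sits in the queue,
# in which case the loop simply keeps waiting.)
while [ "$(curl -s "$JENKINS/$NUM/api/json?tree=building" | jq -r '.building')" != "false" ]; do
  sleep 10
done
curl -s "$JENKINS/$NUM/api/json?tree=result" | jq -r '.result'   # SUCCESS, UNSTABLE, FAILURE, ...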

There is already an issue filed in Jenkins to return the build number in the Jenkins remote API call: https://issues.jenkins-ci.org/browse/JENKINS-12827. Please add comments there so it can be worked on as soon as possible.

Related

Re-run Cypress test in Github Actions does not work

I have a Cypress workflow in GitHub Actions and it runs nicely. But when the e2e tests fail for some reason and I want to re-run them using the re-run all jobs button, the following message appears:
The run you are attempting to access is already complete and will not accept new groups.
The existing run is: https://dashboard.cypress.io/projects/abcdef/runs
When a run finishes all of its groups, it waits for a configurable set of time before finally completing. You must add more groups during that time period.
The --tag flag you passed was:
The --group flag you passed was: core
What should I change in my configuration to make this possible? Sometimes the e2e tests fail because of a backend error that is fixed later.
I'd like to do this instead of pushing a commit just to force a new e2e run.
I was facing the same issue before.
I think you can try to pass GITHUB_TOKEN or add a custom build id. It fixed my issue. Hope it helps.
https://github.com/cypress-io/github-action#custom-build-id
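For what it's worth, here is roughly what that looks like if you call the Cypress CLI yourself from the workflow (the record key and group name are placeholders). GITHUB_RUN_ID stays the same when you hit re-run, so appending GITHUB_RUN_ATTEMPT gives the Dashboard a build id it hasn't seen yet:
# Hypothetical workflow step; record key and group name are placeholders.
# GITHUB_RUN_ID is identical across re-runs, so GITHUB_RUN_ATTEMPT is appended
# to produce a fresh ci-build-id for each re-run.
npx cypress run --record --key "$CYPRESS_RECORD_KEY" \
  --parallel --group core \
  --ci-build-id "${GITHUB_RUN_ID}-${GITHUB_RUN_ATTEMPT}"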
Check your Cypress Dashboard subscription plan. Mine hit the free plan's limit (500 recorded tests; I was running 57 tests in 3 different browsers, i.e. 171 test results per run, so it filled up pretty quickly), and after that it didn't allow me to keep running or re-running more parallel tests. Tests kept running, but on only 1 machine out of 4 in the first browser, and the stages for the other 2 browsers started failing. I was able to keep the pipeline from failing by passing continueOnError: true in the configuration.
Quick edit: I don't remember where, but I read that you could also add a delay to your pipeline and/or reduce the run completion delay on the Dashboard, which defaults to 60s (https://docs.cypress.io/guides/guides/parallelization#Run-completion-delay).

Laravel: How to detect if code is being executed from within a queued job, as opposed to manually run from the CLI

I found this similar question How to check If the current app process is running within a queue environment in Laravel
But actually this is the opposite of what I want. I want to be able to distinguish between code being executed manually from an artisan command launched on the CLI, and a job being run as a result of a POST trigger via a controller or as a scheduled run.
Basically, I want to distinguish between a job being run via the sync driver, manually triggered by the developer with eyes on the CLI output, and everything else.
app()->runningInConsole() returns true in both cases, so it is not useful to me.
Is there another way to detect this? For example, is there a way to detect the currently used queue connection? Keep in mind that it's possible to change the queue connection at runtime, so just checking the value in the env file is not enough.

Pausing Teamcity builds that are running

I would like to have a TeamCity build configuration with the following build steps:
1. Build an artifact to perform tests on & install it on a remote server
2. Kick off a long-running test job on the remote server
3. Pause the build, awaiting an external event (i.e. the remote job finishing)
4. Retrieve the results and record the report
I have had a look through the documentation, and I can see how to pause the entire build configuration (which stops any additional builds from running), but not how to pause just a single running build, as in step 3.
The step 2 script that runs the external job has the various parameters passed to it, so that it can issue a REST call back to the TeamCity server to resume the build.
Basically I don't want to tie up a build agent waiting the entire hour the test takes to run.
I have googled and everything I can find points me at pausing the build configuration.
I am currently having to look at splitting the build configuration into two. The first will kick off the test job and finish. Then, when the external test job finishes, it will call TeamCity to start a second job that retrieves and stores the reports. But that feels disconnected to me, in that I will not be able to show a single job with build/test/report.
At the moment (TeamCity v 2018.1) there is no direct way to pause the build, release the build agent, and later resume the execution.
What you described is the recommended workaround.
Also, please watch/vote for related issue: https://youtrack.jetbrains.com/issue/TW-30777
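For what it's worth, under that two-configuration workaround the external test job can queue the second ("retrieve results") configuration through the TeamCity REST API when it finishes. A minimal sketch, with placeholder server, credentials, and build configuration ID:
# Placeholder server, credentials and build configuration ID.
curl -u rest-user:rest-password \
  -H "Content-Type: application/xml" \
  -d '<build><buildType id="MyProject_RetrieveReports"/></build>' \
  "https://teamcity.example.com/app/rest/buildQueue"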

Jenkins/Hudson upstream job does not get the status "ball" color of the downstream jobs

I have an upstream job that executes 4 downstream jobs.
If the upstream job finishes successfully, the downstream jobs start their execution.
The upstream job, since it finishes successfully, gets a blue ball (build result = stable), but even though the downstream jobs fail (red ball) or are unstable (yellow ball), the upstream job keeps its blue color.
Is there any way to make the result of the upstream job depend on the downstream jobs? I mean, if three downstream jobs get a stable build but one of them gets an unstable build, the upstream build result should be unstable.
I found the solution. There is a plugin called Groovy Postbuild plugin that lets you execute a Groovy script in the post-build phase.
Adding a simple snippet to the downstream jobs, you can modify the upstream build's overall status.
This is the code you need to add:
// Find the upstream build that triggered this downstream build.
upstreamBuilds = manager.build.getUpstreamBuilds();
upstreamJob = upstreamBuilds.keySet().iterator().next();
lastUpstreamBuild = upstreamJob.getLastBuild();
// Downgrade the upstream result if this build's result is worse.
if (lastUpstreamBuild.getResult().isBetterThan(manager.build.getResult())) {
    lastUpstreamBuild.setResult(manager.build.getResult());
}
You can find more info in the entry of my blog here.
Another option that might work for you is to use the parameterised build plugin. It allows you to have your 4 "downstream" builds as build steps. This means that your "parent" build can fail if any of the child builds do.
We do this when we want to hide complexity for the build-pipeline plugin view.
We had a similar sort of issue and haven't found a perfect solution. A partial solution is to use the Promoted Builds Plugin. Configure it for your upstream project to include some visual indicator when the downstream job finishes. It doesn't change the overall job status, but it does notify us when the downstream job fails.
Perhaps this plugin does what you are looking for?
Jenkins Prerequisite build step Plugin
The workaround for my project is to create a new job, which is the downstream of the downstream jobs. We set a post-build step "Trigger parameterized build on other projects" in all three of the original downstream jobs. The parameter passed into the new job depends on the three jobs' status, and it causes the new job to react accordingly.
1. Create the new job, which contains one simple class and one simple test. Both depend on the parameter, i.e. the class fails if parameter "status" = fail, the class passes but the test fails if parameter "status" = unstable, etc.
2. Set "Trigger parameterized build on other projects" for the three original downstream jobs with the relevant configurations.
3. Set the notifications of the new job accordingly.

How to trigger a hudson job by another job which is in a different hudson

I have job A in Hudson A and job B in Hudson B. I want to trigger job A from job B.
In your job A configuration, check the Trigger builds remotely (e.g., from scripts) checkbox and provide a token.
The help text there shows you the URL you can call to trigger a build from remote scripts (e.g. from a shell script in Hudson job B).
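For example, a shell step at the end of job B could call something like this (placeholder host, job name, and token; add credentials if anonymous users are not allowed to trigger builds):
# Placeholder Hudson A host, job name and token from the "Trigger builds remotely" option.
curl -X POST "http://hudson-a.example.com/job/JobA/build?token=MY_TOKEN"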
However, that would trigger job A no matter what the result of job B is.
Morechilli's answer is probably the best solution.
I haven't used Hudson but I would guess your simplest approach would be to use the URL trigger:
http://wiki.hudson-ci.org/display/HUDSON/URL+Change+Trigger
I think there is a latest build url that could be used for this.
In the latest versions of Hudson, the lastSuccessfulBuild/ HTML page contains the elapsed time since it was built, which will be different for each call. This causes the URL Change Trigger to spin.
One fix is to use the xml, json, or python APIs to request only a subset of the information. Using the 'tree' request parameter, the following URL will return an XML document containing only the build number of the last successful build.
http://SERVER:PORT/job/JOBNAME/lastSuccessfulBuild/api/xml?tree=number
Using this URL restored the behavior I expected from the URL Change Trigger.
Personally, I find the easiest way to do this is to watch the build timestamp:
PROJECT_NAME/lastSuccessfulBuild/buildTimestamp
I'm using wget to trigger the build:
wget --post-data 'it-just-need-to-be-a-POST-request' \
  --auth-no-challenge --http-user=myuser --http-password=mypassword \
  http://jenkins.xx.xx/xxx/job/A/build?delay=0sec
There are other ways to trigger a build; see the REST and other APIs of Jenkins.
But this works great on Unix.
