Is there a way to auto-trigger job B exactly one hour after job A is triggered? The issue is that job A will not have finished its work by then; in the middle of the job itself it has to trigger job B, exactly one hour later. Alternatively, I could skip to build script 2 exactly one hour after script 1 starts executing. Is there any way to do this?
Thanks in advance
I cannot offer a good practice as a solution, but I can suggest two possible workarounds:
1. Build Pause
You can add a 'Command Line' shell pause as the last build step of project A or the first build step of project B. That pause must be set to one hour:
sleep 1h
You will need to reconfigure the default build timeout for this, or the job will fail.
2. Strict Scheduling
If you have some flexibility about when A can or should be triggered, you can use the 'Schedule Trigger' to schedule both A and B, e.g. if you schedule project A for 1 pm and project B for 2 pm, you make sure that there is at least one hour between the two. This can be scheduled as often as necessary.
I don't think what you are proposing is a good way to go about setting up the deployment, but I can think of a few workarounds that might help if you are forced in this direction.
In configuration A, add a build step which adds a scheduled build trigger to configuration B for an hour's time (using the API). In configuration B, add a build step at the end of the configuration to remove this scheduled trigger. This feels like a really horrible hack which should be avoided, but there are more details here.
Outside of TeamCity, make use of any pub/sub mechanism so that the deployment to the VM can publish an event when it has completed. Subscribe to this event and trigger a follow-on build using the TeamCity API. For example, if you are using AWS you could have an SNS topic with a Lambda function as a subscriber. This Lambda function would call the API to queue configuration B when the environment is in a suitable state.
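To illustrate the API call itself, here is a minimal sketch of queueing a build configuration via TeamCity's REST API; the server URL, credentials, and the build configuration ID ConfigB are placeholders you would substitute for your own setup:

# Queue build configuration 'ConfigB' via the TeamCity REST API
# (server URL, credentials, and configuration ID are placeholders).
curl -s -X POST "https://teamcity.example.com/app/rest/buildQueue" \
  -u "$TC_USER:$TC_PASS" \
  -H "Content-Type: application/xml" \
  -d '<build><buildType id="ConfigB"/></build>'

The same call works from a shell build step, a cron job, or the body of the Lambda subscriber described above.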
There are probably much nicer solutions if you share what deployment software you are using.
Related
I have this GitHub Action which should work only after three other Actions have completed. I have configured it this way:
on:
  workflow_run:
    workflows: ["Extrae usuario y repo", "Nombre contenedor Docker", "Crea artefacto con configuración"]
The problem is that I have noticed it triggers after each one of them runs, not when all three of them have completed. Is there any way to create these graph-based hooks in GitHub Actions? Alternatively, I guess I could just skip running until the goods (some artifacts) have already been created.
This is not possible out of the box. You need some kind of orchestration for that, such as an external stateful app which is aware of what has already run and, once your condition is met, fires the next workflow.
With workflow_run, your workflow is triggered every time one of the listed workflows completes.
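One way to approximate the "skip until ready" idea from the question is a guard step at the start of the triggered workflow that queries the GitHub REST API for the latest run of each prerequisite workflow and exits unless all of them succeeded. A minimal sketch; the workflow file names, OWNER, and REPO are placeholders for your repository:

# Guard step: exit non-zero unless the latest run of every prerequisite
# workflow concluded successfully ($GITHUB_TOKEN must be provided as env).
for wf in usuario-repo.yml contenedor-docker.yml artefacto-config.yml; do
  conclusion=$(curl -s -H "Authorization: Bearer $GITHUB_TOKEN" \
    "https://api.github.com/repos/$OWNER/$REPO/actions/workflows/$wf/runs?per_page=1" \
    | jq -r '.workflow_runs[0].conclusion')
  if [ "$conclusion" != "success" ]; then
    echo "$wf has not completed successfully yet; skipping."
    exit 1
  fi
done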
I have a number of different projects, with Jenkins CI jobs configured for each of them to run tests. When I create a new release, I have a second job that coordinates between a number of different jobs that go over each of the modules in the projects and update the versions and the dependencies in the pom.xml files. I would like to make the "update" job conditional on the status of all the CI jobs, meaning that if one of the CI jobs is not green, then the update job will not run at all.
I had a look at the Run Condition Plugin as well as the Conditional BuildStep Plugin; however, it does not seem possible to configure them to depend on the status of another Jenkins job.
You could hit the other jobs via the API at [JOB_URL]/lastCompletedBuild/api/json and verify the result for each.
To mess around with this:
curl [JOB_URL]/lastCompletedBuild/api/json | jq '.result'
You probably want result to say SUCCESS.
This is not fancy, but you don't want fancy in CI; you want something that is not likely to break when you upgrade Jenkins. :)
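Scaled up to the question's scenario, a first build step of the update job could loop over every CI job and abort unless each one's last completed build succeeded. A minimal sketch, with the Jenkins URL and job names as placeholders (authentication omitted):

# Abort the update job unless every CI job's last completed build succeeded.
for job in module-a-ci module-b-ci module-c-ci; do
  result=$(curl -s "https://jenkins.example.com/job/$job/lastCompletedBuild/api/json" \
    | jq -r '.result')
  if [ "$result" != "SUCCESS" ]; then
    echo "$job last result: $result -- not running the update."
    exit 1
  fi
done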
Have a look at the Multijob Plugin (https://wiki.jenkins.io/display/JENKINS/Multijob+Plugin).
In your case, you can add the job in a first phase and configure, in that phase, under which result condition of the first phase you want the second phase to run.
Again, in the second phase you can configure one or many jobs, and you can also configure whether you want to run them in parallel.
I need to trigger custom logic (e.g. a shell script) once a TeamCity job fails. How can I do that?
I already found the answer. It can be achieved by adding a build step that is executed even if any of the previous steps failed or were stopped.
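As a sketch of what such a final step could look like: a 'Command Line' step whose execution policy is set to run even if previous steps failed, which asks the TeamCity REST API for the current build's status and runs the custom logic on failure. The server URL, credentials, and the hook script name are placeholders:

# Final build step, configured to execute even if previous steps failed.
# %teamcity.build.id% is substituted by TeamCity at run time;
# on_failure_hook.sh is a hypothetical script holding the custom logic.
status=$(curl -s -u "$TC_USER:$TC_PASS" \
  "https://teamcity.example.com/app/rest/builds/id:%teamcity.build.id%/status")
if [ "$status" != "SUCCESS" ]; then
  ./on_failure_hook.sh
fi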
I've just started using wercker and I'd like a job to run regularly (e.g. daily, hourly). I realize this may be an anti-pattern, but is it possible? My intent is not to keep the container running indefinitely, just that my workflow is executed on a particular interval.
You can use a call to the Wercker API to trigger a build for any project which is set up already in Wercker.
So maybe set up a cron job somewhere that uses curl to make the right API call?
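For example (a sketch only; the v3 builds endpoint and payload shape are assumptions to verify against the Wercker API docs), a small script run hourly from cron:

# crontab entry (hourly):
# 0 * * * * /usr/local/bin/trigger_wercker.sh

#!/bin/sh
# trigger_wercker.sh -- queue a Wercker build via the API.
# WERCKER_TOKEN and YOUR_APP_ID are placeholders; the endpoint is an
# assumption based on Wercker's v3 API.
curl -s -X POST "https://app.wercker.com/api/v3/builds" \
  -H "Authorization: Bearer $WERCKER_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"applicationId": "YOUR_APP_ID"}'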
I have an upstream job that executes 4 downstream jobs.
If the upstream job finishes successfully, the downstream jobs start their execution.
The upstream job, since it finishes successfully, gets a blue ball (build result = stable), but even though the downstream jobs fail (red ball) or are unstable (yellow ball), the upstream job keeps its blue color.
Is there any way to make the result of the upstream job depend on the downstream jobs? I mean, if three downstream jobs get a stable build but one of them gets an unstable build, the upstream build result should be unstable.
I found the solution. There is a plugin called Groovy Postbuild Plugin that lets you execute a Groovy script in the post-build phase.
By adding a simple script to the downstream jobs you can modify the upstream job's overall status.
This is the code you need to add:
// Get the upstream build(s) that triggered this downstream build
upstreamBuilds = manager.build.getUpstreamBuilds();
// Take the first upstream job and fetch its most recent build
upstreamJob = upstreamBuilds.keySet().iterator().next();
lastUpstreamBuild = upstreamJob.getLastBuild();
// Downgrade the upstream build's result if this build's result is worse
if(lastUpstreamBuild.getResult().isBetterThan(manager.build.result)) {
    lastUpstreamBuild.setResult(manager.build.result);
}
You can find more information in the entry on my blog here.
Another option that might work for you is to use the Parameterized Trigger plugin. It allows you to run your 4 "downstream" builds as build steps. This means that your "parent" build can fail if any of the child builds do.
We do this when we want to hide complexity for the build-pipeline plugin view.
We had a similar sort of issue and haven't found a perfect solution. A partial solution is to use the Promoted Builds Plugin. Configure it for your upstream project to include some visual indicator when the downstream job finishes. It doesn't change the overall job status, but it does notify us when the downstream job fails.
Perhaps this plugin does what you are looking for?
Jenkins Prerequisite build step Plugin
The workaround for my project is to create a new job which is downstream of the downstream jobs. We set a post-build step "Trigger parameterized build on other projects" in all three of the original downstream jobs. The parameter passed into the new job depends on the three jobs' statuses, and the parameter causes the new job to react accordingly.
1. Create a new job which contains one simple class and one simple test. Both depend on the parameter, i.e. the class fails if parameter "status" = fail, the class passes but the test fails if parameter "status" = unstable, etc.
2. Set "Trigger parameterized build on other projects" for the three original downstream jobs with the relevant configurations.
3. Set the notifications of the new job accordingly.
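As a shell-based variant of the same idea (a sketch only; the marker string and the use of the Text-finder plugin to flag the build unstable are my assumptions, not part of the original answer), the new job's build step could react to the parameter directly:

# Build step of the aggregating job: react to the 'status' parameter
# passed in by the downstream jobs.
case "$status" in
  fail)     exit 1 ;;                # fail the build outright
  unstable) echo "MARK_UNSTABLE" ;;  # e.g. let the Text-finder plugin mark the build unstable on this marker
  *)        exit 0 ;;                # stable
esac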