how to link two workflows in a single coordinator - hadoop

I have two workflow XML files, WF1 and WF2. Can I link both of these workflows in a single coordinator for the actions below? WF1 is time dependent; WF2 has no dependency on anything. I want the next workflow to execute after completion of the first one, within a single coordinator.

The best way to go forward is to use a subworkflow in Oozie; for details see: Sub-workflow_Action.
A similar question was also asked here.
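For illustration, here is a minimal sketch of such a parent workflow (the application paths and action names are hypothetical). A coordinator then schedules this parent workflow at the required time, which gives WF1 its time dependency, and WF2 runs only after WF1 completes successfully:

<workflow-app name="parent-wf" xmlns="uri:oozie:workflow:0.5">
    <start to="run-wf1"/>
    <action name="run-wf1">
        <sub-workflow>
            <!-- HDFS path of WF1's workflow.xml (hypothetical) -->
            <app-path>${nameNode}/user/oozie/apps/wf1</app-path>
            <propagate-configuration/>
        </sub-workflow>
        <ok to="run-wf2"/>
        <error to="fail"/>
    </action>
    <action name="run-wf2">
        <sub-workflow>
            <!-- HDFS path of WF2's workflow.xml (hypothetical) -->
            <app-path>${nameNode}/user/oozie/apps/wf2</app-path>
            <propagate-configuration/>
        </sub-workflow>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Sub-workflow failed: [${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>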

Related

I want to trigger multiple task runs that differ by their input parameters in a Databricks job

I am trying to run a job that includes a task which needs to run multiple times in parallel using different parameter values.
I understand that this is possible based on this post:
https://docs.databricks.com/data-engineering/jobs/jobs.html#maximum-concurrent-runs
But I can't figure out how.
To create a trigger on multiple jobs using the Databricks UI, follow this path:
Workflows > Jobs > Create
Here, give the task a name and select its Type, Source, and Path.
You can also add parameters for the task here.
Under Advanced options you can add dependent libraries, edit email notifications, edit the retry policy, and edit the timeout.
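If you prefer the REST API over the UI, a hedged sketch of the same idea: raise the job's "Maximum concurrent runs" setting above 1, then call run-now once per parameter set (the job id 123 and the "input" parameter are hypothetical):

# first run of the job with one parameter set
curl -X POST "https://<workspace-url>/api/2.1/jobs/run-now" \
  -H "Authorization: Bearer $DATABRICKS_TOKEN" \
  -d '{"job_id": 123, "notebook_params": {"input": "batch-a"}}'

# second run of the same job, overlapping the first, with a different parameter
curl -X POST "https://<workspace-url>/api/2.1/jobs/run-now" \
  -H "Authorization: Bearer $DATABRICKS_TOKEN" \
  -d '{"job_id": 123, "notebook_params": {"input": "batch-b"}}'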

Defer triggering a GitHub action until all prerequisites have completed

I have this GitHub Action which should work only after three other Actions have completed. I have configured it this way:
on:
  workflow_run:
    workflows: ["Extrae usuario y repo", "Nombre contenedor Docker", "Crea artefacto con configuración"]
The problem is that, as I have noticed, it triggers after every one of them has run, not once all three of them have completed. Is there any way to create these graph-based hooks in GitHub Actions? Alternatively, I guess I could just skip running until the goods (some artifacts) have already been created.
This is not possible out of the box. You need some kind of orchestration for that, such as an outside stateful app which is aware of what has already run and which, once your condition passes, fires the next workflow.
With workflow_run, your workflow triggers every time one of the listed workflows completes.
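One workaround sketch, under the assumption that polling the API is acceptable: keep the workflow_run trigger, but add a gate step that asks the GitHub API whether the latest run of every prerequisite workflow succeeded, and fails fast otherwise (the workflow file names in the loop are hypothetical):

on:
  workflow_run:
    workflows: ["Extrae usuario y repo", "Nombre contenedor Docker", "Crea artefacto con configuración"]
    types: [completed]
jobs:
  main:
    runs-on: ubuntu-latest
    steps:
      - name: Gate on all prerequisite workflows
        env:
          GH_TOKEN: ${{ github.token }}
        run: |
          # fail fast unless the latest run of every prerequisite succeeded
          for wf in extrae.yml contenedor.yml artefacto.yml; do
            conclusion=$(gh api "repos/${{ github.repository }}/actions/workflows/$wf/runs?per_page=1" \
              --jq '.workflow_runs[0].conclusion')
            if [ "$conclusion" != "success" ]; then
              echo "$wf has not succeeded yet; stopping."
              exit 1
            fi
          done
      - name: Real work
        run: echo "All three prerequisites have completed."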

Auto Trigger B job after triggering A job in TeamCity

Is there a way that I can auto trigger job B exactly one hour after triggering job A? The issue is that job A will not have finished its work by then; in the middle of job A it has to trigger job B, exactly one hour later. The other option would be to skip to build script 2 exactly one hour after execution of script 1 starts. Is there any way to do this?
Thanks in advance
I cannot offer a good practice as a solution, but I can suggest two possible workarounds:
1. Build Pause
You can add a 'Command Line' shell pause as the last build step of project A or the first build step of project B. That pause must be set to one hour:
sleep 1h
You need to reconfigure the default build timeout for this or the job will fail.
2. Strict Scheduling
If you have some flexibility on the time at which A can or should be triggered, you can use the 'Schedule Trigger' to schedule both A and B, e.g. if you schedule project A at 1 pm and project B at 2 pm, you make sure that there is at least one hour between the two. This can be scheduled as often as necessary.
I don't think what you are proposing is a good way to go about setting up the deployment, but I can think of a few workarounds that might help if you are forced in this direction.
In configuration A, add build step which adds a scheduled build trigger to configuration B for an hours time (using the API). In configuration B, add a build step to the end of the configuration to remove this scheduled trigger. This feels like a really horrible hack which should be avoided, but more details here.
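A hedged sketch of that hack via the REST API's trigger endpoint (the build configuration id ConfigB is hypothetical, and the schedule-trigger property names are an assumption; safest is to first GET the endpoint and copy the XML of an existing schedule trigger):

# inspect the existing triggers of configuration B first
curl -u "$TC_USER:$TC_PASS" "https://teamcity.example.com/app/rest/buildTypes/ConfigB/triggers"

# add a schedule trigger firing an hour from now (property names assumed)
curl -u "$TC_USER:$TC_PASS" -X POST -H "Content-Type: application/xml" \
  -d '<trigger type="schedulingTrigger"><properties>
        <property name="schedulingPolicy" value="daily"/>
        <property name="hour" value="14"/>
        <property name="minute" value="0"/>
      </properties></trigger>' \
  "https://teamcity.example.com/app/rest/buildTypes/ConfigB/triggers"

# configuration B's last step then deletes the trigger it was started by
curl -u "$TC_USER:$TC_PASS" -X DELETE \
  "https://teamcity.example.com/app/rest/buildTypes/ConfigB/triggers/<triggerId>"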
Outside of TeamCity, make use of any pub/sub mechanism so the deployment to the VM can create an event when it has completed. Subscribe to this event and trigger a follow-on build using the TeamCity API. For example, if you are using AWS you could have an SNS topic with a Lambda function as a subscriber. This Lambda function would call the API to queue configuration B when the environment is in a suitable state.
There are probably much nicer solutions if you share what deployment software you are using.

How to enqueue more than one build of the same configuration?

We are using two TeamCity servers (one for builds and one for GUI tests). The GUI tests are triggered by an HTTP GET in the last step of the build (as in http://confluence.jetbrains.com/display/TCD65/Accessing+Server+by+HTTP).
The problem is that only one build of the same configuration can be in the queue at the same time. Is there a way to enable multiple queued builds of the same configuration? Can I use some workaround like sending a dummy id?
At the bottom of the section "Triggering a Custom Build" here: http://confluence.jetbrains.com/display/TCD65/Accessing+Server+by+HTTP, you can find information about passing custom parameters to the build.
Just define some unused configuration parameter, like "BuildId", and pass, for example, the current date to it (a GUID will work as well):
...&buildId=12/12/23 12:12:12
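For example, with the add2Queue URL described on that page, a hedged sketch (the build configuration id bt123 is hypothetical, and a Unix timestamp avoids having to URL-encode a date string):

wget -q -O /dev/null \
  "http://teamcity.example.com:8111/httpAuth/action.html?add2Queue=bt123&name=BuildId&value=$(date +%s)"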

How to trigger a hudson job by another job which is in a different hudson

I have job A in Hudson A and Job B in Hudson B. I want to trigger job A by Job B.
In your job B configuration, check the Trigger builds remotely (e.g., from scripts) checkbox and provide a token.
The help text there shows you the URL you can call to trigger a build from remote scripts (e.g. from a shell script in Hudson job A).
However, that would trigger job B no matter what the result of job A is.
Morechilli's answer is probably the best solution.
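A minimal sketch of that remote-trigger call, assuming Hudson B is reachable at hudson-b.example.com and the token configured on job B is my-token; job A's last build step would simply be:

# fire job B on the other Hudson server via its remote-trigger URL
wget -q -O /dev/null "http://hudson-b.example.com/job/JobB/build?token=my-token"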
I haven't used Hudson but I would guess your simplest approach would be to use the URL trigger:
http://wiki.hudson-ci.org/display/HUDSON/URL+Change+Trigger
I think there is a latest build url that could be used for this.
In the latest versions of Hudson, the lastSuccessfulBuild/ HTML page will contain the elapsed time since it was built, which will be different for each call. This causes the URL Change Trigger to spin.
One fix is to use the XML, JSON, or Python APIs to request only a subset of the information. Using the 'tree' request parameter, the following URL will return an XML document containing only the build number of the last successful build.
http://SERVER:PORT/job/JOBNAME/lastSuccessfulBuild/api/xml?tree=number
Using this URL restored the behavior I expected from the URL Change Trigger.
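That URL is also easy to poll by hand if you ever need to replicate the trigger in a script; a rough sketch (SERVER, PORT and JOBNAME are the same placeholders as above):

last=""
while true; do
  current=$(wget -q -O - "http://SERVER:PORT/job/JOBNAME/lastSuccessfulBuild/api/xml?tree=number")
  if [ -n "$last" ] && [ "$current" != "$last" ]; then
    echo "new successful build detected"   # trigger the downstream job here
  fi
  last="$current"
  sleep 60
done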
Personally, I find the easiest way to do this is to watch the build timestamp:
PROJECT_NAME/lastSuccessfulBuild/buildTimestamp
I'm using wget to trigger the build:
wget --post-data 'it-just-need-to-be-a-POST-request' \
  --auth-no-challenge --http-user=myuser --http-password=mypassword \
  http://jenkins.xx.xx/xxx/job/A/build?delay=0sec
There are other ways you can trigger a build; see the REST and other APIs of Jenkins.
But this works great on Unix.
