single Rundeck job with 2 scripts running on 2 different nodes - yaml

I have a Rundeck job with 2 scripts in its workflow and can't figure out the ruleset to run each script individually in the job.
My 2 nodes:
Server1 and Server2
My 2 scripts are simple; as examples, they check whether the services are running on the servers.
Script1: gsv -name service1
Script2: gsv -name service2
Only Server1 has service1 and should only execute Script1, and vice versa for Server2 and Script2.
Right now when the job runs, it runs Script1 and Script2 on both servers, and I'm unable to get the workflow strategy to run each script only on its matching node. I would like to keep this as one job to verify the services on both, eventually scaling to more nodes.

You can try using a Job Reference step to call multiple existing jobs from a parent job.
Assigning nodes at the step level is not possible. You need to define 2 jobs, each running its script against its specific node, and use a parent job to trigger both steps.
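A rough sketch of that layout in Rundeck's YAML job-definition format (the job names check-services, check-service1, check-service2 and the project name MyProject are made-up placeholders, and only the minimal keys are shown):
- name: check-services          # parent job: one Job Reference step per child job
  project: MyProject
  sequence:
    keepgoing: true
    strategy: sequential
    commands:
      - jobref:
          name: check-service1
      - jobref:
          name: check-service2
- name: check-service1          # runs only on Server1
  project: MyProject
  nodefilters:
    filter: 'name: Server1'
  sequence:
    commands:
      - exec: gsv -name service1
- name: check-service2          # runs only on Server2
  project: MyProject
  nodefilters:
    filter: 'name: Server2'
  sequence:
    commands:
      - exec: gsv -name service2
Adding more nodes later is then just a matter of adding another child job and another jobref step in the parent.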

Related

Ansible Tower/AWX bug? Job task runs serially instead of parallel

I have a very generic playbook with no hard-coded info whatsoever. Everything in the playbook is a variable, filled in by supplying extra vars, even the host names for connections. There are no inventory files in use, since the host this is run against is usually random.
On a Linux command line, I can run my Ansible playbook multiple times with different variables passed, and they will all run at the same time.
ansible-playbook cluster_check_build.yml -e '{"host": "host1"...}'
ansible-playbook cluster_check_build.yml -e '{"host": "host2"...}'
In Tower, however, if I create a job template and use the same playbook, then things run serially. I call that job template multiple times using the API, passing the data as JSON. Each time I call the API to launch the job, I supply new extra_vars, so the job runs against different hosts. I see the jobs run serially and not in parallel like from the command line.
I have 400+ hosts that need to have the same playbook run against them at random times. This playbook can take an hour or so to complete. There can be times where the playbook needs to run against 20 or 30 random hosts. Time is crucial and serial job processing is a non-starter.
Is it possible to run the same job template against different hosts in parallel? If the answer is no, then what are my options? Hopefully not creating 400+ job templates; that seems like it defeats the purpose of a very generic playbook.
I feel like an absolute fool. In the bottom right of my job template is a tiny check box that says "ENABLE CONCURRENT JOBS" <---this was the fix.
Yes, you can run templates/playbooks against multiple hosts in parallel in Tower/AWX.
These are the situations where your template will run serially:
"forks" set to 1 in your template
serial: 1 set within your playbook
Your Tower/AWX instance is set up with only 1 fork
Your instance is set up with more than 1 fork, but other jobs are running at the same time
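Once "ENABLE CONCURRENT JOBS" is checked, launching the same template with different extra_vars through the API runs the jobs at the same time. A minimal sketch, assuming the Tower host, template id (42), and token are placeholders and that the template prompts for extra variables on launch:
# Launch the same job template twice with different extra_vars; with
# "ENABLE CONCURRENT JOBS" checked the two jobs run concurrently.
curl -s -H "Authorization: Bearer $TOWER_TOKEN" \
     -H "Content-Type: application/json" \
     -X POST https://tower.example.com/api/v2/job_templates/42/launch/ \
     -d '{"extra_vars": {"host": "host1"}}'
curl -s -H "Authorization: Bearer $TOWER_TOKEN" \
     -H "Content-Type: application/json" \
     -X POST https://tower.example.com/api/v2/job_templates/42/launch/ \
     -d '{"extra_vars": {"host": "host2"}}'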

In Oozie, how would I be able to use script output

I have to create a cron-like coordinator job and collect some logs.
/mydir/sample.sh >> /mydir/cron.log 2>&1
Can I use a simple Oozie workflow, like the one I use for any shell command?
I'm asking because I've seen that there are specific workflows to execute .sh scripts.
Sure, you can execute a Shell action (on any node in the YARN cluster) or use the SSH action if you'd like to target specific hosts. Keep in mind that the "/mydir/cron.log" file will be created on the host the action is executed on, and the generated file might not be available to other Oozie actions.

Oozie fork running only 2 forks in parallel

I am running an Oozie workflow job which has a fork node. The fork node directs the workflow to 4 different sub-workflows, which in turn call shell scripts.
Ideally all 4 shell scripts were supposed to execute in parallel, but for me only 2 shell scripts execute in parallel.
Could someone help me address this issue?

How to execute a bash script on multiple EC2 instances at the same time

I have written a bash script. At the moment I can execute it on one node just by running ./script.sh.
But it needs to be executed on multiple nodes. How can I execute one script on multiple nodes at the same time?
At the moment I'm using this:
for ip in $(<ALL_SERVERS_IP); do ...
But this does not perform the installation at the same time: it finishes the first node and then starts on the second, and so on. I'm working on CentOS 7.
You can try putting an & after your command.
for ip in $(<ALL_SERVERS_IP); do YOUR_COMMAND_OR_SCRIPT & done
The ampersand at the end will put your script in the background, so the loop does not wait for the script to end before moving on to the next node.
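For example, a minimal sketch that starts the script on every host over SSH in the background and then waits for all of them; the key path, user name, and log-file naming are assumptions to adapt:
# Run script.sh on every IP listed in ALL_SERVERS_IP at the same time.
for ip in $(<ALL_SERVERS_IP); do
    ssh -i ~/.ssh/mykey.pem centos@"$ip" 'bash -s' < ./script.sh > "install_$ip.log" 2>&1 &
done
wait    # block until every background ssh has finished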

DATASTAGE: how to run multiple instances of a job in parallel using DSJOB

I have a question.
I want to run multiple instances of the same job in parallel from within a script: I have a loop in which I invoke jobs with dsjob, without the "-wait" and "-jobstatus" options.
I want the jobs to complete before the script terminates, but I don't know how to verify whether a job instance has terminated.
I thought of using the wait command, but it is not appropriate here.
Thanks in advance
First, you should make sure the job compile option "Allow Multiple Instance" is selected.
Second:
#!/bin/bash
. /home/dsadm/.bash_profile
INVOCATION=(1 2 3 4 5)
cd $DSHOME/bin
for id in "${INVOCATION[@]}"
do
    # launch each invocation in the background so the instances run in parallel
    ./dsjob -run -mode NORMAL -wait test demo.$id &
done
# wait for every background dsjob to finish before the script exits
wait
project -- test
job -- demo
$id -- invocation id
The .bash_profile and cd $DSHOME/bin lines in the shell script ensure the environment and path are set up correctly.
Run the jobs like you say without the -wait, and then loop around running dsjob -jobinfo and parse the output for a job status of 1 or 2. When all jobs return this status, they are all finished.
You might find, though, that you check the status of the job before it actually starts running and you might pick up an old status. You might be able to fix this by first resetting the job instance and waiting for a status of "Not running", prior to running the job.
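A rough polling sketch of that approach, using the project and job names from the question; the exact "Job Status" text printed by dsjob -jobinfo can vary by version, so treat the grep pattern as an assumption to adapt:
PROJECT=test
JOB=demo
for id in 1 2 3 4 5; do
    ./dsjob -run -mode NORMAL "$PROJECT" "$JOB.$id"     # no -wait, returns immediately
done
for id in 1 2 3 4 5; do
    # poll this instance until its status line reports (1) or (2), i.e. finished
    until ./dsjob -jobinfo "$PROJECT" "$JOB.$id" | grep 'Job Status' | grep -Eq '\((1|2)\)'; do
        sleep 30
    done
done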
Invoke the jobs in a loop without the -wait or -jobstatus options.
After your loop, check the job statuses with the dsjob command.
Example - dsjob -jobinfo projectname jobname.invocationid
You can code one more loop for this as well, and use the sleep command inside it.
Then write your further logic based on the status of the jobs.
However, it is better to create a job sequence to invoke this multi-instance job simultaneously with different invocation ids:
create one sequence job if these belong to the same process, or
create different sequences (or directly create different scripts) to trigger these jobs simultaneously with invocation ids and schedule them at the same time.
The best option is to create a standard, generalized script where everything is created or derived from command-line parameters.
Example - log files based on jobname + invocation-id
Then schedule the same script with different parameters or invocations.
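A minimal sketch of such a generalized script, assuming a log directory of /home/dsadm/logs and taking the project, job name, and invocation id as command-line parameters:
#!/bin/bash
# usage (hypothetical): ./run_job.sh <project> <jobname> <invocationid>
. /home/dsadm/.bash_profile
PROJECT="$1"
JOB="$2"
INVID="$3"
LOGDIR=/home/dsadm/logs                      # assumed log directory
LOGFILE="$LOGDIR/${JOB}.${INVID}.log"        # one log file per jobname + invocation id
cd "$DSHOME/bin"
./dsjob -run -mode NORMAL -wait "$PROJECT" "$JOB.$INVID" > "$LOGFILE" 2>&1
You could then schedule, for example, ./run_job.sh test demo 1 and ./run_job.sh test demo 2 to start at the same time.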
