Need help developing a Chef recipe to execute 4 bash scripts on 4 servers in sequence and verify that each script executed successfully - bash

I need help developing a Chef recipe to automate an application installation.
Here is my requirement -
I have 4 different bash scripts that need to be executed on 4 different servers in sequence: script 'A' executes on server '1', and I verify that the script executed successfully; then script 'B' executes on server '2', and I again verify the execution; and similarly for the other 2 scripts on the other 2 servers.
Any ideas on how to develop a recipe for the above requirement?
Thanks in advance.

Nodes are usually independent of each other. If you still want to do this, I would recommend using a data bag that stores the outcome of the scripts, or alternatively a node tag recording the outcome, which you can query in the chef-client runs before executing the next script.
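As a rough illustration of the idea (not Chef DSL), a wrapper like the following could run on each node to record a script's outcome where the next node's chef-client run can check it. A real setup would store the outcome in a data bag or node tag as described; `STATUS_DIR` is a stand-in shared location used purely for this sketch.

```shell
#!/usr/bin/env bash
# Hypothetical sketch: run a script and record its outcome so the next
# server in the chain can check it before proceeding. In a real Chef
# setup the outcome would go into a data bag or node tag instead of a
# shared directory.
set -u

STATUS_DIR="${STATUS_DIR:-/tmp/chef-script-status}"   # assumed shared location

run_and_record() {
  local name="$1" script="$2"
  mkdir -p "$STATUS_DIR"
  if bash "$script"; then
    echo "success" > "$STATUS_DIR/$name"
  else
    echo "failed" > "$STATUS_DIR/$name"
    return 1
  fi
}

# Guard for the next server in the chain: proceed only if the
# previous script reported success.
previous_succeeded() {
  [ "$(cat "$STATUS_DIR/$1" 2>/dev/null)" = "success" ]
}
```

In a recipe, the equivalent guard would be an `only_if` condition on the `execute` resource that queries the data bag before running the node's script.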

Related

Running python script in parallel with Ansible

I am managing 6 machines, or more, on AWS with Ansible. Those machines must run a Python script that runs forever (the script has a while True loop).
I call the Python script via command: python3 script.py
But only 5 machines run the script; the others don't. I can't figure out what I am doing wrong.
(Before the script call, everything works fine for all machines: echo, ping, etc.)
I already found the answer.
Ansible's forks setting restricts execution to 5 machines by default. You must set forks to a greater number in the configuration file, but your Ansible control machine must have enough resources to manage that many.
I'll leave the question up because the answer was pretty hard for me to find.
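The setting lives in Ansible's configuration file; a minimal sketch of the change, assuming a per-project ansible.cfg:

```ini
# ansible.cfg (project directory, or /etc/ansible/ansible.cfg)
[defaults]
# default is 5; raise it so more hosts are handled in parallel
forks = 20
```

The same can be done for a single run with the real `-f`/`--forks` command-line flag, e.g. `ansible-playbook -f 20 site.yml`.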

Oozie fork running only 2 forks in parallel

I am running an Oozie workflow job which has a fork node. The fork node directs the workflow to 4 different sub-workflows, which in turn call shell scripts.
Ideally all 4 shell scripts were supposed to execute in parallel, but for me only 2 shell scripts are executing in parallel.
Could someone help me address this issue?

How to execute a bash script on multiple ec2 instances at the same time

I have written a bash script. At the moment I can execute it on one node just by performing ./script.sh.
But it needs to be executed on multiple nodes. How can I execute one script on multiple nodes at the same time?
At the moment I'm using this:
for ip in $(<ALL_SERVERS_IP); do ...
But this does not perform the installation at the same time: it finishes on the first node and then starts on the second, etc. I'm working on CentOS 7.
You can try putting an & after your command:
for ip in $(<ALL_SERVERS_IP); do YOUR_COMMAND_OR_SCRIPT & done
The ampersand at the end puts your command in the background, so the loop does not wait for it to finish.
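The one-liner above fires and forgets; if you also want to know which nodes failed, you can collect the background PIDs and `wait` on each one. A sketch, where the per-host command (e.g. an ssh invocation) is an assumption passed in by the caller:

```shell
#!/usr/bin/env bash
# Run a command against every server in parallel, then wait for all of
# the background jobs and report which ones failed. The IP file format
# (one address per line) matches ALL_SERVERS_IP from the question.
run_on_all() {
  local ip_file="$1"; shift
  local pids=() ips=()
  local ip
  while read -r ip; do
    [ -n "$ip" ] || continue
    "$@" "$ip" &              # e.g.: ssh "$ip" 'bash -s' < script.sh
    pids+=("$!")
    ips+=("$ip")
  done < "$ip_file"
  local i failed=0
  for i in "${!pids[@]}"; do  # wait returns each job's exit status
    if ! wait "${pids[$i]}"; then
      echo "FAILED: ${ips[$i]}" >&2
      failed=1
    fi
  done
  return "$failed"
}
```

The function returns non-zero if any host failed, so it can itself be chained with `&&` or checked in an `if`.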

Chain dependent bash commands

I'm trying to chain together two commands:
The first in which I start up postgres
The second in which I run a command meant for postgres(a benchmark, in this case)
As far as I know, the '||', ';', and '&/&&' operators all require that the first command terminate or exit somehow. This isn't the case with a server that's been started, so I'm not sure how to proceed. I can't run the two completely in parallel, as the server has to be started.
Thanks for the help!
I would recommend something along the lines of the following in a single bash script:
Start the Postgres server via a command like /etc/init.d/postgresql start or similar
Sleep for a period of time to give the server time to startup; perhaps a minute or two
Then run a psql command that connects to the server to test its up-ness
Tie that command together with your benchmark via &&, so it completes only if the psql command completes (depending on the exact return codes from psql, you may need to inspect the output from the command instead of the return code). The command run via psql would best be a simple query that connects to the server and returns a simple value that can be cross-checked.
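The steps above can be sketched as follows. Instead of a fixed sleep, this polls until a check command succeeds; `run_benchmark.sh` is a placeholder, and the commented service commands assume a typical Postgres install (`pg_isready` ships with Postgres).

```shell
#!/usr/bin/env bash
# Sketch: start the server, poll until it accepts connections, then
# chain the benchmark with && so it only runs once the server is up.
wait_until() {
  local tries="$1"; shift     # retry "$@" up to $tries times, 1s apart
  local i
  for ((i = 0; i < tries; i++)); do
    "$@" && return 0
    sleep 1
  done
  return 1
}

# sudo systemctl start postgresql      # or: /etc/init.d/postgresql start
# wait_until 120 pg_isready -q &&      # succeeds once connections are accepted
#   ./run_benchmark.sh                 # placeholder for the benchmark
```

Polling like this is usually more robust than a fixed one- or two-minute sleep, since it proceeds as soon as the server is actually reachable and fails cleanly if it never comes up.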
Edit in response to comment from OP:
It depends on what you want to benchmark. If you just want to benchmark a command after the server has started, and don't want to restart the server every time, then I would tweak the code to run the psql up-ness test in a separate block, starting the server if not up, and then afterward, run the benchmark test command unconditionally.
If you do want to start the server up fresh each time (to test cold-start performance, or similar), then I would add another command after the benchmarked command to shutdown the server, and then sleep, re-running the test command to check for up-ness (where this time no up-ness is expected).
In either case, you should be able to run the script multiple times.
A slight aside: if your test is destructive (that is, it writes to the DB), you may want to dump a "clean" copy of the DB -- that is, the DB in its pre-test state -- and then, on each run of the script, create a fresh DB from that dump, with a different name from the original, dropping it beforehand.

DATASTAGE: how to run more instance jobs in parallel using DSJOB

I have a question.
I want to run multiple instances of the same job in parallel from within a script: I have a loop in which I invoke jobs with dsjob without the options "-wait" and "-jobstatus".
I want the jobs to complete before the script terminates, but I don't know how to verify whether a job instance has terminated.
I thought of using the wait command, but it is not appropriate here.
Thanks in advance
First, make sure the job compile option "Allow Multiple Instances" is selected.
Second:
#!/bin/bash
. /home/dsadm/.bash_profile
INVOCATION=(1 2 3 4 5)
cd $DSHOME/bin
for id in "${INVOCATION[@]}"
do
./dsjob -run -mode NORMAL -wait test demo.$id
done
project -- test
job -- demo
$id -- invocation id
The first two lines of the shell script (sourcing .bash_profile and cd'ing into $DSHOME/bin) guarantee the environment and path are set up so dsjob can run.
Run the jobs like you say without the -wait, and then loop around running dsjob -jobinfo and parse the output for a job status of 1 or 2. When all jobs return this status, they are all finished.
You might find, though, that you check the status of the job before it actually starts running and you might pick up an old status. You might be able to fix this by first resetting the job instance and waiting for a status of "Not running", prior to running the job.
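The polling loop described above could be sketched like this. The status-extraction command is passed in so the dsjob specifics stay isolated; status codes 1 (finished OK) and 2 (finished with warnings) are taken from the answer, and the project/job names and output parsing in the commented example are made-up assumptions.

```shell
#!/usr/bin/env bash
# Poll a status command for each invocation id until every instance
# reports 1 (finished OK) or 2 (finished with warnings).
poll_until_finished() {
  local status_cmd="$1"; shift          # prints a status code for an id
  local interval="${POLL_INTERVAL:-10}" # seconds between polls
  local id pending
  while :; do
    pending=0
    for id in "$@"; do
      case "$($status_cmd "$id")" in
        1|2) ;;                         # finished: OK or with warnings
        *)   pending=1 ;;
      esac
    done
    [ "$pending" -eq 0 ] && return 0
    sleep "$interval"
  done
}

# Hypothetical usage with a project "test" and multi-instance job "demo":
# status_of() { ./dsjob -jobinfo test "demo.$1" | awk -F'[()]' '/Job Status/ {print $2}'; }
# for id in 1 2 3 4 5; do ./dsjob -run -mode NORMAL test "demo.$id"; done
# poll_until_finished status_of 1 2 3 4 5
```

Resetting each instance before the run, as suggested above, avoids picking up a stale "finished" status from a previous execution.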
Invoke the jobs in a loop without the -wait or -jobstatus option.
After your loop, check the job statuses with the dsjob command.
Example - dsjob -jobinfo projectname jobname.invocationid
You can code one more loop for this as well, using the sleep command inside it,
and write your further logic according to the status of the jobs.
However, it is better to create a job sequence to invoke this multi-instance job simultaneously with the help of different invocation ids:
create one sequence job if these are part of the same process, or
create different sequences, or directly create different scripts, to trigger these jobs simultaneously with invocation ids and schedule them at the same time.
The best option is to create a standard, generalized script where everything is created or receives its value from command-line parameters.
Example - log files named on the basis of jobname + invocation-id
Then schedule the same script with different parameters or invocations.
