How to execute a bash script on multiple EC2 instances at the same time - bash

I have written a bash script. At the moment I can execute it on one node just by running ./script.sh.
But it needs to be executed on multiple nodes. How can I run one script on multiple nodes at the same time?
At the moment I'm using this:
for ip in $(<ALL_SERVERS_IP); do ...
But this does not perform the installation at the same time: it finishes the first node, then starts on the second, and so on. I'm working on CentOS 7.

You can try putting an & after your command:
for ip in $(<ALL_SERVERS_IP); do YOUR_COMMAND_OR_SCRIPT & done
The ampersand at the end puts your script in the background, so the loop does not wait for it to finish before starting the next one.
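For example, a minimal sketch of that idea, assuming the nodes are reached over key-based SSH (the ec2-user name and the ALL_SERVERS_IP file are placeholders for whatever you actually use), with a wait at the end so the loop blocks until every node has finished:
#!/bin/bash
# Launch the script on every node in parallel, then wait for all of them.
for ip in $(<ALL_SERVERS_IP); do
    ssh -o BatchMode=yes "ec2-user@$ip" 'bash -s' < script.sh &
done
wait    # returns once every background ssh job has exited
echo "script finished on all nodes"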

Related

Waiting for completion of parallel running shell scripts

I have an array, and using that array I need to run the shell scripts in parallel, like this:
for i in arr
do
sh i.sh &
done
wait
I need to wait for the completion of their execution before proceeding to the next step.
I think your script doesn't do what you want for a different reason than you're expecting. It is waiting for the commands to complete, just not the ones you intend: sh i.sh & tries to run a literal file called i.sh rather than using the variable i, so it is actually trying to run the same non-existent script a bunch of times. To fix it, add $ before the i, and expand the array properly with "${arr[@]}":
for i in "${arr[@]}"
do
sh "$i.sh" &
done
wait
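For completeness, a small sketch of the whole pattern (the array contents and script names here are only illustrative) that also reports which background jobs failed:
#!/bin/bash
# Hypothetical array of script base names; replace with your own.
arr=(backup cleanup report)

pids=()
for i in "${arr[@]}"; do
    sh "$i.sh" &        # run each script in the background
    pids+=("$!")        # remember its PID
done

# Wait for each job individually so that failures can be reported.
for pid in "${pids[@]}"; do
    wait "$pid" || echo "job with PID $pid failed" >&2
done
echo "All parallel scripts have finished."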

How to run two Node.js commands independently in Windows?

In Windows, I run two commands like this:
node watcher.js && node server.js
The first runs a watcher script and the second runs a server. The problem is that both are persistent and never actually exit, so the server never starts because the watcher script is still running.
Is there a way to run both without waiting for either script to finish?
Thanks
Try:
start node watcher.js && start node server.js
This will start the two programs in separate cmd windows, independently of each other.

start multiple docker containers with a single command line shell script (without docker-compose)

I've got 3 containers that will run on a single server, which we'll call A, B, and C.
Each container has a start script on the host with the commands to run it:
A_start.sh
B_start.sh
C_start.sh
I'm trying to create a swarm script to start them all, but I'm not sure how:
ABC_start.sh
UPDATE:
This seems to work, with the first script's output going to the terminal; Ctrl+C exits out of them all:
./A_start.sh & ./B_start.sh & ./C_start.sh
Swarm will not help you start them at all; it is used to distribute work among the Docker machines that are part of a cluster.
There is no good reason not to use docker-compose for that use case: its main purpose is to link containers properly and bring them up, so your collection of scripts could end up being a single docker-compose up command.
In bash, you can do this:
nohup A_start.sh &
nohup B_start.sh &
nohup C_start.sh &
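For example, a minimal ABC_start.sh sketch along those lines (the log file names are just illustrative), which starts each script in the background, detached from the terminal, with its output kept in a per-script log:
#!/bin/bash
# Start each container script in the background; nohup keeps it running
# after the terminal closes, and output goes to A_start.log, B_start.log, etc.
for s in A_start.sh B_start.sh C_start.sh; do
    nohup "./$s" > "${s%.sh}.log" 2>&1 &
    echo "$s started with PID $!"
done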

Chain dependent bash commands

I'm trying to chain together two commands:
The first, in which I start up Postgres
The second, in which I run a command meant for Postgres (a benchmark, in this case)
As far as I know, the '||', ';', and '&'/'&&' operators all require that the first command terminate or exit somehow. That isn't the case with a server that has been started, so I'm not sure how to proceed. I can't run the two completely in parallel either, as the server has to be up first.
Thanks for the help!
I would recommend something along the lines of the following in a single bash script (a rough sketch follows these steps):
Start the Postgres server via a command like /etc/init.d/postgresql start or similar
Sleep for a period of time to give the server time to start up; perhaps a minute or two
Then run a psql command that connects to the server to test its up-ness
Tie that command together with your benchmark via &&, so it completes only if the psql command completes (depending on the exact return codes from psql, you may need to inspect the output from the command instead of the return code). The command run via psql would best be a simple query that connects to the server and returns a simple value that can be cross-checked.
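Putting those steps together, a rough sketch (the init script path, the psql connection options, and run_benchmark.sh are placeholders for your actual setup):
#!/bin/bash
# Start Postgres, give it time to come up, then run the benchmark only if
# a simple connectivity check succeeds.
/etc/init.d/postgresql start
sleep 60    # crude startup delay; adjust to your environment

# SELECT 1 succeeds only once the server accepts connections.
psql -U postgres -d postgres -c 'SELECT 1;' > /dev/null && ./run_benchmark.sh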
Edit in response to comment from OP:
It depends on what you want to benchmark. If you just want to benchmark a command after the server has started, and don't want to restart the server every time, then I would tweak the code to run the psql up-ness test in a separate block, starting the server if not up, and then afterward, run the benchmark test command unconditionally.
If you do want to start the server up fresh each time (to test cold-start performance, or similar), then I would add another command after the benchmarked command to shutdown the server, and then sleep, re-running the test command to check for up-ness (where this time no up-ness is expected).
In either case, you should be able to run the script multiple times.
A slight aside: if your test is destructive (that is, it writes to the DB), you may want to dump a "clean" copy of the DB -- that is, the DB in its pre-test state -- and then, on each run of the script, drop the test DB if it exists and recreate it from that dump, under a different name from the original.

start and stop shell scripts for multiple programs

Following problem:
3 programs:
one Java application, which is started via an existing sh script
one node application
one grunt server
I want to write two shell scripts: the first should start all three programs, and the second should stop them. For the first script I simply call the start commands. But for the second, which should also be a standalone script, I have to know all of the process IDs in order to kill them. And even if I know those IDs, what if the programs started sub-processes? I would only kill the parent processes, wouldn't I?
What's the approach here?
Thanks in advance!
Try pkill -KILL -P [parentid]. This should kill the processes whose parent process has the given ID.
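For example, a rough pair of sketches (the commands, log names, and the pids.txt file are all placeholders for your actual programs): the start script records each top-level PID, and the stop script uses pkill -P to take out each process's direct children before killing the process itself:
#!/bin/bash
# start_all.sh -- start all three programs in the background and record their PIDs.
./your_java_start.sh > java.log  2>&1 & echo $! >  pids.txt
node app.js          > node.log  2>&1 & echo $! >> pids.txt
grunt serve          > grunt.log 2>&1 & echo $! >> pids.txt

#!/bin/bash
# stop_all.sh -- for each recorded PID, kill its direct children, then the PID itself.
while read -r pid; do
    pkill -TERM -P "$pid" 2>/dev/null
    kill  -TERM    "$pid" 2>/dev/null
done < pids.txt
rm -f pids.txt
Note that pkill -P only reaches direct children; if the programs spawn deeper process trees, starting each one in its own process group (for example with setsid) and killing the whole group is a more thorough option.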
