How to synchronize bash shell scripts - shell

I'm trying to write some code that starts a Docker container automatically.
I have already written shell scripts to run the container, and now I'm writing a script that runs all of the scripts I made, like this:
IP=$1
WEBPORT=$2
STREAMPORT=$3
NAME=$4
ID=$5
PASSWORD=$6
RTSP=$7
# create container
bash ./docker_script/create_container.sh "$IP" "$WEBPORT" "$STREAMPORT" "$NAME"
# start container
bash ./docker_script/start_container.sh "$IP" "$NAME"
# setting for using kerberos
# 1. set up user and password
bash ./kerberos_script/setup_account_script.sh "$IP" "$WEBPORT" "$ID" "$PASSWORD"
But creating the container and starting it takes some time, so I want each script to run only after the previous one has finished.
Is there any way to run the scripts synchronously?
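A minimal sketch of one way to do this, assuming the helper scripts above exit non-zero on failure; the readiness poll via docker inspect and the use of set -e are illustrative additions, not from the original post:
#!/bin/bash
set -e   # stop the whole script as soon as any step fails

IP=$1; WEBPORT=$2; STREAMPORT=$3; NAME=$4; ID=$5; PASSWORD=$6; RTSP=$7

# each 'bash script.sh' call already blocks until that script exits,
# so the steps run one after the other; set -e aborts the chain on errors
bash ./docker_script/create_container.sh "$IP" "$WEBPORT" "$STREAMPORT" "$NAME"
bash ./docker_script/start_container.sh "$IP" "$NAME"

# the container may need a moment before it accepts requests, so poll
# until docker reports it as running before the kerberos setup step
until [ "$(docker inspect -f '{{.State.Running}}' "$NAME" 2>/dev/null)" = "true" ]; do
    sleep 1
done

bash ./kerberos_script/setup_account_script.sh "$IP" "$WEBPORT" "$ID" "$PASSWORD"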

Related

Detect if docker ran successfully within the same script

My script.sh:
#!/bin/sh
docker run --name foo
(Just assume that the docker command works and the container name is foo. Can't make the actual command public.)
I have a script that runs a docker container. I want to check that it ran successfully and echo the successful running status on the terminal.
How can I accomplish this using the container name? I know I probably have to use something like docker inspect, but when I try to add that command it only gets executed after I ^C my script, probably because docker is holding the foreground.
In this answer, docker is executed from some other script, so it doesn't really fit my use case.
The linked answer from Jules Olléon works for permanently running services like webservers, application servers, databases and similar software. In your example, it seems that you want to run a container on demand, which is designed to do some work and then exit. Here, the status doesn't help.
When running the container in foreground mode, as your example shows, it forwards the application's return code to the calling shell. Since you didn't post the actual command, here is a simple example: we create an rc.sh script that returns 1 as its exit code (which normally indicates a failure):
#!/bin/sh
echo "Testscript failed, returning exitcode 1"
exit 1
It got copied and executed in this Dockerfile:
FROM alpine:3.7
COPY rc.sh .
ENTRYPOINT [ "sh", "rc.sh" ]
Now we build this image using docker build -t rc-test . and execute a short-lived container:
$ docker run --rm rc-test
Testscript failed, returning exitcode 1
Bash gives us the return code in $?:
$ echo $?
1
So we see that the container failed, and we can check for that inside a bash script, e.g. with an if-condition that performs some action when it fails:
#!/bin/bash
# capture the exit code before testing it; inside `if ! docker run ...` $? would already be the negated status
docker run --rm rc-test
rc=$?
if [ "$rc" -ne 0 ]; then
    echo "Docker container failed with rc $rc"
fi
After running your docker run command, you can check like this whether your docker container is still up and running:
s='^foo$'
status=$(docker ps -qf "name=$s" --format='{{.Status}}')
[[ -n $status ]] && echo "Running: $status" || echo "not running"
You just need to run it with -d to execute the container in detached mode. With this, the solution in the other post and the solution provided by @anubhava both work:
docker run -d --name some_name mycontainer
s='^some_name$'
status=$(docker ps -qf "name=$s" --format='{{.Status}}')
[[ -n $status ]] && echo "Running: $status" || echo "not running"
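If you need to block until the detached container is actually up (for example before running further setup scripts), a small polling loop is one option. This is only a sketch; the container name, image name and timeout are illustrative:
#!/bin/bash
name=some_name
timeout=30

docker run -d --name "$name" mycontainer

# poll docker until the container reports as running, or give up after $timeout seconds
for ((i = 0; i < timeout; i++)); do
    if [ "$(docker inspect -f '{{.State.Running}}' "$name" 2>/dev/null)" = "true" ]; then
        echo "Running: $(docker ps -f "name=^${name}$" --format='{{.Status}}')"
        exit 0
    fi
    sleep 1
done
echo "not running after ${timeout}s"
exit 1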

Using a .sh script for Docker healthchecks

I've been sitting on this problem for about 2 hours now and I'm going crazy.
Here is the example bash script:
#!/bin/bash
exit 0;
Here is the Dockerfile:
HEALTHCHECK --interval=2s CMD HealthCheckTest.sh || exit 1
I always get unhealthy.
What I want to do is have some logic inside my bash script that determines whether the container is healthy or not.
You can also use Compose health check if you use Docker Compose:
https://docs.docker.com/compose/compose-file/compose-file-v2/#healthcheck
You can also define your health checks in a bash script that is called via the ENTRYPOINT in the Dockerfile, e.g.:
https://github.com/ledermann/docker-rails/blob/develop/docker/wait-for-services.sh
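As a sketch of the "logic inside the bash script" approach: the script has to be copied into the image, be executable, and be reachable at the path used in the HEALTHCHECK instruction. The endpoint and paths below are illustrative assumptions, not taken from the question:
#!/bin/bash
# HealthCheckTest.sh - exit 0 means healthy, any non-zero exit means unhealthy.
# Wire it up in the Dockerfile with something like:
#   COPY HealthCheckTest.sh /usr/local/bin/HealthCheckTest.sh
#   RUN chmod +x /usr/local/bin/HealthCheckTest.sh
#   HEALTHCHECK --interval=2s CMD /usr/local/bin/HealthCheckTest.sh || exit 1

# example check: does the application answer on localhost:8080? (illustrative)
if curl -fsS http://localhost:8080/health > /dev/null; then
    exit 0
fi
exit 1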

Exit from SSH using a sh script

I have a RHEL box (bash) and I have SSH'd into an ESXi host (sh) from it.
Now on the ESXi host I have created a simple script:
#!/bin/sh
echo hello
exit
This only exits the script. I want to exit the script + exit the ESXi shell and return to my original RHEL bash.
Thanks much.
If you are only SSHing in for the purpose of running this command, then you could instead just have ssh run the command for you:
[RHEL]$ ssh user@ESXi '/tmp/myscript.sh'
...and if you need to interact with the script, or watch its output, add the -t switch:
[RHEL]$ ssh -t user@ESXi '/tmp/myscript.sh'
Remove the shebang, i.e. make the script just
echo hello && exit
then save it as script and source it like
. script
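The reason this works: when you execute the script, exit only ends the child shell running it; when you source it, exit ends the current (SSH) shell. A rough illustration, with illustrative session output:
$ sh ./script      # runs in a child shell; 'exit' only ends that child
hello
$ . ./script       # runs in the current shell; 'exit' closes the SSH session
hello
Connection to ESXi closed.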

In my bash loop over a list of some servers, the script exits after executing a perl script that uses ssh

I have a problem similar to this one:
in my bash loop over a list of some servers, if the ssh connects the bash script exits
Unfortunately, ssh is called from a perl script I can't edit (so I won't be able to add -n to the ssh command).
What else could be done?
Put a fake ssh in your PATH that delegates the call to the real ssh and adds -n.
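A minimal sketch of such a wrapper, assuming the real binary lives at /usr/bin/ssh and that a directory earlier in PATH (here ~/bin, illustrative) shadows it:
#!/bin/bash
# ~/bin/ssh - fake ssh that forwards to the real one with -n added,
# so the remote command reads stdin from /dev/null and the outer while-read loop is not consumed
exec /usr/bin/ssh -n "$@"
Make it executable with chmod +x ~/bin/ssh and make sure ~/bin comes before /usr/bin in PATH when the perl script runs.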
I did:
my_script < /dev/null
and it works just fine.

Auto SSH and execute script

I have roughly 12 computers that each have the same script on them. This script merely pings all the other machines and prints out whether each machine is "reachable" or "unreachable". However, it is inefficient to log in to each machine manually using ssh to execute this script.
Suppose I'm logged into node 1. Is there any way for me to log in to nodes 2-12 automatically using SSH, execute the ping script, pipe the results to a file, log out and proceed to the next machine? Some kind of bash shell script?
I'm afraid I'm at a loss here since I haven't had experience with shell-scripting before.
Since the script is on the other machines, you can just have ssh run the command for you there:
ssh $hostname my_script >> results_file
When you specify a command like that, it's executed instead of the login shell.
I'll leave it up to you to figure out how to loop over hostnames!
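For reference, such a loop might look roughly like this; the node names and output file are illustrative:
#!/bin/bash
# run the ping script on each node and collect the output locally
for host in node2 node3 node4 node5 node6 node7 node8 node9 node10 node11 node12; do
    ssh "$host" my_script >> "results_$host.txt"
done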
One trick you'll need to use is setting up pre-authorized keys for each host. Then you can run a script on one host, running something like 'ssh hostname command > log.hostname'
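Setting up those pre-authorized keys is usually just the following (assuming OpenSSH; user and host names are illustrative):
# generate a key pair once on node 1 (empty passphrase for unattended use)
ssh-keygen -t ed25519
# copy the public key to every other node so ssh stops prompting for a password
for host in node2 node3 node4; do
    ssh-copy-id "user@$host"
done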
This script might be what you are looking for: It allows you to execute one command (which can be your script) on multiple remote machines via ssh. It's a simple script with bash source available, so you should be able to customize it to your needs:
http://www.heinzi.at/projects/upgradebest.sh/
Yes, you can.
You actually need 2 small scripts, as follows:
remote_ssh.sh (which takes the name of the machine as its first argument; the rest of the arguments are the command you want to execute, with its own arguments)
Example: remote_ssh.sh node5 "echo hello world"
remote_ssh.sh looks like this:
#!/bin/bash
# all arguments as one string, the first argument (the target host), and everything after it (the command to run)
ALL_ARG="$*"
FST_ARG="$1"
REST_ARG="${ALL_ARG##$FST_ARG}"
echo "Executing REMOTE COMMAND ON $FST_ARG"
/usr/bin/ssh "$FST_ARG" bash execute_ssh_command.sh "$FST_ARG" pwd $REST_ARG
execute_ssh_command.sh looks like this:
#!/bin/bash
# all arguments, the calling host, the working directory, and the remaining command
ALL_ARG="$*"
FST_ARG="$1"
DIR_ARG="$2"
REM_ARG="$1 $2"
REST_ARG="${ALL_ARG##$REM_ARG}"
cd "$DIR_ARG"
$REST_ARG
Of course, you have to put these 2 scripts in the PATH on all of your nodes (maybe ~/bin/).
Hope that's helpful.
