Issue: unable to execute all bash scripts within a bash script - bash

I have the following bash script, which runs two processes in parallel (two bash scripts internally). I need the two bash scripts to run in parallel, and once both are finished I need the total execution time. But the issue is that the first bash script, ./cloud.sh, doesn't run, even though it runs successfully when I run it individually, and I am running the main test bash script with sudo rights.
Test
#!/bin/bash
start=$(date +%s%3N)
./cloud.sh &
./client.sh &
end=$(date +%s%3N)
echo "Time: $((duration=end-start))ms."
Client.sh
#!/bin/bash
sudo docker build -t testing .
Cloud.sh
#!/bin/bash
start=$(date +%s%3N)
ssh kmaster@192.168.101.238 'docker build -t testing .'
end=$(date +%s%3N)
echo "cloud: $((duration=end-start)) ms"

A background process won't be able to get keyboard input from you. As soon as it tries to, it will receive a SIGTTIN signal which will stop it (until it is brought back to the foreground).
I suspect that one or both of your scripts asks you to enter something, typically a password.
Solution 1: configure sudo and ssh to make them password-less. With ssh this is easy (an ssh key); with sudo it is a security risk. If docker build needs you to enter something, you are doomed.
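A minimal sketch of the password-less setup for solution 1 (the user and host come from the question's Cloud.sh; the sudoers line is illustrative and carries the security risk mentioned above):
# on the local machine: create a key if you don't have one, then install it on the remote host
ssh-keygen -t ed25519
ssh-copy-id kmaster@192.168.101.238
# for password-less sudo, add an entry with visudo, for example:
#   youruser ALL=(ALL) NOPASSWD: /usr/bin/docker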
Solution 2: make only the ssh script (Cloud.sh) password-less and keep the sudo script (Client.sh) in foreground. Here again, if the remote docker build needs you to enter something, this won't work.
How to wait for your background processes? Just use the wait builtin (help wait).
An example with solution 2:
#!/bin/bash
start=$(date +%s%3N)
./cloud.sh &
./client.sh
wait
end=$(date +%s%3N)
echo "Time: $((duration=end-start))ms."

Related

source a bash script with anacron

I am learning to automate tasks using anacron by following this anacron guide. My task is to remove the saved ssh keys every day. I know this is possible using the --timeout argument, but I wanted to use a bash script and do it manually.
remove-keys.sh :
SERVICE="ssh-agent"
if pgrep -x "$SERVICE" >/dev/null
then
    /usr/bin/ssh-add -D
else
    :
fi
anacrontab config:
SHELL=/bin/sh
PATH=/sbin:/bin:/usr/sbin:/usr/bin
START_HOURS_RANGE=18-20
1 5 remove-keys source $HOME/.local/etc/cron.daily/remove-keys.sh
When I execute source remove-keys.sh, all identities are removed. I have given the necessary file permissions to execute, and the anacron syntax test was also successful. I have used source so that the commands in the script are executed as part of the current bash shell (or bash session).
I tested anacron with the following command:
anacron -fn -t $HOME/.local/etc/anacrontab -S $HOME/.var/spool/anacron
But when I look up ssh-add -L, all identities are still present.
What am I doing wrong?
EDIT 1:
Context:
I am using Ubuntu-20.04 on WSL2. Also, I am persisting the identities by using keychain to reuse ssh-agent (this is necessary when using more than one shell at a time). So, technically, the identities have an infinite timeout until I shut down WSL.
The identities in your SSH agent are specific to your login session. I don't think there is a sane way to use ssh-agent from a cron job.
Trying to manipulate your interactive environment from cron seems doomed anyway. It will fail if you are not logged in when the job runs, and have weird failure modes if you are logged in more than once.
Perhaps instead create a simple wrapper script which runs an endless loop with (say) a five-minute sleep between iterations, and start it from your desktop environment's login hooks.
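A minimal sketch of such a wrapper (the file name is illustrative); because it is started from your own session, it sees the same SSH_AUTH_SOCK as your agent:
#!/bin/bash
# clear-keys-loop.sh - start from your desktop session's autostart/login hook
while true; do
    if pgrep -x ssh-agent >/dev/null; then
        /usr/bin/ssh-add -D    # remove all identities from the agent
    fi
    sleep 300                  # five minutes between iterations
done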

How to run Bash Script on startup and keep monitoring the results on the terminal

Due to some issues I won't elaborate on here so as not to waste time, I made a bash script which pings Google every 10 minutes; if there is a response it keeps the loop running, and if not, the PC restarts. After a lot of hurdles I have been able to make the script and also make it start on boot. However, the issue is that I want to see the results on the terminal, meaning I want to keep monitoring it, but the terminal does not open on boot. It does open if I run the script as ./net.sh.
The script is running on startup, that much I know because I use another script to open an application and it works flawlessly.
My system information
NAME="Linux Mint"
VERSION="18.3 (Sylvia)"
ID=linuxmint
ID_LIKE=ubuntu
PRETTY_NAME="Linux Mint 18.3"
VERSION_ID="18.3"
HOME_URL="http://www.linuxmint.com/"
SUPPORT_URL="http://forums.linuxmint.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/linuxmint/"
VERSION_CODENAME=sylvia
UBUNTU_CODENAME=xenial
The contents of my net.sh bash script are
#! /bin/bash
xfce4-terminal &
sleep 30
while true
do
    ping -c1 google.com
    if [ $? == 0 ]; then
        echo "Ping Successful. The Device will Continue Operating"
        sleep 600
    else
        systemctl reboot
    fi
done
I have put the scripts in /usr/bin and inserted the scripts for startup at boot in /etc/rc.local
So I did some further research, and with help from Reddit I realized that the reason I couldn't get it to show on the terminal was that the script was starting on boot, but I needed it to start after user login. So I added the script to Startup Applications (which can be found by searching in the start menu, if that's what it's called). But it was still giving issues, so I divided the script into two parts.
I put the net.sh script on startup and directed that script to open my main script, which I named net_loop.sh.
This is how the net.sh script looks
#! /bin/bash
sleep 20
xfce4-terminal -e /usr/bin/net_loop.sh
And the net_loop.sh
#! /bin/bash
while true
do
    ping -c1 google.com
    if [ $? == 0 ]; then
        echo "Ping Successful. The Device will Continue Operating"
        sleep 600
    else
        systemctl reboot
    fi
done
The result is that the output of the net_loop.sh script opens in another terminal.
Note: I used help from this thread
If a minute-based interval is usable, why not use "cron" to start your script?
$> crontab -e
or
$> sudo crontab -e
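For example, a line in root's crontab (the systemctl path may differ on your system) that performs the same check every ten minutes and reboots on failure:
*/10 * * * * ping -c1 google.com >/dev/null 2>&1 || /bin/systemctl reboot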

Bash script to run a detached loop that sequentially starts background processes

I am trying to run a series of tests on a remote Linux server to which I am connecting via ssh.
I don't want to have to stay logged in to the ssh session during the runs -> nohup(?)
I don't want to have to keep checking if one run is done -> for loop(?)
Because of licensing issues, I can only run a single testing process at a time -> sequential
I want to keep working while the test set is being processed -> background
Here's what I tried:
#!/usr/bin/env bash
# Assembling a list of commands to be executed sequentially
TESTRUNS="";
for i in `ls ../testSet/*`;
do
MSG="running test problem ${i##*/}";
RUN="mySequentialCommand $i > results/${i##*/} 2> /dev/null;";
TESTRUNS=$TESTRUNS"echo $MSG; $RUN";
done
#run commands with nohup to be able to log out of ssh session
nohup eval $TESTRUNS &
But it looks like nohup doesn't fare too well with eval.
Any thoughts?
nohup is needed if you want your scripts to keep running even after the shell is closed, so yes.
And the & is not necessary in RUN, since you already execute the command with &.
Now your script builds the command in the for loop, but doesn't execute it. It means you'll have only the last file running. If you want to run all of the files, you need to execute the nohup command as part of your loop. BUT - you can't run the commands with & because this will run commands in the background and return to the script, which will execute the next item in the loop. Eventually this would run all files in parallel.
Move the nohup eval $TESTRUNS inside the for loop, but again, you can't run it with &. What you need to do is run the script itself with nohup, and the script will loop through all files one at a time, in the background, even after the shell is closed.
You could take a look at screen, an alternative to nohup with additional features. I will replace your test script with while [ 1 ]; do printf "."; sleep 5; done for testing the screen solution.
The screen -ls commands are optional; they just show what is going on.
prompt> screen -ls
No Sockets found in /var/run/uscreens/S-notroot.
prompt> screen
prompt> screen -ls
prompt> while [ 1 ]; do printf "."; sleep 5; done
# You don't get a prompt. Use "CTRL-a d" to detach from your current screen
prompt> screen -ls
# do some work
# connect to screen with batch running
prompt> screen -r
# Press ^C to terminate the batch (script printing dots)
prompt> screen -ls
prompt> exit
prompt> screen -ls
Google for screenrc to see how you can customize the interface.
You can change your script into something like
#!/usr/bin/env bash
# Run the test problems sequentially
for i in ../testSet/*; do
    echo "Running test problem ${i##*/}"
    mySequentialCommand $i > results/${i##*/} 2> /dev/null
done
The above script can be started with nohup scriptname & when you do not use screen, or simply as scriptname inside screen.
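For example (the script and log file names are illustrative):
nohup ./run_tests.sh > run_tests.log 2>&1 &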

Shell script not waiting

ssh user@myserver.com <<EOF
cd ../../my/path/
sh runscript.sh
wait
cd ../../temp/path
sh secondscript.sh
EOF
The first script runs and asks me the questions in that script, but before I'm even able to start typing to answer them, the second script starts running. From what I'm reading, this shouldn't be happening even without the wait.

How to make ssh to kill remote process when I interrupt ssh itself?

In a bash script I execute a command on a remote machine through ssh. If the user breaks the script by pressing Ctrl+C it only stops the script - not even the ssh client. Moreover, even if I kill the ssh client, the remote command is still running...
How can I make bash kill the local ssh client and the remote command invocation on Ctrl+C?
A simple script:
#!/bin/bash
ssh -n -x root@db-host 'mysqldump db' -r file.sql
Eventual I found a solution like that:
#!/bin/bash
ssh -t -x root@db-host 'mysqldump db' -r file.sql
So - I use '-t' instead of '-n'.
Removing '-n', or using a different user than root, does not help.
When your ssh session ends, your shell will get a SIGHUP. (hang-up signal). You need to make sure it sends that on to all processes started from it. For bash, try shopt -s huponexit; your_command. That may not work, because the man page says huponexit only works for interactive shells.
I remember running into this with users running jobs on my cluster, and whether they had to use nohup or not (to get the opposite behaviour of what you want) but I can't find anything in the bash man page about whether child processes ignore SIGHUP by default. Hopefully huponexit will do the trick. (You could put that shopt in your .bashrc, instead of on the command line, I think.)
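A hedged illustration of that suggestion, typed in the interactive shell before running the command (subject to the interactive-shell caveat above); the ssh invocation is the one from the question:
shopt -s huponexit
ssh -n -x root@db-host 'mysqldump db' -r file.sql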
Your ssh -t should work, though, since when the connection closes, reads from the terminal will get EOF or an error, and that makes most programs exit.
Do you know what the options you're passing to ssh do? I'm guessing not. The -n option redirects input from /dev/null, so the process you're running on the remote host probably isn't seeing SIGINT from Ctrl-C.
Now, let's talk about how bad an idea it is to allow remote root logins:
It's a really, really bad idea. Have a look at HOWTO: set up ssh keys for some suggestions how to securely manage remote process execution over ssh. If you need to run something with privileges remotely you'll probably want a solution that involves a ssh public key with embedded command and a script that runs as root courtesy of sudo.
trap "some_command" SIGINT
will execute some_command locally when you press Ctrl+C. help trap will tell you about its other options.
Regarding the ssh issue, I don't know much about ssh. Maybe you can make it call ssh -n -x root@db-host 'killall mysqldump' instead of some_command, to kill the remote command?
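A sketch combining the two ideas (the host and command come from the question; killing the dump by name on the remote host is this answer's suggestion, not a general-purpose solution):
#!/bin/bash
cleanup() {
    ssh -n -x root@db-host 'killall mysqldump'   # best-effort remote cleanup
    exit 130
}
trap cleanup INT
ssh -n -x root@db-host 'mysqldump db' -r file.sql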
What if you don't want to require using "ssh -t" (for those as forgetful as I am)?
I stumbled upon looking at the parent PID, because Ctrl+C from the initiating session results in the ssh-launched process on the remote host exiting, although its child process continues. By way of example, here's my script that is on the remote server.
#!/bin/bash
Answer=(Alive Dead)
Index=0
while [ ${Index} -eq 0 ]; do
    if ! kill -0 ${PPID} 2> /dev/null ; then Index=1; fi
    echo "Parent PID ${PPID} is ${Answer[$Index]} at $(date +%Y%m%d%H%M%S%Z)" > ~/NowTime.txt
    sleep 1
done
I then invoke it with "ssh remote_server ./test_script.sh"
"watch cat ~/NowTime.txt" on the remote server shows the timestamp in the file increasing and declaring that the parent process is alive; once I hit CTRL/C in the launching process, the script on the remote server notes that its parent process has died, and the script exits.
